WorldWideScience

Sample records for estimating polynya cloudiness

  1. Arctic polynya and glacier interactions

    Science.gov (United States)

    Edwards, Laura

    2013-04-01

    Major uncertainties surround future estimates of sea level rise attributable to mass loss from the polar ice sheets and ice caps. Understanding changes across the Arctic is vital as major potential contributors to sea level, the Greenland Ice Sheet and the ice caps and glaciers of the Canadian Arctic archipelago, have experienced dramatic changes in recent times. Most ice mass loss is currently focused at a relatively small number of glacier catchments where ice acceleration, thinning and calving occur at ocean margins. Research suggests that these tidewater glaciers accelerate and iceberg calving rates increase when warming ocean currents increase melt on the underside of floating glacier ice and when adjacent sea ice is removed, causing a reduction in 'buttressing' back stress. Thus localised changes in ocean temperatures and in sea ice (extent and thickness) adjacent to major glacial catchments can strongly influence the dynamics of, and hence the mass lost from, terrestrial ice sheets and ice caps. Polynyas are areas of open water within sea ice which remain unfrozen for much of the year. They vary significantly in size (~3 km² to >50,000 km² in the Arctic), recurrence rates and duration. Despite their relatively small size, polynyas play a vital role in the heat balance of the polar oceans and strongly impact regional oceanography. Where polynyas develop adjacent to tidewater glaciers, their influence on ocean circulation and water temperatures may play a major part in controlling subsurface ice melt rates by affecting the water masses reaching the calving front. Areas of open water also play a significant role in controlling the potential of the atmosphere to carry moisture, as well as allowing heat exchange between the atmosphere and ocean, and so can influence accumulation on (and hence thickness of) glaciers and ice caps. Polynya presence and size also have implications for sea ice extent and therefore potentially the buttressing effect on neighbouring …

  2. Modelling sea ice formation in the Terra Nova Bay polynya

    Science.gov (United States)

    Sansiviero, M.; Morales Maqueda, M. Á.; Fusco, G.; Aulicino, G.; Flocco, D.; Budillon, G.

    2017-02-01

    … realistic polynya extent estimates. The model-derived polynya extent has been validated by comparing the modelled sea ice concentration against high-resolution MODIS satellite images, confirming that the model reproduces the evolution of the TNB polynya reasonably well in terms of both shape and extent.

  3. Mercury bioaccumulation and biomagnification in a small Arctic polynya ecosystem

    Energy Technology Data Exchange (ETDEWEB)

    Clayden, Meredith G., E-mail: meredith.clayden@gmail.com [Canadian Rivers Institute and Biology Department, University of New Brunswick, Saint John, NB E2L 4L5 (Canada); Arsenault, Lilianne M. [Canadian Rivers Institute and Biology Department, University of New Brunswick, Saint John, NB E2L 4L5 (Canada); Department of Earth and Environmental Science, Acadia University, Wolfville, NS B4P 2R6 (Canada); Department of Biology, Acadia University, Wolfville, NS B4P 2R6 (Canada); Kidd, Karen A. [Canadian Rivers Institute and Biology Department, University of New Brunswick, Saint John, NB E2L 4L5 (Canada); O'Driscoll, Nelson J. [Department of Earth and Environmental Science, Acadia University, Wolfville, NS B4P 2R6 (Canada); Mallory, Mark L. [Department of Biology, Acadia University, Wolfville, NS B4P 2R6 (Canada)

    2015-03-15

    Recurring polynyas are important areas of biological productivity and feeding grounds for seabirds and mammals in the Arctic marine environment. In this study, we examined food web structure (using carbon and nitrogen isotopes, δ¹³C and δ¹⁵N) and mercury (Hg) bioaccumulation and biomagnification in a small recurring polynya ecosystem near Nasaruvaalik Island (Nunavut, Canada). Methyl Hg (MeHg) concentrations increased by more than 50-fold from copepods (Calanus hyperboreus) to Arctic terns (Sterna paradisaea), the abundant predators at this site. The biomagnification of MeHg through members of the food web – using the slope of log MeHg versus δ¹⁵N – was 0.157 from copepods (C. hyperboreus) to fish. This slope was higher (0.267) when seabird chicks were included in the analyses. Collectively, our results indicate that MeHg biomagnification is occurring in this small polynya and that its trophic transfer is at the lower end of the range of estimates from other Arctic marine ecosystems. In addition, we measured Hg concentrations in some poorly studied members of Arctic marine food webs [e.g. Arctic alligatorfish (Ulcina olrikii) and jellyfish, Medusozoa], and found that MeHg concentrations in jellyfish were lower than expected given their trophic position. Overall, these findings provide fundamental information about food web structure and mercury contamination in a small Arctic polynya, which will inform future research in such ecosystems and provide a baseline against which to assess changes over time resulting from environmental disturbance. - Highlights: • Polynyas are recurring sites of open water in polar marine areas • Mercury (Hg) biomagnification was studied in a small polynya near Nasaruvaalik Island, NU, Canada • Hg biomagnification estimates for invertebrates to fish were low compared to other Arctic systems • Factors underlying this result are unknown but may relate to primary productivity in small polynyas.
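
    The biomagnification measure used above, the slope of log MeHg versus δ¹⁵N, is an ordinary least-squares fit across food-web members. A minimal sketch of that calculation follows; the isotope and concentration values below are hypothetical illustrations, not the study's data.

```python
import math

def biomagnification_slope(d15n, mehg_conc):
    """Ordinary least-squares slope of log10(MeHg) versus delta-15N.

    A positive slope indicates biomagnification of MeHg up the food web;
    this is the trophic magnification slope quoted in the abstract above.
    """
    y = [math.log10(c) for c in mehg_conc]
    n = len(d15n)
    mx = sum(d15n) / n
    my = sum(y) / n
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(d15n, y))
    sxx = sum((x - mx) ** 2 for x in d15n)
    return sxy / sxx

# Hypothetical copepod -> fish -> seabird-chick values (not study data):
d15n = [8.0, 12.0, 15.0]        # per mil
mehg = [5.0, 60.0, 400.0]       # ng/g, illustrative only
print(round(biomagnification_slope(d15n, mehg), 3))  # -> 0.272
```

    With the study's own data, the same computation would yield the quoted slopes of 0.157 (invertebrates to fish) and 0.267 (including seabird chicks).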

  4. Mercury bioaccumulation and biomagnification in a small Arctic polynya ecosystem

    International Nuclear Information System (INIS)

    Clayden, Meredith G.; Arsenault, Lilianne M.; Kidd, Karen A.; O'Driscoll, Nelson J.; Mallory, Mark L.

    2015-01-01

    Recurring polynyas are important areas of biological productivity and feeding grounds for seabirds and mammals in the Arctic marine environment. In this study, we examined food web structure (using carbon and nitrogen isotopes, δ¹³C and δ¹⁵N) and mercury (Hg) bioaccumulation and biomagnification in a small recurring polynya ecosystem near Nasaruvaalik Island (Nunavut, Canada). Methyl Hg (MeHg) concentrations increased by more than 50-fold from copepods (Calanus hyperboreus) to Arctic terns (Sterna paradisaea), the abundant predators at this site. The biomagnification of MeHg through members of the food web – using the slope of log MeHg versus δ¹⁵N – was 0.157 from copepods (C. hyperboreus) to fish. This slope was higher (0.267) when seabird chicks were included in the analyses. Collectively, our results indicate that MeHg biomagnification is occurring in this small polynya and that its trophic transfer is at the lower end of the range of estimates from other Arctic marine ecosystems. In addition, we measured Hg concentrations in some poorly studied members of Arctic marine food webs [e.g. Arctic alligatorfish (Ulcina olrikii) and jellyfish, Medusozoa], and found that MeHg concentrations in jellyfish were lower than expected given their trophic position. Overall, these findings provide fundamental information about food web structure and mercury contamination in a small Arctic polynya, which will inform future research in such ecosystems and provide a baseline against which to assess changes over time resulting from environmental disturbance. - Highlights: • Polynyas are recurring sites of open water in polar marine areas • Mercury (Hg) biomagnification was studied in a small polynya near Nasaruvaalik Island, NU, Canada • Hg biomagnification estimates for invertebrates to fish were low compared to other Arctic systems • Factors underlying this result are unknown but may relate to primary productivity in small polynyas.

  5. Clearness index in cloudy days estimated with meteorological information by multiple regression analysis; Kisho joho wo riyoshita kaiki bunseki ni yoru dontenbi no seiten shisu no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S [Maizuru National College of Technology, Kyoto (Japan); Kenmoku, Y; Sakakibara, T [Toyohashi University of Technology, Aichi (Japan); Kawamoto, T [Shizuoka University, Shizuoka (Japan). Faculty of Engineering

    1996-10-27

    A study is under way on more accurate prediction of solar radiation quantity to enhance the efficiency of solar energy utilization. Building on a technique for roughly estimating the day's clearness index from the weather forecast, the forecast weather (consisting of weather conditions such as 'clear' and 'cloudy', and adverbs or adjectives such as 'afterward', 'temporary' and 'intermittent') has been quantified relative to the clearness index. This quantity is named the 'weather index' for the purpose of this article. The weather index has its highest error rate on cloudy days, i.e. for weather index values of 0.2-0.5. It has also been found that there is a high correlation between the clearness index and the north-south wind direction component. A multiple regression analysis has therefore been carried out to estimate the clearness index from the maximum temperature and the north-south wind direction component. Compared with estimation based on the weather index alone, estimation using the weather index and maximum temperature achieves a 3% improvement throughout the year. Estimation using the weather index and the north-south wind direction component yields a 2% improvement in summer and a 5% or greater improvement in winter. 2 refs., 6 figs., 4 tabs.
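
    A multiple regression of the kind described above can be sketched as a least-squares fit. All sample values below (daily weather index, maximum temperature, north-south wind component, observed clearness index) are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Hypothetical daily records: columns are (weather index,
# max temperature [deg C], north-south wind component [m/s]).
X_raw = np.array([
    [0.25, 12.0, -1.5],
    [0.40, 18.0,  0.5],
    [0.30, 15.0, -0.5],
    [0.55, 22.0,  2.0],
    [0.20, 10.0, -2.0],
    [0.45, 20.0,  1.0],
])
y = np.array([0.28, 0.42, 0.33, 0.58, 0.22, 0.47])  # observed clearness index

# Prepend an intercept column and solve the least-squares problem.
X = np.column_stack([np.ones(len(X_raw)), X_raw])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def estimate_clearness(weather_idx, t_max, ns_wind):
    """Predict the clearness index from the fitted regression."""
    return float(coef @ np.array([1.0, weather_idx, t_max, ns_wind]))

print(estimate_clearness(0.35, 16.0, 0.0))
```

    In practice the paper compares such fits with different predictor subsets (weather index alone, plus maximum temperature, plus the wind component) by their year-round and seasonal error rates.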

  6. Coastal polynyas in the southern Weddell Sea: Variability of the surface energy budget

    Science.gov (United States)

    Renfrew, Ian A.; King, John C.; Markus, Thorsten

    2002-06-01

    … The standard deviation of the energy exchange during the freezing (melting) season is 28% (95%) of the mean. During the freezing season, positive surface heat fluxes are equated with ice production rates. The average annual coastal polynya ice production is 1.11 × 10¹¹ m³ (or 24 m per unit area), with a range from 0.71 × 10¹¹ (in 1994) to 1.55 × 10¹¹ m³ (in 1995). This can be compared to the estimated total ice production for the entire Weddell Sea: on average, the coastal polynya ice production makes up 6.08% of the total, with a range from 3.65% (in 1994) to 9.11% (in 1995).
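
    A back-of-envelope consistency check of the quoted figures, using only numbers stated in the abstract:

```python
# If coastal polynyas produce 1.11e11 m^3 of ice per year and that is
# 6.08% of the Weddell Sea total, the implied total production follows
# directly from the ratio.
polynya_mean = 1.11e11      # m^3 per year (from the abstract)
fraction_mean = 0.0608      # 6.08% of the Weddell Sea total

total_implied = polynya_mean / fraction_mean
print(f"{total_implied:.2e}")  # -> 1.83e+12 (m^3 per year)

# "24 m per unit area" implies an effective polynya area of roughly
# 1.11e11 / 24 m^2, i.e. about 4600 km^2.
area_km2 = polynya_mean / 24.0 / 1e6
print(round(area_km2))  # -> 4625
```
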

  7. Ross Sea Polynyas: Response of Ice Concentration Retrievals to Large Areas of Thin Ice

    Science.gov (United States)

    Kwok, R.; Comiso, J. C.; Martin, S.; Drucker, R.

    2007-01-01

    For a 3-month period between May and July of 2005, we examine the response of the Advanced Microwave Scanning Radiometer (AMSR-E) Enhanced NASA Team 2 (NT2) and AMSR-E Bootstrap (ABA) ice concentration algorithms to large areas of thin ice of the Ross Sea polynyas. Coincident Envisat Synthetic Aperture Radar (SAR) coverage of the region during this period offers a detailed look at the development of the polynyas within several hundred kilometers of the ice front. The high-resolution imagery and derived ice motion fields show bands of polynya ice, covering up to approximately 10⁵ km² of the Ross Sea, that are associated with wind-forced advection. In this study, ice thickness from AMSR-E 36 GHz polarization information serves as the basis for examination of the response. The quality of the thickness of newly formed sea ice (<10 cm) from AMSR-E is first assessed with thickness estimates derived from ice surface temperatures from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument. The effect of large areas of thin ice in lowering the ice concentration estimates from both NT2/ABA approaches is clearly demonstrated. Results show relatively robust relationships between retrieved ice concentrations and thin ice thickness estimates that differ between the two algorithms. These relationships define the approximate spatial coincidence of ice concentration and thickness isopleths. Using the 83% (ABA) and 91% (NT2) isopleths as polynya boundaries, we show that the computed coverage compares well with that using the estimated 10-cm thickness contour. The thin ice response characterized here suggests that in regions with polynyas, the retrieval results could be used to provide useful geophysical information, namely thickness and coverage.

  8. Structure and forcing of the overflow at the Storfjorden sill and its connection to the Arctic coastal polynya in Storfjorden

    Directory of Open Access Journals (Sweden)

    F. Geyer

    2010-03-01

    Storfjorden (Svalbard) is a sill-fjord with an active polynya and exemplifies the dense water formation process over the Arctic shelves. Here we report on our simulations of Storfjorden covering the freezing season of 1999–2000 using an eddy-permitting 3-D ocean circulation model with a fully coupled dynamic and thermodynamic sea-ice model. The model results in the polynya region and of the dense water plume flowing over the sill crest are compared to observations. The connections of the overflow at the sill to the dense water production at the polynya and to the local wind forcing are investigated. Both the overflow and the polynya dynamics are found to be sensitive to wind forcing. In response to freezing and brine rejection over the polynya, the buoyancy forcing initiates an abrupt positive density anomaly. While the ocean integrates the buoyancy forcing over several polynya events (about 25 days), the wind forcing dominates the overflow response at the sill at the weather scale. In the model, the density excess is diluted in the basin and leads to a gradual build-up of dense water behind the sill. The overflow transport is typically inferred from observations using a single current profiler at the sill crest. Despite the significant variability of the plume width, we show that a constant overflow width of 15 km produces realistic estimates of the overflow volume transport. Another difficulty in monitoring the overflow is measuring the plume thickness in the absence of hydrographic profiles. Volume flux estimates assuming a constant plume width and a thickness inferred from velocity profiles explain 58% of the modelled overflow volume flux variance and agree to within 10% when averaged over the overflow season.
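
    The monitoring approach described above, a constant width combined with a thickness and velocity from a single profiler, reduces to the transport estimate Q = W·h·u. A sketch follows; the 15 km width is from the abstract, while the plume thickness and speed are purely illustrative values, not the paper's measurements.

```python
def overflow_transport_sv(width_m, thickness_m, speed_m_s):
    """Overflow volume transport Q = W * h * u, expressed in
    Sverdrups (1 Sv = 1e6 m^3/s)."""
    return width_m * thickness_m * speed_m_s / 1.0e6

# Constant 15 km width (from the abstract); thickness and speed below
# are hypothetical illustrative values.
print(round(overflow_transport_sv(15.0e3, 60.0, 0.2), 2))  # -> 0.18
```
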

  9. The 2017 Release Cloudy

    Science.gov (United States)

    Ferland, G. J.; Chatzikos, M.; Guzmán, F.; Lykins, M. L.; van Hoof, P. A. M.; Williams, R. J. R.; Abel, N. P.; Badnell, N. R.; Keenan, F. P.; Porter, R. L.; Stancil, P. C.

    2017-10-01

    We describe the 2017 release of the spectral synthesis code Cloudy, summarizing the many improvements to the scope and accuracy of the physics which have been made since the previous release. Exporting the atomic data into external data files has enabled many new large datasets to be incorporated into the code. The use of the complete datasets is not realistic for most calculations, so we describe the limited subset of data used by default, which predicts significantly more lines than the previous release of Cloudy. This version is nevertheless faster than the previous release, as a result of code optimizations. We give examples of the accuracy limits using small models, and the performance requirements of large complete models. We summarize several advances in the H- and He-like iso-electronic sequences and use our complete collisional-radiative models to establish the densities where the coronal and local thermodynamic equilibrium approximations work.

  10. The Northeast Greenland Sirius Water Polynya dynamics and variability inferred from satellite imagery

    DEFF Research Database (Denmark)

    Pedersen, Jørn Bjarke Torp; Kaufmann, Laura Hauch; Kroon, Aart

    2010-01-01

    One of the most prominent polynyas in Northeast Greenland, already noted by the early expeditions in the area, is located around Shannon Ø and Pendulum Øer between 75° and 74°N in the transition zone between the fast ice and pack ice. This study names the polynya the ‘Sirius Water Polynya’ … and summer regimes in the seasonal evolution of the polynya. During the winter regime, both the size of and the ice cover within the polynya vary significantly on a temporal and spatial scale. Intermittent wind-driven openings of the polynya alternate with periods of increasing ice cover. Some of the most …

  11. Polynyas in a dynamic-thermodynamic sea-ice model

    Directory of Open Access Journals (Sweden)

    E. Ö. Ólason

    2010-04-01

    The representation of polynyas in viscous-plastic dynamic-thermodynamic sea-ice models is studied in a simplified test domain, in order to give recommendations about parametrisation choices. Bjornsson et al. (2001) validated their dynamic-thermodynamic model against a polynya flux model in a similar setup, and we expand on that work here, testing more sea-ice rheologies and new-ice thickness formulations. The two additional rheologies tested give nearly identical results, whereas the two new-ice thickness parametrisations tested give widely different results. Based on our results we argue for using the new-ice thickness parametrisation of Hibler (1979). We also implement a new parametrisation for the parameter h0 from Hibler's scheme, based on ideas from a collection-depth parametrisation for flux polynya models.
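
    For context, the Hibler (1979) scheme favoured above assigns ice grown in the open-water fraction a nominal thickness h0 when converting volume growth into a concentration change. A minimal sketch, with h0 fixed at a demonstration value (the paper instead proposes parametrising it):

```python
def add_new_ice(conc, vol, new_ice_vol, h0=0.5):
    """Hibler (1979)-style new-ice update (sketch).

    New ice of volume new_ice_vol (m^3 per m^2 of grid cell) grown in
    the open-water fraction is assumed to collect at thickness h0 (m),
    so ice concentration rises by new_ice_vol / h0 (capped at 1) while
    total ice volume rises by new_ice_vol.
    """
    conc = min(1.0, conc + new_ice_vol / h0)
    return conc, vol + new_ice_vol

print(add_new_ice(0.5, 1.0, 0.1))
```

    A small h0 opens the ice cover quickly (concentration responds strongly to growth), while a large h0 thickens ice with little concentration change, which is why this choice matters for simulated polynya extent.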

  12. Spatial and temporal distribution patterns of benthic foraminifera in the Northeast Water Polynya, Greenland

    Science.gov (United States)

    Ahrens, Michael J.; Graf, Gerhard; Altenbach, Alexander V.

    1997-01-01

    Abundance, biofacies and ATP content of benthic foraminifera (>63 μm) were studied in the Northeast Water (NEW) Polynya (77-81°N, 5-17°W) over the ice-free summer, 1993, to investigate how a polynya system might influence the underlying benthic community. In the living assemblage, distinguished by Rose Bengal staining, over 60 taxa could be identified. The biofacies identified was similar to that of other Arctic shelf habitats. Foraminifera were counted in 3 size fractions (63-125 μm, 125-250 μm and >250 μm), with 65% of the foraminifera occurring in the smallest size fraction (63-125 μm). Total abundances (>63 μm) in the uppermost 1 cm averaged approximately 200 ind/10 cm³ and declined down-core, as did the number of species. Abundances and species composition correlated positively with sediment chlorophyll and ATP content, with maxima occurring in the shallower northern regions of the polynya, suggesting a general dependence on food. Foraminiferal biomass was estimated to be 0.1-0.3 g Corg/m². Abundances, biomass and ATP content were comparable to ice-free, deep-sea regions in the Norwegian Sea. Temporal changes observed over a 2 month period at one location were difficult to distinguish from spatial and analytical variability. Contrary to expectations, growth was unpronounced at the community and at a species level, implying either a delayed response of the benthic foraminiferal community to food inputs from the overlying water column or the presence of biological limitations other than food, such as predation.

  13. Particle flux on the continental shelf in the Amundsen Sea Polynya and Western Antarctic Peninsula

    Directory of Open Access Journals (Sweden)

    Hugh W. Ducklow

    2015-04-01

    We report results from a yearlong, moored sediment trap in the Amundsen Sea Polynya (ASP), the first such time series in this remote and productive ecosystem. Results are compared to a long-term (1992–2013) time series from the western Antarctic Peninsula (WAP). The ASP trap was deployed from December 2010 to December 2011 at 350 m depth. We observed two brief but high flux events, peaking at 8 and 5 mmol C m⁻² d⁻¹ in January and December 2011, respectively, with a total annual capture of 315 mmol C m⁻². Both peak fluxes and annual capture exceeded the comparable WAP observations. Like the overlying phytoplankton bloom observed during the cruise in the ASP (December 2010 to January 2011), particle flux was dominated by Phaeocystis antarctica, which produced phytodetrital aggregates. Particles at the start of the bloom were highly depleted in ¹³C, indicating their origin in the cold, CO₂-rich winter waters exposed by retreating sea ice. As the bloom progressed, microscope visualization and stable isotopic composition provided evidence for an increasing contribution by zooplankton fecal material. Incubation experiments and zooplankton observations suggested that fecal pellet production likely contributed 10–40% of the total flux during the first flux event, and could be very high during episodic krill swarms. Independent estimates of export from the surface (100 m) were about 5–10 times that captured in the trap at 350 m. Estimated bacterial respiration was sufficient to account for much of the decline in the flux between 50 and 350 m, whereas zooplankton respiration was much lower. The ASP system appears to export only a small fraction of its production deeper than 350 m within the polynya region. The export efficiency was comparable to other polar regions where phytoplankton blooms were not dominated by diatoms.

  14. In situ phytoplankton distributions in the Amundsen Sea Polynya measured by autonomous gliders

    Directory of Open Access Journals (Sweden)

    Oscar Schofield

    2015-10-01

    The Amundsen Sea Polynya is characterized by large phytoplankton blooms, which makes this region disproportionately important relative to its size for the biogeochemistry of the Southern Ocean. In situ data on phytoplankton are limited, which is problematic given recent reports of sustained change in the Amundsen Sea. During two field expeditions to the Amundsen Sea during austral summer 2010–2011 and 2014, we collected physical and bio-optical data from ships and autonomous underwater gliders. Gliders documented large phytoplankton blooms associated with Antarctic Surface Waters with low salinity surface water and shallow upper mixed layers (<50 m). High biomass was not always associated with a specific water mass, suggesting the importance of upper mixed layer depth and light in influencing phytoplankton biomass. Spectral optical backscatter and ship pigment data suggested that the composition of phytoplankton was spatially heterogeneous, with the large blooms dominated by Phaeocystis and non-bloom waters dominated by diatoms. Phytoplankton growth rates estimated from field data (≤0.10 day⁻¹) were at the lower end of the range measured during ship-based incubations, reflecting both in situ nutrient and light limitations. In the bloom waters, phytoplankton biomass was high throughout the 50-m thick upper mixed layer. Those biomass levels, along with the presence of colored dissolved organic matter and detritus, resulted in a euphotic zone that was often <10 m deep. The net result was that the majority of phytoplankton were light-limited, suggesting that mixing rates within the upper mixed layer were critical to determining the overall productivity; however, regional productivity will ultimately be controlled by water column stability and the depth of the upper mixed layer, which may be enhanced with continued ice melt in the Amundsen Sea Polynya.

  15. Polynyas and Ice Production Evolution in the Ross Sea (PIPERS)

    Science.gov (United States)

    Ackley, S. F.

    2017-12-01

    One focus of the PIPERS cruise into the Ross Sea ice cover during April-June 2017 was the Terra Nova Bay (TNB) polynya, where joint measurements of air-ice-ocean wave interaction were conducted over twelve days. In Terra Nova Bay, measurements were made in three katabatic wind events, each with sustained winds over 35 m s⁻¹ and air temperatures below −15 °C. Near shore, intense wave fields with wave amplitudes of over 2 m and 7-9 s periods built, and large amounts of frazil ice crystals grew. The frazil ice gathered initially into short and narrow plumes that eventually were added laterally to create longer and wider streaks or bands. Breaking waves within these wider streaks were dampened, which appeared to enhance the development of pancake ice. Eventually, the open water areas between the streaks sealed off, developing a complete ice cover of 100 percent concentration (80-90 percent pancakes, 20-10 percent frazil) over a wide front (30 km). The pancakes continued to grow in diameter and thickness as waves alternately contracted and expanded the ice cover, with the thicker, larger floes further diminishing the wave field and lateral motion between pancakes until the initial pancake ice growth ceased. The equilibrium thickness of the pancake ice was 20-30 cm. While the waves had died off, however, katabatic wind velocities were sustained and resulted in a wide area of concentrated, rafted pancake ice that was rapidly advected downstream until the end of the katabatic event. High resolution TerraSAR-X radar satellite imagery showed that the length of the ice area produced in one single event extended over 300 km, or ten times the length of the open water area during one polynya event. The TNB polynya is therefore an "ice factory" where frazil ice is manufactured into pancake ice floes that are then pushed out of the assembly area and advected, rafted (and occasionally piled up into "dragon skin" ice), until the katabatic wind dies off at the coastal source.

  16. Distribution and diet of Ivory gulls (Pagophila eburnea) in the North Water polynya

    OpenAIRE

    Karnovsky, NJ; Hobson, KA; Brown, ZW; Hunt, GL

    2009-01-01

    Ivory gulls (Pagophila eburnea, Phipps, 1774), one of the world's least-known species, have declined throughout their range in recent years. This study describes the patterns of ivory gull use of the North Water polynya, a large polynya that occurs every year near ivory gull breeding sites on Ellesmere Island, Nunavut, Canada. We conducted at-sea surveys from Canadian icebreakers during the summers of 1997-99. In 1998, stomach contents of five ivory gulls were analyzed. We measured stable iso...

  17. The Okhotsk Sea Kashevarov Bank Polynya: Its Dependence on Diurnal and Fortnightly Tides and its Initial Formation

    Science.gov (United States)

    Martin, Seelye; Polyakov, Igor; Markus, Thorsten; Drucker, Robert

    2003-01-01

    Open water areas within the sea ice (polynyas) are sources of intense heat exchange between the ocean and the atmosphere. In this paper, we used microwave and visible/infrared satellite data together with a sea ice model to investigate the polynya opening mechanisms. The satellite data and the model show significant agreement and prove that tides play an active role in the polynya dynamics.

  18. Polynya dynamics and associated atmospheric forcing at the Ronne Ice Shelf

    Science.gov (United States)

    Ebner, Lars; Heinemann, Günther

    2014-05-01

    The Ronne Ice Shelf is known as one of the most active regions of polynya development around the Antarctic continent. Low temperatures prevail throughout the whole year, particularly in winter. It is generally recognized that polynya formation is primarily forced by offshore winds and secondarily by ocean currents. Many authors have previously addressed this issue at the Ross Ice Shelf and Adélie Coast and connected polynya dynamics to strong katabatic surge events. Such investigations of atmospheric dynamics and simultaneous polynya occurrence are still severely underrepresented for the southwestern part of the Weddell Sea and especially for the Ronne Ice Shelf. Due to the very flat terrain gradients of the ice shelf, katabatic winds are of minor importance in that area. Other atmospheric processes must therefore play a crucial role in polynya development at the Ronne Ice Shelf. High-resolution simulations have been carried out for the Weddell Sea region using the non-hydrostatic NWP model COSMO from the German Meteorological Service (DWD). For the austral autumn and winter (March to August) 2008, daily forecast simulations were conducted taking into account daily sea-ice coverage deduced from the passive microwave system AMSR-E. These simulations are used to analyze the synoptic and mesoscale atmospheric dynamics of the Weddell Sea region and to find linkages to polynya occurrence at the Ronne Ice Shelf. For that reason, the relation between the surface wind speed, the synoptic pressure gradient in the free atmosphere and the polynya area is investigated. Seven significant polynya events are identified for the simulation period, three in the autumn and four in the winter season. It can be shown that in almost all cases synoptic cyclones are the primary polynya forcing systems. In most cases the timely interaction of several passing cyclones in the northern and central Weddell Sea leads to maintenance of a strong synoptic pressure gradient above the …

  19. Conception rate of artificially inseminated Holstein cows affected by cloudy vaginal mucus, under intense heat conditions

    OpenAIRE

    Miguel Mellado; Laura Maricela Lara; Francisco Gerardo Veliz; María Ángeles de Santiago; Leonel Avendaño-Reyes; Cesar Meza-Herrera; José Eduardo Garcia

    2015-01-01

    The objective of this work was to obtain prevalence estimates of cloudy vaginal mucus in artificially inseminated Holstein cows raised under intense heat, in order to assess the effect of meteorological conditions on its occurrence during estrus and to determine its effect on conception rate. In a first study, an association was established between the occurrence of cloudy vaginal mucus during estrus and the conception rate of inseminated cows (18,620 services), raised under intense heat (mea...

  20. On the formation of coastal polynyas in the area of Commonwealth Bay, Eastern Antarctica

    Science.gov (United States)

    Wendler, Gerd; Gilmore, Dan; Curtis, Jan

    Antarctica's King George V Land and Adélie Land were first explored by Sir Douglas Mawson and his party during their 1911-1913 expedition. They were astounded by the strength of the katabatic wind, which is so dominant in this area. These strong offshore winds can move the sea ice away from shore, forming coastal polynyas, not only in summer but even in midwinter. Poor visibility due to darkness and frequently occurring blowing snow make the study of these polynyas from land-based observations difficult. Recently, coverage of this area by synthetic aperture radar (SAR) satellite imagery, which has a high resolution of 40 m (pixel size 12.5 m), gave additional insight into the characteristics of these polynyas. This high resolution is needed because the width of the polynya is small (10 km or so). Furthermore, of special importance is the fact that SAR data can be obtained during darkness and overcast conditions. Following original Russian work, we modified a simple model for wind-driven coastal polynyas, using actual meteorological data from our coastal automatic weather stations as input. Using mean monthly data for the stations, we show that coastal polynyas are to be expected in the windiest area (Cape Denison-Port Martin); while to the west (Dumont d'Urville) and east (Penguin Point), the average conditions do not produce them. Here, they occur only during strong and long-lasting storms. Our observational data of the polynyas as viewed from SAR and advanced very high resolution radiometer (AVHRR) confirm these findings.

  1. The importance of polynyas, ice edges, and leads to marine mammals and birds

    Science.gov (United States)

    Stirling, Ian

    1997-01-01

    The correlation between areas of open water in ice-covered seas and increased biological productivity has been noted for some time. To date, most attention has been focused on larger polynyas, such as the Northeast Water and the Northwater. Although spectacular in their own right, these large polynyas represent only part of a vitally important continuum of biological productivity that varies significantly between geographic areas and ice habitats, that includes the multi-year pack of the polar ocean and small localized polynyas in annual ice. Surveys of the distribution and abundance of ringed seals in the Canadian Arctic Archipelago have shown differences in density that are correlated with the presence or absence of polynyas. There is also significant variation in the biological productivity of polynya areas of the Canadian High Arctic Archipelago and northern Greenland, all of which receive inflow from the polar basin. Long-term studies of polar bears and ringed seals in western Hudson Bay and the eastern Beaufort Sea show significant but dissimilar patterns of change in condition and reproductive rates between the two regions and suggest that fundamentally different climatic or oceanographic processes may be involved. Projections of climate models suggest that, if warming occurs, then the extent of ice cover in Hudson Bay may be among the first things affected. Long-term studies of polar bears and ringed seals in the eastern Beaufort Sea and Hudson Bay would suggest these two species to be suitable indicators of significant climatic or oceanographic changes in the marine ecosystem.

  2. Sources of iron in the Ross Sea Polynya in early summer

    NARCIS (Netherlands)

    Gerringa, L. J. A.; Laan, P.; van Dijken, G. L.; van Haren, H.; De Baar, H. J. W.; Arrigo, K. R.; Alderkamp, A. -C.

    2015-01-01

    Dissolved Fe (DFe) was measured in the Ross Sea Polynya (RSP), Antarctica, during a GEOTRACES cruise between 20 December 2013 and 5 January 2014. DFe was measured over the full water column with special emphasis on samples near the seafloor. In the upper mixed layer, DFe was very low everywhere (

  3. The evolution of water property in the Mackenzie Bay polynya during Antarctic winter

    Science.gov (United States)

    Xu, Zhixin; Gao, Guoping; Xu, Jianping; Shi, Maochong

    2017-10-01

    Temperature and salinity profiles, collected by southern elephant seals equipped with autonomous CTD-Satellite Relay Data Loggers (CTD-SRDLs) during the Antarctic winters of 2011 and 2012, were used to study the evolution of water properties and the resultant formation of high-density water in the Mackenzie Bay polynya (MBP) in front of the Amery Ice Shelf (AIS). In late March the upper 100-200 m layer is characterized by a strong halocline and a temperature inversion. The mixed layer deepens to 250 m by mid-April, with potential temperature remaining near the surface freezing point and sea surface salinity increasing from 34.00 to 34.21. From then until mid-May, the whole water column stays isothermal at about -1.90 °C while the surface salinity increases by a further 0.23. Thereafter, temperature increases and salinity decreases with depth, each by on the order of 0.1. Using simple 1-D heat and salt budget methods, the upper-ocean heat content was estimated to range from 120.5 to 2.9 MJ m-2, the surface heat loss from 9.8 to 287.0 W m-2, and the sea ice growth rate from 4.3 to 11.7 cm d-1. The MBP persists throughout the Antarctic winter (March to October) owing to air-sea-ice interaction, with an average size of about 5.0×10^3 km2. It can be speculated that the decrease in upper-ocean salinity occurs after October each year. The recurring sea-ice production and the associated brine rejection progressively increase the salinity of the water column in the MBP, eventually resulting in the formation of a large body of high-density water.
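The 1-D budget link between surface heat loss and ice growth that such estimates rest on can be sketched as below. This is a minimal illustration, not the authors' full heat and salt budget; the water and ice salinities in the brine-rejection term are assumed values.

```python
# 1-D freezing budget: heat lost from the polynya surface is balanced by
# latent heat released as sea ice forms, so  dh/dt = Q / (rho_i * L_f).
# Brine rejection then injects salt at  F_s = rho_i * (S_w - S_i)/1000 * dh/dt.

RHO_ICE = 920.0      # kg m^-3
L_FUSION = 3.34e5    # J kg^-1
SECONDS_PER_DAY = 86400.0

def ice_growth_cm_per_day(heat_loss_w_m2: float) -> float:
    """Ice growth rate (cm/day) implied by a given surface heat loss."""
    dhdt = heat_loss_w_m2 / (RHO_ICE * L_FUSION)   # m s^-1
    return dhdt * SECONDS_PER_DAY * 100.0

def salt_flux(heat_loss_w_m2: float, s_water=34.2, s_ice=6.0) -> float:
    """Brine-rejection salt flux (kg m^-2 day^-1); salinities are assumed."""
    dhdt = heat_loss_w_m2 / (RHO_ICE * L_FUSION)
    return RHO_ICE * (s_water - s_ice) * 1e-3 * dhdt * SECONDS_PER_DAY

# The abstract's peak wintertime heat loss of ~287 W m^-2 implies:
growth = ice_growth_cm_per_day(287.0)
print(f"growth: {growth:.1f} cm/day, salt flux: {salt_flux(287.0):.1f} kg/m2/day")
```

A heat loss of 287 W m-2 implies roughly 8 cm of ice growth per day, consistent with the 4.3-11.7 cm d-1 range reported in the abstract.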

  4. Cloudiness and Its Relationship to Saturation Pressure Differences during a Developing East Coast Winter Storm.

    Science.gov (United States)

    Alliss, Randall J.; Raman, Sethu

    1995-11-01

    Cloudiness derived from surface observations and the Geostationary Operational Environmental Satellite VISSR (Visible Infrared Spin Scan Radiometer) Atmospheric Sounder (VAS) are compared with thermodynamic properties derived from upper-air soundings over the Gulf Stream locale during a developing winter storm. The Gulf Stream locale covers the United States mid-Atlantic coastal states, the Gulf Stream, and portions of the Sargasso Sea. Cloudiness is found quite frequently in this region. Cloud-top pressures are derived from VAS using the CO2 slicing technique and a simple threshold procedure. Cloud-base heights and cloud fractions are obtained from National Weather Service hourly reporting stations. The saturation pressure differences, defined as the difference between air parcel pressure and saturation-level pressure (lifted condensation level), are derived from upper-air soundings. Collocated comparisons with VAS and surface observations are also made. Results indicate that cloudiness is observed nearly all of the time during the 6-day period, well above the 8-yr mean. High, middle, and low opaque cloudiness are found approximately equally. Furthermore, of the high- and midlevel cloudiness observed, a considerable amount is determined to be semitransparent to terrestrial radiation. Comparisons of satellite-inferred cloudiness with surface observations indicate that the satellite can complement surface observations of cloud cover, particularly above 700 mb. Surface-observed cloudiness is segregated according to a composite cloud fraction and compared to the mean saturation pressure difference for the 1000-600-mb layer. The analysis suggests that this conserved variable may be a good indicator for estimating cloud fraction. Large negative values of saturation pressure difference correlate highly with clear skies, while those approaching zero correlate with overcast conditions. Scattered and broken cloud fractions are associated with increasing values of the
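The saturation pressure difference can be sketched from a sounding level's pressure, temperature, and dewpoint. This sketch uses Bolton's (1980) approximation for the LCL temperature; the sign convention (saturation-level pressure minus parcel pressure, so subsaturated air is negative) is chosen to match the abstract and may differ from the authors' exact formulation.

```python
import math

KAPPA_INV = 3.48  # ~cp/Rd for dry air, exponent in the Poisson relation

def lcl_temperature(t_k: float, td_k: float) -> float:
    """LCL temperature (K) from Bolton's (1980) empirical formula."""
    return 1.0 / (1.0 / (td_k - 56.0) + math.log(t_k / td_k) / 800.0) + 56.0

def saturation_pressure_difference(p_hpa: float, t_k: float, td_k: float) -> float:
    """Saturation-level pressure minus parcel pressure (hPa).

    Zero for a saturated parcel; increasingly negative for drier air.
    """
    t_lcl = lcl_temperature(t_k, td_k)
    p_lcl = p_hpa * (t_lcl / t_k) ** KAPPA_INV   # dry-adiabatic ascent to the LCL
    return p_lcl - p_hpa

# A subsaturated parcel at 900 hPa, 20 C, with a 10 C dewpoint:
spd = saturation_pressure_difference(900.0, 293.15, 283.15)
print(f"SPD = {spd:.0f} hPa")   # strongly negative, i.e. far from saturation
```

A saturated parcel (dewpoint equal to temperature) gives an SPD of zero, matching the abstract's association of near-zero values with overcast conditions.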

  5. Conception rate of artificially inseminated Holstein cows affected by cloudy vaginal mucus, under intense heat conditions

    Directory of Open Access Journals (Sweden)

    Miguel Mellado

    2015-06-01

    The objective of this work was to obtain prevalence estimates of cloudy vaginal mucus in artificially inseminated Holstein cows raised under intense heat, in order to assess the effect of meteorological conditions on its occurrence during estrus and to determine its effect on conception rate. In a first study, an association was established between the occurrence of cloudy vaginal mucus during estrus and the conception rate of inseminated cows (18,620 services) raised under intense heat (mean annual temperature of 22°C) at highly technified farms in the arid region of northern Mexico. In a second study, data from these large dairy operations were used to assess the effect of meteorological conditions throughout the year on the occurrence of cloudy vaginal mucus during artificial insemination (76,899 estruses). The overall rate of estruses with cloudy vaginal mucus was 21.4% (16,470/76,899; 95% confidence interval = 21.1-21.7%). The conception rate of cows with clean vaginal mucus was higher than that of cows with abnormal mucus (30.6 vs. 22%). The prevalence of estruses with cloudy vaginal mucus was strongly dependent on high ambient temperature and was markedly higher in May and June. Acceptable conception rates in high milk-yielding Holstein cows can only be obtained with cows showing clear and translucent mucus at artificial insemination.

  6. Modeling of Dense Water Production and Salt Transport from Alaskan Coastal Polynyas

    Science.gov (United States)

    Signorini, Sergio R.; Cavalieri, Donald J.

    2000-01-01

    The main significance of this paper is that a realistic, three-dimensional, high-resolution primitive equation model has been developed to study the effects of dense water formation in Arctic coastal polynyas. The model includes realistic ambient stratification, realistic bottom topography, and is forced by time-variant surface heat flux, surface salt flux, and time-dependent coastal flow. The salt and heat fluxes, and the surface ice drift, are derived from satellite observations (SSM/I and NSCAT sensors). The model is used to study the stratification, salt transport, and circulation in the vicinity of Barrow Canyon during the 1996/97 winter season. The coastal flow (Alaska coastal current), which is an extension of the Bering Sea throughflow, is formulated in the model using the wind-transport regression. The results show that for the 1996/97 winter the northeastward coastal current exports 13% to 26% of the salt produced by coastal polynyas upstream of Barrow Canyon in 20 to 30 days. The salt export occurs more rapidly during less persistent polynyas. The inclusion of ice-water stress in the model makes the coastal current slightly weaker and much wider due to the combined effects of surface drag and offshore Ekman transport.

  7. Brief communication : Impacts of a developing polynya off Commonwealth Bay, East Antarctica, triggered by grounding of iceberg B09B

    NARCIS (Netherlands)

    Fogwill, Christopher J.; Van Sebille, Erik; Cougnon, Eva A.; Turney, Chris S M; Rintoul, Steve R.; Galton-Fenzi, Benjamin K.; Clark, Graeme F.; Marzinelli, E. M.; Rainsley, Eleanor B.; Carter, Lionel

    2016-01-01

    The dramatic calving of the Mertz Glacier tongue in 2010, precipitated by the movement of iceberg B09B, reshaped the oceanographic regime across the Mertz Polynya and Commonwealth Bay, regions where high-salinity shelf water (HSSW) - the precursor to Antarctic bottom water (AABW) - is formed. Here

  8. Key role of organic complexation of iron in sustaining phytoplankton blooms in the Pine Island and Amundsen Polynyas (Southern Ocean)

    NARCIS (Netherlands)

    Thuroczy, Charles-Edouard; Alderkamp, Anne-Carlijn; Laan, Patrick; Gerringa, Loes J. A.; Mills, Matthew M.; Van Dijken, Gert L.; De Baar, Hein J. W.; Arrigo, Kevin R.

    2012-01-01

    Primary productivity in the Amundsen Sea (Southern Ocean) is among the highest in Antarctica. The summer phytoplankton bloom in 2009 lasted for > 70 days in both the Pine Island and Amundsen Polynyas. Such productive blooms require a large supply of nutrients, including the trace metal iron (Fe).

  9. Quantitative analysis of night skyglow amplification under cloudy conditions

    Science.gov (United States)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio

    2014-10-01

    The radiance produced by artificial light is a major source of nighttime over-illumination. It can, however, be treated experimentally using ground-based and satellite data. These two types of data complement each other and together have a high information content. For instance, the satellite data enable upward light emissions to be normalized, and this in turn allows skyglow levels at the ground to be modelled under cloudy or overcast conditions. Excessive night lighting imposes an unacceptable burden on nature, humans and professional astronomy. For this reason, there is a pressing need to determine the total amount of downwelling diffuse radiation. Undoubtedly, cloudy periods can cause a significant increase in skyglow as a result of amplification owing to diffuse reflection from clouds. While it is recognized that the amplification factor (AF) varies with cloud cover, the effects of different types of clouds, of atmospheric turbidity and of the geometrical relationships between the positions of an individual observer, the cloud layer, and the light source are in general poorly known. In this paper the AF is quantitatively analysed considering different aerosol optical depths (AODs), urban layout sizes and cloud types with specific albedos and altitudes. The computational results show that the AF peaks near the edges of a city rather than at its centre. In addition, the AF appears to be a decreasing function of AOD, which is particularly important when modelling the skyglow in regions with apparent temporal or seasonal variability of atmospheric turbidity. The findings in this paper will be useful to those designing engineering applications or modelling light pollution, as well as to astronomers and environmental scientists who aim to predict the amplification of skyglow caused by clouds. In addition, the semi-analytical formulae can be used to estimate the AF levels, especially in densely populated metropolitan regions for which detailed computations may be CPU-intensive.

  10. Extensions and applications of the Cloudy Bag Model

    International Nuclear Information System (INIS)

    Morgan, M.A.

    1984-01-01

    Three separate calculations involving the Cloudy Bag Model (CBM) of physical baryons are presented. First, two methods are used to investigate higher-order corrections to the self-energy of the nucleon. Both methods are found to yield self-energies which are less negative than the standard second-order perturbation theory. The second calculation is a correction to the predictions for baryon magnetic moments in the volume-coupling version of the CBM. The correction is due to an extra term in the electromagnetic current and is found to be no larger than other theoretical uncertainties, such as those due to the motion of the center of mass. The last calculation is an estimate of the electric dipole moment (EDM) of the neutron. A parity- and time-violating quark-pion interaction motivated by QCD is added to the CBM Lagrangian. The CBM is a natural model to use in this calculation since it includes the effects of both quarks and pions, which have until now been employed separately in QCD-motivated calculations of the neutron EDM.

  11. A Neural Network Based Intelligent Predictive Sensor for Cloudiness, Solar Radiation and Air Temperature

    Science.gov (United States)

    Ferreira, Pedro M.; Gomes, João M.; Martins, Igor A. C.; Ruano, António E.

    2012-01-01

    Accurate measurements of global solar radiation and atmospheric temperature, as well as the availability of predictions of their evolution over time, are important for different areas of application, such as agriculture, renewable energy and energy management, or thermal comfort in buildings. For this reason, an intelligent, lightweight and portable sensor was developed, using artificial neural network models as the time-series predictor mechanisms. These have been identified with the aid of a procedure based on a multi-objective genetic algorithm. As cloudiness is the most significant factor affecting the solar radiation reaching a particular location on the Earth's surface, it has great impact on the performance of predictive solar radiation models for that location. This work also represents one step towards the improvement of such models by using ground-to-sky hemispherical colour digital images as a means to estimate cloudiness by the fraction of visible sky corresponding to clouds and to clear sky. The implementation of the predictive models in the prototype has been validated, and the system is able to function reliably, providing measurements and four-hour forecasts of cloudiness, solar radiation and air temperature. PMID:23202230
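Estimating cloudiness as the fraction of sky pixels classified as cloud is commonly done with a red-to-blue ratio threshold: clear sky is strongly blue, while clouds scatter red and blue almost equally. The sketch below illustrates that general idea only; the threshold and the toy pixel values are assumptions, not the authors' actual classifier.

```python
# Cloud fraction from a hemispherical RGB sky image: pixels whose red/blue
# ratio exceeds a threshold are counted as cloud. Clear sky is blue-dominant
# (low ratio); clouds are nearly white or grey (ratio near 1).
# The 0.7 threshold and the toy "image" below are illustrative assumptions.

def cloud_fraction(pixels, threshold=0.7):
    """pixels: iterable of (r, g, b) tuples; returns cloudy fraction in [0, 1]."""
    cloudy = total = 0
    for r, g, b in pixels:
        if b == 0:              # guard against division by zero
            continue
        total += 1
        if r / b > threshold:   # red close to blue -> cloud
            cloudy += 1
    return cloudy / total if total else 0.0

# Toy image: 6 clear-sky pixels (blue-dominant) and 4 cloud pixels (grey).
sky = [(60, 110, 200)] * 6 + [(180, 180, 185)] * 4
print(f"cloud fraction: {cloud_fraction(sky):.1f}")
```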

  12. A combined approach of remote sensing and airborne electromagnetics to determine the volume of polynya sea ice in the Laptev Sea

    Directory of Open Access Journals (Sweden)

    L. Rabenstein

    2013-06-01

    A combined interpretation of synthetic aperture radar (SAR) satellite images and helicopter electromagnetic (HEM) sea-ice thickness data has provided an estimate of the sea-ice volume formed in Laptev Sea polynyas during the winter of 2007/08. The evolution of the surveyed sea-ice areas, which formed between late December 2007 and mid-April 2008, was tracked using a series of SAR images with a sampling interval of 2-3 days. Approximately 160 km of HEM data recorded in April 2008 provided sea-ice thicknesses along profiles that transected sea ice varying in age from 1 to 116 days. For the volume estimates, thickness information along the HEM profiles was extrapolated to zones of the same age. The error of the areal mean thickness was estimated to be between 0.2 m for younger ice and up to 1.55 m for older ice, with the primary error source being the spatially limited HEM coverage. Our results demonstrate that the modal and mean thicknesses of level ice correlated with sea-ice age, but that varying dynamic and thermodynamic sea-ice growth conditions resulted in a rather heterogeneous sea-ice thickness distribution on scales of tens of kilometers. Taking all uncertainties into account, the total sea-ice area and volume produced within the entire surveyed area were 52 650 km2 and 93.6 ± 26.6 km3. The surveyed polynya contributed 2.0 ± 0.5% of the sea ice produced throughout the Arctic during the 2007/08 winter. The SAR-HEM volume estimate compares well with the 112 km3 ice production calculated with a high-resolution ocean sea-ice model. Measured modal and mean level-ice thicknesses correlate with calculated freezing-degree-day thicknesses with a factor of 0.87-0.89, which is too low to justify the assumption of homogeneous thermodynamic growth conditions in the area, or indicates strong dynamic thickening of level ice by rafting of even thicker ice.
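Freezing-degree-day thicknesses of the kind compared above map accumulated freezing degree days onto thermodynamic ice growth. The sketch below uses Lebedev's classic empirical fit as one such relation; the choice of fit and the air temperatures are assumptions for illustration, not the paper's data.

```python
# Thermodynamic ice growth from accumulated freezing degree days (FDD):
#   FDD = sum over days of max(0, T_freeze - T_air),
# with T_freeze ~ -1.9 C for seawater. Lebedev's empirical fit then gives
# level-ice thickness in centimetres:  h = 1.33 * FDD ** 0.58

T_FREEZE_C = -1.9  # freezing point of seawater, deg C

def freezing_degree_days(daily_air_temps_c):
    """Accumulate degree days below the seawater freezing point."""
    return sum(max(0.0, T_FREEZE_C - t) for t in daily_air_temps_c)

def lebedev_thickness_cm(fdd: float) -> float:
    """Level-ice thickness (cm) from Lebedev's empirical relation."""
    return 1.33 * fdd ** 0.58

# Illustrative winter: 100 days at a constant -22 C air temperature.
fdd = freezing_degree_days([-22.0] * 100)
h = lebedev_thickness_cm(fdd)
print(f"FDD = {fdd:.0f} deg-days, thickness = {h:.0f} cm")
```

The sub-linear exponent captures why older ice grows ever more slowly: thicker ice insulates the ocean from the cold atmosphere.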

  13. Laser experiments in light cloudiness with the geostationary satellite ARTEMIS

    Science.gov (United States)

    Kuzkov, V.; Kuzkov, S.; Sodnik, Z.

    2016-08-01

    The geostationary satellite ARTEMIS was launched in July 2001. The satellite is equipped with a laser communication terminal, which was used for the world's first inter-satellite laser communication link between ARTEMIS and the low earth orbit satellite SPOT-4. Ground-to-space laser communication experiments were also conducted under various atmospheric conditions involving ESA's optical ground station. With a rapidly increasing volume of information transferred by geostationary satellites, there is a rising demand for high-speed data links between ground stations and satellites. For ground-to-space laser communications there are a number of important design parameters that need to be addressed, among them the influence of atmospheric turbulence in different atmospheric conditions and link geometries. The Main Astronomical Observatory of NAS of Ukraine developed a precise computer tracking system for its 0.7 m AZT-2 telescope and a compact laser communication package LACES (Laser Atmosphere and Communication experiments with Satellites) for laser communication experiments with geostationary satellites. The specially developed software allows computerized tracking of the satellites using their orbital data. A number of laser experiments between MAO and ARTEMIS were conducted in partial cloudiness, with some of the laser light observed through clouds. Such conditions caused strong break-up (splitting) of the images of the laser beacon of ARTEMIS. One possible explanation is Raman scattering of photons on molecules of water vapor in the atmosphere. Raman scattering causes a shift in the wavelength of the photons. In addition, a different value of the refractive index applies in the direction of the meridian for the wavelength-shifted photons. This is similar to the anomalous atmospheric refraction that appears at low angular altitudes above the horizon. We have also estimated the atmospheric attenuation and the influence of atmospheric turbulence on the observed results.

  14. Cloudy's Journey from FORTRAN to C, Why and How

    Science.gov (United States)

    Ferland, G. J.

    Cloudy is a large-scale plasma simulation code that is widely used across the astronomical community as an aid in the interpretation of spectroscopic data. The cover of the ADAS VI book featured predictions of the code. The FORTRAN 77 source code has always been freely available on the Internet, contributing to its widespread use. The coming of PCs and Linux has fundamentally changed the computing environment. Modern Fortran compilers (F90 and F95) are not freely available. A common-use code must be written in either FORTRAN 77 or C to be Open Source/GNU/Linux friendly. F77 has serious drawbacks - modern language constructs cannot be used, students do not have skills in this language, and it does not contribute to their future employability. It became clear that the code would have to be ported to C to have a viable future. I describe the approach I used to convert Cloudy from FORTRAN 77 with MILSPEC extensions to ANSI/ISO C89. Cloudy is now openly available as a C code, and will evolve to C++ as gcc and standard C++ mature. Cloudy looks to a bright future with a modern language.

  15. The neutron electric dipole moment in the cloudy bag model

    International Nuclear Information System (INIS)

    Morgan, M.A.; Miller, G.A.

    1986-01-01

    An evaluation of the neutron electric dipole moment (NEDM), using the cloudy bag model (CBM) shows that two CP-violating effects (a quark mass term and a pion-quark interaction) have contributions that are about equal in magnitude, but opposite in sign. This cancellation allows the upper limit on the θ parameter to increase by about an order of magnitude. (orig.)

  16. Simulations of the broad line region of NGC 5548 with CLOUDY code: Temperature determination

    Directory of Open Access Journals (Sweden)

    Ilić D.

    2007-01-01

    In this paper an analysis of the physical properties of the Broad Line Region (BLR) of the active galaxy NGC 5548 is presented. Using the photoionization code CLOUDY and the measurements of Peterson et al. (2002), the physical conditions of the BLR are simulated and the BLR temperature is obtained. This temperature was compared to the temperature estimated with the Boltzmann-Plot (BP) method (Popović et al. 2007). It was shown that the measured variability in the BLR temperature could be due to changes in the hydrogen density.

  17. Properties of the cloudy bag in nuclear matter

    International Nuclear Information System (INIS)

    Bunatyan, G.G.

    1986-01-01

    Because of pion-mode softening, the pion field of the cloudy bag in nuclear matter increases with increasing nuclear density. This in turn decreases the bag size and, at sufficiently high nuclear density, leads to absolute instability of the cloudy-bag nucleon, which signals a transition of the nuclear matter into another, non-nucleonic phase.

  18. Reconstruction of MODIS Spectral Reflectance under Cloudy-Sky Condition

    Directory of Open Access Journals (Sweden)

    Bo Gao

    2016-09-01

    Clouds usually cause invalid observations for sensors aboard satellites, which corrupts the spatio-temporal continuity of land surface parameters retrieved from remote sensing data (e.g., MODerate Resolution Imaging Spectroradiometer (MODIS) data) and prevents the fusing of multi-source remote sensing data in the field of quantitative remote sensing. Motivated by the requirement of spatio-temporal continuity and the need for methods to restore bad pixels, primarily resulting from image processing, this study developed a novel method to derive the spectral reflectance of cloudy pixels in the visible-near infrared (VIS-NIR) MODIS bands based on the Bidirectional Reflectance Distribution Function (BRDF) and multi-spatio-temporal observations. The proposed method first constructs the spatial distribution of land surface reflectance based on the corresponding BRDF and the solar-viewing geometry; then, a geographically weighted regression (GWR) is introduced to derive the spectral surface reflectance of individual cloudy pixels. A validation of the proposed method shows a total root-mean-square error (RMSE) of less than 6% and a total R2 of more than 90%, indicating considerably better precision than that exhibited by other existing methods. Further validation of the white-sky albedo retrieved from the reconstructed reflectance of cloudy pixels confirms an RMSE of 3.6% and a bias of 2.2%, demonstrating the very high accuracy of the proposed method.
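Geographically weighted regression fits a separate locally weighted least-squares regression at each target location, down-weighting distant observations with a spatial kernel. A minimal numpy sketch of the idea follows; the Gaussian kernel, bandwidth, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gwr_predict(xy_obs, X_obs, y_obs, xy_target, x_target, bandwidth=1.0):
    """Geographically weighted regression prediction at one target point.

    xy_obs: (n, 2) observation coordinates; X_obs: (n, p) design matrix
    (first column of ones for the intercept); y_obs: (n,) responses.
    Observations are down-weighted by a Gaussian kernel on their distance
    to the target, then fitted by weighted least squares.
    """
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian spatial kernel
    A = X_obs.T @ (X_obs * w[:, None])             # X^T W X
    b = X_obs.T @ (w * y_obs)                      # X^T W y
    beta = np.linalg.solve(A, b)                   # local coefficients
    return float(x_target @ beta)

# Toy demo: "reflectance" depends linearly on one predictor (y = 2 + 3x),
# so the local fit at any target should recover that relation.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(200, 2))
x = rng.uniform(size=200)
X = np.column_stack([np.ones(200), x])
y = 2.0 + 3.0 * x
pred = gwr_predict(xy, X, y, xy_target=np.array([5.0, 5.0]),
                   x_target=np.array([1.0, 0.5]))
print(f"predicted reflectance: {pred:.3f}")   # exact linear field -> 2 + 3*0.5
```

In a real application the coefficients would vary in space, and the kernel lets each cloudy pixel borrow strength mainly from its clear-sky neighbours.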

  19. Changes in extratropical storm track cloudiness 1983-2008: observational support for a poleward shift

    Energy Technology Data Exchange (ETDEWEB)

    Bender, Frida A.M.; Ramanathan, V. [University of California, Center for Clouds, Chemistry and Climate (C4), Scripps Institution of Oceanography, San Diego, La Jolla, CA (United States); Tselioudis, George [Columbia University, NASA Goddard Institute for Space Studies, New York, NY (United States)

    2012-05-15

    Climate model simulations suggest that the extratropical storm tracks will shift poleward as a consequence of global warming. In this study the northern and southern hemisphere storm tracks over the Pacific and Atlantic ocean basins are studied using observational data, primarily from the International Satellite Cloud Climatology Project, ISCCP. Potential shifts in the storm tracks are examined using the observed cloud structures as proxies for cyclone activity. Different data analysis methods are employed, with the objective to address difficulties and uncertainties in using ISCCP data for regional trend analysis. In particular, three data filtering techniques are explored; excluding specific problematic regions from the analysis, regressing out a spurious viewing geometry effect, and excluding specific cloud types from the analysis. These adjustments all, to varying degree, moderate the cloud trends in the original data but leave the qualitative aspects of those trends largely unaffected. Therefore, our analysis suggests that ISCCP data can be used to interpret regional trends in cloudiness, provided that data and instrumental artefacts are recognized and accounted for. The variation in magnitude between trends emerging from application of different data correction methods, allows us to estimate possible ranges for the observational changes. It is found that the storm tracks, here represented by the extent of the midlatitude-centered band of maximum cloud cover over the studied ocean basins, experience a poleward shift as well as a narrowing over the 25 year period covered by ISCCP. The observed magnitudes of these effects are larger than in current generation climate models (CMIP3). The magnitude of the shift is particularly large in the northern hemisphere Atlantic. This is also the one of the four regions in which imperfect data primarily prevents us from drawing firm conclusions. 
The shifted path and reduced extent of the storm track cloudiness is accompanied

  20. Changes in Extratropical Storm Track Cloudiness 1983-2008: Observational Support for a Poleward Shift

    Science.gov (United States)

    Bender, Frida A-M.; Ramanathan, V.; Tselioudis, G.

    2012-01-01

    Climate model simulations suggest that the extratropical storm tracks will shift poleward as a consequence of global warming. In this study the northern and southern hemisphere storm tracks over the Pacific and Atlantic ocean basins are studied using observational data, primarily from the International Satellite Cloud Climatology Project, ISCCP. Potential shifts in the storm tracks are examined using the observed cloud structures as proxies for cyclone activity. Different data analysis methods are employed, with the objective to address difficulties and uncertainties in using ISCCP data for regional trend analysis. In particular, three data filtering techniques are explored; excluding specific problematic regions from the analysis, regressing out a spurious viewing geometry effect, and excluding specific cloud types from the analysis. These adjustments all, to varying degree, moderate the cloud trends in the original data but leave the qualitative aspects of those trends largely unaffected. Therefore, our analysis suggests that ISCCP data can be used to interpret regional trends in cloudiness, provided that data and instrumental artefacts are recognized and accounted for. The variation in magnitude between trends emerging from application of different data correction methods, allows us to estimate possible ranges for the observational changes. It is found that the storm tracks, here represented by the extent of the midlatitude-centered band of maximum cloud cover over the studied ocean basins, experience a poleward shift as well as a narrowing over the 25 year period covered by ISCCP. The observed magnitudes of these effects are larger than in current generation climate models (CMIP3). The magnitude of the shift is particularly large in the northern hemisphere Atlantic. This is also the one of the four regions in which imperfect data primarily prevents us from drawing firm conclusions. 
The shifted path and reduced extent of the storm track cloudiness is accompanied

  1. Kiloniella antarctica sp. nov., isolated from a polynya of Amundsen Sea in Western Antarctic Sea.

    Science.gov (United States)

    Si, Ok-Ja; Yang, Hye-Young; Hwang, Chung Yeon; Kim, So-Jeong; Choi, Sun-Bin; Kim, Jong-Geol; Jung, Man-Young; Kim, Song-Gun; Roh, Seong Woon; Rhee, Sung-Keun

    2017-07-01

    A taxonomic study was conducted on strain soj2014T, which was isolated from the surface water of a polynya in the Antarctic Sea. Comparative 16S rRNA gene sequence analysis showed that strain soj2014T belongs to the family Kiloniellaceae and is closely related to Kiloniella spongiae MEBiC09566T, 'Kiloniella litopenaei' P1-1T and Kiloniella laminariae LD81T (98.0 %, 97.8 % and 96.2 % 16S rRNA gene sequence similarity, respectively). The DNA-DNA hybridization values between strain soj2014T and closely related strains were below 28.6 %. The G+C content of the genomic DNA of strain soj2014T was 45.5 mol%. The predominant cellular fatty acids were summed feature 8 (composed of C18 : 1ω6c/C18 : 1ω7c, 57.0 %) and summed feature 3 (composed of C16 : 1ω6c/C16 : 1ω7c, 23.5 %). Strain soj2014T was Gram-stain-negative, slightly curved, spiral-shaped, and motile with a single polar flagellum. The strain grew at 0-30 °C (optimum, 25 °C), in 1.5-5.1 % (w/v) NaCl (optimum, 2.1-2.4 %) and at pH 5.5-9.5 (optimum, 7.5-8.0). It also had differential carbohydrate utilization traits and enzyme activities compared with closely related strains. Based on these phylogenetic, phenotypic and chemotaxonomic analyses, strain soj2014T represents a distinct species, separable from the reference strains, and is, therefore, proposed as a novel species, Kiloniella antarctica sp. nov. The type strain is soj2014T (=KCTC 42186T=JCM 30386T).

  2. Mercury and other trace elements in a pelagic Arctic marine food web (Northwater Polynya, Baffin Bay)

    International Nuclear Information System (INIS)

    Campbell, Linda M.; Norstrom, Ross J.; Hobson, Keith A.; Muir, Derek C.G.; Backus, Sean; Fisk, Aaron T.

    2005-01-01

    Total mercury (THg), methylmercury (MeHg) and 22 other trace elements were measured in ice algae, three species of zooplankton, mixed zooplankton samples, Arctic cod (Boreogadus saida), ringed seals (Phoca hispida) and eight species of seabirds to examine the trophodynamics of these metals in an Arctic marine food web. All samples were collected in 1998 in the Northwater Polynya (NOW) located between Ellesmere Island and Greenland in Baffin Bay. THg and MeHg were found to biomagnify through the NOW food web, based on significant positive relationships between log THg and log MeHg concentrations vs. δ15N in muscle and liver. The slopes of these relationships for muscle THg and MeHg concentrations (0.197 and 0.223, respectively) were similar to those reported for other aquatic food webs. The food-web behavior of THg and δ15N appears constant regardless of the trophic state (eutrophic vs. oligotrophic), latitude (Arctic vs. tropical) or salinity (marine vs. freshwater) of the ecosystem. Rb in both liver and muscle tissue and Zn in muscle tissue were also found to biomagnify through this food web, although at a rate approximately 25% of that of THg. A number of elements (Cd, Pb and Ni in muscle tissue and Cd and Li in seabird liver tissue) were found to decrease trophically through the food web, as indicated by significantly negative relationships with tissue-specific δ15N. A diverse group of metals (Ag, Ba, La, Li, Sb, Sr, U and V) had higher concentrations in zooplankton than in seabirds or marine mammals due to bioconcentration from seawater. The remaining metals (As, Co, Cu, Ga, Mn, Mo and Se in muscle tissue) showed no relationship with trophic position as indicated by δ15N values, although As in liver tissue showed significant biomagnification in the seabird portion of the food web.
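Biomagnification slopes of this kind are ordinary least-squares fits of log10 concentration against δ15N, with a positive slope indicating enrichment up the food web. A sketch with fabricated illustrative numbers (not data from the NOW study) follows:

```python
# Biomagnification through a food web is quantified as the slope b of
#   log10([THg]) = a + b * d15N
# where d15N (per mil) is a proxy for trophic level; b > 0 means the
# metal biomagnifies. The data points below are fabricated for illustration.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

d15n = [7.0, 9.5, 12.0, 14.5, 17.0]       # per mil, algae up to seals
log_thg = [-1.6, -1.1, -0.6, -0.1, 0.4]   # log10(ug/g), fabricated values

b = ols_slope(d15n, log_thg)
print(f"trophic magnification slope: {b:.2f}")
```

The fabricated points were chosen to give a slope of 0.2 per ‰, the same order as the muscle slopes of 0.197 and 0.223 reported in the abstract.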

  3. Recent changes in solar irradiance and infrared irradiance related with air temperature and cloudiness at the King Sejong Station, Antarctica

    Science.gov (United States)

    Jung, Y.; Kim, J.; Cho, H.; Lee, B.

    2006-12-01

    The polar regions play a critical role in the surface energy balance and the climate system of the Earth. An important question for the region is what role the Antarctic atmospheric heat sink plays in global climate. This study therefore presents the trends of global solar irradiance, infrared irradiance, air temperature and cloudiness measured at the King Sejong station, Antarctica, during 1996-2004, and determines their relationships and the variability of the surface energy balance. The annual averages of solar radiation and cloudiness are 81.8 W m-2 and 6.8 oktas, with trends of -0.24 W m-2 yr-1 (-0.30 % yr-1) and 0.02 oktas yr-1 (0.30 % yr-1). The change of solar irradiance is directly related to the change of cloudiness, and the decrease of solar irradiance represents radiative cooling at the surface. Monthly mean infrared irradiance, air temperature and specific humidity show decreases of -2.11 W m-2 yr-1 (-0.75 % yr-1), -0.07 °C yr-1 (-5.15 % yr-1) and -0.044 g kg-1 yr-1 (-1.42 % yr-1), respectively. The annual average of the infrared irradiance is 279.9 W m-2, and it is correlated with air temperature, specific humidity and cloudiness. A multiple regression model for estimating the infrared irradiance from these components has been developed. The contributions of the components to infrared irradiance changes are 52 %, 19 % and 10 % for air temperature, specific humidity and cloudiness, respectively; among them, air temperature has the greatest influence. Despite the increase of cloudiness, the decrease in infrared irradiance is due to the decreases of air temperature and specific humidity, which have a cooling effect. The net radiation of the surface energy balance therefore shows radiative cooling of 11-24 W m-2 during winter and radiative warming of 32-83 W m-2 during summer, with the surface deficit and surplus mostly balanced by turbulent fluxes of sensible and latent heat.
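A multiple regression model of the kind described, infrared irradiance regressed on air temperature, specific humidity, and cloudiness, can be sketched with a least-squares fit. The data below are synthetic, generated from assumed sensitivities purely to show the fitting step; they are not the King Sejong station record.

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Least-squares coefficients for y = X @ beta (X includes an intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic "station record": longwave irradiance built from assumed
# sensitivities (4 W m^-2 per deg C, 8 W m^-2 per g kg^-1, 3 W m^-2 per
# okta) plus measurement noise. All values are illustrative assumptions.
rng = np.random.default_rng(1)
n = 500
t_air = rng.normal(-2.0, 5.0, n)     # air temperature, deg C
q = rng.normal(3.0, 0.8, n)          # specific humidity, g kg^-1
cloud = rng.uniform(0.0, 8.0, n)     # cloudiness, oktas
lw = 240.0 + 4.0 * t_air + 8.0 * q + 3.0 * cloud + rng.normal(0.0, 2.0, n)

X = np.column_stack([np.ones(n), t_air, q, cloud])
beta = fit_multiple_regression(X, lw)
print("intercept and T, q, cloud sensitivities:", np.round(beta, 2))
```

Recovering the assumed sensitivities from noisy data is exactly the step that lets the relative contributions of temperature, humidity, and cloudiness be compared.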

  4. Impact of the large-scale Arctic circulation and the North Water Polynya on nutrient inventories in Baffin Bay

    Science.gov (United States)

    Tremblay, Jean-Éric; Gratton, Yves; Carmack, Eddy C.; Payne, Christopher D.; Price, Neil M.

    2002-08-01

    The distributions of nitrate, phosphate, and silicate in northern Baffin Bay were determined from 90 bottle casts taken between April 11 and July 21, 1998. During late spring, low-salinity Arctic water entered northern Smith Sound and mixed with Baffin Bay water (BBW) within the North Water Polynya. The Arctic water originated from the Bering Sea and contained high concentrations of phosphate and silicate (referred to as silicate-rich Arctic water (SRAW)). The distribution of the two water masses was established using a new tracer, Siex, which showed moderate penetration of SRAW into Smith Sound during April and a very strong incursion in May and June, consistent with the intensification of southward current velocities. Biological depletion of macronutrients in BBW began in April and continued until nitrate was exhausted from the upper mixed layer in early June. Beneath the Polynya the deep waters (>450 m) showed a marked increase in nutrient concentration toward the bottom, which was most pronounced in the south and much stronger for silicate than nitrate and phosphate. The silicate enrichment was consistent with dissolution of diatom-derived biogenic silica in deep waters. The results indicate that the North Water acts as a silicate trap in which the biota differentially transports surface silicate to depth, thereby influencing local and downstream nutrient signatures.

  5. Opalescent and cloudy fruit juices: formation and particle stability.

    Science.gov (United States)

    Beveridge, Tom

    2002-07-01

    Cloudy fruit juices, particularly from tropical fruit, are becoming a fast-growing part of the fruit juice sector. The classification of cloud into coarse and fine fractions by centrifugation, and the composition of cloud from apple, pineapple, orange, guava, and lemon juice, are described. Fine particulate is shown to be the true stable cloud and to contain considerable protein, carbohydrate, and lipid components; often, tannin is present as well. The fine cloud probably arises from cell membranes and appears not to be simply cell debris. Factors relating to the stability of fruit juice cloud, including particle size, size distribution, and density, are described and discussed. Factors promoting stable cloud in juice are presented.

  6. Cloudiness over the Amazon rainforest: Meteorology and thermodynamics

    Science.gov (United States)

    Collow, Allison B. Marquardt; Miller, Mark A.; Trabachino, Lynne C.

    2016-07-01

    Comprehensive meteorological observations collected during GOAmazon2014/15 using the Atmospheric Radiation Measurement Mobile Facility no. 1 and assimilated observations from the Modern-Era Retrospective Analysis for Research and Applications, Version 2 are used to document the seasonal cycle of cloudiness, thermodynamics, and precipitation above the Amazon rainforest. The reversal of synoptic-scale vertical motions modulates the transition between the wet and dry seasons. Ascending moist air during the wet season originates near the surface of the Atlantic Ocean and is advected into the Amazon rainforest, where it experiences convergence and, ultimately, precipitates. The dry season is characterized by weaker winds and synoptic-scale subsidence with little or no moisture convergence accompanying moisture advection. This combination results in the drying of the midtroposphere during June through October as indicated by a decrease in liquid water path, integrated water, and the vertical profile of water vapor mixing ratio. The vertical profile of cloud fraction exhibits a relatively consistent decline in cloud fraction from the lifting condensation level (LCL) to the freezing level where a minimum is observed, unlike many other tropical regions. Coefficients of determination between the LCL and cloud fractional coverage suggest a relatively robust relationship between the LCL and cloudiness beneath 5 km during the dry season (R2 = 0.42) but a weak relationship during the wet season (0.12).

  7. Distribution of dissolved labile and particulate iron and copper in Terra Nova Bay polynya (Ross Sea, Antarctica) surface waters in relation to nutrients and phytoplankton growth

    Science.gov (United States)

    Rivaro, Paola; Ianni, Carmela; Massolo, Serena; Abelmoschi, M. Luisa; De Vittor, Cinzia; Frache, Roberto

    2011-05-01

    The distribution of the dissolved labile and particulate Fe and Cu, together with dissolved oxygen, nutrients, chlorophyll a and total particulate matter, was investigated in the surface waters of the Terra Nova Bay polynya in mid-January 2003. The measurements were conducted within the framework of the Italian Climatic Long-term Interactions of the Mass balance in Antarctica (CLIMA) Project activities. The labile dissolved fraction was operationally defined by employing the chelating resin Chelex-100, which retains free and loosely bound trace metal species. The dissolved labile Fe ranged from below the detection limit (0.15 nM) to 3.71 nM, and the dissolved labile Cu from below the detection limit (0.10 nM) to 0.90 nM. The lowest concentrations for both metals were observed at 20 m depth (the shallowest depth at which metals were measured). The concentration of particulate Fe was about 5 times higher than the dissolved Fe concentration, ranging from 0.56 to 24.83 nM with an average of 6.45 nM. The concentration of particulate Cu ranged from 0.01 to 0.71 nM with an average of 0.17 nM. The values are in agreement with previous data collected in the same area. We evaluated the role of Fe and Cu as biolimiting metals. Whether the N:dissolved labile Fe ratios (18,900-130,666) allow complete nitrate removal depends on the N:Fe requirement ratios that we calculated from the N:P and C:P ratios estimated for diatoms. This finding partially agrees with the Si:N ratio that we found (2.29). Moreover, we considered a possible influence of the dissolved labile Cu on the Fe uptake process.

  8. Simulation of snow distribution and melt under cloudy conditions in an Alpine watershed

    Directory of Open Access Journals (Sweden)

    H.-Y. Li

    2011-07-01

    An energy balance method and remote-sensing data were used to simulate snow distribution and melt in an alpine watershed in northwestern China over a complete snow accumulation-melt period. The spatial energy budgets were simulated using meteorological observations and a digital elevation model of the watershed. A linear interpolation method was used to estimate the daily snow cover area under cloudy conditions, using Moderate Resolution Imaging Spectroradiometer (MODIS) data. Hourly snow distribution and melt, snow cover extent and daily discharge were included in the simulated results. The root mean square error between the measured snow water equivalent samples and the simulated results is 3.2 cm. The Nash-Sutcliffe efficiency statistic (NSE) between the measured and simulated discharges is 0.673, and the volume difference (Dv) is 3.9 %. Using the method introduced in this article, modelling spatial snow distribution and melt runoff becomes relatively convenient.
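    The cloud gap-filling idea in this record can be sketched as a linear interpolation between the nearest clear-sky retrievals. The day numbers and snow-cover values below are invented for illustration, not the study's MODIS data.

    ```python
    import numpy as np

    # Daily snow-cover area (SCA, km^2) from MODIS is only usable on clear
    # days; cloudy days (NaN) are filled by linear interpolation between
    # the nearest clear-sky observations. Synthetic values only.
    days = np.arange(1, 11)  # day of the melt period
    sca = np.array([40.0, np.nan, np.nan, 31.0, np.nan,
                    25.0, np.nan, np.nan, np.nan, 10.0])

    clear = ~np.isnan(sca)  # days with a usable clear-sky retrieval
    sca_filled = np.interp(days, days[clear], sca[clear])
    print(sca_filled)
    ```

    Days 2-3 are filled along the line between the clear observations on days 1 and 4, and so on for each cloudy gap.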

  9. K-nucleon scattering and the cloudy bag model

    International Nuclear Information System (INIS)

    Jennings, B.K.

    1986-01-01

    The cloudy bag model (CBM) has been applied with considerable success to low energy meson-nucleon scattering. In this talk I will describe in particular calculations for kaon-nucleon and antikaon-nucleon scattering. The main emphasis will be on s-waves with special attention paid to the antikaon-nucleon system in the isospin zero channel where the Λ(1405) is important. In the CBM the Λ(1405) is an antikaon-nucleon bound state and I show that this interpretation is consistent with the antikaon-nucleon scattering in the region of the Λ(1670) and Λ(1800), although ambiguities in the phase shift analysis prevent a definite conclusion.

  10. K-nucleon scattering and the cloudy bag model

    Science.gov (United States)

    Jennings, B. K.

    1986-10-01

    The cloudy bag model (CBM) has been applied with considerable success to low energy meson-nucleon scattering. In this talk I will describe in particular calculations for kaon-nucleon and antikaon-nucleon scattering. The main emphasis will be on s-waves with special attention paid to the antikaon-nucleon system in the isospin zero channel where the Λ(1405) is important. In the CBM the Λ(1405) is an antikaon-nucleon bound state and I show that this interpretation is consistent with the antikaon-nucleon scattering in the region of the Λ(1670) and Λ(1800) although ambiguities in the phase shift analysis prevent a definite conclusion.

  11. Psychophysical study of the visual sun location in pictures of cloudy and twilight skies inspired by Viking navigation.

    Science.gov (United States)

    Barta, András; Horváth, Gábor; Meyer-Rochow, Victor Benno

    2005-06-01

    In the late 1960s it was hypothesized that Vikings had been able to navigate the open seas, even when the sun was occluded by clouds or below the sea horizon, by using the angle of polarization of skylight. To detect the direction of skylight polarization, they were thought to have made use of birefringent crystals, called "sun-stones," and a large part of the scientific community still firmly believe that Vikings were capable of polarimetric navigation. However, there are some critics who treat the usefulness of skylight polarization for orientation under partly cloudy or twilight conditions with extreme skepticism. One of their counterarguments has been the assumption that solar positions or solar azimuth directions could be estimated quite accurately by the naked eye, even if the sun was behind clouds or below the sea horizon. Thus under partly cloudy or twilight conditions there might have been no serious need for a polarimetric method to determine the position of the sun. The aim of our study was to test quantitatively the validity of this qualitative counterargument. In our psychophysical laboratory experiments, test subjects were confronted with numerous 180 degrees field-of-view color photographs of partly cloudy skies with the sun occluded by clouds or of twilight skies with the sun below the horizon. The task of the subjects was to guess the position or the azimuth direction of the invisible sun with the naked eye. We calculated means and standard deviations of the estimated solar positions and azimuth angles to characterize the accuracy of the visual sun location. Our data do not support the common belief that the invisible sun can be located quite accurately from the celestial brightness and/or color patterns under cloudy or twilight conditions. 
Although our results underestimate the accuracy of visual sun location by experienced Viking navigators, the mentioned counterargument cannot be taken seriously as a valid criticism of the theory of the alleged

  12. Digital all-sky polarization imaging of partly cloudy skies.

    Science.gov (United States)

    Pust, Nathan J; Shaw, Joseph A

    2008-12-01

    Clouds reduce the degree of linear polarization (DOLP) of skylight relative to that of a clear sky. Even thin subvisual clouds in the "twilight zone" between clouds and aerosols produce a drop in skylight DOLP long before clouds become visible in the sky. In contrast, the angle of polarization (AOP) of light scattered by a cloud in a partly cloudy sky remains the same as in the clear sky for most cases. In unique instances, though, select clouds display AOP signatures that are oriented 90 degrees from the clear-sky AOP. For these clouds, scattered light oriented parallel to the scattering plane dominates the perpendicularly polarized Rayleigh-scattered light between the instrument and the cloud. For liquid clouds, this effect may assist cloud particle size identification because it occurs only over a relatively limited range of particle radii that will scatter parallel polarized light. Images are shown from a digital all-sky-polarization imager to illustrate these effects. Images are also shown that provide validation of previously published theories for weak (approximately 2%) polarization parallel to the scattering plane for a 22 degrees halo.

  13. Enhanced solar global irradiance during cloudy sky conditions

    Energy Technology Data Exchange (ETDEWEB)

    Schade, N.H.; Sandmann, H.; Stick, C. [Kiel Univ. (Germany). Inst. fuer Medizinische Klimatologie; Macke, A. [Kiel Univ. (DE). Leibniz Inst. fuer Meereswissenschaften (IFM-GEOMAR)

    2007-06-15

    The impact of cloudiness on the shortwave downwelling radiation (SDR) at the surface is investigated by means of collocated pyranometer radiation measurements and all-sky imager observations. The measurements were performed in Westerland, a seaside resort on the North Sea island of Sylt, Germany, during the summers of 2004 and 2005. A main improvement over previous studies on this subject is the very high temporal resolution of the cloud images and radiation measurements, and therefore a more robust statistical analysis of the occurrence of this effect. It was possible to observe an excess of solar irradiation over clear sky irradiation of more than 500 W/m2, the largest excess irradiation observed to our knowledge so far. Camera images reveal that the largest excess radiation is reached close to overcast situations with altocumulus clouds partly obscuring the solar disk, preferably with cumulus clouds at lower levels. The maximum duration of the enhancements depends on their strength and ranges from 20 seconds (enhancements > 400 W/m2) up to 140 seconds (enhancements > 200 W/m2).

  14. Sensitivity of aerosol loading and properties to cloudiness

    Science.gov (United States)

    Iversen, T.; Seland, O.; Kirkevag, A.; Kristjansson, J. E.

    2005-12-01

    Clouds influence aerosols in various ways. Sulfate is swiftly produced in the liquid phase provided both sulfur dioxide and oxidants are available. Nucleation- and Aitken-mode aerosol particles efficiently grow in size by collision and coagulation with cloud droplets. When precipitation forms, aerosols and precursor gases may be quickly removed by rainout. The dynamics associated with clouds may in some cases swiftly mix aerosols deep into the troposphere. In some cases Aitken-mode particles may be formed in cloud droplets by splitting agglomerates of particulate matter such as black carbon. In this presentation we will discuss how global cloudiness may influence the burden, residence time, and spatial distribution of sulfate, black carbon and particulate organic matter. A similar physico-chemical scheme for these compounds has been implemented in three generations of the NCAR community climate model (CCM3, CAM2 and CAM3). The scheme is documented in the literature and is part of the Aerocom intercomparison. There are many differences between these models. With respect to aerosols, a major difference is that CAM3 has a considerably higher global cloud volume and more than twice the amount of cloud water of CAM2 and CCM3. Atmospheric simulations have been made with prescribed ocean temperatures. It is slightly surprising to discover that certain aspects of the aerosols are not particularly sensitive to these differences in cloud availability. This sensitivity will be compared to sensitivities with respect to processing in deep convective clouds.

  15. Daily sums of solar radiation during summer period in Felin near Lublin and their relationship with sunshine duration and cloudiness

    International Nuclear Information System (INIS)

    Kossowski, J.; Łykowski, B.

    2007-01-01

    The paper presents the relationships between daily sums of global solar radiation and, separately, real sunshine duration and daily mean total cloudiness (computed from three standard observation times) within the 11 May - 31 July period in Felin (Poland). This period approximates the insolation summer and is characterized by a long day (at least 15.5 h) with slight variation in time. The relationships were obtained on the basis of mean decade data and single-day data from five (non-successive) seasons. They were described with two types of regression equation: linear and curvilinear (a second-degree polynomial). Additionally, the same relationships were determined separately for days with and without precipitation during the examined summer periods. An analysis of the effectiveness of each equation for estimating daily sums of global solar radiation (relative and standard errors) showed that better results are obtained from the equations relating solar radiation to sunshine duration than to cloudiness. It also revealed that, for both types of relationship, a polynomial is more efficient than linear regression (particularly for daily data). Moreover, determining these relationships for specific classes of days (e.g. days without precipitation) is appropriate, as it allows the daily sum of solar radiation to be estimated with smaller errors.
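    The two regression types compared in this record (linear and a second-degree polynomial) can be sketched with a standard polynomial fit. The synthetic sunshine-radiation data below are illustrative only; the true Felin regressions are not reproduced here.

    ```python
    import numpy as np

    # Daily global solar radiation G (MJ m-2 day-1) regressed on relative
    # sunshine duration s, once linearly and once with a second-degree
    # polynomial. Data are synthetic, chosen only to show the procedure.
    rng = np.random.default_rng(1)
    s = rng.uniform(0.0, 1.0, 60)
    G = 8.0 + 14.0 * s + 6.0 * s ** 2 + rng.normal(0.0, 0.8, 60)

    lin = np.polyfit(s, G, 1)    # linear regression
    quad = np.polyfit(s, G, 2)   # second-degree polynomial

    def rmse(coef):
        """Root mean square error of a polynomial fit on the sample."""
        return float(np.sqrt(np.mean((np.polyval(coef, s) - G) ** 2)))

    print(rmse(lin), rmse(quad))
    ```

    Because the linear model is nested inside the quadratic one, the polynomial's in-sample error can never be larger, matching the paper's finding that the polynomial describes the relationship more efficiently.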

  16. DISCRIMINATING BETWEEN CLOUDY, HAZY, AND CLEAR SKY EXOPLANETS USING REFRACTION

    International Nuclear Information System (INIS)

    Misra, Amit K.; Meadows, Victoria S.

    2014-01-01

    We propose a method to distinguish between cloudy, hazy, and clear sky (free of clouds and hazes) exoplanet atmospheres that could be applicable to upcoming large aperture space- and ground-based telescopes such as the James Webb Space Telescope (JWST) and the European Extremely Large Telescope (E-ELT). These facilities will be powerful tools for characterizing transiting exoplanets, but only after a considerable amount of telescope time is devoted to a single planet. A technique that could provide a relatively rapid means of identifying haze-free targets (which may be more valuable targets for characterization) could potentially increase the science return for these telescopes. Our proposed method utilizes broadband observations of refracted light in the out-of-transit spectrum. Light refracted through an exoplanet atmosphere can lead to an increase of flux prior to ingress and subsequent to egress. Because this light is transmitted at pressures greater than those for typical cloud and haze layers, the detection of refracted light could indicate a cloud- or haze-free atmosphere. A detection of refracted light could be accomplished in <10 hr for Jovian exoplanets with JWST and <5 hr for super-Earths/mini-Neptunes with E-ELT. We find that this technique is most effective for planets with equilibrium temperatures between 200 and 500 K, which may include potentially habitable planets. A detection of refracted light for a potentially habitable planet would strongly suggest the planet was free of a global cloud or haze layer, and therefore a promising candidate for follow-up observations.

  17. DISCRIMINATING BETWEEN CLOUDY, HAZY, AND CLEAR SKY EXOPLANETS USING REFRACTION

    Energy Technology Data Exchange (ETDEWEB)

    Misra, Amit K.; Meadows, Victoria S. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States)

    2014-11-01

    We propose a method to distinguish between cloudy, hazy, and clear sky (free of clouds and hazes) exoplanet atmospheres that could be applicable to upcoming large aperture space- and ground-based telescopes such as the James Webb Space Telescope (JWST) and the European Extremely Large Telescope (E-ELT). These facilities will be powerful tools for characterizing transiting exoplanets, but only after a considerable amount of telescope time is devoted to a single planet. A technique that could provide a relatively rapid means of identifying haze-free targets (which may be more valuable targets for characterization) could potentially increase the science return for these telescopes. Our proposed method utilizes broadband observations of refracted light in the out-of-transit spectrum. Light refracted through an exoplanet atmosphere can lead to an increase of flux prior to ingress and subsequent to egress. Because this light is transmitted at pressures greater than those for typical cloud and haze layers, the detection of refracted light could indicate a cloud- or haze-free atmosphere. A detection of refracted light could be accomplished in <10 hr for Jovian exoplanets with JWST and <5 hr for super-Earths/mini-Neptunes with E-ELT. We find that this technique is most effective for planets with equilibrium temperatures between 200 and 500 K, which may include potentially habitable planets. A detection of refracted light for a potentially habitable planet would strongly suggest the planet was free of a global cloud or haze layer, and therefore a promising candidate for follow-up observations.

  18. Drivers of Intra-Summer Seasonality and Daily Variability of Coastal Low Cloudiness in California Subregions

    Science.gov (United States)

    Schwartz, R. E.; Iacobellis, S.; Gershunov, A.; Williams, P.; Cayan, D. R.

    2014-12-01

    Summertime low cloud intrusion into the terrestrial west coast of North America impacts human, ecological, and logistical systems. Over a broad region of the West Coast, summer (May - September) coastal low cloudiness (CLC) varies coherently on interannual to interdecadal timescales and has been found to be organized by North Pacific sea surface temperature. Broad-scale studies of low stratiform cloudiness over ocean basins also find that the season of maximum low stratus corresponds to the season of maximum lower tropospheric stability (LTS), or estimated inversion strength. We utilize an 18-summer record of CLC derived from the NASA/NOAA Geostationary Operational Environmental Satellite (GOES) at 4 km resolution over California (CA) to make a more nuanced spatial and temporal examination of intra-summer variability in CLC and its drivers. We find that intra-summer variability in CLC is not spatially coherent across CA. On monthly to daily timescales, at least two distinct subregions of coastal CA can be identified, and the relationships between meteorology and stratus variability appear to change throughout summer in each subregion. North of Point Conception and offshore, the timing of maximum CLC closely coincides with maximum LTS, whereas in the Southern CA Bight and northern Baja region maximum CLC occurs up to about a month before maximum LTS. Summertime CLC in this southern region thus appears less strongly related to LTS than in the northern region. In particular, although the relationship is strong in May and June, starting in July the daily relationship between LTS and CLC in the south begins to deteriorate. Preliminary results indicate a moderate association between decreased CLC in the south and increased precipitable water content above 850 hPa on daily time scales beginning in July. Relationships between daily CLC variability and meteorological variables including winds, inland temperatures, relative humidity, and

  19. A CLOUDINESS INDEX FOR TRANSITING EXOPLANETS BASED ON THE SODIUM AND POTASSIUM LINES: TENTATIVE EVIDENCE FOR HOTTER ATMOSPHERES BEING LESS CLOUDY AT VISIBLE WAVELENGTHS

    Energy Technology Data Exchange (ETDEWEB)

    Heng, Kevin, E-mail: kevin.heng@csh.unibe.ch [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland)

    2016-07-20

    We present a dimensionless index that quantifies the degree of cloudiness of the atmosphere of a transiting exoplanet. Our cloudiness index is based on measuring the transit radii associated with the line center and wing of the sodium or potassium line. In deriving this index, we revisited the algebraic formulae for inferring the isothermal pressure scale height from transit measurements. We demonstrate that the formulae of Lecavelier et al. and Benneke and Seager are identical: the former is inferring the temperature while assuming a value for the mean molecular mass and the latter is inferring the mean molecular mass while assuming a value for the temperature. More importantly, these formulae cannot be used to distinguish between cloudy and cloud-free atmospheres. We derive values of our cloudiness index for a small sample of seven hot Saturns/Jupiters taken from Sing et al. We show that WASP-17b, WASP-31b, and HAT-P-1b are nearly cloud-free at visible wavelengths. We find the tentative trend that more irradiated atmospheres tend to have fewer clouds consisting of sub-micron-sized particles. We also derive absolute sodium and/or potassium abundances ~10^2 cm^-3 for WASP-17b, WASP-31b, and HAT-P-1b (and upper limits for the other objects). Higher-resolution measurements of both the sodium and potassium lines, for a larger sample of exoplanetary atmospheres, are needed to confirm or refute this trend.
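    The scale-height algebra referred to in this record is commonly written as follows (a standard form, with symbols assumed: k_B the Boltzmann constant, T the isothermal temperature, μ the mean molecular mass, g the surface gravity, and σ(λ) the absorption cross section; the paper's exact definition of the index may differ):

    ```latex
    H = \frac{k_B T}{\mu g},
    \qquad
    R_p(\lambda_1) - R_p(\lambda_2) \approx H \,\ln\frac{\sigma(\lambda_1)}{\sigma(\lambda_2)}
    ```

    Measuring the transit radius at the line center (λ1) and in the wing (λ2) of the Na or K line thus probes H, and a cloud deck that truncates the measurable radius difference below its cloud-free value is what the cloudiness index is designed to capture.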

  20. New models to compute solar global hourly irradiation from point cloudiness

    International Nuclear Information System (INIS)

    Badescu, Viorel; Dumitrescu, Alexandru

    2013-01-01

    Highlights: ► Kasten–Czeplak cloudy sky model is tested under the climate of South-Eastern Europe. ► Very simple cloudy sky models based on atmospheric transmission factors. ► Transmission factors are nonlinear functions of the cosine of the zenith angle. ► New models' performance is good for low and intermediate cloudy skies. ► Models show good performance when applied in stations other than the origin station. - Abstract: The Kasten–Czeplak (KC) model [16] is tested against data measured at five meteorological stations covering the latitudes and longitudes of Romania (South-Eastern Europe). Generally, the KC cloudy sky model underestimates the measured values. Over the full range of point cloudiness C = 0-1 its performance is (marginally) good enough: good for skies with few clouds (C < 0.3), good enough for skies with a medium amount of clouds (C = 0.3-0.7), and poor for very cloudy and overcast skies. New, very simple empirical cloudy sky models are proposed. They bring two novelties with respect to the KC model. First, new basic clear sky models are used, which evaluate the direct and diffuse radiation separately. Second, some of the new models assume the atmospheric transmission factor is a nonlinear function of the cosine of the zenith angle Z. The performance of the new models is generally better than that of the KC model for all cloudiness classes. One class of models (called S4) has been further tested. The sub-model S4TOT has been obtained by fitting the generic model S4 to all available data, for all stations. Generally, S4TOT has good accuracy at all stations for low and intermediate cloudy skies (C < 0.7). The accuracy of S4TOT is good or good enough at intermediate zenith angles (Z = 30-70°) but worse for small and large zenith angles (Z = 0-30° and Z = 70-85°, respectively). Several S4 sub-models were tested at stations different from the origin station. Almost all sub-models have good or good enough performance for skies
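    The KC model referenced in this record has a simple closed form: a clear-sky term scaled by a cloudiness factor. The clear-sky coefficients (910, 30) and cloud-factor constants (0.75, 3.4) below are the commonly quoted Kasten–Czeplak values fitted for Hamburg; they are assumptions here, not the paper's refitted South-East European coefficients.

    ```python
    import math

    def kc_global_irradiance(sun_elevation_deg: float, cloudiness: float) -> float:
        """Global horizontal irradiance (W/m2) for point cloudiness C in [0, 1],
        in the commonly quoted Kasten-Czeplak form."""
        # Clear-sky global irradiance as a function of solar elevation.
        clear_sky = 910.0 * math.sin(math.radians(sun_elevation_deg)) - 30.0
        # Cloud attenuation factor: 1 for clear sky, 0.25 for overcast.
        cloud_factor = 1.0 - 0.75 * cloudiness ** 3.4
        return max(clear_sky, 0.0) * cloud_factor

    print(kc_global_irradiance(60.0, 0.0))  # clear sky
    print(kc_global_irradiance(60.0, 1.0))  # overcast: 25% of clear sky
    ```

    The new models in the paper keep this multiplicative structure but replace the clear-sky term with separate direct and diffuse components and make the transmission factor a nonlinear function of cos Z.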

  1. Cloudiness and weather variation in central Svalbard in July 2013 as related to atmospheric circulation

    Czech Academy of Sciences Publication Activity Database

    Láska, K.; Chládová, Zuzana; Ambrožová, K.; Husák, J.

    2013-01-01

    Vol. 3, No. 2 (2013), pp. 184-195. ISSN 1805-0689. Institutional support: RVO:68378289. Keywords: atmospheric circulation * climate * cloudiness * weather * Svalbard * Arctic. Subject RIV: DO - Wilderness Conservation. http://www.sci.muni.cz/CPR/6cislo/Laska.pdf

  2. Optimizing sensitivity of Unmanned Aerial System optical sensors for low zenith angles and cloudy conditions

    DEFF Research Database (Denmark)

    Wang, Sheng; Dam-Hansen, Carsten; Zarco Tejada, Pablo J.

    . The multispectral camera (Tetra Mini-MCA6) has 6 channels in the visible and near Infrared. For the laboratory calibration experiment, different camera settings and typical irradiance levels from cloudy to clear sky were designed. The light-source is based on super-continuum generation to produce a continuous solar...

  3. The Nonlinear Effects of Pion-Quark Coupling in the Cloudy Bag Model

    OpenAIRE

    Yasuhiko, FUTAMI; Satoru, AKIYAMA; Department of Physics, Faculty of Science and Technology Science University of Tokyo; Department of Physics, Faculty of Science and Technology Science University of Tokyo

    1990-01-01

    The nonlinear pion-quark interaction in the Cloudy Bag Model is investigated. The Hamiltonian is normal-ordered. The vacuum expectation value of the pion field squared is evaluated by introducing a cutoff momentum for the virtual pions. We then calculate g_A, including other corrections.

  4. The nonlinear effects of pion-quark coupling in the Cloudy Bag Model

    International Nuclear Information System (INIS)

    Futami, Yasuhiko; Akiyama, Satoru

    1990-01-01

    The nonlinear pion-quark interaction in the Cloudy Bag Model is investigated. The Hamiltonian is normal-ordered. The vacuum expectation value of the pion field squared is evaluated by introducing some cutoff momentum for the virtual pions. We then calculate g_A, including other corrections. (author)

  5. Effects of cloudy/clear air mixing and droplet pH on sulfate aerosol formation in a coupled chemistry/climate global model

    Energy Technology Data Exchange (ETDEWEB)

    Molenkamp, C.R.; Atherton, C.A. [Lawrence Livermore National Lab., CA (United States); Penner, J.E.; Walton, J.J. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Atmospheric, Oceanic and Space Sciences

    1996-10-01

    In this paper we will briefly describe our coupled ECHAM/GRANTOUR model, provide a detailed description of our atmospheric chemistry parameterizations, and discuss a couple of numerical experiments in which we explore the influence of assumed pH and rate of mixing between cloudy and clear air on aqueous sulfate formation and concentration. We have used our tropospheric chemistry and transport model, GRANTOUR, to estimate the life cycle and global distributions of many trace species. Recently, we have coupled GRANTOUR with the ECHAM global climate model, which provides several enhanced capabilities in the representation of aerosol interactions.

  6. Employment from Solar Energy: A Bright but Partly Cloudy Future.

    Science.gov (United States)

    Smeltzer, K. K.; Santini, D. J.

    A comparison of quantitative and qualitative employment effects of solar and conventional systems can prove the increased employment postulated as one of the significant secondary benefits of a shift from conventional to solar energy use. Current quantitative employment estimates show solar technology-induced employment to be generally greater…

  7. The little auk population at the North Water Polynya. How palaeohistory, archaeology and anthropology adds new dimensions to the ecology of a high arctic seabird

    DEFF Research Database (Denmark)

    Mosbech, Anders; Johansen, Kasper Lambert; Lyngs, Peter

    interdisciplinary approach to the analysis of little auk ecology in times of change. Recent and ongoing little auk studies at the North Water Polynya have shown the high densities of little auks (about 2 pairs/m2) breeding under the stones in the vast scree slopes, the highly specialized chick diet (80 % Calanus...... hyperboreus), the foraging ranges (75 km, GPS tracking) the local foraging behaviour (TDR), and yearly migration pattern (gls) where little auks disperse over the north-eastern Atlantic during winter. An interdisciplinary approach has added new dimensions to population history and human harvest. Lakes......-feeding Bowhead whale population, took place. Anthropological research reveals how, though small in size, little auk is a significant resource for the Inuit with important cultural values attached and adding resilience to human populations in times where the dominant marine mammal prey is inaccessible due...

  8. Nonlinear cloudy bag model in the meson mean-field approximation

    International Nuclear Information System (INIS)

    Bunatyan, G.G.

    1989-01-01

    We investigate the cloudy bag model for the nucleon, including the essentially nonlinear interaction of the quarks with the meson field. From the boundary conditions, which guarantee the stability of the bag, we obtain equations for the size R of the bag, for the momentum p of the quarks, and for the mean pion field φ. We obtain an expression for the total energy E of the bag nucleon. By taking the appropriate averages of all the relations, the calculations reduce to the case of a spherically symmetric bag. We show that in the general nonlinear cloudy bag model in question the equations for R, p, and φ have a simultaneous solution which corresponds to the absolute minimum of the bag energy E and, consequently, that there exists a stable equilibrium state of the bag nucleon.

  9. Remote sensing of PM2.5 during cloudy and nighttime periods using ceilometer backscatter

    Science.gov (United States)

    Li, Siwei; Joseph, Everette; Min, Qilong; Yin, Bangsheng; Sakai, Ricardo; Payne, Megan K.

    2017-06-01

    Monitoring PM2.5 (particulate matter with aerodynamic diameter d ≤ 2.5 µm) mass concentration has become more important recently because of the negative impacts of fine particles on human health. However, monitoring PM2.5 during cloudy and nighttime periods is difficult, since nearly all the passive instruments used for aerosol remote sensing are unable to measure aerosol optical depth (AOD) under either cloudy or nighttime conditions. In this study, an empirical model based on the regression between PM2.5 and the near-surface backscatter measured by ceilometers was developed and tested using 6 years of data (2006 to 2011) from the Howard University Beltsville Campus (HUBC) site. The empirical model can explain ~56 %, ~34 % and ~42 % of the variability in the hourly average PM2.5 during daytime clear, daytime cloudy and nighttime periods, respectively. Meteorological conditions and seasons were found to influence the relationship between PM2.5 mass concentration and the surface backscatter. Overall the model can explain ~48 % of the variability in the hourly average PM2.5 at the HUBC site when the seasonal variation is taken into account. The model was also tested using 4 years of data (2012 to 2015) from the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site, which is geographically and climatologically different from the HUBC site. The results show that the empirical model can explain ~66 % and ~82 % of the variability in the daily average PM2.5 at the ARM SGP site and HUBC site, respectively. The findings of this study illustrate the strong need for ceilometer data in air quality monitoring under cloudy and nighttime conditions. Since ceilometers are used broadly over the world, they may provide an important supplemental source of information on aerosols to determine surface PM2.5 concentrations.
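    The empirical model described above is, at its core, a regression between near-surface ceilometer backscatter and PM2.5. A minimal sketch of such a least-squares fit is shown below; the backscatter/PM2.5 pairs are invented for illustration (the study's actual model is additionally stratified by season and meteorology):

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y ~ a*x + b (a sketch of the paper's
    empirical regression; data below are illustrative, not from HUBC/SGP)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Invented hourly pairs: near-surface backscatter (arbitrary units) vs PM2.5 (ug/m3)
backscatter = [0.8, 1.2, 1.5, 2.1, 2.4, 3.0]
pm25 = [9.0, 13.0, 15.0, 22.0, 24.0, 31.0]

a, b = fit_linear(backscatter, pm25)

def predict(beta):
    """Estimate PM2.5 from a new backscatter value."""
    return a * beta + b
```

    The slope `a` plays the role of the regression coefficient that the paper refits per season and meteorological regime.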

  10. Cloudy bag model calculation of P11 πN scattering

    International Nuclear Information System (INIS)

    Rinat, A.S.

    1981-05-01

    πN, πΔ scattering in the cloudy bag model (CBM) is considered using an elementary π field and bare bag states for N, Δ, Nsup(*)(1470). The resulting 2-channel problem is solved neglecting intermediate states with anti-baryons and states with more than a single pion. It is shown that delta 11 may be reproduced for parameters close to their theoretical values. The fit thus provides a test for the CBM. (author)

  11. Monitoring of cloudiness in the function of the forests fire protection

    Directory of Open Access Journals (Sweden)

    Živanović Stanimir

    2016-01-01

    Full Text Available Fires in forests are seasonal in nature, conditioned by the moisture content of the fuel material. The occurrence of these fires in Serbia is becoming more common and, depending on their intensity and duration, fires have a major impact on the state of vegetation. The aim of this study was to determine the correlation between the dynamics of cloudiness occurrence and forest fires. To study the correlation of these elements, Pearson correlation coefficients were used. The analysis is based on meteorological data obtained from the meteorological station Negotin for the period from 1991 to 2010. Among the tested influences, the degree of cloudiness showed a positive correlative interdependence with the dynamics of fire occurrence in nature. The annual number of fires correlates positively with the average number of clear days (p = 0.25). Also, it was found that the annual number of fires of medium intensity correlated negatively with the average number of cloudy days (p = -0.26), but not statistically significantly (p > 0.05).
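    The statistic used in the study above is the Pearson correlation coefficient between annual fire counts and cloudiness indicators. A self-contained sketch follows; the annual series are invented for illustration, not the Negotin data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented annual series: number of fires vs. number of clear days per year
fires = [12, 18, 9, 25, 14, 30, 11, 22]
clear_days = [80, 95, 70, 110, 85, 120, 75, 100]

r = pearson_r(fires, clear_days)  # positive r: more clear days, more fires
```

    A positive `r`, as reported in the paper for clear days, indicates that fire counts rise and fall with the cloudiness indicator; significance testing (the paper's p > 0.05 check) would be a separate step.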

  12. Minimizing quality changes of cloudy apple juice: The use of kiwifruit puree and high pressure homogenization.

    Science.gov (United States)

    Yi, Junjie; Kebede, Biniam; Kristiani, Kristiani; Grauwet, Tara; Van Loey, Ann; Hendrickx, Marc

    2018-05-30

    Cloud loss, enzymatic browning, and flavor changes are important quality defects of cloudy fruit juices determining consumer acceptability. The development of clean label options to overcome such quality problems is currently of high interest. Therefore, this study investigated the effect of kiwifruit puree (clean label ingredient) and high pressure homogenization on quality changes of cloudy apple juice using a multivariate approach. The use of kiwifruit puree addition and high pressure homogenization resulted in a juice with improved uniformity and cloud stability by reducing particle size and increasing viscosity and yield stress (p < 0.01). Furthermore, kiwifruit puree addition reduced enzymatic browning (ΔE* < 3), due to the increased ascorbic acid, and contributed to a more saturated and bright yellow color, a better taste balance, and a more fruity aroma of the juice. This work demonstrates that clean label options to control quality degradation of cloudy fruit juice might offer new opportunities. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Winter cloudiness variability over Northern Eurasia related to the Siberian High during 1966–2010

    International Nuclear Information System (INIS)

    Chernokulsky, Alexander; Mokhov, Igor I; Nikitina, Natalia

    2013-01-01

    This letter presents an assessment of winter cloudiness variability over Northern Eurasia regions related to the Siberian High intensity (SHI) variations during 1966–2010. An analysis of cloud fraction and the occurrence of different cloud types was carried out based on visual observations from almost 500 Russian meteorological stations. The moonlight criterion was implemented to reduce the uncertainty of night observations. The SHI was defined based on sea-level pressure fields from different reanalyses. We found a statistically significant negative correlation of cloud cover with the SHI over central and southern Siberia and the southern Urals with regression coefficients around 3% hPa −1 for total cloud fraction (TCF) for particular stations near the Siberian High center. Cross-wavelet analysis of TCF and SHI revealed a long-term relationship between cloudiness and the Siberian High. Generally, the Siberian High intensification by 1 hPa leads to a replacement of one overcast day with one day without clouds, which is associated mainly with a decrease in precipitating and stratiform clouds. These changes point to a positive feedback between cloudiness and the Siberian High. (letter)

  14. A Quality Control study of the distribution of NOAA MIRS Cloudy retrievals during Hurricane Sandy

    Science.gov (United States)

    Fletcher, S. J.

    2013-12-01

    Cloudy radiances present a difficult challenge to data assimilation (DA) systems, through both the radiative transfer model and the hydrometeors required to resolve cloud and precipitation. In most DA systems the hydrometeors are not control variables, due to many limitations. The National Oceanic and Atmospheric Administration's (NOAA) Microwave Integrated Retrieval System (MIRS) produces products from the NPP-ATMS satellite where the scene is affected by cloud and precipitation. The test case we present here is the lifetime of Hurricane, and later Superstorm, Sandy in October 2012. As a quality control study, we compare the retrieved water vapor content during the lifetime of Sandy with the first guess and the analysis from the NOAA Gridpoint Statistical Interpolation (GSI) system. The assessment applies the gross error check against the first guess, with different values for the observational error variance, to see whether the difference is within three standard deviations. We also compare against the final analysis at the relevant cycles, to see whether products retrieved through a cloudy radiance are similar, given that the DA system does not yet assimilate cloudy radiances.

  15. The impact of instrument field of view on measurements of cloudy-sky spectral radiances from space: application to IRIS and IMG

    Energy Technology Data Exchange (ETDEWEB)

    Brindley, H.E. E-mail: h.brindley@ic.ac.uk; Harries, J.E

    2003-05-15

    Spatially resolved images from the MODerate Resolution Imaging Spectrometer (MODIS) instrument are used to investigate the impact of a change in spatial field of view, from that typical of the Nimbus 4 Infrared Interferometer Spectrometer (IRIS) to that of the Interferometric Monitor for Greenhouse Gases (IMG), upon the spectral outgoing longwave radiation (OLR). Considering all-sky conditions, it is found that for a typical tropical scene approximately 150 paired measurements are required to obtain agreement to within ±2 K in the average brightness temperature (T_B) in the most transparent window channels. At mid-latitudes, the reduced scene variability means that fewer observations are required to meet the same criterion. For clear- and cloudy-sky separation, a simple threshold technique based on the window T_B and underlying sea-surface temperature tends to result in a systematic underestimate of the average cloudy T_B by the larger field of view. A better estimate can be obtained by applying a double threshold to discriminate against the most mixed scenes.
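    The double-threshold idea can be sketched as follows. The margins (2 K and 8 K) and the pixel values are invented for illustration and are not taken from the paper; the point is only that an intermediate band of brightness-temperature depressions is labeled "mixed" and excluded from the cloudy-sky average:

```python
def classify_pixel(t_b, sst, clear_margin=2.0, cloudy_margin=8.0):
    """Hedged sketch of a double-threshold scene test (margins are invented).

    Pixels whose window brightness temperature t_b is within clear_margin K
    of the sea-surface temperature are called clear; pixels colder than the
    SST by more than cloudy_margin K are called cloudy; the range in between
    is treated as a mixed scene and excluded from the cloudy-sky average.
    """
    depression = sst - t_b  # how much colder than the surface the pixel looks
    if depression <= clear_margin:
        return "clear"
    if depression >= cloudy_margin:
        return "cloudy"
    return "mixed"

# Invented (T_B, SST) pairs in kelvin
pixels = [(288.0, 290.0), (275.0, 290.0), (285.0, 290.0)]
labels = [classify_pixel(tb, sst) for tb, sst in pixels]
```

    With a single threshold, the "mixed" pixels would be lumped into one of the two classes, biasing the average cloudy T_B, which is the underestimate the paper describes.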

  16. Cloudy Territories?

    NARCIS (Netherlands)

    Drees, W.B.

    2016-01-01

    The Cloud of Unknowing is a late medieval English mystical text; it has inspired Catherine Keller's title Cloud of the Impossible. A cloud seems fairly diffuse; territory sounds more solid: terra-Earth. However, The Territories of Science and Religion is unsettling for those who assume to be on firm

  17. Happily CLOUDy

    CERN Multimedia

    CERN Bulletin

    While the LHC experiments are fine-tuning their equipment while waiting for ‘glamorous’ beams, CLOUD has finished its assembly phase and is starting to take data using a beam of protons from the 50-year-old Proton Synchrotron (PS). Here is a quick detour around a cutting-edge physics experiment that will shed light on climate-related matters.   Jasper Kirkby photographed inside the CLOUD chamber.   Many experiments in the world are currently investigating the factors that may affect the planet’s climate, but CLOUD is the only one that makes use of a particle accelerator. “The proton beam that the PS provides is unique because it allows us to adjust the “cosmic ray” intensity. In this way, we can simulate the difference of particle flux in the atmosphere in going from the ground to the outermost layers of the stratosphere (a factor 100 more intense)”, explains Jasper Kirkby, CLOUD’s spokesperson. …

  18. A model of SNR evolution for an O-star in a cloudy ISM

    International Nuclear Information System (INIS)

    Shull, P. Jr.

    1988-01-01

    The authors present an analytical model of SNR evolution in a cloudy interstellar medium for a single progenitor star of spectral type O5 V. The model begins with the progenitor on the zero-age main sequence, includes the effects of the star's wind and ionizing photons, and ends with the SNR's assimilation by the ISM. The authors assume that the ISM consists of atomic clouds, molecular clouds, and a hot intercloud phase. The type of SNR that results bears a strong resemblance to N63A in the Large Magellanic Cloud.

  19. Acceleration of radiative transfer model calculations for the retrieval of trace gases under cloudy conditions

    International Nuclear Information System (INIS)

    Efremenko, Dmitry S.; Loyola, Diego G.; Spurr, Robert J.D.; Doicu, Adrian

    2014-01-01

    In the independent pixel approximation (IPA), radiative transfer computations involving cloudy scenes require two separate calls to the radiative transfer model (RTM), one call for a clear sky scenario, the other for an atmosphere containing clouds. In this paper, clouds are considered as an optically homogeneous layer. We present two novel methods for RTM performance enhancement with particular application to trace gas retrievals under cloudy conditions. Both methods are based on reusing results from clear-sky RTM calculations to speed up corresponding calculations for the cloud-filled scenario. The first approach is numerically exact, and has been applied to the discrete-ordinate with matrix exponential (DOME) RTM. Results from the original clear sky computation can be saved in the memory and reused for the non-cloudy layers in the second computation. In addition, for the whole-atmosphere boundary-value approach to the determination of the intensity field, we can exploit a ’telescoping technique’ to reduce the dimensionality (and hence the computational effort for the solution) of the boundary value problem in the absence of Rayleigh scattering contributions for higher azimuthal components of the radiation field. The second approach is (for the cloudy scenario) to generate a spectral correction applied to the radiation field from a fast two-stream RTM. This correction is based on the use of principal-component analysis (PCA) applied to a given window of spectral optical property data, in order to exploit redundancy in the data and confine the number of full-stream multiple scatter computations to the first few EOFs (Empirical Orthogonal Functions) arising from the PCA. This method has been applied to the LIDORT RTM; although the method involves some approximation, it provides accuracy better than 0.2%, and a speed-up factor of approximately 2 compared with two calls of RTM. -- Highlights: • Reusing results from clear-sky computations for a model with a
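    The second approach's use of PCA can be illustrated on synthetic data: spectral optical-property data with only a few true degrees of freedom are compressed to a handful of EOFs, so that expensive full-stream computations would be needed only for those components. This is a hedged sketch of the PCA step alone, on invented data, not the LIDORT implementation:

```python
import numpy as np

# Synthetic "spectra": 50 samples over 200 wavelengths, built from only 3
# underlying profiles, so the data have 3 true degrees of freedom.
rng = np.random.default_rng(0)
n_wavelengths, n_modes = 200, 3
basis = rng.normal(size=(n_modes, n_wavelengths))
weights = rng.normal(size=(50, n_modes))
spectra = weights @ basis

# PCA via SVD of the mean-centered data; keep the first few EOFs.
mean = spectra.mean(axis=0)
centered = spectra - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eofs = vt[:n_modes]                    # leading empirical orthogonal functions
scores = centered @ eofs.T             # principal-component scores per sample
reconstructed = scores @ eofs + mean   # low-rank reconstruction

max_err = float(np.abs(reconstructed - spectra).max())
```

    Because the synthetic data are exactly rank 3, the three-EOF reconstruction is essentially exact; for real optical-property windows the residual sets the accuracy of the fast two-stream-plus-correction scheme.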

  1. A Fast Visible-Infrared Imaging Radiometer Suite Simulator for Cloudy Atmospheres

    Science.gov (United States)

    Liu, Chao; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Meyer, Kerry G.; Wang, Chen Xi; Ding, Shouguo

    2015-01-01

    A fast instrument simulator is developed to simulate the observations made in cloudy atmospheres by the Visible Infrared Imaging Radiometer Suite (VIIRS). The correlated k-distribution (CKD) technique is used to compute the transmissivity of absorbing atmospheric gases. The bulk scattering properties of ice clouds used in this study are based on the ice model used for the MODIS Collection 6 ice cloud products. Two fast radiative transfer models based on pre-computed ice cloud look-up tables are used for the VIIRS solar and infrared channels. The accuracy and efficiency of the fast simulator are quantified in comparison with a combination of the rigorous line-by-line (LBLRTM) and discrete ordinate radiative transfer (DISORT) models. Relative errors in simulated TOA reflectances for the solar channels are less than 2%, and brightness temperature differences for the infrared channels are less than 0.2 K. The simulator is over three orders of magnitude faster than the benchmark LBLRTM+DISORT model. Furthermore, the cloudy-atmosphere reflectances and brightness temperatures from the fast VIIRS simulator compare favorably with those from VIIRS observations.

  2. Effect of mash maceration on the polyphenolic content and visual quality attributes of cloudy apple juice.

    Science.gov (United States)

    Mihalev, Kiril; Schieber, Andreas; Mollov, Plamen; Carle, Reinhold

    2004-12-01

    The effects of enzymatic mash treatments on yield, turbidity, color, and polyphenolic content of cloudy apple juice were studied. Using HPLC-ESI-MS, cryptochlorogenic acid was identified in cv. Brettacher cloudy apple juice for the first time. Commercial pectolytic enzyme preparations with different levels of secondary protease activity were tested under both oxidative and nonoxidative conditions. Without the addition of ascorbic acid, oxidation substantially decreased chlorogenic acid, epicatechin, and procyanidin B2 contents due to enzymatic browning. The content of chlorogenic acid as the major polyphenolic compound was also influenced by the composition of pectolytic enzyme preparations because the presence of secondary protease activity resulted in a rise of chlorogenic acid. The latter effect was probably due to the inhibited protein-polyphenol interactions, which prevented binding of polyphenolic compounds to the matrix, thus increasing their antioxidative potential. The results obtained clearly demonstrate the advantage of the nonoxidative mash maceration for the production of cloud-stable apple juice with a high polyphenolic content, particularly in a premature processing campaign.

  3. Numerical Simulations of Supernova Remnant Evolution in a Cloudy Interstellar Medium

    Energy Technology Data Exchange (ETDEWEB)

    Slavin, Jonathan D.; Smith, Randall K.; Foster, Adam; Winter, Henry D.; Raymond, John C.; Slane, Patrick O. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Yamaguchi, Hiroya, E-mail: jslavin@cfa.harvard.edu [NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 (United States)

    2017-09-01

    The mixed morphology class of supernova remnants has centrally peaked X-ray emission along with a shell-like morphology in radio emission. White and Long proposed that these remnants are evolving in a cloudy medium wherein the clouds are evaporated via thermal conduction once being overrun by the expanding shock. Their analytical model made detailed predictions regarding temperature, density, and emission profiles as well as shock evolution. We present numerical hydrodynamical models in 2D and 3D including thermal conduction, testing the White and Long model and presenting results for the evolution and emission from remnants evolving in a cloudy medium. We find that, while certain general results of the White and Long model hold, such as the way the remnants expand and the flattening of the X-ray surface brightness distribution, in detail there are substantial differences. In particular we find that the X-ray luminosity is dominated by emission from shocked cloud gas early on, leading to a bright peak, which then declines and flattens as evaporation becomes more important. In addition, the effects of thermal conduction on the intercloud gas, which is not included in the White and Long model, are important and lead to further flattening of the X-ray brightness profile as well as lower X-ray emission temperatures.

  4. Vertical Distributions of Macromolecular Composition of Particulate Organic Matter in the Water Column of the Amundsen Sea Polynya During the Summer in 2014

    Science.gov (United States)

    Kim, Bo Kyung; Lee, SangHoon; Ha, Sun-Yong; Jung, Jinyoung; Kim, Tae Wan; Yang, Eun Jin; Jo, Naeun; Lim, Yu Jeong; Park, Jisoo; Lee, Sang Heon

    2018-02-01

    Macromolecular compositions (carbohydrates, proteins, and lipids) of particulate organic matter (POM) are crucial as a measure of basic marine food quality. To date, however, only one investigation had been carried out in the Amundsen Sea. Water samples for macromolecular composition were obtained at seven selected stations in the Amundsen Sea Polynya (AP) during the austral summer of 2014 to investigate the vertical characteristics of POM. We found a high proportion of carbohydrates (45.9 ± 11.4%) in the photic layer, significantly different from the previous result (27.9 ± 6.9%) in the AP in 2012. A plausible reason is that the carbohydrate content is strongly associated with the biomass of the dominant species (Phaeocystis antarctica). The calorific content of food material (FM) in the photic layer obtained in this study is similar to that of the Ross Sea, one of the highest primary productivity regions in the Southern Ocean. Total concentrations, calorific values, and calorific contents of FM were higher in the photic layer than in the aphotic layer, which implies that a significant fraction of organic matter underwent degradation. A decreasing protein/carbohydrate (PRT/CHO) ratio with depth could be caused by preferential nitrogen loss during sinking. Since the biochemical compositions of POM, mostly fixed in the photic layer, could play an important role in transporting organic carbon into the deep sea, further detailed studies on the variations in biochemical composition and the main controlling factors are needed to understand the sinking mechanisms of POM.

  5. Masses of the light hadrons in the chiral and cloudy bag models

    International Nuclear Information System (INIS)

    Saito, Koichi.

    1983-10-01

    The masses of the light hadrons except for the pion are calculated in the stable chiral and cloudy bag models with massless or massive u, d quarks and pions. Two difficulties in these models, i.e. the lack of stability and the divergence of the quark self-energy, are removed by taking account of a simple non-local quark-pion interaction. The effects of the finite size of the qq̄ pion and the behavior of the quark self-energy are discussed in detail. In our calculation the bag self-energy due to the pion plays an important role in the origin of the N-Δ and Σ-Λ mass differences. The baryon octet and decuplet masses are well reproduced by the present model. (author)

  6. Ophioninae (Hymenoptera: Ichneumonidae) wasp community in the cloudy forest Monteseco, Cajamarca, Peru

    Directory of Open Access Journals (Sweden)

    Evelyn Sánchez

    2014-12-01

    Full Text Available We describe the species composition of the subfamily Ophioninae (Hymenoptera: Ichneumonidae) along an altitudinal gradient in the cloudy forest Monteseco, Cajamarca, Peru, collected in 2009 and 2010. Eighteen species were recorded in three genera of Ophioninae: Alophophion, Enicospilus and Ophion. Five species are recorded for the first time in Peru: Ophion polyhymniae Gauld, 1988; Enicospilus cubensis (Norton, 1863); E. guatemalensis (Cameron, 1886); E. cressoni Hooker, 1912 and E. mexicanus (Cresson, 1874). Subfamily composition varies with elevation. The highest species richness (S = 11) was found at 2150 m and the lowest (S = 3) at 3116 m. Enicospilus is most diverse from low to mid elevation, Ophion from mid to high elevation, and Alophophion occurs predominantly at high elevation.

  7. VLT FORS2 comparative transmission spectral survey of clear and cloudy exoplanet atmospheres

    Science.gov (United States)

    Nikolov, Nikolay; Sing, David; Gibson, Neale; Evans, Thomas; Barstow, Joanna Katy; Kataria, Tiffany; Wilson, Paul A.

    2016-10-01

    Transmission spectroscopy is a key to unlocking the secrets of close-in exoplanet atmospheres. Observations have started to unveil a vast diversity of irradiated giant planet atmospheres, with clouds and hazes playing a definitive role across the entire mass and temperature regime. We have initiated ground-based, multi-object transmission spectroscopy of a handful of hot Jupiters, covering the wavelength range 360-850 nm using the recently upgraded FOcal Reducer and Spectrograph (FORS2) mounted on the Very Large Telescope (VLT) at the European Southern Observatory (ESO). These targets were selected for comparative follow-up because their transmission spectra showed evidence for alkali metal absorption, based on the results of Hubble Space Telescope (HST) observations. This talk will discuss the first results from the programme, which demonstrate excellent agreement between the transmission spectra measured from the VLT and HST and further reinforce the findings of clear, cloudy and hazy atmospheres. More details will be given on the narrow alkali features obtained with FORS2 at higher resolution, revealing its high potential for securing optical transmission spectra. These FORS2 observations are the first ground-based detections of clear, cloudy and hazy hot-Jupiter atmospheres with simultaneous detections of Na, K, and H2 Rayleigh scattering. Our program demonstrates the large potential of the instrument for optical transmission spectroscopy, capable of obtaining HST-quality light curves from the ground. Compared to HST, the larger aperture of the VLT will allow fainter targets to be observed and higher spectral resolution, which can greatly aid comparative exoplanet studies. This is important for further exploring the diversity of exoplanet atmospheres, is particularly complementary to the near- and mid-IR regime to be covered by the upcoming James Webb Space Telescope (JWST), and is readily applicable to less massive planets down to super-Earths.

  8. Dynamics, thermodynamics, radiation, and cloudiness associated with cumulus-topped marine boundary layers

    Energy Technology Data Exchange (ETDEWEB)

    Ghate, Virendra P. [Argonne National Lab. (ANL), Argonne, IL (United States); Miller, Mark [Rutgers Univ., New Brunswick, NJ (United States)

    2016-11-01

    The overall goal of this project was to improve the understanding of marine boundary clouds by using data collected at the Atmospheric Radiation Measurement (ARM) sites, so that they can be better represented in global climate models (GCMs). Marine boundary clouds are observed regularly over the tropical and subtropical oceans. They are an important element of the Earth’s climate system because they have a substantial impact on the radiation budget together with the boundary layer moisture and energy transports. These clouds also have an impact on large-scale precipitation features like the Inter-Tropical Convergence Zone (ITCZ). Because these clouds occur at temporal and spatial scales much smaller than those relevant to GCMs, their effects and the associated processes need to be parameterized in GCM simulations aimed at predicting future climate and energy needs. Specifically, this project’s objectives were to (1) characterize the surface turbulent fluxes, boundary layer thermodynamics, radiation field, and cloudiness associated with cumulus-topped marine boundary layers; (2) explore the similarities and differences in cloudiness and boundary layer conditions observed in the tropical and trade-wind regions; and (3) understand these similarities and differences by using a simple bulk boundary layer model. In addition to working toward achieving the project’s three objectives, we also worked on understanding the role played by different forcing mechanisms in maintaining turbulence within cloud-topped boundary layers. We focused our research on stratocumulus clouds during the first phase of the project, and cumulus clouds during the rest of the project. Below is a brief description of the manuscripts published in peer-reviewed journals that describe results from our analyses.

  9. Mapping forest tree species over large areas with partially cloudy Landsat imagery

    Science.gov (United States)

    Turlej, K.; Radeloff, V.

    2017-12-01

    Forests provide numerous services to natural systems and humankind, but which services a forest provides depends greatly on its tree species composition. That makes it important to track not only changes in forest extent, something that remote sensing excels at, but also to map tree species. The main goal of our work was to map tree species with Landsat imagery, and to identify how to maximize mapping accuracy by including partially cloudy imagery. Our study area covered one Landsat footprint (26/28) in Northern Wisconsin, USA, with temperate and boreal forests. We selected this area because it contains numerous tree species and variable forest composition, providing an ideal study area to test the limits of Landsat data. We quantified how species-level classification accuracy was affected by a) the number of acquisitions, b) the seasonal distribution of observations, and c) the amount of cloud contamination. We classified a single-year stack of Landsat-7 and -8 images with a decision tree algorithm to generate a map of dominant tree species at the pixel and stand level. We obtained three important results. First, we achieved producer's accuracies in the range of 70-80% and user's accuracies in the range of 80-90% for the most abundant tree species in our study area. Second, classification accuracy improved with more acquisitions, when observations were available from all seasons, and was best when images with up to 40% cloud cover were included. Finally, classifications for pure stands were 10 to 30 percentage points better than those for mixed stands. We conclude that including partially cloudy Landsat imagery allows mapping forest tree species with accuracies that were previously only possible for rare years with many cloud-free observations. Our approach thus provides important information for both forest management and science.
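    One step of the workflow above, screening acquisitions by cloud cover before stacking them for classification, can be sketched as follows. The scene dates and cloud fractions are invented; only the 40% cutoff comes from the study's finding:

```python
# Invented (date, percent cloud cover) metadata for one Landsat footprint.
scenes = [
    ("2015-04-12", 10), ("2015-06-03", 55), ("2015-07-21", 35),
    ("2015-09-08", 0), ("2015-10-26", 42), ("2016-01-15", 25),
]

# The study found accuracy was best when images with up to 40 % cloud
# cover were included, rather than only cloud-free acquisitions.
MAX_CLOUD = 40

# Keep every acquisition at or below the cutoff; these form the single-year
# stack fed to the decision tree classifier.
selected = [date for date, cloud in scenes if cloud <= MAX_CLOUD]
```

    The retained scenes span spring, summer, autumn and winter, matching the study's second finding that observations from all seasons improve species-level accuracy.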

  10. Particulate matter and plankton dynamics in the Ross Sea Polynya of Terra Nova Bay during the Austral Summer 1997/98

    Science.gov (United States)

    Fonda Umani, S.; Accornero, A.; Budillon, G.; Capello, M.; Tucci, S.; Cabrini, M.; Del Negro, P.; Monti, M.; De Vittor, C.

    2002-07-01

    The structure and variability of the plankton community, and the distribution and composition of suspended particulate matter, were investigated in the polynya of Terra Nova Bay (western Ross Sea) during the austral summer 1997/1998, with the ultimate objective of understanding the trophic control of carbon export from the upper water column. Sampling was conducted along a transect parallel to the shore, near the retreating ice edge at the beginning of December, closer to the coast at the beginning of February, and more offshore in late February. Hydrological casts and water sampling were performed at several depths to measure total particulate matter (TPM), particulate organic carbon (POC), biogenic silica (BSi), chlorophyll a (Chl a) and phaeopigment (Phaeo) concentrations. Subsamples were taken for counting autotrophic and heterotrophic pico- and nanoplankton and to assess the abundance and composition of microphyto- and microzooplankton. Statistical analysis identified two major groups of samples: the first included the most coastal surface samples of early December, characterized by the prevalence of autotrophic nanoplankton biomass; the second included all the remaining samples and was dominated by microphytoplankton. With regard to the relation of the plankton community composition to the biogenic suspended and sinking material, we identified the succession of three distinct periods. In early December Phaeocystis dominated the plankton assemblage in the well-mixed water column, while at the retreating ice edge a bloom of small diatoms (ND) was developing in the lens of superficial diluted water. Concentrations of biogenic particulates were generally low and confined to the uppermost layer. The very low downward fluxes, the near absence of faecal pellets and the high Chl a/Phaeo ratios suggested that the herbivorous food web was not yet established or, at least, was not working efficiently.
In early February the superficial pycnocline and the increased water

  11. Revisiting the Phase Curves of WASP-43b: Confronting Re-analyzed Spitzer Data with Cloudy Atmospheres

    DEFF Research Database (Denmark)

    Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier

    2018-01-01

...red noise due to intra-pixel sensitivity, which leads to greater fluxes emanating from the nightside of WASP-43b, thus reducing the tension between theory and data. On the theoretical front, we construct cloud-free and cloudy atmospheres of WASP-43b using our Global Circulation Model (GCM), THOR...

  12. Observed Spectral Invariant Behavior of Zenith Radiance in the Transition Zone Between Cloud-Free and Cloudy Regions

    Science.gov (United States)

    Marshak, A.; Knyazikhin, Y.; Chiu, C.; Wiscombe, W.

    2010-01-01

    The Atmospheric Radiation Measurement Program's (ARM) new Shortwave Spectrometer (SWS) looks straight up and measures zenith radiance at 418 wavelengths between 350 and 2200 nm. Because of its 1-sec sampling resolution, the SWS provides a unique capability to study the transition zone between cloudy and clear sky areas. A surprising spectral invariant behavior is found between ratios of zenith radiance spectra during the transition from cloudy to cloud-free atmosphere. This behavior suggests that the spectral signature of the transition zone is a linear mixture between the two extremes (definitely cloudy and definitely clear). The weighting function of the linear mixture is found to be a wavelength-independent characteristic of the transition zone. It is shown that the transition zone spectrum is fully determined by this function and zenith radiance spectra of clear and cloudy regions. This new finding may help us to better understand and quantify such physical phenomena as humidification of aerosols in the relatively moist cloud environment and evaporation and activation of cloud droplets.
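The linear-mixture behavior described above is easy to sketch numerically. The following is a minimal illustration with made-up toy spectra (not SWS data): given a definitely-clear and a definitely-cloudy zenith radiance spectrum, the wavelength-independent weighting function of a transition-zone spectrum can be recovered by least squares.

```python
# Illustrative sketch (toy numbers, not ARM/SWS data): estimate the
# wavelength-independent weight w in the linear-mixture model
#   r_mix(lambda) ~= w * r_cloudy(lambda) + (1 - w) * r_clear(lambda)
# by least squares over all wavelengths.

def mixture_weight(r_mix, r_clear, r_cloudy):
    """Least-squares estimate of w in r_mix = w*r_cloudy + (1-w)*r_clear."""
    # Rearranged: (r_mix - r_clear) = w * (r_cloudy - r_clear)
    num = sum((m - c) * (d - c) for m, c, d in zip(r_mix, r_clear, r_cloudy))
    den = sum((d - c) ** 2 for c, d in zip(r_clear, r_cloudy))
    return num / den

# Hypothetical 5-wavelength spectra: a pixel 30% of the way to fully cloudy.
r_clear  = [0.10, 0.12, 0.15, 0.11, 0.09]
r_cloudy = [0.60, 0.65, 0.70, 0.58, 0.55]
r_mix    = [0.3 * d + 0.7 * c for c, d in zip(r_clear, r_cloudy)]
print(round(mixture_weight(r_mix, r_clear, r_cloudy), 3))  # 0.3
```

Because the synthetic mixture is exactly linear, the fit recovers the weight exactly; with real spectra the residual of this fit indicates how well the two-endmember model holds.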

  13. Absorption of Sunlight by Water Vapor in Cloudy Conditions: A Partial Explanation for the Cloud Absorption Anomaly

    Science.gov (United States)

    Crisp, D.

    1996-01-01

The atmospheric radiative transfer algorithms used in most global general circulation models underestimate the globally-averaged solar energy absorbed by cloudy atmospheres by up to 25 W m⁻²... Here, a sophisticated atmospheric radiative transfer model was used to provide a more comprehensive description of the physical processes that contribute to the absorption of solar radiation by the Earth's atmosphere.

  14. Occurrence of ozone anomalies over cloudy areas in TOMS version-7 level-2 data

    Directory of Open Access Journals (Sweden)

    X. Liu

    2003-01-01

This study investigates anomalous ozone distributions over cloudy areas in Nimbus-7 (N7) and Earth Probe (EP) TOMS version-7 data and analyzes the causes of ozone anomaly formation. A 5°-longitude by 5°-latitude region is defined to contain a Positive Ozone Anomaly (POA) or Negative Ozone Anomaly (NOA) if the correlation coefficient between total ozone and reflectivity is > 0.5 or < -0.5, respectively. The average fractions of ozone anomalies among all cloud fields are 31.8 ± 7.7% and 35.8 ± 7.7% in the N7 and EP TOMS data, respectively. Some ozone anomalies are caused by ozone retrieval errors, and others are caused by actual geophysical phenomena. Large cloud-height errors are found in the TOMS version-7 algorithm in comparison to the Temperature Humidity Infrared Radiometer (THIR) cloud data. On average, cloud-top pressures are overestimated by ~200 hPa for high-altitude clouds (THIR cloud-top pressure < 200 hPa) and underestimated by ~150 hPa for low-altitude clouds (THIR cloud-top pressure > 750 hPa). Most tropical NOAs result from negative errors induced by large cloud-height errors, and most tropical POAs are caused by positive errors due to intra-cloud ozone absorption enhancement. However, positive and negative errors offset each other, reducing the ozone anomaly occurrence in TOMS data. Large ozone/reflectivity slopes for mid-latitude POAs show seasonal variation consistent with total ozone fluctuation, indicating that they result mainly from synoptic and planetary wave disturbances. POAs with an occurrence fraction of 30-60% occur in regions of marine stratocumulus off the west coast of South Africa and off the west coast of South America. Both the fractions and the ozone/reflectivity slopes of these POAs show seasonal variations consistent with those in tropospheric ozone. About half of the ozone/reflectivity slope can be explained by ozone retrieval errors over clear and cloudy areas. The remaining slope may result from there being more ozone production
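The anomaly criterion in the abstract above reduces to a per-cell correlation test. Here is a hedged sketch in plain Python (function names and the toy numbers are our own, not the paper's code): a grid cell is flagged POA or NOA according to the Pearson correlation between total ozone and reflectivity, using the ±0.5 threshold.

```python
# Sketch of the POA/NOA criterion: Pearson correlation between total ozone
# and cloud reflectivity within one 5°x5° cell, thresholded at +/-0.5.
# The sample values below are hypothetical, for illustration only.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def classify_cell(ozone, reflectivity):
    """Flag a grid cell as a positive/negative ozone anomaly, or neither."""
    r = pearson(ozone, reflectivity)
    if r > 0.5:
        return "POA"
    if r < -0.5:
        return "NOA"
    return "none"

# Hypothetical cell where total ozone (DU) rises with reflectivity -> POA.
print(classify_cell([280, 290, 300, 310], [0.2, 0.4, 0.6, 0.8]))  # POA
```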

  15. How to distinguish between cloudy mini-Neptunes and water/volatile-dominated super-Earths

    Energy Technology Data Exchange (ETDEWEB)

    Benneke, Björn; Seager, Sara, E-mail: bbenneke@mit.edu [Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2013-12-01

One of the most profound questions about the newly discovered class of low-density super-Earths is whether these exoplanets are predominately H₂-dominated mini-Neptunes or volatile-rich worlds with gas envelopes dominated by H₂O, CO₂, CO, CH₄, or N₂. Transit observations of the super-Earth GJ 1214b rule out cloud-free H₂-dominated scenarios, but are not able to determine whether the lack of deep spectral features is due to high-altitude clouds or the presence of a high mean molecular mass atmosphere. Here, we demonstrate that one can unambiguously distinguish between cloudy mini-Neptunes and volatile-dominated worlds based on the wing steepness and relative depths of absorption features in moderate-resolution near-infrared transmission spectra (R ∼ 100). In a numerical retrieval study, we show for GJ 1214b that an unambiguous distinction between a cloudy H₂-dominated atmosphere and a cloud-free H₂O atmosphere will be possible if the uncertainties in the spectral transit depth measurements can be reduced by a factor of ∼3 compared to the published Hubble Space Telescope Wide Field Camera 3 and Very Large Telescope transit observations by Berta et al. and Bean et al. We argue that the required precision for the distinction may be achievable with currently available instrumentation by stacking 10-15 repeated transit observations. We provide a scaling law that extends our quantitative results to other transiting super-Earths and Neptunes such as HD 97658b, 55 Cnc e, GJ 3470b and GJ 436b. The analysis in this work is performed using an improved version of our Bayesian atmospheric retrieval framework. The new framework not only constrains the gas composition and cloud/haze parameters, but also determines our confidence in having detected molecules and cloud/haze species through Bayesian model comparison. Using the Bayesian tool, we demonstrate quantitatively that the subtle transit depth variation in the Berta et al. data is

  16. Collaborative Research: Cloudiness transitions within shallow marine clouds near the Azores

    Energy Technology Data Exchange (ETDEWEB)

    Mechem, David B. [Univ. of Kansas, Lawrence, KS (United States). Atmospheric Science Program. Dept. of Geography and Atmospheric Science; de Szoeke, Simon P. [Oregon State Univ., Corvallis, OR (United States). College of Earth, Ocean, and Atmospheric Sciences; Yuter, Sandra E. [North Carolina State Univ., Raleigh, NC (United States). Dept. of Marine, Earth, and Atmospheric Sciences

    2017-01-15

Marine stratocumulus clouds are low, persistent, liquid phase clouds that cover large areas and play a significant role in moderating the climate by reflecting large quantities of incoming solar radiation. The deficiencies in simulating these clouds in global climate models are widely recognized. Much of the uncertainty arises from sub-grid scale variability in the cloud albedo that is not accurately parameterized in climate models. The Clouds, Aerosol and Precipitation in the Marine Boundary Layer (CAP–MBL) observational campaign and the ongoing ARM site measurements on Graciosa Island in the Azores aim to sample the Northeast Atlantic low cloud regime. These data represent the longest continuous research-quality cloud radar/lidar/radiometer/aerosol data set of open-ocean shallow marine clouds in existence. Data coverage from CAP–MBL and the series of cruises to the southeast Pacific culminating in VOCALS will both be of sufficient length to contrast the two low cloud regimes and explore the joint variability of clouds in response to several environmental factors implicated in cloudiness transitions. Our research seeks to better understand cloud system processes in an underexplored but climatologically important maritime region. Our primary goal is an improved physical understanding of low marine clouds on temporal scales of hours to days. It is well understood that aerosols, synoptic-scale forcing, surface fluxes, mesoscale dynamics, and cloud microphysics all play a role in cloudiness transitions. However, the relative importance of each mechanism as a function of different environmental conditions is unknown. To better understand cloud forcing and response, we are documenting the joint variability of observed environmental factors and associated cloud characteristics. In order to narrow the realm of likely parameter ranges, we assess the relative importance of parameter conditions based primarily on two criteria: how often the condition occurs (frequency

  17. Retrieving aerosol in a cloudy environment: aerosol product availability as a function of spatial resolution

    Directory of Open Access Journals (Sweden)

    L. A. Remer

    2012-07-01

The challenge of using satellite observations to retrieve aerosol properties in a cloudy environment is to prevent contamination of the aerosol signal by clouds, while maintaining sufficient aerosol product yield to satisfy specific applications. We investigate aerosol retrieval availability at different instrument pixel resolutions using the standard MODIS aerosol cloud mask applied to MODIS data, supplemented with a new GOES-R cloud mask applied to GOES data, for a domain covering North America and surrounding oceans. Aerosol product availability is not the same as the cloud-free fraction: it takes into account the techniques used in the MODIS algorithm to avoid clouds, reduce noise and maintain sufficient numbers of aerosol retrievals. The inherent spatial resolution of each instrument, 0.5×0.5 km for MODIS and 1×1 km for GOES, is systematically degraded to 1×1, 2×2, 1×4, 4×4 and 8×8 km resolutions and then analyzed as to how that degradation would affect the availability of an aerosol retrieval, assuming an aerosol product resolution of 8×8 km. The analysis is repeated, separately, for near-nadir pixels and those at larger view angles to investigate the effect of pixel growth at oblique angles on aerosol retrieval availability. The results show that as nominal pixel size increases, availability decreases, until at 8×8 km 70% to 85% of the retrievals available at 0.5 km, nadir, have been lost. The effect at oblique angles is to further decrease availability over land but increase availability over ocean, because sun glint is found at near-nadir view angles. Finer resolution sensors (i.e., 1×1, 2×2 or even 1×4 km) will retrieve aerosols in partly cloudy scenes significantly more often than sensors with nadir views of 4×4 km or coarser. Large differences in the results of the two cloud masks designed for MODIS aerosol and GOES cloud products strongly reinforce that cloud masks must be developed with specific purposes in mind and
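The resolution-degradation experiment described above can be illustrated with a toy mask. The sketch below is our own simplification, not the MODIS operational algorithm: fine clear/cloud pixels are aggregated into coarser "instrument pixels" (a coarse pixel counts as clear only if all fine pixels within it are clear), and a product cell is deemed available if a minimum clear fraction survives; the 25% threshold and the 4×4 scene are assumptions for illustration.

```python
# Toy illustration of aerosol-product availability vs. pixel resolution.
# Not the MODIS algorithm: the aggregation rule and the min_clear_frac
# threshold are simplifying assumptions.

def coarsen(mask, n):
    """Aggregate an r x c boolean clear-sky mask into n x n coarse pixels.
    A coarse pixel is clear only if every fine pixel inside it is clear."""
    rows, cols = len(mask), len(mask[0])
    return [[all(mask[i][j] for i in range(bi, bi + n) for j in range(bj, bj + n))
             for bj in range(0, cols, n)]
            for bi in range(0, rows, n)]

def availability(mask, min_clear_frac=0.25):
    """A product cell is 'available' if enough of its pixels remain clear."""
    flat = [p for row in mask for p in row]
    return sum(flat) / len(flat) >= min_clear_frac

# 4x4 toy scene (one product cell) with a single cloudy fine pixel (False).
scene = [[True] * 4 for _ in range(4)]
scene[0][0] = False
print(availability(coarsen(scene, 1)))  # fine pixels: 15/16 clear -> True
print(availability(coarsen(scene, 2)))  # 2x2 pixels: 3/4 clear -> True
print(availability(coarsen(scene, 4)))  # one 4x4 pixel, cloudy -> False
```

The toy reproduces the qualitative result of the abstract: the same scene loses availability as the nominal pixel grows, because one cloudy fine pixel contaminates an ever larger coarse pixel.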

  18. Impact of cloudiness on net ecosystem exchange of carbon dioxide in different types of forest ecosystems in China

    Directory of Open Access Journals (Sweden)

    M. Zhang

    2010-02-01

Clouds can significantly affect the carbon exchange process between forest ecosystems and the atmosphere by influencing the quantity and quality of solar radiation received at the ecosystem's surface and other environmental factors. In this study, we analyzed the effects of cloudiness on net ecosystem exchange of carbon dioxide (NEE) in a temperate broad-leaved Korean pine mixed forest at Changbaishan (CBS) and a subtropical evergreen broad-leaved forest at Dinghushan (DHS), based on flux data obtained during June–August from 2003 to 2006. The results showed that the response of NEE of forest ecosystems to photosynthetically active radiation (PAR) differed under clear skies and cloudy skies. Compared with clear skies, the light-saturated maximum photosynthetic rate (Pec,max) at CBS under cloudy skies during the mid-growing season (June to August) increased by 34%, 25%, 4% and 11% in 2003, 2004, 2005 and 2006, respectively. In contrast, Pec,max of the forest ecosystem at DHS was higher under clear skies than under cloudy skies from 2004 to 2006. When the clearness index (kt) ranged between 0.4 and 0.6, the NEE reached its maximum at both CBS and DHS. However, the NEE decreased more dramatically at CBS than at DHS when kt exceeded 0.6. The results indicate that cloudy sky conditions are beneficial to net carbon uptake in both the temperate and the subtropical forest ecosystem. Under clear skies, vapor pressure deficit (VPD) and air temperature increased due to strong light. These environmental conditions led to a greater decrease in gross ecosystem photosynthesis (GEP) and a greater increase in ecosystem respiration (Re) at CBS than at DHS. As a result, clear sky conditions caused more reduction of NEE in the temperate forest ecosystem than in the subtropical forest ecosystem. The response of NEE of different forest ecosystems to the changes in

  19. Acceleration of pH variation in cloudy apple juice using electrodialysis with bipolar membranes.

    Science.gov (United States)

    Lam Quoc, A; Lamarche, F; Makhlouf, J

    2000-06-01

The purpose of this study was to accelerate pH variation in cloudy apple juice using electrodialysis (ED). The testing was conducted using two ED configurations. The bipolar and cationic membrane configuration showed that reducing the spacing from 8 to 0.75 mm had little effect on treatment time, whereas stacking eight bipolar membranes reduced acidification time by 30%, although the treatment still took too long (21 min). Furthermore, it was not possible to acidify apple juice to a pH of 2.0 to completely inhibit enzymatic browning. The bipolar and anionic membrane configuration helped to accelerate the acidification step by a factor of 3, increasing the yield from 3.3 to 10 L of juice/m² membrane/min. Moreover, treatment time was inversely proportional to the size of the membrane stack. The speed at which the pH of acidified juice returned to its initial value was, however, 4 times slower than the speed of acidification, giving a yield of 2.5 L of juice/m² membrane/min. By accelerating the acidification step, ED treatment with bipolar and anionic membranes allows more effective control of polyphenol oxidase activity and more rapid control of juice browning at pH 2.0. Also, the treatment has very little effect on the chemical composition and organoleptic quality of apple juice.

  20. HST PanCET Program: A Cloudy Atmosphere for the Promising JWST Target WASP-101b

    Energy Technology Data Exchange (ETDEWEB)

    Wakeford, H. R.; Mandell, A. [Planetary Systems Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Stevenson, K. B.; Lewis, N. K. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Sing, D. K.; Evans, T. [Astrophysics Group, Physics Building, University of Exeter, Stocker Road, Exeter EX4 4QL (United Kingdom); López-Morales, M. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Marley, M. [NASA Ames Research Center, MS 245-5, Moffett Field, CA 94035 (United States); Kataria, T. [NASA Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Ballester, G. E. [Department of Planetary Sciences and Lunar and Planetary Laboratory, University of Arizona, 1541 E Univ. Boulevard, Tucson, AZ 85721 (United States); Barstow, J. [Physics and Astronomy, University College London, London (United Kingdom); Ben-Jaffel, L. [Institut d’Astrophysique de Paris, CNRS, UMR 7095 and Sorbonne Universités, UPMC Paris 6, 98 bis bd Arago, F-75014 Paris (France); Bourrier, V.; Ehrenreich, D. [Observatoire de l’Université de Genève, 51 chemin des Maillettes, CH-1290 Sauverny (Switzerland); Buchhave, L. A. [Centre for Star and Planet Formation, Niels Bohr Institute and Natural History Museum, University of Copenhagen, Øster Voldgade 5-7, DK-1350 Copenhagen K (Denmark); García Muñoz, A., E-mail: hannah.wakeford@nasa.gov [Zentrum für Astronomie und Astrophysik, Technische Universität Berlin, D-10623 Berlin (Germany); and others

    2017-01-20

We present results from the first observations of the Hubble Space Telescope (HST) Panchromatic Comparative Exoplanet Treasury program for WASP-101b, a highly inflated hot Jupiter and one of the community targets proposed for the James Webb Space Telescope (JWST) Early Release Science (ERS) program. From a single HST Wide Field Camera 3 observation, we find that the near-infrared transmission spectrum of WASP-101b contains no significant H₂O absorption features, and we rule out a clear atmosphere at 13σ. Therefore, WASP-101b is not an optimum target for a JWST ERS program aimed at observing strong molecular transmission features. We compare WASP-101b to the well-studied and nearly identical hot Jupiter WASP-31b. These twin planets show similar temperature–pressure profiles and atmospheric features in the near-infrared. We suggest that exoplanets in the same parameter space as WASP-101b and WASP-31b will also exhibit cloudy transmission spectral features. For future HST exoplanet studies, our analysis also suggests that a lower count limit needs to be exceeded per pixel on the detector in order to avoid unwanted instrumental systematics.

  1. Changes in cloudiness over the Amazon rainforests during the last two decades: diagnostic and potential causes

    Energy Technology Data Exchange (ETDEWEB)

    Arias, Paola A. [The University of Texas at Austin, Department of Geological Sciences, Austin, TX (United States); Universidad de Antioquia, Grupo de Ingenieria y Gestion Ambiental (GIGA), Medellin (Colombia); Jackson School of Geosciences, Geology Foundation, PO Box B, Austin, TX (United States); Fu, Rong [The University of Texas at Austin, Department of Geological Sciences, Austin, TX (United States); Hoyos, Carlos D. [Georgia Institute of Technology, School of Earth and Atmospheric Sciences, Atlanta, GA (United States); Li, Wenhong [Duke University, Division of Earth and Oceanic Sciences, Nicholas School of the Environment, Durham, NC (United States); Zhou, Liming [Georgia Institute of Technology, School of Earth and Atmospheric Sciences, Atlanta, GA (United States); National Science Foundation, Climate and Large Scale Dynamics Program, Arlington, VA (United States)

    2011-09-15

This study shows a decrease of seasonal mean convection and cloudiness and an increase of surface shortwave down-welling radiation during 1984-2007 over the Amazon rainforests, based on the analysis of satellite-retrieved cloud and surface radiative flux data. These changes are consistent with an increase in surface temperature, increased atmospheric stability, and a reduction of moisture transport to the Amazon based on in situ surface and upper-air meteorological data and reanalysis data. These changes appear to be linked to the expansion of the western Pacific warm pool during the December-February season; to the positive phase of the Atlantic Multidecadal Oscillation and an increase of SST over the eastern Pacific during the March-May season; and to an increase of the tropical Atlantic meridional SST gradient and an expansion of the western Pacific warm pool during the September-November season. The resultant increase of surface solar radiation during all but the dry season in the Amazon could contribute to the observed increases in rainforest growth during recent decades. (orig.)

  2. Modeling the Cloudy Atmospheres of Cool Stars, Brown Dwarfs and Hot Exoplanets

    DEFF Research Database (Denmark)

    Juncher, Diana

M-dwarfs are very attractive targets when searching for new exoplanets. Unfortunately, they are also very difficult to model, since their temperatures are low enough for dust clouds to form in their atmospheres. Because the properties of an exoplanet cannot be determined without knowing the properties of its host star, I have developed self-consistent cloudy atmosphere models that can be used to properly determine the stellar parameters of cool stars. With this enhanced model atmosphere code I have created a grid of cool, dusty atmosphere models ranging in effective temperature from Teff = 2000-3000 K. I have studied the formation and structure of their clouds and found that their synthetic spectra fit the observed spectra of mid- to late-type M-dwarfs and early-type L-dwarfs well. With additional development into even cooler regimes, they could be used to characterize the atmospheres of exoplanets and aid us in our search for the kind of chemical...

  3. Investigation on the dynamic behaviour of a parabolic trough power plant during strongly cloudy days

    International Nuclear Information System (INIS)

    Al-Maliki, Wisam Abed Kattea; Alobaid, Falah; Starkloff, Ralf; Kez, Vitali; Epple, Bernd

    2016-01-01

Highlights: • A detailed dynamic model of a parabolic trough solar thermal power plant is developed. • Simulated results are compared to experimental data from the real power plant. • Discrepancies between model results and real data are caused by the operation strategy. • The model strategy increased the operating hours of the power plant by around 2.5-3 h. - Abstract: The objective of this study is the development of a full-scale dynamic model of a parabolic trough power plant with a thermal storage system, operated by the Actividades de Construcción y Servicios Group in Spain. The model includes the solar field, the thermal storage system and the power block, and describes the heat transfer fluid and steam/water paths in detail. The parabolic trough power plant is modelled using the Advanced Process Simulation Software (APROS). To validate the model, the numerical results are compared to measured data obtained from “Andasol II” during strongly cloudy periods on summer days. The comparisons show a qualitative agreement between the dynamic simulation model and the measurements. The results confirm that the thermal storage enables the parabolic trough power plant to provide a constant power rate when stored energy is available for discharge, despite significant oscillations in the solar radiation.

  4. Neural network multispectral satellite images classification of volcanic ash plumes in a cloudy scenario

    Directory of Open Access Journals (Sweden)

    Matteo Picchiani

    2015-03-01

This work shows the potential use of neural networks in the characterization of eruptive events monitored by satellite, through fast and automatic classification of multispectral images. The algorithm has been developed for the MODIS instrument and can easily be extended to other similar sensors. Six classes have been defined, paying particular attention to image regions that represent the different surfaces that could possibly be found under volcanic ash clouds. Complex cloudy scenarios composed of images collected during the Icelandic eruptions of the Eyjafjallajökull (2010) and Grímsvötn (2011) volcanoes have been considered as test cases. A sensitivity analysis on the MODIS TIR and VIS channels has been performed to optimize the algorithm. The neural network has been trained with the first image of the dataset, while the remaining data have been considered as independent validation sets. Finally, the neural network classifier's results have been compared with maps classified with several interactive procedures performed in a consolidated operational framework. This comparison shows that the automatic methodology proposed achieves a very promising performance, with an overall accuracy greater than 84% for the Eyjafjallajökull event and equal to 74% for the Grímsvötn event.

  5. Photon path length distributions for cloudy skies – oxygen A-Band measurements and model calculations

    Directory of Open Access Journals (Sweden)

    O. Funk

    2003-03-01

This paper addresses the statistics underlying cloudy-sky radiative transfer (RT) by inspection of the distribution of the path lengths of solar photons. Recent studies indicate that this approach is promising, since it might reveal characteristics of the diffusion process underlying atmospheric radiative transfer (Pfeilsticker, 1999). Moreover, it uses an observable that is directly related to the atmospheric absorption and, therefore, of climatic relevance. However, these studies rely largely on the accuracy of the measurement of the photon path length distribution (PPD). This paper presents a refined analysis method based on high-resolution spectroscopy of the oxygen A-band. The method is validated against Monte Carlo simulated atmospheric spectra. Additionally, a new method to measure the effective optical thickness of cloud layers, based on fitting the measured differential transmissions with a 1-dimensional (discrete ordinate) RT model, is presented. These methods are applied to measurements conducted during the cloud radar inter-comparison campaign CLARE’98, which supplied the detailed cloud structure information required for the further analysis. For some exemplary cases, measured path length distributions and optical thicknesses are presented and backed by detailed RT model calculations. For all cases, reasonable PPDs can be retrieved and the effects of the vertical cloud structure are found. The inferred cloud optical thicknesses are in agreement with liquid water path measurements. Key words. Meteorology and atmospheric dynamics (radiative processes); instruments and techniques


  7. Effect of enzymatic mash treatment and storage on phenolic composition, antioxidant activity, and turbidity of cloudy apple juice.

    Science.gov (United States)

    Oszmiański, Jan; Wojdylo, Aneta; Kolniak, Joanna

    2009-08-12

The effects of different commercial enzymatic mash treatments on yield, turbidity, color, polyphenolic content, and the procyanidin content of the sediment of cloudy apple juice were studied. Addition of pectolytic enzymes to the mash had a positive effect on the production of cloudy apple juices, improving polyphenolic contents, especially procyanidins, and juice yields (from 68.3% in control samples to 77% after Pectinex Yield Mash). In summary, polyphenol contents in cloudy apple juices increased significantly after Pectinex Yield Mash, Pectinex Smash XXL, and Pectinex XXL maceration, but no effect was observed after Pectinex Ultra SP-L and Panzym XXL use, compared to the control samples. Polymeric procyanidins represented 50-70% of total polyphenols, but in the present study polymeric procyanidin contents were significantly lower in juices than in fruits and were also affected by enzymatic treatment (Pectinex AFP L-4 and Panzym Yield Mash) compared to the control samples. The enzymatic treatment decreased procyanidin content in most sediments, with the exception of Pectinex Smash XXL and Pectinex AFP L-4. Generally, in samples treated with pectinase, the radical scavenging activity of cloudy apple juices was increased compared to the untreated reference samples. The highest radical scavenging activity was associated with the Pectinex Yield Mash, Pectinex Smash XXL, and Pectinex XXL enzymes, and the lowest activity with Pectinex Ultra SP-L and Pectinex AFP L-4. However, in the case of enzymatic mash treatment, cloudy apple juices showed instability of turbidity and low viscosity. These results must be ascribed to the much greater hydrolysis of pectin, which is responsible for viscosity, by the enzymatic preparations. During 6 months of storage at 4 °C, only small changes in the analyzed parameters of apple juices were observed.

  8. Observation of the Spectrally Invariant Properties of Clouds in Cloudy-to-Clear Transition Zones During the MAGIC Field Campaign

    Science.gov (United States)

    Yang, Weidong; Marshak, Alexander; McBride, Patrick; Chiu, J. Christine; Knyazikhin, Yuri; Schmidt, K. Sebastian; Flynn, Connor; Lewis, Ernie R.; Eloranta, Edwin W.

    2016-01-01

We use the spectrally invariant method to study the variability of cloud optical thickness tau and droplet effective radius r_eff in transition zones (between the cloudy and clear-sky columns) observed from the Solar Spectral Flux Radiometer (SSFR) and the Shortwave Array Spectroradiometer-Zenith (SASZe) during the Marine ARM GPCI Investigation of Clouds (MAGIC) field campaign. The measurements from the SSFR and the SASZe are different; however, inter-instrument differences of self-normalized measurements (divided by their own spectra at a fixed time) are small. The spectrally invariant method approximates the spectra in the cloud transition zone as a linear combination of definitely clear and cloudy spectra, where the coefficients, slope and intercept, characterize the spectrally invariant properties of the transition zone. Simulation results from the SBDART (Santa Barbara DISORT Atmospheric Radiative Transfer) model demonstrate that (1) the slope of the visible band is positively correlated with the cloud optical thickness tau, while the intercept of the near-infrared band has a high negative correlation with the cloud drop effective radius r_eff, even without exact knowledge of tau; and (2) the above relations hold for all Solar Zenith Angles (SZA) and for cloud-contaminated skies. In observations using redundant measurements from the SSFR and SASZe, we find that during cloudy-to-clear transitions, (a) the slopes of the visible band decrease, and (b) the intercepts of the near-infrared band remain almost constant near cloud edges. The findings in simulations and observations suggest that, while the optical thickness decreases during the cloudy-to-clear transition, the cloud drop effective radius does not change as cloud edges are approached. These results support the hypothesis that inhomogeneous mixing dominates near cloud edges in the studied cases.
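The slope and intercept invoked above are, at heart, an ordinary least-squares fit of a transition-zone spectrum against the definitely-cloudy spectrum, evaluated band by band. A minimal sketch with our own toy numbers (not MAGIC data, not the SSFR/SASZe pipeline):

```python
# Toy illustration of the band-wise spectrally invariant regression:
# fit transition(lambda) = slope * cloudy(lambda) + intercept.
# Spectra below are hypothetical, chosen so the answer is known in advance.

def slope_intercept(y, x):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical band: transition built as 0.4*cloudy plus a spectrally flat
# clear-sky contribution of 0.6 * 0.1 = 0.06.
cloudy = [0.5, 0.6, 0.7, 0.8]
transition = [0.4 * c + 0.06 for c in cloudy]
s, b = slope_intercept(transition, cloudy)
print(round(s, 3), round(b, 3))  # 0.4 0.06
```

In the method's interpretation, the fitted slope carries the optical-thickness signal and the intercept (in the near-infrared band) carries the effective-radius signal, which is what makes tracking them across a cloud edge informative.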

  9. Analyzing Multidecadal Trends in Cloudiness Over the Subtropical Andes Mountains of South America Using a Regional Climate Model.

    Science.gov (United States)

    Zaitchik, B. F.; Russell, A.; Gnanadesikan, A.

    2016-12-01

    Satellite-based products indicate that many parts of South America have been experiencing increases in outgoing longwave radiation (OLR) and corresponding decreases in cloudiness over the last few decades, with the strongest trends occurring in the subtropical Andes Mountains - an area that is highly vulnerable to climate change due to its reliance on glacial melt for dry-season runoff. Changes in cloudiness may be contributing to increases in atmospheric temperature, thereby raising the freezing level height (FLH) - a critical geophysical parameter. Yet these trends are only partially captured in reanalysis products, while AMIP climate models generally show no significant trend in OLR over this timeframe, making it difficult to determine the underlying drivers. Therefore, controlled numerical experiments with a regional climate model are performed in order to investigate drivers of the observed OLR and cloudiness trends. The Weather Research and Forecasting model (WRF) is used here because it offers several advantages over global models, including higher resolution - a critical asset in areas of complex topography - as well as flexible physics, parameterization, and data assimilation capabilities. It is likely that changes in the mean states and meridional gradients of SSTs in the Pacific and Atlantic oceans are driving regional trends in clouds. A series of lower boundary manipulations are performed with WRF to determine to what extent changes in SSTs influence regional OLR.

  10. A continuum from clear to cloudy hot-Jupiter exoplanets without primordial water depletion.

    Science.gov (United States)

    Sing, David K; Fortney, Jonathan J; Nikolov, Nikolay; Wakeford, Hannah R; Kataria, Tiffany; Evans, Thomas M; Aigrain, Suzanne; Ballester, Gilda E; Burrows, Adam S; Deming, Drake; Désert, Jean-Michel; Gibson, Neale P; Henry, Gregory W; Huitson, Catherine M; Knutson, Heather A; des Etangs, Alain Lecavelier; Pont, Frederic; Showman, Adam P; Vidal-Madjar, Alfred; Williamson, Michael H; Wilson, Paul A

    2016-01-07

    Thousands of transiting exoplanets have been discovered, but spectral analysis of their atmospheres has so far been dominated by a small number of exoplanets and data spanning relatively narrow wavelength ranges (such as 1.1-1.7 micrometres). Recent studies show that some hot-Jupiter exoplanets have much weaker water absorption features in their near-infrared spectra than predicted. The low amplitude of water signatures could be explained by very low water abundances, which may be a sign that water was depleted in the protoplanetary disk at the planet's formation location, but it is unclear whether this level of depletion can actually occur. Alternatively, these weak signals could be the result of obscuration by clouds or hazes, as found in some optical spectra. Here we report results from a comparative study of ten hot Jupiters covering the wavelength range 0.3-5 micrometres, which allows us to resolve both the optical scattering and infrared molecular absorption spectroscopically. Our results reveal a diverse group of hot Jupiters that exhibit a continuum from clear to cloudy atmospheres. We find that the difference between the planetary radius measured at optical and infrared wavelengths is an effective metric for distinguishing different atmosphere types. The difference correlates with the spectral strength of water, so that strong water absorption lines are seen in clear-atmosphere planets and the weakest features are associated with clouds and hazes. This result strongly suggests that primordial water depletion during formation is unlikely and that clouds and hazes are the cause of weaker spectral signatures.

  11. Multi scale imaging of the Cloudy Zone in the Tazewell IIICD Meteorite

    Science.gov (United States)

    Einsle, J. F.; Harrison, R. J.; Nichols, C. I. O.; Blukis, R.; Midgley, P. A.; Eggeman, A.; Saghi, Z.; Bagot, P.

    2015-12-01

    Paleomagnetic studies of iron and stony-iron meteorites suggest that many small planetary bodies possessed molten cores resulting in the generation of a magnetic field. As these bodies cooled, Fe-Ni metal trapped within their mantles underwent a series of low-temperature transitions, leading to the familiar Widmanstatten intergrowth of kamacite and taenite. Adjacent to the kamacite/taenite interface is the so-called "cloudy zone" (CZ): a nanoscale intergrowth of tetrataenite islands in an Fe-rich matrix phase formed via spinodal decomposition. It has recently been shown (Bryson et al. 2015, Nature) that the CZ encodes a time-series record of the evolution of the magnetic field generated by the molten core of the planetary body. Extracting meaningful paleomagnetic data from the CZ relies on a thorough understanding of the 3D chemical and magnetic properties of the intergrowth, focusing on the interactions between the magnetically hard tetrataenite islands and the magnetically soft matrix. Here we present a multiscale study of the chemical and crystallographic make-up of the CZ in the Tazewell IIICD meteorite, using a range of advanced microscopy techniques. The results provide unprecedented insight into the architecture of the CZ, with implications for how the CZ acquires chemical transformation remanence during cooling on the parent body. Previous 2D transmission electron microscope studies of the CZ suggested that the matrix is an ordered Fe3Ni phase with the L12 structure. Interpretation of the electron diffraction patterns and chemical maps in these studies was hindered by a failure to resolve signals from overlapping island and matrix phases. Here we obtain high-resolution electron diffraction and 3D chemical maps with near-atomic resolution using a combination of scanning precession electron diffraction, 3D STEM EDS and atom probe tomography. Using this combined methodology we resolve for the first time the phenomena of secondary precipitation in the

  12. Revisiting the Phase Curves of WASP-43b: Confronting Re-analyzed Spitzer Data with Cloudy Atmospheres

    Science.gov (United States)

    Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Heng, Kevin

    2018-04-01

    Recently acquired Hubble and Spitzer phase curves of the short-period hot Jupiter WASP-43b make it an ideal target for confronting theory with data. On the observational front, we re-analyze the 3.6 and 4.5 μm Spitzer phase curves and demonstrate that our improved analysis better removes residual red noise due to intra-pixel sensitivity, which leads to greater fluxes emanating from the nightside of WASP-43b, thus reducing the tension between theory and data. On the theoretical front, we construct cloud-free and cloudy atmospheres of WASP-43b using our Global Circulation Model (GCM), THOR, which solves the non-hydrostatic Euler equations (compared to GCMs that typically solve the hydrostatic primitive equations). The cloud-free atmosphere produces a reasonable fit to the dayside emission spectrum. The multi-phase emission spectra constrain the cloud deck to be confined to the nightside and have a finite cloud-top pressure. The multi-wavelength phase curves are naturally consistent with our cloudy atmospheres, except for the 4.5 μm phase curve, which requires the presence of enhanced carbon dioxide in the atmosphere of WASP-43b. Multi-phase emission spectra at higher spectral resolution, as may be obtained using the James Webb Space Telescope, and a reflected-light phase curve at visible wavelengths would further constrain the properties of clouds in WASP-43b.

  13. Extent of Night Warming and Spatially Heterogeneous Cloudiness Differentiate Temporal Trend of Greenness in Mountainous Tropics in the New Century.

    Science.gov (United States)

    Yu, Mei; Gao, Qiong; Gao, Chunxiao; Wang, Chao

    2017-01-25

    Tropical forests have essential functions in global C dynamics but are vulnerable to changes in land cover and land use (LCLUC) and climate. The tropics of the Caribbean are experiencing a warming and drying climate and diverse LCLUC. However, large-scale studies to detect long-term trends of C and the mechanisms behind them are still rare. Using the MODIS Enhanced Vegetation Index (EVI), we investigated the greenness trend in the Greater Antilles, Caribbean, during 2000-2015, and analyzed the trend of vegetation patches without LCLUC to give prominence to climate impacts. We hypothesized that night warming and heavy cloudiness would reduce EVI in this mountainous tropical region. Over the 15 years, EVI decreased significantly in Jamaica, Haiti, the Dominican Republic, and Puerto Rico, but increased in Cuba partly due to its strong reforestation. Haiti had the largest decreasing trend because of continuous deforestation for charcoal. After LCLUC was excluded, the EVI trend still varied greatly, decreasing on the windward side but increasing on the leeward side of Puerto Rico. Nighttime warming reinforced by spatially heterogeneous cloudiness was found to correlate significantly and negatively with the EVI trend, and explained the spatial pattern of the latter. Although cooler daytime temperatures and increased rainfall might enhance EVI, nighttime warming dominated the climate impacts and differentiated the EVI trend.

  14. A comparative study between spiral-filter press and belt press implemented in a cloudy apple juice production process.

    Science.gov (United States)

    De Paepe, Domien; Coudijzer, Katleen; Noten, Bart; Valkenborg, Dirk; Servaes, Kelly; De Loose, Marc; Diels, Ludo; Voorspoels, Stefan; Van Droogenbroeck, Bart

    2015-04-15

    In this study, advantages and disadvantages of the innovative, low-oxygen spiral-filter press system were studied in comparison with the belt press, commonly applied in small and medium-sized enterprises for the production of cloudy apple juice. On the basis of equivalent throughput, a higher juice yield could be achieved with the spiral-filter press, and a more turbid juice with a higher content of suspended solids could be produced. The avoidance of enzymatic browning during juice extraction led to an attractive yellowish juice with an elevated phenolic content. Moreover, it was found that juice produced with the spiral-filter press demonstrates a higher retention of phenolic compounds during the downstream processing steps and storage. The results demonstrate the advantage of the spiral-filter press in comparison with the belt press in the production of a high-quality cloudy apple juice rich in phenolic compounds, without the use of oxidation-inhibiting additives. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Absorption of Sunlight by Water Vapor in Cloudy Conditions: A Partial Explanation for the Cloud Absorption Anomaly

    Science.gov (United States)

    Crisp, D.

    1997-01-01

    The atmospheric radiative transfer algorithms used in most global general circulation models underestimate the globally-averaged solar energy absorbed by cloudy atmospheres by up to 25 W/sq m. The origin of this anomalous absorption is not yet known, but it has been attributed to a variety of sources, including oversimplified or missing physical processes in these models, uncertainties in the input data, and even measurement errors. Here, a sophisticated atmospheric radiative transfer model was used to provide a more comprehensive description of the physical processes that contribute to the absorption of solar radiation by the Earth's atmosphere. We found that the amount of sunlight absorbed by a cloudy atmosphere is inversely proportional to the solar zenith angle and the cloud-top height, and directly proportional to the cloud optical depth and the water vapor concentration within the clouds. Atmospheres with saturated, optically-thick, low clouds absorbed about 12 W/sq m more than clear atmospheres. This accounts for about 1/2 to 1/3 of the anomalous absorption. Atmospheres with optically thick middle and high clouds usually absorb less than clear atmospheres. Because water vapor is concentrated within and below the cloud tops, this absorber is most effective at small solar zenith angles. An additional absorber that is distributed at or above the cloud tops is needed to produce the amplitude and zenith-angle dependence of the observed anomalous absorption.

  16. Changes of cloudiness over tropical land during the past few decades and its link to global climate change

    Science.gov (United States)

    Arias, P.; Fu, R.; Li, W.

    2007-12-01

    Tropical forests play a key role in determining the global carbon-climate feedback in the 21st century. Changes in rainforest growth and mortality rates, especially in the deep and least perturbed forest areas, have been consistently observed across the global tropics in recent years. Understanding the underlying causes of these changes, especially their links to global climate change, is especially important in determining the future of the tropical rainforests in the 21st century. Previous studies have mostly focused on the potential influences of elevated atmospheric CO2 and increasing surface temperature. Because rainforests in the wet tropical region are often light-limited, we explore whether cloudiness has changed and, if so, whether the change is consistent with that expected from changes in forest growth rate. We report an observational analysis examining the trends in annual average shortwave (SW) downwelling radiation, total cloud cover, and cumulus cover over tropical land regions, and link them with trends in convective available potential energy (CAPE). ISCCP data and radiosonde records available from the Department of Atmospheric Sciences of the University of Wyoming (http://www.weather.uwyo.edu/upperair/sounding.html) are used to study the trends. The period for the trend analysis is 1984-2004 for the ISCCP data and 1980-2006 for the radiosondes. The results for the Amazon rainforest region suggest a decreasing trend in total cloud and convective cloud cover, which results in an increase in downwelling SW radiation at the surface. These changes in total and convective clouds are consistent with a trend of decreasing CAPE and an elevated Level of Free Convection (LFC) height, as obtained from the radiosondes. All the above-mentioned trends are statistically significant based on the Mann-Kendall test at 95% confidence. These results consistently suggest that the downward surface solar radiation has been increasing since 1984, resulting from a decrease
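
    The Mann-Kendall test used in this abstract is a standard non-parametric trend test. A minimal sketch (ignoring the tie correction that the full test applies to the variance) on a toy declining series:

```python
import math
from itertools import combinations

def mann_kendall(x):
    """Minimal Mann-Kendall trend test (no tie correction).
    Returns the S statistic, the standard normal score Z, and whether
    the trend is significant at the two-sided 95% level (|Z| > 1.96)."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum((b > a) - (b < a) for a, b in combinations(x, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z, abs(z) > 1.96

# Toy monthly cloud-cover anomalies with a clear downward trend.
series = [10.0, 9.6, 9.8, 9.1, 8.7, 8.9, 8.2, 7.9, 7.5, 7.6, 7.0, 6.8]
s, z, significant = mann_kendall(series)
print(s, significant)  # -> -60 True
```

    A negative S with |Z| > 1.96 corresponds to a statistically significant decreasing trend at 95% confidence, the criterion cited above.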

  17. The renormalised π NN coupling constant and the P-wave phase shifts in the cloudy bag model

    International Nuclear Information System (INIS)

    Pearce, B.C.; Afnan, I.R.

    1986-02-01

    Most applications of the cloudy bag model to πN scattering involve unitarising the bare diagrams arising from the Lagrangian by iterating them in a Lippmann-Schwinger equation. However, analyses of the renormalisation of the coupling constant proceed by iterating the Lagrangian to a given order in the bare coupling constant. These two different approaches mean there is an inconsistency between the calculation of phase shifts and the calculation of renormalisation. A remedy to this problem is presented that has the added advantage of improving the fit to the phase shifts in the P11 channel. This is achieved by using physical values of the coupling constant in the crossed diagram, which reduces the repulsion rather than adds attraction. This approach can be justified by examining equations for the ππN system that incorporate three-body unitarity.

  18. The carbon Kuznets curve: A cloudy picture emitted by bad econometrics?

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Martin [Institute for Advanced Studies, Stumpergasse 56, A-1060 Vienna (Austria)

    2008-08-15

    We discuss several major econometric problems that have been ignored in the empirical environmental Kuznets curve (EKC) literature thus far. These are, first, the use of nonlinear transformations of integrated regressors and, second, in a panel context, cross-sectional dependence in the data. Both problems fundamentally invalidate the use of widely applied time series and panel unit root and cointegration techniques. We use the important special case of the relationship between GDP and CO{sub 2} (and SO{sub 2}) emissions to show and discuss in detail that the seemingly strong evidence for an inverted U-shaped relationship between these variables obtained with commonly used methods is entirely spurious and vanishes when resorting to estimation strategies that take the discussed problems into account. (author)

  19. Forward Model Studies of Water Vapor Using Scanning Microwave Radiometers, Global Positioning System, and Radiosondes during the Cloudiness Intercomparison Experiment

    International Nuclear Information System (INIS)

    Mattioli, Vinia; Westwater, Ed R.; Gutman, S.; Morris, Victor R.

    2005-01-01

    Brightness temperatures computed from five absorption models and radiosonde observations were analyzed by comparing them with measurements from three microwave radiometers at 23.8 and 31.4 GHz. Data were obtained during the Cloudiness Inter-Comparison experiment at the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) Program site in North-Central Oklahoma in 2003. The radiometers were calibrated using two procedures, the so-called instantaneous 'tipcal' method and an automatic self-calibration algorithm. Measurements from the radiometers were in agreement, with less than a 0.4-K difference during clear skies, when the instantaneous method was applied. Brightness temperatures from the radiometers and the radiosondes agreed to within 0.55 K when the most recent absorption models were considered. Precipitable water vapor (PWV) computed from the radiometers was also compared to the PWV derived from a Global Positioning System station that operates at the ARM site. The instruments agree to within 0.1 cm in PWV retrieval.

  20. The potential of kiwifruit puree as a clean label ingredient to stabilize high pressure pasteurized cloudy apple juice during storage.

    Science.gov (United States)

    Yi, Junjie; Kebede, Biniam; Kristiani, Kristiani; Buvé, Carolien; Van Loey, Ann; Grauwet, Tara; Hendrickx, Marc

    2018-07-30

    In the fruit juice industry, high pressure (HP) processing has become a commercial success. However, enzymatic browning, cloud loss, and flavor changes during storage remain challenges. The aim of this study was to combine kiwifruit puree and HP pasteurization (600 MPa/3 min) to stabilize cloudy apple juice during storage at 4 °C. A wide range of targeted and untargeted quality characteristics was evaluated using a multivariate approach. Due to its high ascorbic acid content and high viscosity, kiwifruit puree prevented enzymatic browning and phase separation of an apple-kiwifruit mixed juice. In addition, no clear changes in organic acids, viscosity, or particle size distribution were detected in the mixed juice during storage. Sucrose in both the apple and mixed juices decreased during storage, while glucose and fructose increased. The volatile changes of both juices behaved similarly, with mainly esters being degraded. Sensory evaluation demonstrated that consumers preferred the aroma of the mixed juice over that of the apple juice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Building the nucleus from quarks: The cloudy bag model and the quark description of the nucleon-nucleon wave functions

    International Nuclear Information System (INIS)

    Miller, G.A.

    1984-01-01

    In the Cloudy Bag Model, hadrons are treated as quarks confined in an M.I.T. bag that is surrounded by a cloud of pions. Computations of the charge and magnetism distributions of nucleons and baryons, pion-nucleon scattering, and the strong and electromagnetic decays of mesons are discussed. Agreement with experimental results is excellent if the nucleon bag radius is in the range between 0.8 and 1.1 fm. Underlying qualitative reasons which cause the pionic corrections to be of the obtained sizes are analyzed. If bags are of such reasonably large sizes, nucleon bags in nuclei will often come into contact. As a result, one needs to consider whether explicit quark degrees of freedom are relevant for Nuclear Physics. To study such possibilities, a model which treats a nucleus as a collection of baryons, pions and six-quark bags is discussed. In particular, the short-distance part of a nucleon-nucleon wave function is treated as six quarks confined in a bag. This approach is used to study the proton-proton weak interaction, the asymptotic D to S state ratio of the deuteron, the pp → dπ reaction, the charge density of ³He, the magnetic moments of ³He and ³H, and the ³He-³H binding energy difference. It is found that quark effects are very relevant for understanding nuclear properties.

  2. A study of the 3D radiative transfer effect in cloudy atmospheres

    Science.gov (United States)

    Okata, M.; Teruyuki, N.; Suzuki, K.

    2015-12-01

    Evaluation of the effect of clouds in the atmosphere is a significant problem in Earth's radiation budget studies, given the large uncertainties in cloud microphysics and optical properties. In this situation, we still need more investigations of 3D cloud radiative transfer problems using not only models but also satellite observational data. For this purpose, we have developed a 3D Monte Carlo radiative transfer code that is implemented with various functions compatible with the OpenCLASTR R-Star radiation code for radiance and flux computation, i.e. forward and backward tracing routines, a non-linear k-distribution parameterization (Sekiguchi and Nakajima, 2008) for broadband solar flux calculation, and the DM method for flux and the TMS method for upward radiance (Nakajima and Tanaka, 1998). We also developed a Minimum cloud Information Deviation Profiling Method (MIDPM) for the construction of a 3D cloud field from MODIS/AQUA and CPR/CloudSat data. We then selected the best-matched radar reflectivity factor profile from the library for each off-nadir MODIS pixel where a CPR profile is not available, by minimizing the deviation between library MODIS parameters and those at the pixel. In this study, we used three cloud microphysical parameters as key parameters for the MIDPM, i.e. effective particle radius, cloud optical thickness and cloud-top temperature, and estimated the 3D cloud radiation budget. We examined the discrepancies between satellite-observed and model-simulated radiances and the patterns of the three cloud microphysical parameters in order to study the effects of cloud optical and microphysical properties on the radiation budget of cloud-laden atmospheres.
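
    As a purely illustrative sketch of the Monte Carlo approach behind such codes (the code described above additionally handles scattering, 3D geometry, and k-distributions), direct-beam transmission through an absorbing layer can be estimated by sampling photon free paths from an exponential distribution:

```python
import math
import random

def mc_direct_transmission(tau, n_photons=200_000, seed=42):
    """Toy Monte Carlo estimate of direct-beam transmission through a
    purely absorbing, plane-parallel layer of optical thickness tau.
    Each photon's free path s is drawn from the exponential distribution
    p(s) = exp(-s); the photon escapes the layer if s exceeds tau."""
    rng = random.Random(seed)
    # 1 - rng.random() lies in (0, 1], so the log argument is never zero.
    escaped = sum(1 for _ in range(n_photons)
                  if -math.log(1.0 - rng.random()) > tau)
    return escaped / n_photons

t = mc_direct_transmission(1.0)
print(round(t, 3))  # close to exp(-1) ~ 0.368
```

    With 200,000 photons the statistical error is about 0.001, which is why the estimate converges on the analytic Beer-Lambert value exp(-tau).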

  3. Power Prediction and Technoeconomic Analysis of a Solar PV Power Plant by MLP-ABC and COMFAR III, considering Cloudy Weather Conditions

    Directory of Open Access Journals (Sweden)

    M. Khademi

    2016-01-01

    The prediction of power generated by photovoltaic (PV) panels in different climates is of great importance. The aim of this paper is to predict the output power of a 3.2 kW PV power plant using the MLP-ABC (multilayer perceptron-artificial bee colony) algorithm. Experimental data (ambient temperature, solar radiation, and relative humidity) were gathered at five-minute intervals from Tehran University's PV power plant from September 22nd, 2012, to January 14th, 2013. Following data validation, 10665 data sets, equivalent to 35 days, were used in the analysis. The output power was predicted using the MLP-ABC algorithm with a mean absolute percentage error (MAPE), mean bias error (MBE), and correlation coefficient (R2) of 3.7, 3.1, and 94.7%, respectively. The optimized configuration of the network consisted of two hidden layers; the first layer had four neurons and the second had two. A detailed economic analysis is also presented for sunny and cloudy weather conditions using COMFAR III software. The cost analysis indicated that the total investment's payback period would be 3.83 years in sunny periods and 4.08 years in cloudy periods. The results showed that the solar PV power plant is feasible from an economic point of view in both cloudy and sunny weather conditions.
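
    The reported error metrics and payback arithmetic can be sketched as follows (toy numbers, not the paper's data; COMFAR III performs a far more detailed discounted cash-flow analysis than the simple undiscounted payback shown here):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(abs((a - p) / a)
                                     for a, p in zip(actual, predicted))

def mbe(actual, predicted):
    """Mean bias error: positive means the model under-predicts on average."""
    return sum(a - p for a, p in zip(actual, predicted)) / len(actual)

def simple_payback_years(investment, annual_net_revenue):
    """Simple (undiscounted) payback period in years."""
    return investment / annual_net_revenue

# Toy measured vs. predicted PV output, in kW.
actual = [3.1, 2.8, 3.0, 2.5]
predicted = [3.0, 2.9, 3.1, 2.4]
print(round(mape(actual, predicted), 2))          # -> 3.53
print(simple_payback_years(12000.0, 3000.0))      # -> 4.0
```

    The paper's sunny vs. cloudy payback difference simply reflects the lower annual energy yield, and hence revenue, under cloudy conditions.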

  4. Accuracy of the hypothetical sky-polarimetric Viking navigation versus sky conditions: revealing solar elevations and cloudinesses favourable for this navigation method

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Blahó, Miklós; Egri, Ádám; Szabó, Gyula; Horváth, Gábor

    2017-09-01

    According to Thorkild Ramskou's theory proposed in 1967, under overcast and foggy skies, Viking seafarers might have used skylight polarization analysed with special crystals called sunstones to determine the position of the invisible Sun. After finding the occluded Sun with sunstones, its elevation angle had to be measured and its shadow had to be projected onto the horizontal surface of a sun compass. According to Ramskou's theory, these sunstones might have been birefringent calcite or dichroic cordierite or tourmaline crystals working as polarizers. It has frequently been claimed that this method might have been suitable for navigation even in cloudy weather. This hypothesis has been accepted and frequently cited for decades without any experimental support. In this work, we determined the accuracy of this hypothetical sky-polarimetric Viking navigation for 1080 different sky situations characterized by solar elevation θ and cloudiness ρ, the sky polarization patterns of which were measured by full-sky imaging polarimetry. We used the earlier measured uncertainty functions of the navigation steps 1, 2 and 3 for calcite, cordierite and tourmaline sunstone crystals, respectively, and the newly measured uncertainty function of step 4 presented here. As a result, we revealed the meteorological conditions under which Vikings could have used this hypothetical navigation method. We determined the solar elevations at which the navigation uncertainties are minimal at summer solstice and spring equinox for all three sunstone types. On average, calcite sunstone ensures a more accurate sky-polarimetric navigation than tourmaline and cordierite. However, in some special cases (generally at 35° ≤ θ ≤ 40°, 1 okta ≤ ρ ≤ 6 oktas for summer solstice, and at 20° ≤ θ ≤ 25°, 0 okta ≤ ρ ≤ 4 oktas for spring equinox), the use of tourmaline and cordierite results in smaller navigation uncertainties than that of calcite. Generally, under clear or less cloudy

  6. Simulations of cloudy hyperspectral infrared radiances using the HT-FRTC, a fast PC-based multipurpose radiative transfer code

    Science.gov (United States)

    Havemann, S.; Aumann, H. H.; Desouza-Machado, S. G.

    2017-12-01

    The HT-FRTC uses principal components which cover the spectrum at very high spectral resolution, allowing very fast line-by-line-like, hyperspectral and broadband simulations for satellite-based, airborne and ground-based sensors. Using data from IASI and from the Airborne Research Interferometer Evaluation System (ARIES) on board the FAAM BAE 146 aircraft, variational retrievals in principal component space with the HT-FRTC as forward model have demonstrated that valuable information on temperature and humidity profiles and on cirrus cloud properties can be obtained simultaneously. The NASA/JPL/UMBC cloudy RTM inter-comparison project has been working on a global dataset consisting of 7377 AIRS spectra. Initial simulations with the HT-FRTC for this dataset have been promising. A next step taken here is to investigate how sensitive the results are to different assumptions in the cloud modelling. One aspect is to study how assumptions about the microphysical and related optical properties of liquid/ice clouds affect the statistics of the agreement between model and observations. The other aspect concerns the cloud overlap scheme. Different schemes have been tested (maximum, random, maximum-random). As the computational cost increases linearly with the number of cloud columns, we investigate whether there is an optimal number of columns beyond which there is little additional benefit. During daytime, the high-wavenumber channels of AIRS are affected by solar radiation. With full scattering calculations using a monochromatic version of the Edwards-Slingo radiation code, the HT-FRTC can model solar radiation reasonably well, but full scattering calculations are relatively expensive. Pure Chou scaling, on the other hand, cannot properly describe scattering of solar radiation by clouds and requires additional refinements.
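
    The three overlap assumptions named above combine layer cloud fractions into a total cloud cover in different ways. A minimal sketch (using one common formulation of maximum-random overlap; the HT-FRTC's own implementation may differ) illustrates how the choice changes the result:

```python
from functools import reduce

def total_cover_maximum(layers):
    """Maximum overlap: all cloudy layers line up vertically."""
    return max(layers)

def total_cover_random(layers):
    """Random overlap: layers are statistically independent."""
    return 1.0 - reduce(lambda acc, c: acc * (1.0 - c), layers, 1.0)

def total_cover_max_random(layers):
    """Maximum-random overlap: vertically adjacent layers overlap
    maximally; groups separated by clear layers overlap randomly.
    Assumes no layer fraction equals 1 (avoids division by zero)."""
    clear = 1.0
    for i, c in enumerate(layers):
        prev = layers[i - 1] if i > 0 else 0.0
        clear *= (1.0 - max(c, prev)) / (1.0 - prev)
    return 1.0 - clear

# Toy layer cloud fractions, top to bottom; the middle layer is clear,
# so the two cloudy layers form separate (randomly overlapped) groups.
layers = [0.3, 0.0, 0.5]
print(total_cover_maximum(layers))              # -> 0.5
print(round(total_cover_random(layers), 2))     # -> 0.65
print(round(total_cover_max_random(layers), 2)) # -> 0.65
```

    With a clear layer in between, maximum-random reduces to random overlap; had the two cloudy layers been adjacent, it would reduce to maximum overlap, which is what makes the scheme a compromise between the two extremes.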

  7. Parameters for Estimation of Casualties from Ammonia (NH3), Tabun (GA), Soman (GD),Cyclosarin (GF) and Lewisite (L)

    Science.gov (United States)

    2015-09-01

    untreated casualty estimate, AMedP-7.5 uses the Injury Profile to determine the final outcome for each Injury Profile cohort. For a treated casualty...materially from that of an HD burn. Large, single coalescent blisters with sharply defined margins are filled with cloudy and opalescent fluid, and the

  8. Modulation of aerosol radiative forcing due to mixing state in clear and cloudy-sky: A case study from Delhi National Capital Region, India

    Science.gov (United States)

    Srivastava, Parul; Dey, Sagnik; Srivastava, Atul K.; Singh, Sachchidanand; Tiwari, Suresh; Agarwal, Poornima

    2016-04-01

    Aerosol properties change with the change in mixing state of aerosols, and therefore the mixing state is a source of uncertainty in aerosol radiative forcing (ARF) estimated from observations or by models assuming a specific mixing state. The problem is important in the Indo-Gangetic Basin, Northern India, where various aerosol types mix and show strong seasonal variations. Quantifying the modulation of ARF by mixing state is hindered by lack of knowledge of the exact aerosol composition. Hence, a detailed chemical composition analysis of aerosols for the Delhi National Capital Region (NCR) is first carried out. Aerosol composition is arranged quantitatively into five major aerosol types - accumulation dust, coarse dust, water soluble (WS), water insoluble (WINS), and black carbon (BC) (directly measured by Aethalometer). Eight different mixing cases - external mixing, internal mixing, and six combinations of core-shell mixing (BC over dust, WS over dust, WS over BC, BC over WS, WS over WINS, and BC over WINS; each of the combinations externally mixed with other species) - have been considered. The spectral aerosol optical properties - extinction coefficient, single scattering albedo (SSA) and asymmetry parameter (g) - for each of the mixing cases are calculated, and finally 'clear-sky' and 'cloudy-sky' ARF at the top-of-the-atmosphere (TOA) and surface are estimated using a radiative transfer model. Comparison of the surface-reaching flux for each of the cases with the MERRA downward shortwave surface flux reveals the most likely mixing state. 'BC-WINS+WS+Dust' shows the least deviation relative to MERRA during the pre-monsoon (MAMJ) and monsoon (JAS) seasons and hence is the most probable mixing state in those seasons. During the winter season (DJF), the 'BC-Dust+WS+WINS' case shows the closest match with MERRA, while external mixing is the most probable mixing state in the post-monsoon season (ON). The lowest values of both TOA and surface 'clear-sky' ARF are observed for the 'BC-WINS+WS+Dust' mixing case. 
TOA ARF is 0.28±2

  9. The cloudy bag model

    International Nuclear Information System (INIS)

    Thomas, A.W.

    1981-01-01

    Recent developments in the bag model, in which the constraints of chiral symmetry are explicitly included are reviewed. The model leads to a new understanding of the Δ-resonance. The connection of the theory with current algebra is clarified and implications of the model for the structure of the nucleon are discussed

  10. Measurements of Atmospheric CO2 Column in Cloudy Weather Conditions using An IM-CW Lidar at 1.57 Micron

    Science.gov (United States)

    Lin, Bing; Obland, Michael; Harrison, F. Wallace; Nehrir, Amin; Browell, Edward; Campbell, Joel; Dobler, Jeremy; Meadows, Bryon; Fan, Tai-Fang; Kooi, Susan

    2015-01-01

    This study evaluates the capability of atmospheric CO2 column measurements under cloudy conditions using an airborne intensity-modulated continuous-wave integrated-path-differential-absorption lidar operating in the 1.57-μm CO2 absorption band. The atmospheric CO2 column amounts from the aircraft to the tops of optically thick cumulus clouds and to the surface in the presence of optically thin clouds are retrieved from lidar data obtained during the summer 2011 and spring 2013 flight campaigns, respectively.

  11. The SunCloud project: An initiative for a development of a worldwide sunshine duration and cloudiness observations dataset

    Science.gov (United States)

    Sanchez-Lorenzo, A.

    2010-09-01

    One problem encountered when establishing the causes of global dimming and brightening is the limited number of long-term solar radiation series with accurate and calibrated measurements. For this reason, the analysis is often supported and extended with the use of other climatic variables such as sunshine duration and cloud cover. Specifically, sunshine duration is defined as the amount of time, usually expressed in hours, that direct solar radiation exceeds a certain threshold (usually taken at 120 W m-2). Consequently, this variable can be considered as an excellent proxy measure of solar radiation at interannual and decadal time scales, with the advantage that measurements of this variable were initiated in the late 19th century at different main meteorological stations worldwide. Nevertheless, detailed and up-to-date analyses of sunshine duration behavior on global or hemispheric scales are still missing. Thus, starting in September 2010 in the framework of different research projects, we will undertake a worldwide compilation of the longest daily or monthly sunshine duration series from the late 19th century until the present. Several quality control checks and homogenization methods will be applied to the generated sunshine dataset. The relationship between the more precise downward solar radiation series from the Global Energy Balance Archive (GEBA) and the homogenized sunshine series will be studied in order to reconstruct global and regional solar irradiance at the Earth's surface since the late 19th century. Since clouds are the main cause of interannual and decadal variability of radiation reaching the Earth's surface, as a complement to the long-term sunshine series we will also compile worldwide surface cloudiness observations. With this presentation we seek to encourage the climate community to contribute their own local datasets to the SunCloud project. The SunCloud Team: M. Wild, Institute for Atmospheric and Climate Science, ETH Zurich, Switzerland

  12. The neuron net method for processing the clear pixels and method of the analytical formulas for processing the cloudy pixels of POLDER instrument images

    Science.gov (United States)

    Melnikova, I.; Mukai, S.; Vasilyev, A.

    Data from remote measurements of reflected radiance by the POLDER instrument on board the ADEOS satellite are used to retrieve the optical thickness, single scattering albedo and phase function parameter of the cloudy and clear atmosphere. For clear-sky pixels, a perceptron neural network is used that, from input values of multiangle radiance and solar incidence angle, yields the surface albedo, optical thickness, single scattering albedo and phase function parameter; the last two parameters are determined as optical averages for the atmospheric column. Solar radiance calculations with the MODTRAN-3 code, taking multiple scattering into account, were performed for neural network training, with all of the above parameters varied randomly on the basis of statistical models of possible variations of the measured parameters. Results of processing one frame of remote observations consisting of 150,000 pixels are presented. The methodology allows operational determination of the optical characteristics of both cloudy and clear atmospheres; further interpretation of these results makes it possible to extract information about the total content of atmospheric aerosols and absorbing gases and to create models of the real cloudiness. For cloudy pixels, an analytical method of interpretation based on asymptotic formulas of multiple scattering theory is applied to the remote observations of reflected radiance. Details of the methodology and error analysis were published and discussed earlier; here we present results of data processing at a pixel size of 6x6 km. In many earlier studies the optical thickness was evaluated under the assumption of conservative scattering, but in the case of true absorption in clouds large errors in the derived parameter are possible. The simultaneous retrieval of two parameters at every wavelength independently is an advantage over those earlier studies. 
The analytical methodology is based on the transfer theory asymptotic

  13. Development of dual stream PCRTM-SOLAR for fast and accurate radiative transfer modeling in the cloudy atmosphere with solar radiation

    Science.gov (United States)

    Yang, Q.; Liu, X.; Wu, W.; Kizer, S.; Baize, R. R.

    2016-12-01

    A fast and accurate radiative transfer model is key for satellite data assimilation and observing system simulation experiments in numerical weather prediction and climate study applications. We proposed and developed a dual-stream PCRTM-SOLAR model that simulates radiative transfer in the cloudy atmosphere with solar radiation quickly and accurately. Multiple scattering by multiple layers of clouds/aerosols is included in the model. The root-mean-square errors are usually less than 5×10-4 mW/(cm2 sr cm-1). The computation speed is 3 to 4 orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This model will enable a vast new set of scientific calculations that were previously limited by the computational expense of available radiative transfer models.

  14. Quality Of Cloudy Plum Juice Produced From Fresh Fruit Of Prunus Domestica L. – The Effect Of Cultivar And Enzyme Treatment

    Directory of Open Access Journals (Sweden)

    Zbrzeźniak Monika

    2015-12-01

    Full Text Available The quality of cloudy juices produced from two plum cultivars differing in chemical characteristics and native polyphenol oxidase (PPO) activity was studied in relation to the specific pectinolytic activity of enzyme preparations used for fresh fruit maceration before pressing. Process effectiveness, expressed as juice yield, turbidity and the rate of transfer of anthocyanins and polyphenols, was determined for five different enzyme preparations, whose activity was also analysed. Juice yields obtained after 1 hour of mash maceration (50 ºC, 100 g·t−1) were between 86.6 and 95.4%. The anthocyanin content of the obtained juices strongly depended on the cultivar and ranged from 26 to 50 mg·L−1 for ‘Promis’, and from 269 to 289 mg·L−1 for ‘Čačanska Najbolja’, which could be related to the differences in the measured PPO activity (175.4 and 79.8 nkat·g−1, respectively). The type of enzyme preparation strongly affected the degradation rate of anthocyanins during juice processing. Peonidin-3-rutinoside proved to be the most stable during plum juice production, in contrast to cyanidin-3-glucoside. Irrespective of the cultivar, the juice prepared with the mixture of Rohapect PTE + Rohament PL (2 : 1) showed the highest turbidity among the investigated combinations. The results suggest that for the production of cloudy plum juice a preparation with low pectin methyl esterase and polygalacturonase activities and high pectin lyase activity could be recommended.

  15. A Lookup-Table-Based Approach to Estimating Surface Solar Irradiance from Geostationary and Polar-Orbiting Satellite Data

    Directory of Open Access Journals (Sweden)

    Hailong Zhang

    2018-03-01

    Full Text Available Incoming surface solar irradiance (SSI) is essential for calculating Earth’s surface radiation budget and is a key parameter for terrestrial ecological modeling and climate change research. Remote sensing images from geostationary and polar-orbiting satellites provide an opportunity for SSI estimation through directly retrieving atmospheric and land-surface parameters. This paper presents a new scheme for estimating SSI from the visible and infrared channels of geostationary meteorological and polar-orbiting satellite data. Aerosol optical thickness and cloud microphysical parameters were retrieved from Geostationary Operational Environmental Satellite (GOES) system images by interpolating lookup tables of clear and cloudy skies, respectively. SSI was estimated using pre-calculated offline lookup tables with different atmospheric input data for clear and cloudy skies. The lookup tables were created via the comprehensive radiative transfer model, Santa Barbara Discrete Ordinate Radiative Transfer (SBDART), to balance computational efficiency and accuracy. The atmospheric attenuation effects considered in our approach were water vapor absorption and aerosol extinction for clear skies, while cloud parameters were the only atmospheric input for cloudy-sky SSI estimation. The approach was validated using one-year pyranometer measurements from seven stations in the SURFRAD (SURFace RADiation budget) network. The results of the comparison for 2012 showed that the estimated SSI agreed with ground measurements with correlation coefficients of 0.94, 0.69, and 0.89 and biases of 26.4 W/m2, −5.9 W/m2, and 14.9 W/m2 for clear-sky, cloudy-sky, and all-sky conditions, respectively. The overall root mean square error (RMSE) of instantaneous SSI was 80.0 W/m2 (16.8%), 127.6 W/m2 (55.1%), and 99.5 W/m2 (25.5%) for clear-sky, cloudy-sky (overcast and partly cloudy sky), and all-sky (clear-sky and cloudy-sky) conditions, respectively. A comparison with other state
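    The lookup-table step described above reduces to multidimensional interpolation in a precomputed irradiance table. The sketch below is a minimal Python illustration, assuming a hypothetical two-dimensional table indexed by solar zenith angle and aerosol optical thickness; the grid and irradiance values are invented for the example and are not SBDART output.

```python
import numpy as np

# Hypothetical clear-sky SSI lookup table (W/m2), indexed by solar zenith
# angle (rows) and aerosol optical thickness (columns). A real table would
# be precomputed offline with a radiative transfer model such as SBDART.
zenith_grid = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # degrees
aot_grid = np.array([0.05, 0.2, 0.5, 1.0])              # unitless
ssi_table = np.array([
    [1005.0, 960.0, 880.0, 780.0],
    [ 950.0, 905.0, 830.0, 735.0],
    [ 790.0, 750.0, 685.0, 600.0],
    [ 520.0, 490.0, 445.0, 385.0],
    [ 180.0, 168.0, 150.0, 128.0],
])

def interp_ssi(zenith, aot):
    """Bilinear interpolation of the SSI lookup table."""
    # Locate the grid cell containing the query point.
    i = int(np.clip(np.searchsorted(zenith_grid, zenith) - 1, 0, len(zenith_grid) - 2))
    j = int(np.clip(np.searchsorted(aot_grid, aot) - 1, 0, len(aot_grid) - 2))
    # Fractional position inside the cell along each axis.
    tz = (zenith - zenith_grid[i]) / (zenith_grid[i + 1] - zenith_grid[i])
    ta = (aot - aot_grid[j]) / (aot_grid[j + 1] - aot_grid[j])
    # Interpolate along AOT on both bounding zenith rows, then between rows.
    top = (1 - ta) * ssi_table[i, j] + ta * ssi_table[i, j + 1]
    bot = (1 - ta) * ssi_table[i + 1, j] + ta * ssi_table[i + 1, j + 1]
    return (1 - tz) * top + tz * bot

ssi = interp_ssi(30.0, 0.3)  # interpolated irradiance for an off-grid point
```

    Operational schemes interpolate over more dimensions (water vapor, cloud optical thickness, surface albedo), but the bilinear pattern generalizes directly.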

  16. Effect of pectinase treatment on extraction of antioxidant phenols from pomace, for the production of puree-enriched cloudy apple juices.

    Science.gov (United States)

    Oszmiański, Jan; Wojdyło, Aneta; Kolniak, Joanna

    2011-07-15

    Effects of pomace maceration on yield, turbidity, cloud stability, composition of phenolics, antioxidant activity and colour properties were studied, to evaluate the potential applicability of enzyme preparations in puree-enriched cloudy apple juice production. The yield of mixed juice and puree from pomace obtained in the enzymatic processing of apple ranged from 92.3% to 95.3%, significantly higher than the yield from the control without enzymatic pomace treatment (81.8%). Higher turbidity was obtained upon pomace treatment with Pectinex XXL and Pectinex Ultra SPL enzymes. The total content of phenolic compounds in apple pomace was higher than in raw juices (1520 mg/kg and 441 mg/L, respectively). The total polyphenol yields were higher in juices treated with Pectinex AFP L-4, Pectinex Yield Mash and Pectinex XXL, as compared to the control treatment. During 6 months of storage, a significant change was observed in the content of polyphenols, especially in procyanidin fractions. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  17. Estimation of spectral solar radiation based on global insolation and characteristics of spectral solar radiation on a tilt surface; Zenten nissharyo ni motozuku zenten nissha supekutoru no suitei to keishamen bunko tokusei

    Energy Technology Data Exchange (ETDEWEB)

    Baba, H; Kanayama, K; Endo, N; Koromohara, K; Takayama, H [Kitami Institute of Technology, Hokkaido (Japan)

    1996-10-27

    Use of global insolation for estimating the corresponding spectral distribution is proposed. Measurements of the global insolation spectrum throughout a year were compiled for clear days and cloudy days, binned in 100 W/m2 steps, to clarify the spectral distribution. Global insolation on a clear day was governed mainly by solar elevation. The global insolation spectral distribution for solar elevations of 15° or higher was similar to Bird's model. Under a cloudy sky, energy density was lower in the region of wavelengths longer than the peak wavelength of 0.46 μm, and the distribution curve was sharper than that under a clear sky. Values given by Bird's model were larger than measured values in the wavelength range of 0.6-1.8 μm, which was attributed to absorption by water vapor. From the standard spectral distribution charts for the clear and cloudy sky, and from the dimensionless spectral distributions obtained by dividing them by their peak values, spectral distributions could be estimated from insolation quantities for clear, cloudy and other sky conditions. The spectral characteristics of solar radiation on a tilted surface obtained from Bird's model agreed with measured values at inclination angles of 60° or smaller. 6 refs., 10 figs., 1 tab.

  18. Quantifying Forest and Coastal Disturbance from Industrial Mining Using Satellite Time Series Analysis Under Very Cloudy Conditions

    Science.gov (United States)

    Alonzo, M.; Van Den Hoek, J.; Ahmed, N.

    2015-12-01

    The open-pit Grasberg mine, located in the highlands of Western Papua, Indonesia, and operated by PT Freeport Indonesia (PT-FI), is among the world's largest in terms of copper and gold production. Over the last 27 years, PT-FI has used the Ajkwa River to transport an estimated 1.3 billion tons of tailings from the mine into the so-called Ajkwa Deposition Area (ADA). The ADA is the product of aggradation and lateral expansion of the Ajkwa River into the surrounding lowland rainforest and mangroves, which include species important to the livelihoods of indigenous Papuans. Mine tailings that do not settle in the ADA disperse into the Arafura Sea where they increase levels of suspended particulate matter (SPM) and associated concentrations of dissolved copper. Despite the mine's large-scale operations, the ecological impacts of mine tailings deposition on the forest and estuarial ecosystems have received minimal formal study. While ground-based inquiries are nearly impossible due to access restrictions, assessment via satellite remote sensing is promising but hindered by extreme cloud cover. In this study, we characterize ridgeline-to-coast environmental impacts along the Ajkwa River, from the Grasberg mine to the Arafura Sea between 1987 and 2014. We use "all available" Landsat TM and ETM+ images collected over this time period to both track pixel-level vegetation disturbance and monitor changes in coastal SPM levels. Existing temporal segmentation algorithms are unable to assess both acute and protracted trajectories of vegetation change due to pervasive cloud cover. In response, we employ robust, piecewise linear regression on noisy vegetation index (NDVI) data in a manner that is relatively insensitive to atmospheric contamination. Using this disturbance detection technique we constructed land cover histories for every pixel, based on 199 image dates, to differentiate processes of vegetation decline, disturbance, and regrowth. 
Using annual reports from PT-FI, we show
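    The "robust, piecewise linear regression" idea described in this record can be sketched with a deliberately simplified version: synthetic NDVI data with a single disturbance onset, and a two-segment least-squares fit whose breakpoint is chosen by exhaustive search. This is an illustration under those assumptions, not the authors' algorithm.

```python
import numpy as np

# Synthetic annual NDVI series: stable forest through 2000, then a steady
# decline, with small noise standing in for atmospheric contamination.
years = np.arange(1987, 2015).astype(float)
rng = np.random.default_rng(0)
ndvi = np.where(years <= 2000, 0.8, 0.8 - 0.03 * (years - 2000))
ndvi = ndvi + rng.normal(0.0, 0.005, years.size)

def two_segment_fit(x, y):
    """Fit two least-squares line segments; return the year at which the
    second segment begins, minimising the combined squared residuals."""
    best_b, best_ssr = None, np.inf
    for b in range(2, len(x) - 2):          # each segment keeps >= 2 points
        ssr = 0.0
        for xs, ys in ((x[:b], y[:b]), (x[b:], y[b:])):
            coef = np.polyfit(xs, ys, 1)    # straight-line fit per segment
            ssr += float(np.sum((ys - np.polyval(coef, xs)) ** 2))
        if ssr < best_ssr:
            best_b, best_ssr = b, ssr
    return x[best_b]

breakpoint_year = two_segment_fit(years, ndvi)  # estimated disturbance onset
```

    A per-pixel history would repeat this over many candidate segments and add robustness weights against cloud-contaminated observations.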

  19. Distributional shift of urea production site from the extraembryonic yolk sac membrane to the embryonic liver during the development of cloudy catshark (Scyliorhinus torazame).

    Science.gov (United States)

    Takagi, Wataru; Kajimura, Makiko; Tanaka, Hironori; Hasegawa, Kumi; Ogawa, Shuntaro; Hyodo, Susumu

    2017-09-01

    Urea is an essential osmolyte for marine cartilaginous fishes. Adult elasmobranchs and holocephalans are known to actively produce urea in the liver, muscle and other extrahepatic organs; however, osmoregulatory mechanisms in the developing cartilaginous fish embryo with an undeveloped urea-producing organ are poorly understood. We recently described the contribution of extraembryonic yolk sac membranes (YSM) to embryonic urea synthesis during the early developmental period of the oviparous holocephalan elephant fish (Callorhinchus milii). In the present study, to test whether urea production in the YSM is a general phenomenon among oviparous Chondrichthyes, we investigated gene expression and activities of ornithine urea cycle (OUC) enzymes together with urea concentrations in embryos of the elasmobranch cloudy catshark (Scyliorhinus torazame). The intracapsular fluid, in which the catshark embryo develops, had a similar osmolality to seawater, and embryos maintained a high concentration of urea at levels similar to that of adult plasma throughout development. Relative mRNA expressions and activities of catshark OUC enzymes were significantly higher in YSM than in embryos until stage 32. Concomitant with the development of the embryonic liver, the expression levels and activities of OUC enzymes were markedly increased in the embryo from stage 33, while those of the YSM decreased from stage 32. The present study provides further evidence that the YSM contributes to embryonic urea homeostasis until the liver and other extrahepatic organs become fully functional, and that urea-producing tissue shifts from the YSM to the embryonic liver in the late developmental period of oviparous marine cartilaginous fishes. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A Cloudy View of Exoplanets

    Science.gov (United States)

    Deming, Drake

    2010-01-01

    The lack of absorption features in the transmission spectrum of exoplanet GJ1214b rules out a hydrogen-rich atmosphere for the planet. It is consistent with an atmosphere rich in water vapour or abundant in clouds.

  1. Albedo and estimates of net radiation for green beans under polyethylene cover and field conditions

    International Nuclear Information System (INIS)

    Souza, J.L. de; Escobedo, J.F.; Tornero, M.T.T.

    1999-01-01

    This paper describes the albedo (r) and estimates of net radiation and global solar irradiance for a green bean crop (Phaseolus vulgaris L.), cultivated in a greenhouse with polyethylene cover and under field conditions, in Botucatu, SP, Brazil (22° 54' S; 48° 27' W; 850 m). The global solar irradiance (Rg) and reflected solar radiation (Rr) were used to estimate the albedo as the ratio between Rr and Rg. Diurnal curves of albedo were obtained for days with clear sky and partially cloudy conditions, for different phenological stages of the crop. The albedo varied with solar elevation, the environment and the phenological stage, while cloudiness had almost no influence on its diurnal course. Radiation estimates were made by linear regression, using the global solar irradiance (Rg) and net short-wave radiation (Rc) as independent variables. All radiation estimates showed better adjustment for specific phenological periods than for the entire crop growing cycle. The net radiation in the greenhouse was estimated from the global solar irradiance measured under field conditions. (author)
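    The two computations in this record, the albedo ratio and the regression-based net radiation estimate, can be sketched with hypothetical hourly values; the numbers below are illustrative and are not the Botucatu measurements.

```python
import numpy as np

# Hypothetical hourly fluxes (W/m2) for one clear day: global solar
# irradiance Rg, reflected solar radiation Rr, and net radiation Rn.
Rg = np.array([120.0, 340.0, 560.0, 720.0, 780.0, 700.0, 520.0, 300.0, 110.0])
Rr = np.array([ 30.0,  78.0, 123.0, 151.0, 160.0, 147.0, 112.0,  69.0,  28.0])
Rn = np.array([ 55.0, 190.0, 330.0, 432.0, 470.0, 418.0, 305.0, 165.0,  48.0])

# Albedo as the hour-by-hour ratio of reflected to global irradiance.
albedo = Rr / Rg

# Net radiation estimated by linear regression on Rg: Rn = a*Rg + b.
a, b = np.polyfit(Rg, Rn, 1)
```

    In practice the fit would be done separately per phenological period, which is why the paper reports better adjustment for period-specific regressions.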

  2. A Parameterization for Land-Atmosphere-Cloud Exchange (PLACE): Documentation and Testing of a Detailed Process Model of the Partly Cloudy Boundary Layer over Heterogeneous Land.

    Science.gov (United States)

    Wetzel, Peter J.; Boone, Aaron

    1995-07-01

    This paper presents a general description of, and demonstrates the capabilities of, the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE). The PLACE model is a detailed process model of the partly cloudy atmospheric boundary layer and underlying heterogeneous land surfaces. In its development, particular attention has been given to three of the model's subprocesses: the prediction of boundary layer cloud amount, the treatment of surface and soil subgrid heterogeneity, and the liquid water budget. The model includes a three-parameter nonprecipitating cumulus model that feeds back to the surface and boundary layer through radiative effects. Surface heterogeneity in the PLACE model is treated both statistically and by resolving explicit subgrid patches. The model maintains a vertical column of liquid water that is divided into seven reservoirs, from the surface interception store down to bedrock. Five single-day demonstration cases are presented, in which the PLACE model was initialized, run, and compared to field observations from four diverse sites. The model is shown to predict cloud amount well in these cases, while predicting the surface fluxes with similar accuracy. A slight tendency to underpredict boundary layer depth is noted in all cases. Sensitivity tests were also run using anemometer-level forcing provided by the Project for Inter-comparison of Land-surface Parameterization Schemes (PILPS). The purpose is to demonstrate the relative impact of heterogeneity of surface parameters on the predicted annual mean surface fluxes. Significant sensitivity to subgrid variability of certain parameters is demonstrated, particularly to parameters related to soil moisture. A major result is that the PLACE-computed impact of total (homogeneous) deforestation of a rain forest is comparable in magnitude to the effect of imposing heterogeneity of certain surface variables, and is similarly comparable to the overall variance among the other PILPS participant models. Were

  3. Inter-comparison of different models for estimating clear sky solar global radiation for the Negev region of Israel

    International Nuclear Information System (INIS)

    Ianetz, Amiran; Lyubansky, Vera; Setter, Ilan; Kriheli, Boris; Evseev, Efim G.; Kudish, Avraham I.

    2007-01-01

    Solar global radiation is a function of solar altitude, site altitude, albedo, atmospheric transparency and cloudiness, whereas solar global radiation on a clear day is defined such that it is a function of all the abovementioned parameters except cloudiness. Consequently, analysis of the relative magnitudes of solar global radiation and solar global radiation on a clear day provides a platform for studying the influence of cloudiness on solar global radiation. The Iqbal filter for determining the day type has been utilized to calculate the monthly average clear day solar global radiation at three sites in the Negev region of Israel. An inter-comparison between four models for estimating clear sky solar global radiation at the three sites was made. The relative accuracy of the four models was determined by comparing the monthly average daily clear sky solar global radiation to that determined using the Iqbal filter. The analysis was performed on databases consisting of measurements made during the time interval of January 1991 to December 2004. The monthly average daily clear sky solar global radiation determined by the Berlynd model was found to give the best agreement with that determined using the Iqbal filter. The Berlynd model was then utilized to calculate a daily clear day index, Kc, which is defined as the ratio of the daily solar global radiation to the daily clear day solar global radiation. It is suggested that this index be used as an indication of the degree of cloudiness. Linear regression analysis was performed on the individual monthly databases for each site to determine the correlation between the daily clear day index and the daily clearness index, KT.
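    The clear day index and its regression against the clearness index amount to two flux ratios and a linear fit. A short sketch with hypothetical daily totals standing in for the Negev measurements:

```python
import numpy as np

# Hypothetical daily totals (MJ/m2): measured global radiation H, clear-day
# global radiation H_clear (e.g. from the Berlynd model), and
# extraterrestrial radiation H0.
H       = np.array([28.1, 26.5, 14.2, 22.8, 29.0,  9.6, 25.3])
H_clear = np.array([29.0, 28.8, 28.5, 28.6, 29.2, 28.4, 28.9])
H0      = np.array([40.1, 40.0, 39.8, 39.9, 40.2, 39.7, 40.0])

Kc = H / H_clear   # daily clear day index: measured vs clear-day radiation
KT = H / H0        # daily clearness index: measured vs extraterrestrial

# Linear regression between the two indices, as in the paper.
slope, intercept = np.polyfit(KT, Kc, 1)
```

    Kc near 1 indicates a nearly cloud-free day, while values well below 1 indicate increasing cloudiness, which is why the index is proposed as a cloudiness indicator.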

  4. Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data

    Science.gov (United States)

    Molod, Andrea M.; Salmun, H.; Dempsey, M

    2015-01-01

    An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.

  5. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
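    As a minimal example of the kind of rate-constant estimation described here: for a first-order reaction, C(t) = C0 exp(-k t) can be linearized as ln C = ln C0 - k t, so an ordinary least-squares fit recovers k from the slope. The data below are synthetic, generated from an assumed true k.

```python
import numpy as np

# Synthetic concentration data for a first-order reaction A -> B,
# C(t) = C0 * exp(-k * t), with C0 = 1.0 mol/L and true k = 0.30 1/min.
t = np.linspace(0.0, 10.0, 11)
C = 1.0 * np.exp(-0.30 * t)

# Linear approach: fit ln(C) against t; the slope is -k and the
# intercept is ln(C0).
slope, intercept = np.polyfit(t, np.log(C), 1)
k_est = -slope
C0_est = float(np.exp(intercept))
```

    With noisy data one would instead minimise residuals in concentration space (a nonlinear least-squares problem), coupled with numerical solution of the model ODEs for mechanisms without a closed-form solution, which is where the optimisation and collocation techniques of the chapter come in.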

  6. The Analysis of the Data of Medicine and Regimen in Seven Tablets in a Cloudy Satchel

    Institute of Scientific and Technical Information of China (English)

    常久; 蒋力生

    2017-01-01

    Seven Tablets in a Cloudy Satchel, edited by Zhang Junfang, is called the encyclopedia of Taoism. It collects the essential content of the Da-Song Tiangong Baozang. The book plays an extremely important role among Taoist collections and in the history of Taoism. Seven Tablets in a Cloudy Satchel systematically discusses the theory of the origination and evolution of the universe and of life. It contains a variety of health preservation methods, such as traditional Chinese medicine (TCM), Chinese physical and breathing exercises, massage, breathing and meditation. As one of the important works on health preservation, its modern value is well worth exploring, but the rationality and validity of its health preservation methods remain to be examined. Its claims about the prevention and treatment of disease, many with grandiose composition, need to be viewed objectively.

  7. APhoRISM FP7 project: the Multi-platform volcanic Ash Cloud Estimation (MACE) infrastructure

    Science.gov (United States)

    Merucci, Luca; Corradini, Stefano; Bignami, Christian; Stramondo, Salvatore

    2014-05-01

    APhoRISM is an FP7 project that aims to develop innovative products to support the management and mitigation of volcanic and seismic crises. Satellite and ground measurements will be managed in a novel manner to provide new and improved products in terms of accuracy and quality of information. The Multi-platform volcanic Ash Cloud Estimation (MACE) infrastructure will exploit the complementarity between geostationary and polar satellite sensors and ground measurements to improve ash detection and retrieval and to fully characterize volcanic ash clouds from the source into the atmosphere. The basic idea behind the proposed method is to manage, in a novel manner, the volcanic ash retrievals at the space-time scale of typical geostationary observations, using both polar satellite estimates and in-situ measurements. The typical ash thermal infrared (TIR) retrieval will be complemented by a wider spectral range from the visible (VIS) to the microwave (MW), and ash detection will be extended to cases of cloudy atmosphere or steam plumes. All the MACE ash products will be tested on three recent eruptions representative of different eruption styles under different clear or cloudy atmospheric conditions: Eyjafjallajokull (Iceland) 2010, Grimsvotn (Iceland) 2011 and Etna (Italy) 2011-2012. The MACE infrastructure will be suitable for implementation in the next generation of ESA Sentinel satellite missions.

  8. Development of software for estimating clear sky solar radiation in Indonesia

    Science.gov (United States)

    Ambarita, H.

    2017-01-01

    Research on solar energy applications in Indonesia has attracted increasing attention in recent years. Solar radiation is harvested by a solar collector or solar cell, which converts the energy into useful forms such as heat or electricity. In order to provide a better configuration of a solar collector or a solar cell, clear sky radiation should be estimated properly. In this study, in-house software for estimating clear sky radiation is developed. The governing equations are solved simultaneously. The software is tested in Medan city against solar radiation measurements. For clear sky radiation, the software results and the measurements show good agreement; for cloudy sky conditions, however, the software cannot predict the solar radiation. This software can be used to estimate clear sky radiation in Indonesia.

  9. Estimating surface solar radiation from upper-air humidity

    Energy Technology Data Exchange (ETDEWEB)

    Kun Yang [Telecommunications Advancement Organization of Japan, Tokyo (Japan); Koike, Toshio [University of Tokyo (Japan). Dept. of Civil Engineering

    2002-07-01

    A numerical model is developed to estimate global solar irradiance from upper-air humidity. In this model, solar radiation under clear skies is calculated with a simple model that takes radiation-damping processes into consideration. A sky clearness indicator is parameterized from relative humidity profiles within three atmospheric sublayers, and the indicator is used to connect global solar radiation under clear skies to that under cloudy skies. Model inter-comparisons at 18 sites in Japan suggest that (1) global solar radiation strongly depends on the sky clearness indicator, (2) the new model generally gives better estimates of hourly-mean solar irradiance than three other methods used in numerical weather prediction, and (3) the new model may be applied to estimate long-term solar radiation. In addition, a study at one site on the Tibetan Plateau shows that vigorous convective activity in the region may introduce uncertainty into the radiation estimates due to the small scale and short lifetime of convective systems. (author)

  10. Estimation of solar radiation from Australian meteorological observations

    International Nuclear Information System (INIS)

    Moriarty, W.W.

    1991-01-01

    A carefully prepared set of Australian radiation and meteorological data was used to develop a system for estimating hourly or instantaneous broad-band direct, diffuse and global radiation from meteorological observations. For clear-sky conditions, relationships developed elsewhere were adapted to the Australian data. For cloudy conditions, the clouds were divided into two groups, high clouds and opaque (middle and low) clouds, and corrections were made to compensate for the bias due to reporting practices for almost-clear and almost-overcast skies. Careful consideration was given to the decrease of visible sky toward the horizon caused by the vertical extent of opaque clouds. The equations relating cloud and other meteorological observations to the direct and diffuse radiation contained four unknown quantities, functions of cloud amount and of solar elevation, which were estimated from the data. These were the proportions of incident solar radiation passed on as direct and as diffuse radiation by high clouds, and as diffuse radiation by opaque clouds, and a factor describing the elevation dependence of the fraction of sky not obscured by opaque clouds. When the resulting relationships were used to estimate global, direct and diffuse radiation on a horizontal surface, the results were good, especially for global radiation. Some discrepancies between estimates and measurements of diffuse and direct radiation were probably due to erroneously high measurements of diffuse radiation.

  11. Development and evaluation of neural network models to estimate daily solar radiation at Córdoba, Argentina

    International Nuclear Information System (INIS)

    Bocco, M.

    2006-01-01

    The objective of this work was to develop backpropagation neural network models to estimate solar radiation based on extraterrestrial radiation data, daily temperature range, precipitation, cloudiness and relative sunshine duration. Data from Córdoba, Argentina, were used for development and validation. The behaviour of, and the fit between, observed values and neural network estimates were assessed for different combinations of inputs. The estimates showed root mean square errors between 3.15 and 3.88 MJ m⁻² d⁻¹, the latter corresponding to the model that uses only precipitation and daily temperature range. All models reproduce the seasonal pattern of solar radiation well. These results demonstrate the adequacy of this methodology for estimating complex phenomena such as solar radiation.
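A backpropagation network of the kind described can be sketched in a few dozen lines. The inputs, target relationship, and network size below are synthetic stand-ins chosen for illustration, not the Córdoba data or the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily inputs (columns): extraterrestrial radiation, temperature
# range, precipitation, cloudiness, relative sunshine duration -- all synthetic.
n = 400
X = np.column_stack([
    rng.uniform(20, 42, n),   # Ra, MJ m^-2 d^-1
    rng.uniform(2, 18, n),    # daily temperature range, deg C
    rng.exponential(3, n),    # precipitation, mm
    rng.uniform(0, 1, n),     # cloud fraction
    rng.uniform(0, 1, n),     # relative sunshine duration n/N
])
# Synthetic target loosely following the Angstrom-Prescott form Rs = Ra*(a + b*n/N)
y = X[:, 0] * (0.25 + 0.5 * X[:, 4]) + rng.normal(0, 0.5, n)

# Standardize, one hidden tanh layer, linear output, plain batch backpropagation
Xs = (X - X.mean(0)) / X.std(0)
ym, ys = y.mean(), y.std()
yt = (y - ym) / ys

W1 = rng.normal(0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(Xs @ W1 + b1)                    # forward pass
    out = (h @ W2 + b2).ravel()
    err = out - yt                               # gradient of 0.5*MSE w.r.t. output
    gW2 = h.T @ err[:, None] / n
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)    # backprop through tanh
    gW1 = Xs.T @ dh / n
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2               # gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.tanh(Xs @ W1 + b1) @ W2 + b2).ravel() * ys + ym
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

On this synthetic problem the trained network's RMSE is far below the standard deviation of the target, which is the same kind of comparison the abstract reports in MJ m⁻² d⁻¹.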

  12. Reconstruction of temporal variations of evapotranspiration using instantaneous estimates at the time of satellite overpass

    Directory of Open Access Journals (Sweden)

    E. Delogu

    2012-08-01

    Full Text Available Evapotranspiration estimates can be derived from remote sensing data and ancillary, mostly meteorological, information. For this purpose, two types of methods are classically used: the first estimates a potential evapotranspiration rate from vegetation indices and adjusts this rate according to water availability derived from either a surface temperature index or a first guess obtained from a rough estimate of the water budget, while the second relies on the link between surface temperature and latent heat flux through the surface energy budget. The latter provides an instantaneous estimate at the time of satellite overpass; computing daily evapotranspiration from it requires an extrapolation algorithm. Since no image is acquired during cloudy conditions, these methods can only be applied on clear-sky days, and deriving seasonal evapotranspiration additionally requires an interpolation method. Two combined interpolation/extrapolation methods, based on the self-preservation of the evaporative fraction and on the stress factor, are compared for reconstructing seasonal evapotranspiration from instantaneous measurements acquired under clear-sky conditions. Those measurements are instantaneous latent heat fluxes from 11 datasets in southern France and Morocco. Results show that both methods have comparable performance, with a clear advantage for the evaporative fraction for datasets with several water stress events. Both interpolation algorithms tend to underestimate evapotranspiration due to the energy-limiting conditions that prevail during cloudy days. Taking into account the diurnal variation of the evaporative fraction, through an empirical relationship derived from a previous study, improved the performance of the extrapolation algorithm and therefore the retrieval of seasonal evapotranspiration for all but one dataset.

  13. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  14. Sun tracker for clear or cloudy weather

    Science.gov (United States)

    Scott, D. R.; White, P. R.

    1979-01-01

    Sun tracker orients solar collectors so that they absorb maximum possible sunlight without being fooled by bright clouds, holes in cloud cover, or other atmospheric conditions. Tracker follows sun within 0.25-deg arc and is accurate to within ±5 deg when sun is hidden.

  15. Modeling the summertime Arctic cloudy boundary layer

    Energy Technology Data Exchange (ETDEWEB)

    Curry, J.A.; Pinto, J.O. [Univ. of Colorado, Boulder, CO (United States); McInnes, K.L. [CSIRO Division of Atmospheric Research, Mordialloc (Australia)

    1996-04-01

    Global climate models have particular difficulty in simulating the low-level clouds during the Arctic summer. Model problems are exacerbated in the polar regions by the complicated vertical structure of the Arctic boundary layer. The presence of multiple cloud layers, a humidity inversion above cloud top, and vertical fluxes in the cloud that are decoupled from the surface fluxes, identified in Curry et al. (1988), suggest that models containing sophisticated physical parameterizations would be required to accurately model this region. Accurate modeling of the vertical structure of multiple cloud layers in climate models is important for determination of the surface radiative fluxes. This study focuses on the problem of modeling the layered structure of the Arctic summertime boundary-layer clouds and in particular, the representation of the more complex boundary layer type consisting of a stable foggy surface layer surmounted by a cloud-topped mixed layer. A hierarchical modeling/diagnosis approach is used. A case study from the summertime Arctic Stratus Experiment is examined. A high-resolution, one-dimensional model of turbulence and radiation is tested against the observations and is then used in sensitivity studies to infer the optimal conditions for maintaining two separate layers in the Arctic summertime boundary layer. A three-dimensional mesoscale atmospheric model is then used to simulate the interaction of this cloud deck with the large-scale atmospheric dynamics. An assessment of the improvements needed to the parameterizations of the boundary layer, cloud microphysics, and radiation in the 3-D model is made.

  16. LHC Report: Cloudy with sunny spells

    CERN Multimedia

    Lionel Herblin & Mike Lamont for the LHC team

    2015-01-01

    The LHC is continuing its 25 ns intensity ramp-up and has now reached 1465 bunches per beam. Performance is reasonable and the experiments have seen some long fills with steadily increasing luminosity delivery rates. Some now familiar issues continue to make life interesting.   The image shows the heat load evolution as measured in specially equipped dipoles. (Image: Giovanni Iadarola). Top frame: energy and intensity. Middle frame: measured heat load in W/m. Bottom frame: heat load normalised to total beam intensity. One of the key challenges of 2015 was always expected to be electron clouds. The two scrubbing runs that were performed in the summer successfully qualified the LHC for up to around 1500 bunches. However, the final phase of the scrubbing, which saw the move from regular 25 ns beam to the doublet beam, proved difficult, and the scrubbing team concluded that the machine was not yet well-enough scrubbed for the doublets to be used effectively. The 25 ...

  17. Market cloudiness, a German national polemics

    International Nuclear Information System (INIS)

    Luginsland, M.

    2004-01-01

    While theoretically liberalized, the German electricity market remains the most opaque of all European electricity markets. Strong price increases (up to 25%) are announced for 2005, while Brussels and Berlin want to put an end to the lack of regulation authority and transparency. Since the implementation of market deregulation, Germany has reverted to its former situation: the four main producers form an oligopoly that controls more than 80% of the market and respects the boundaries of their former monopolies. Other factors influence the electricity price: the eco-taxes, the subsidies for renewable energy development, the abandonment of nuclear energy and the excessive tariffs of the power transportation network. (J.S.)

  18. Interactive Appearance Prediction for Cloudy Beverages

    DEFF Research Database (Denmark)

    Dal Corso, Alessandro; Frisvad, Jeppe Revall; Kjeldsen, Thomas Kim

    2016-01-01

    Juice appearance is important to consumers, so digital juice with a slider that varies a production parameter or changes juice content is useful. It is however challenging to render juice with scattering particles quickly and accurately. As a case study, we create an appearance model that provide...

  19. Detection of carbon monoxide pollution from cities and wildfires on regional and urban scales: the benefit of CO column retrievals from SCIAMACHY 2.3 µm measurements under cloudy conditions

    Directory of Open Access Journals (Sweden)

    T. Borsdorff

    2018-05-01

    Full Text Available In the perspective of the upcoming TROPOMI Sentinel-5 Precursor carbon monoxide data product, we discuss the benefit of using CO total column retrievals from cloud-contaminated SCIAMACHY 2.3 µm shortwave infrared spectra to detect atmospheric CO enhancements on regional and urban scales due to emissions from cities and wildfires. The study uses the operational Sentinel-5 Precursor algorithm SICOR, which infers the vertically integrated CO column together with effective cloud parameters. We investigate its capability to detect localized CO enhancements distinguishing between clear-sky observations and observations with low (<  1.5 km and medium–high clouds (1.5–5 km. As an example, we analyse CO enhancements over the cities Paris, Los Angeles and Tehran as well as the wildfire events in Mexico–Guatemala 2005 and Alaska–Canada 2004. The CO average of the SCIAMACHY full-mission data set of clear-sky observations can detect weak CO enhancements of less than 10 ppb due to air pollution in these cities. For low-cloud conditions, the CO data product performs similarly well. For medium–high clouds, the observations show a reduced CO signal both over Tehran and Los Angeles, while for Paris no significant CO enhancement can be detected. This indicates that information about the vertical distribution of CO can be obtained from the SCIAMACHY measurements. Moreover, for the Mexico–Guatemala fires, the low-cloud CO data captures a strong outflow of CO over the Gulf of Mexico and the Pacific Ocean and so provides complementary information to clear-sky retrievals, which can only be obtained over land. For both burning events, enhanced CO values are even detectable with medium–high-cloud retrievals, confirming a distinct vertical extension of the pollution. The larger number of additional measurements, and hence the better spatial coverage, significantly improve the detection of wildfire pollution using both the clear-sky and cloudy

  20. Detection of carbon monoxide pollution from cities and wildfires on regional and urban scales: the benefit of CO column retrievals from SCIAMACHY 2.3 µm measurements under cloudy conditions

    Science.gov (United States)

    Borsdorff, Tobias; Andrasec, Josip; aan de Brugh, Joost; Hu, Haili; Aben, Ilse; Landgraf, Jochen

    2018-05-01

    In the perspective of the upcoming TROPOMI Sentinel-5 Precursor carbon monoxide data product, we discuss the benefit of using CO total column retrievals from cloud-contaminated SCIAMACHY 2.3 µm shortwave infrared spectra to detect atmospheric CO enhancements on regional and urban scales due to emissions from cities and wildfires. The study uses the operational Sentinel-5 Precursor algorithm SICOR, which infers the vertically integrated CO column together with effective cloud parameters. We investigate its capability to detect localized CO enhancements distinguishing between clear-sky observations and observations with low (< 1.5 km) and medium-high clouds (1.5-5 km). As an example, we analyse CO enhancements over the cities Paris, Los Angeles and Tehran as well as the wildfire events in Mexico-Guatemala 2005 and Alaska-Canada 2004. The CO average of the SCIAMACHY full-mission data set of clear-sky observations can detect weak CO enhancements of less than 10 ppb due to air pollution in these cities. For low-cloud conditions, the CO data product performs similarly well. For medium-high clouds, the observations show a reduced CO signal both over Tehran and Los Angeles, while for Paris no significant CO enhancement can be detected. This indicates that information about the vertical distribution of CO can be obtained from the SCIAMACHY measurements. Moreover, for the Mexico-Guatemala fires, the low-cloud CO data captures a strong outflow of CO over the Gulf of Mexico and the Pacific Ocean and so provides complementary information to clear-sky retrievals, which can only be obtained over land. For both burning events, enhanced CO values are even detectable with medium-high-cloud retrievals, confirming a distinct vertical extension of the pollution. The larger number of additional measurements, and hence the better spatial coverage, significantly improve the detection of wildfire pollution using both the clear-sky and cloudy CO retrievals. Due to the improved instrument performance of the TROPOMI instrument with respect to its precursor SCIAMACHY, the upcoming Sentinel-5

  1. The use of a sky camera for solar radiation estimation based on digital image processing

    International Nuclear Information System (INIS)

    Alonso-Montesinos, J.; Batlles, F.J.

    2015-01-01

    The necessary search for a more sustainable global future means using renewable energy sources to generate pollutant-free electricity. CSP (Concentrated solar power) and PV (photovoltaic) plants are the systems most in demand for electricity production using solar radiation as the energy source. The main factors affecting final electricity generation in these plants are, among others, atmospheric conditions; therefore, knowing whether there will be any change in the solar radiation hitting the plant's solar field is of fundamental importance to CSP and PV plant operators in adapting the plant's operation mode to these fluctuations. Consequently, the most useful technology must involve the study of atmospheric conditions. This is the case for sky cameras, an emerging technology that allows one to gather sky information with optimal spatial and temporal resolution. Hence, in this work, a solar radiation estimation using sky camera images is presented for all sky conditions, where beam, diffuse and global solar radiation components are estimated in real-time as a novel way to evaluate the solar resource from a terrestrial viewpoint. - Highlights: • Using a sky camera, the solar resource has been estimated for one minute periods. • The sky images have been processed to estimate the solar radiation at pixel level. • The three radiation components have been estimated under all sky conditions. • Results have been presented for cloudless, partially-cloudy and overcast conditions. • For beam and global radiation, the nRMSE value is of about 11% under overcast skies.
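The abstract describes pixel-level processing of sky-camera images under all sky conditions. One common first step in such pipelines, not necessarily the authors' algorithm, is discriminating cloud from clear-sky pixels via the red/blue ratio, since clear sky scatters blue strongly while clouds are nearly white. The threshold below is an illustrative value:

```python
import numpy as np

def cloud_mask(rgb, threshold=0.8):
    """Classify sky-camera pixels as cloud via the classic red/blue ratio:
    clear sky has a low R/B ratio, cloud pixels have R/B close to 1.
    `threshold` is an illustrative choice, not taken from the paper."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6   # avoid division by zero
    return (r / b) > threshold

# Tiny synthetic 2x2 image: one blue (clear) pixel, three whitish (cloud) pixels
img = np.array([[[ 60,  90, 200], [200, 200, 210]],
                [[190, 195, 200], [210, 205, 200]]], dtype=np.uint8)
mask = cloud_mask(img)
```

The resulting binary mask is what per-pixel radiation estimation and cloud-cover statistics are then computed from.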

  2. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.
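For context, the generalized estimators studied in this abstract extend the classical Cavalieri estimator, which multiplies the spacing of systematic sections by the sum of their areas. A minimal sketch of the classical (error-free) case, checked against a unit sphere:

```python
import numpy as np

def cavalieri_volume(areas, spacing):
    """Classical Cavalieri estimator: volume ~= section spacing times the sum
    of section areas, for sections placed systematically with uniform start."""
    return spacing * float(np.sum(areas))

# Check against a sphere of radius 1 (true volume 4*pi/3): the section at
# height z has area pi*(1 - z^2).
T = 0.05                               # section spacing
z = np.arange(-1 + 0.5 * T, 1, T)      # systematic section positions
areas = np.pi * np.clip(1 - z ** 2, 0, None)
vol = cavalieri_volume(areas, T)
```

The paper's contribution concerns variance estimation for this kind of estimator when the section positions themselves carry errors (perturbed positions, cumulative errors, random dropouts), which the sketch above does not model.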

  3. Lake Izabal (Guatemala) shoreline detection and inundated area estimation from ENVISAT ASAR images

    Science.gov (United States)

    Medina, C.; Gomez-Enri, J.; Alonso, J. J.; Villares, P.

    2008-10-01

    The surface extent of a lake reflects its water storage variations. This information has important hydrological and operational applications, but it is often lacking because traditional methodologies (ground surveys, aerial photos) require large investments of resources. Remote sensing techniques (optical/radar sensors) permit low-cost, regular and accurate monitoring of this parameter. The objective of this study was to determine the surface variations of Lake Izabal, the largest lake in Guatemala, located close to the Caribbean coastline. The climate in the region is predominantly cloudy and rainy, making Synthetic Aperture Radar (SAR) the best-suited sensor for this purpose. Although several studies have successfully used SAR products to detect land-water boundaries, all of them highlighted sensor limitations, mainly caused by water surfaces roughened by the strong winds that are frequent on Lake Izabal. ESA's ASAR data products were used; all nine ASAR images in the set show some degree of wind-roughened nearshore water. A chain of image processing steps was applied in order to extract a reliable shoreline, which is the key task for the surface estimation. After shoreline extraction, the inundated area of the lake was estimated, and in-situ lake level measurements were used for validation. The results showed good agreement between the inundated-area estimates and the lake level gauges.

  4. An artificial neural network ensemble model for estimating global solar radiation from Meteosat satellite images

    International Nuclear Information System (INIS)

    Linares-Rodriguez, Alvaro; Ruiz-Arias, José Antonio; Pozo-Vazquez, David; Tovar-Pescador, Joaquin

    2013-01-01

    An optimized artificial neural network ensemble model is built to estimate daily global solar radiation over large areas. The model uses clear-sky estimates and satellite images as input variables. Unlike most studies using satellite imagery based on visible channels, our model also exploits all information within infrared channels of the Meteosat 9 satellite. A genetic algorithm is used to optimize selection of model inputs, for which twelve are selected – eleven 3-km Meteosat 9 channels and one clear-sky term. The model is validated in Andalusia (Spain) from January 2008 through December 2008. Measured data from 83 stations across the region are used, 65 for training and 18 independent ones for testing the model. At the latter stations, the ensemble model yields an overall root mean square error of 6.74% and correlation coefficient of 99%; the generated estimates are relatively accurate and errors spatially uniform. The model yields reliable results even on cloudy days, improving on current models based on satellite imagery. - Highlights: • Daily solar radiation data are generated using an artificial neural network ensemble. • Eleven Meteosat channels observations and a clear sky term are used as model inputs. • Model exploits all information within infrared Meteosat channels. • Measured data for a year from 83 ground stations are used. • The proposed approach has better performance than existing models on daily basis

  5. Estimating Surface Downward Shortwave Radiation over China Based on the Gradient Boosting Decision Tree Method

    Directory of Open Access Journals (Sweden)

    Lu Yang

    2018-01-01

    Full Text Available Downward shortwave radiation (DSR is an essential parameter in the terrestrial radiation budget and a necessary input for models of land-surface processes. Although several radiation products using satellite observations have been released, coarse spatial resolution and low accuracy limited their application. It is important to develop robust and accurate retrieval methods with higher spatial resolution. Machine learning methods may be powerful candidates for estimating the DSR from remotely sensed data because of their ability to perform adaptive, nonlinear data fitting. In this study, the gradient boosting regression tree (GBRT was employed to retrieve DSR measurements with the ground observation data in China collected from the China Meteorological Administration (CMA Meteorological Information Center and the satellite observations from the Advanced Very High Resolution Radiometer (AVHRR at a spatial resolution of 5 km. The validation results of the DSR estimates based on the GBRT method in China at a daily time scale for clear sky conditions show an R2 value of 0.82 and a root mean square error (RMSE value of 27.71 W·m−2 (38.38%. These values are 0.64 and 42.97 W·m−2 (34.57%, respectively, for cloudy sky conditions. The monthly DSR estimates were also evaluated using ground measurements. The monthly DSR estimates have an overall R2 value of 0.92 and an RMSE of 15.40 W·m−2 (12.93%. Comparison of the DSR estimates with the reanalyzed and retrieved DSR measurements from satellite observations showed that the estimated DSR is reasonably accurate but has a higher spatial resolution. Moreover, the proposed GBRT method has good scalability and is easy to apply to other parameter inversion problems by changing the parameters and training data.
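The GBRT workflow the abstract describes (fit boosted regression trees on ground-truth radiation, validate with RMSE) can be sketched with scikit-learn. Everything below is synthetic: the predictors stand in for AVHRR channels and geometry, and the target for station-measured DSR; none of it is the CMA data or the authors' configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Hypothetical predictors standing in for satellite channels / geometry
X = rng.uniform(0, 1, (n, 4))
# Synthetic DSR-like target (W/m^2): nonlinear in the inputs, plus noise
y = (800 * X[:, 0] * (1 - 0.6 * X[:, 1])
     + 50 * np.sin(6 * X[:, 2])
     + rng.normal(0, 10, n))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
gbrt = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, max_depth=3, random_state=0)
gbrt.fit(Xtr, ytr)
rmse = float(np.sqrt(np.mean((gbrt.predict(Xte) - yte) ** 2)))
```

The held-out RMSE computed here mirrors the W·m⁻² validation statistics quoted in the abstract; boosting's adaptive, nonlinear fitting is exactly the property the authors cite as motivation.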

  6. V2676 Oph: Estimating Physical Parameters of a Moderately Fast Nova

    Science.gov (United States)

    Raj, A.; Pavana, M.; Kamath, U. S.; Anupama, G. C.; Walter, F. M.

    2018-03-01

    Using our previously reported observations, we derive some physical parameters of the moderately fast nova V2676 Oph 2012 #1. The best-fit Cloudy model of the nebular spectrum obtained on 2015 May 8 shows a hot white dwarf source with TBB ≈ 1.0×10⁵ K and a luminosity of 1.0×10³⁸ erg/s. Our abundance analysis shows that the ejecta are significantly enhanced relative to solar: He/H = 2.14, O/H = 2.37, S/H = 6.62 and Ar/H = 3.25. The ejecta mass is estimated to be 1.42×10⁻⁵ M⊙. The nova showed a pronounced dust formation phase starting 90 d after discovery. The J-H and H-K colors were very large compared to those of other molecule- and dust-forming novae in recent years. The dust temperature and mass at two epochs have been estimated from spectral energy distribution fits to infrared photometry.

  7. Cloud detection, classification and motion estimation using geostationary satellite imagery for cloud cover forecast

    International Nuclear Information System (INIS)

    Escrig, H.; Batlles, F.J.; Alonso, J.; Baena, F.M.; Bosch, J.L.; Salbidegoitia, I.B.; Burgaleta, J.I.

    2013-01-01

    Considering that clouds are the main cause of solar radiation blocking, short-term cloud forecasting can help power plant operation and therefore improve benefits. Cloud detection, classification and motion vector determination are key to forecasting sun obstruction by clouds. Geostationary satellites provide cloud information covering wide areas, allowing cloud forecasts to be made several hours in advance. The methodology developed and tested in this study is based on multispectral tests and binary cross-correlations followed by coherence and quality-control tests on the resulting motion vectors. A monthly synthetic surface albedo image and a method to reject erroneous correlation vectors were developed. Cloud classification in terms of opacity and cloud-top height is also performed. A whole-sky camera was used for validation, showing over 85% agreement between the camera and the satellite-derived cloud cover, whereas the error in motion vectors is below 15%. - Highlights: ► A methodology for detection, classification and movement of clouds is presented. ► METEOSAT satellite images are used to obtain a cloud mask. ► Cloudiness is predicted with 90% accuracy in overcast conditions. ► Results for partially covered sky conditions showed 75% accuracy. ► Motion vectors are estimated from the clouds with a success probability of 86%
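The core of motion-vector determination by cross-correlation can be sketched in a few lines. This simplified stand-in uses FFT-based cross-correlation on grayscale frames rather than the paper's binary correlations with coherence and quality-control tests:

```python
import numpy as np

def motion_vector(frame0, frame1):
    """Integer-pixel displacement of frame1 relative to frame0, found as the
    peak of the FFT-based cross-correlation surface."""
    f0 = frame0 - frame0.mean()
    f1 = frame1 - frame1.mean()
    corr = np.fft.ifft2(np.fft.fft2(f1) * np.conj(np.fft.fft2(f0))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                    # map wrap-around indices
        dy -= h                        # to signed shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic check: a Gaussian "cloud" shifted by (3, -2) pixels between frames
y, x = np.mgrid[0:64, 0:64]
cloud = np.exp(-((y - 30) ** 2 + (x - 30) ** 2) / 40.0)
moved = np.roll(np.roll(cloud, 3, axis=0), -2, axis=1)
```

In an operational setting the recovered (dy, dx) per image block, divided by the time between geostationary scans, gives the cloud motion field used for the forecast.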

  8. All-weather Land Surface Temperature Estimation from Satellite Data

    Science.gov (United States)

    Zhou, J.; Zhang, X.

    2017-12-01

    Satellite remote sensing, including thermal infrared (TIR) and passive microwave (MW) observations, makes it possible to observe LST at large scales. For better modeling of land surface processes at high temporal resolution, all-weather LST from satellite data is desirable. However, estimating all-weather LST faces great challenges. On the one hand, TIR remote sensing is limited to clear-sky situations; this drawback considerably reduces its usefulness under cloudy conditions, especially in regions with frequent and/or permanent cloud cover. On the other hand, MW remote sensing suffers from a much greater thermal sampling depth (TSD) and coarser spatial resolution than TIR; thus, MW LST is generally lower than TIR LST, especially in the daytime. Two case studies addressing these challenges are presented here. The first develops a novel thermal sampling depth correction (TSDC) method to estimate MW LST over barren land; the second develops a feasible method to merge TIR and MW LSTs by addressing the coarse resolution of the latter. In the first study, the core of the TSDC method is a new formulation of the passive microwave radiation balance equation, which links bulk MW radiation to the soil temperature at a specific depth, i.e. the representative temperature; this temperature is then converted to LST through an adapted soil heat conduction equation. The TSDC method is applied to the 6.9 GHz channel in vertical polarization of AMSR-E. Evaluation shows that LST estimated by the TSDC method agrees well with the MODIS LST. Validation is based on in-situ LSTs measured at the Gobabeb site in western Namibia. The results demonstrate the high accuracy of the TSDC method: it yields a root-mean-square error (RMSE) of 2 K and negligible systematic error over barren land.
In the second study, the method consists of two core processes: (1) estimation of MW LST from MW brightness temperature and (2

  9. Wet-bulb globe temperature index estimation using meteorological data from São Paulo State, Brazil

    Science.gov (United States)

    Maia, Paulo Alves; Ruas, Álvaro Cézar; Bitencourt, Daniel Pires

    2015-10-01

    It is well known that excessive heat exposure causes heat disorders and can, in some situations, lead to death. Evaluation of heat stress on workers performing indoor and outdoor activities is nowadays conducted worldwide using the wet-bulb globe temperature (WBGT) index, whose calculation parameters are the dry-bulb, natural wet-bulb, and globe temperatures, which must be measured at the same time and at the location where the worker is performing his/her activities. However, for some activities performed over large outdoor areas, such as agricultural work, it is not feasible to measure these temperatures directly for all work periods and locations. Taking this into account, this work introduces a WBGT index estimation using atmospheric variables observed by automatic meteorological stations. To support our estimation method, we used, as a test bed, data recorded in the State of São Paulo (SP), Brazil. By adding a cloudiness factor to the calculation through measured solar radiation, the algorithm proved to be as efficient as those mentioned in this work. The method was found to be viable, with WBGT estimates obtained from meteorological data measured by stations less than 80 km away. This estimate can be used for monitoring heat stress in real time as well as for investigating heat-related disorders in agricultural work.
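The combination of the three temperatures into the WBGT index itself is standardized (ISO 7243); the paper's contribution is estimating the inputs from routine meteorological data. A direct implementation of the standard weighting, with the (estimated or measured) temperatures supplied as arguments:

```python
def wbgt(t_nwb, t_g, t_db=None):
    """ISO 7243 wet-bulb globe temperature (deg C).
    Outdoors with solar load: 0.7*Tnwb + 0.2*Tg + 0.1*Tdb
    Indoors / no solar load:  0.7*Tnwb + 0.3*Tg
    t_nwb: natural wet-bulb, t_g: globe, t_db: dry-bulb temperature."""
    if t_db is None:
        return 0.7 * t_nwb + 0.3 * t_g
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db
```

For example, a natural wet-bulb of 25 °C, globe of 40 °C and dry-bulb of 30 °C give an outdoor WBGT of 28.5 °C.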

  10. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
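Of the two approaches the abstract distinguishes, varying the window by the point of estimation gives the so-called balloon estimator. A 1-D sketch using a k-th nearest-neighbour bandwidth (an illustrative choice, not the paper's exact construction):

```python
import numpy as np

def balloon_kde(x_eval, data, k=10):
    """Balloon (nearest-neighbour) variable-kernel density estimate in 1-D:
    at each evaluation point the Gaussian bandwidth is the distance to the
    k-th nearest sample, so the window widens where data are sparse."""
    data = np.asarray(data, float)
    out = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        d = np.sort(np.abs(data - x0))
        h = max(d[k - 1], 1e-12)       # k-th nearest-neighbour distance
        kern = np.exp(-0.5 * ((data - x0) / h) ** 2)
        out[i] = np.mean(kern) / (h * np.sqrt(2 * np.pi))
    return out

rng = np.random.default_rng(1)
sample = rng.normal(0, 1, 500)
dens = balloon_kde(np.array([0.0, 3.0]), sample, k=20)
```

At the mode the estimate is close to the true standard-normal density, while in the tail the widened window keeps the estimate smooth; the paper's point is that such nearest-neighbour variants can nonetheless perform poorly in low dimensions.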

  11. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  12. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  13. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...

  14. Deriving temporally continuous soil moisture estimations at fine resolution by downscaling remotely sensed product

    Science.gov (United States)

    Jin, Yan; Ge, Yong; Wang, Jianghao; Heuvelink, Gerard B. M.

    2018-06-01

    Land surface soil moisture (SSM) has important roles in the energy balance of the land surface and in the water cycle. Downscaling of coarse-resolution SSM remote sensing products is an efficient way for producing fine-resolution data. However, the downscaling methods used most widely require full-coverage visible/infrared satellite data as ancillary information. These methods are restricted to cloud-free days, making them unsuitable for continuous monitoring. The purpose of this study is to overcome this limitation to obtain temporally continuous fine-resolution SSM estimations. The local spatial heterogeneities of SSM and multiscale ancillary variables were considered in the downscaling process both to solve the problem of the strong variability of SSM and to benefit from the fusion of ancillary information. The generation of continuous downscaled remote sensing data was achieved via two principal steps. For cloud-free days, a stepwise hybrid geostatistical downscaling approach, based on geographically weighted area-to-area regression kriging (GWATARK), was employed by combining multiscale ancillary variables with passive microwave remote sensing data. Then, the GWATARK-estimated SSM and China Soil Moisture Dataset from Microwave Data Assimilation SSM data were combined to estimate fine-resolution data for cloudy days. The developed methodology was validated by application to the 25-km resolution daily AMSR-E SSM product to produce continuous SSM estimations at 1-km resolution over the Tibetan Plateau. In comparison with ground-based observations, the downscaled estimations showed correlation (R ≥ 0.7) for both ascending and descending overpasses. The analysis indicated the high potential of the proposed approach for producing a temporally continuous SSM product at fine spatial resolution.

  15. Total evaporation estimates from a Renosterveld and dryland wheat ...

    African Journals Online (AJOL)

    2010-07-09

    Jul 9, 2010 ... 1 CSIR Natural Resources and the Environment, PO Box 320 Stellenbosch 7599, South ... A change in land use from Renosterveld to dryland annual crops could therefore affect the soil .... Modelling total evaporation spatially: Surface Energy ..... similar, with ETo's ranging between 1.8 mm∙d-1 (on a cloudy/.

  16. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.

  17. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations, and the various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
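
    Two of the session's exercises, a random error variance from replicate (duplicate) measurements and a systematic error variance from standards, have textbook estimators that can be sketched directly; the paired-difference and between-standards forms below may differ in detail from the session's worked examples.

```python
import statistics

def random_error_variance(duplicates):
    """Random error variance from paired duplicate measurements.

    For an item measured twice, Var(x1 - x2) = 2*sigma_r^2, so sigma_r^2
    is estimated by sum(d^2) / (2n) over the n pairs of duplicates.
    """
    d2 = [(a - b) ** 2 for a, b in duplicates]
    return sum(d2) / (2 * len(d2))

def systematic_error_variance(measured_means, reference_values):
    """Systematic error variance from standards: the sample variance of the
    per-standard biases (measured mean minus certified reference value)."""
    biases = [m - r for m, r in zip(measured_means, reference_values)]
    return statistics.variance(biases)
```

    For example, duplicate pairs (10.0, 10.2), (5.0, 4.9) and (7.0, 7.1) give a random error variance of 0.01.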

  18. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  19. Estimating Hourly Beam and Diffuse Solar Radiation in an Alpine Valley: A Critical Assessment of Decomposition Models

    Directory of Open Access Journals (Sweden)

    Lavinia Laiti

    2018-03-01

    Full Text Available Accurate solar radiation estimates in Alpine areas represent a challenging task, because of the strong variability arising from orographic effects and mountain weather phenomena. These factors, together with the scarcity of observations in elevated areas, often cause large modelling uncertainties. In the present paper, estimates of hourly mean diffuse fraction values from global radiation data, provided by a number (13) of decomposition models (chosen among the most widely tested in the literature), are evaluated and compared with observations collected near the city of Bolzano, in the Adige Valley (Italian Alps). In addition, the physical factors influencing diffuse fraction values in such a complex orographic context are explored. The average accuracy of the models was found to be around 27% and 14% for diffuse and beam radiation respectively, the largest errors being observed under clear-sky and partly cloudy conditions, respectively. The best performances were provided by the more complex models, i.e., those including a predictor specifically explaining the radiation components' variability associated with scattered clouds. Yet, these models return non-negligible biases. In contrast, the local calibration of a single-equation logistic model with five predictors allows perfectly unbiased estimates, as accurate as those of the best-performing models (20% and 12% for diffuse and beam radiation, respectively), but at much smaller computational costs.
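
    A single-equation logistic model with five predictors of the kind calibrated here is typified by the BRL decomposition model of Ridley, Boland and Lauret. The sketch below uses that functional form with illustrative placeholder coefficients, not a local calibration, so only the shape of the relationship should be taken from it.

```python
import math

def diffuse_fraction(kt, ast, alt, kt_daily, psi,
                     b=(-5.38, 6.63, 0.006, -0.007, 1.75, 1.31)):
    """BRL-style logistic decomposition: predicts the diffuse fraction d from
    five predictors:
      kt       - hourly clearness index
      ast      - apparent solar time (hours)
      alt      - solar altitude angle (degrees)
      kt_daily - daily clearness index
      psi      - persistence (mean of adjacent hourly kt values)
    Coefficients b are illustrative placeholders, not a local calibration.
    """
    z = b[0] + b[1]*kt + b[2]*ast + b[3]*alt + b[4]*kt_daily + b[5]*psi
    return 1.0 / (1.0 + math.exp(z))
```

    The logistic form guarantees d stays in (0, 1): near 1 under overcast skies (low clearness index) and low under clear skies.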

  20. Developing a new solar radiation estimation model based on Buckingham theorem

    Science.gov (United States)

    Ekici, Can; Teke, Ismail

    2018-06-01

    While solar radiation can be expressed physically on cloud-free days, this becomes difficult under cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used; these estimate solar radiation from other meteorological parameters measured at the stations. In this study, a solar radiation estimation model was obtained using the Buckingham theorem, which expresses solar radiation through the derivation of dimensionless pi parameters. The derived model is compared with temperature-based models in the literature: the Allen, Hargreaves, Chen and Bristow-Campbell models. MPE, RMSE, MBE and NSE error analyses are used in this comparison, applied to meteorological data obtained from North Dakota's agricultural climate network. In these applications, the model obtained within the scope of the study gives better results, particularly in terms of short-term performance, as the RMSE analysis results show. In terms of long-term performance and percentage errors as well, the model gives good results, and the Buckingham theorem was found useful in estimating solar radiation.

  1. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  2. Cost function estimation

    DEFF Research Database (Denmark)

    Andersen, C K; Andersen, K; Kragh-Sørensen, P

    2000-01-01

    on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...

  3. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1992-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  4. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.; Heemstra, F.J.

    1993-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  5. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper to a large class of quantum phase transitions.

  6. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  7. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set...
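
    The averaged periodogram that serves as the baseline here (Welch's method) is simple to state: split the slow-time signal into overlapping windowed segments, take the squared FFT magnitude of each, and average. A minimal sketch, with window length and overlap as free parameters:

```python
import numpy as np

def welch_psd(x, nperseg=256, overlap=0.5, fs=1.0):
    """Averaged periodogram (Welch's method): split the signal into
    overlapping Hann-windowed segments and average the squared FFT
    magnitudes across segments."""
    step = int(nperseg * (1 - overlap))
    win = np.hanning(nperseg)
    segs = [x[i:i+nperseg] * win for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s))**2 for s in segs], axis=0)
    psd /= fs * np.sum(win**2)                      # density normalization
    return np.fft.rfftfreq(nperseg, d=1/fs), psd
```

    Averaging over segments reduces variance at the cost of frequency resolution, which is exactly the trade-off the adaptive methods in the record try to beat with short observation windows.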

  8. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  9. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

    A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.

  10. Microclimatic models. Estimation of components of the energy balance over land surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Heikinheimo, M.; Venaelaeinen, A.; Tourula, T. [Finnish Meteorological Inst., Helsinki (Finland). Air Quality Dept.

    1996-12-31

    Climates at regional scale are strongly dependent on the interaction between atmosphere and its lower boundary, the oceans and the land surface mosaic. Land surfaces influence climate through their albedo, and the aerodynamic roughness, the processes of the biosphere and many soil hydrological properties; all these factors vary considerably geographically. Land surfaces receive a certain portion of the solar irradiance depending on the cloudiness, atmospheric transparency and surface albedo. Short-wave solar irradiance is the source of the heat energy exchange at the earth's surface and also regulates many biological processes, e.g. photosynthesis. Methods for estimating solar irradiance, atmospheric transparency and surface albedo were reviewed during the course of this project. The solar energy at earth's surface is consumed for heating the soil and the lower atmosphere. Where moisture is available, evaporation is one of the key components of the surface energy balance, because the conversion of liquid water into water vapour consumes heat. The evaporation process was studied by carrying out field experiments and testing parameterisation for a cultivated agricultural surface and for lakes. The micrometeorological study over lakes was carried out as part of the international 'Northern Hemisphere Climatic Processes Experiment' (NOPEX/BAHC) in Sweden. These studies have been aimed at a better understanding of the energy exchange processes of the earth's surface-atmosphere boundary for a more accurate and realistic parameterisation of the land surface in atmospheric models.

  11. Microclimatic models. Estimation of components of the energy balance over land surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Heikinheimo, M; Venaelaeinen, A; Tourula, T [Finnish Meteorological Inst., Helsinki (Finland). Air Quality Dept.

    1997-12-31

    Climates at regional scale are strongly dependent on the interaction between atmosphere and its lower boundary, the oceans and the land surface mosaic. Land surfaces influence climate through their albedo, and the aerodynamic roughness, the processes of the biosphere and many soil hydrological properties; all these factors vary considerably geographically. Land surfaces receive a certain portion of the solar irradiance depending on the cloudiness, atmospheric transparency and surface albedo. Short-wave solar irradiance is the source of the heat energy exchange at the earth's surface and also regulates many biological processes, e.g. photosynthesis. Methods for estimating solar irradiance, atmospheric transparency and surface albedo were reviewed during the course of this project. The solar energy at earth's surface is consumed for heating the soil and the lower atmosphere. Where moisture is available, evaporation is one of the key components of the surface energy balance, because the conversion of liquid water into water vapour consumes heat. The evaporation process was studied by carrying out field experiments and testing parameterisation for a cultivated agricultural surface and for lakes. The micrometeorological study over lakes was carried out as part of the international 'Northern Hemisphere Climatic Processes Experiment' (NOPEX/BAHC) in Sweden. These studies have been aimed at a better understanding of the energy exchange processes of the earth's surface-atmosphere boundary for a more accurate and realistic parameterisation of the land surface in atmospheric models.

  12. Radiation risk estimation

    International Nuclear Information System (INIS)

    Schull, W.J.; Texas Univ., Houston, TX

    1992-01-01

    Estimation of the risk of cancer following exposure to ionizing radiation remains largely empirical, and models used to adduce risk incorporate few, if any, of the advances in molecular biology of the past decade or so. These facts compromise the estimation of risk where the epidemiological data are weakest, namely, at low doses and dose rates. Without a better understanding of the molecular and cellular events that ionizing radiation initiates or promotes, it seems unlikely that this situation will improve. Nor will the situation improve without further attention to the identification and quantitative estimation of the effects of those host and environmental factors that enhance or attenuate risk. (author)

  13. Estimation of Jump Tails

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Todorov, Victor

    We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes the weak assumption of regular variation in the jump tails, along with in-fill asymptotic arguments for uniquely identifying the "large" jumps from the data. The estimation allows for very general dynamic dependencies in the jump tails, and does not restrict the continuous part of the process and the temporal variation in the stochastic volatility. On implementing the new estimation procedure with actual high-frequency data for the S&P 500 aggregate market portfolio, we find strong evidence for richer and more complex dynamic dependencies in the jump tails than hitherto entertained in the literature.

  14. Bridged Race Population Estimates

    Data.gov (United States)

    U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...

  15. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  16. APLIKASI SPLINE ESTIMATOR TERBOBOT

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    Full Text Available We considered the nonparametric regression model: Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a Weighted Least Square criterion; the solution of this optimization is a Weighted Polynomial Spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Abstract in Bahasa Indonesia (translated): Given the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, with X(tj) the regression curve and ej random errors assumed normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of the regression curve X minimizing a Weighted Penalized Least Square is the Weighted Natural Polynomial Spline estimator. An application of the weighted spline estimator in nonparametric regression is then given. Keywords: Weighted spline, Nonparametric regression, Penalized Least Square.
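
    The weighted least-squares criterion minimized here, with weights proportional to the precision factors b_j, can be illustrated with an ordinary weighted polynomial fit (numpy's polyfit accepts per-observation weights). The spline machinery of the paper is more general; this is only a toy analogue on synthetic data.

```python
import numpy as np

# Synthetic heteroscedastic data for the model Z_j = X(t_j) + e_j with
# Var(e_j) = s^2 / b_j: points with larger b_j are measured more precisely.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
b = 1.0 + 4.0 * t                                  # precision factors b_j > 0
z = 2.0 - 3.0*t + 1.5*t**2 + 0.1 * rng.normal(size=t.size) / np.sqrt(b)

# Weighted least squares: np.polyfit multiplies each residual by w_j,
# so w_j = sqrt(b_j) weights points in proportion to their precision.
coef = np.polyfit(t, z, deg=2, w=np.sqrt(b))       # approximately [1.5, -3, 2]
```

    The weight w_j = sqrt(b_j) makes each scaled residual homoscedastic, which is exactly the role the b_j play in the model above.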

  17. Fractional cointegration rank estimation

    DEFF Research Database (Denmark)

    Lasak, Katarzyna; Velasco, Carlos

    the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional...

  18. Estimation of spectral kurtosis

    Science.gov (United States)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, as these signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, adding information to that given by the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd, and falls under model-free, plug-in estimation. Some numerical studies using simulations are discussed to support the methodology. Spectral kurtosis of some stationary signals is analytically obtained and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis is its extension to the frequency domain. The relationship between time domain and frequency domain analysis is established through the power spectrum-autocovariance Fourier transform. The Fourier transform is the main tool for estimation in the frequency domain, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
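
    A common STFT-based plug-in estimator, one plausible reading of the spectrogram approach described (the paper's exact normalization may differ), computes per frequency bin the ratio of the fourth moment of the STFT magnitude to the squared second moment, minus 2. It is near 0 for stationary Gaussian noise and -1 for a constant-amplitude tone, so impulsive fault bands stand out as large positive values.

```python
import numpy as np

def spectral_kurtosis(x, nwin=128):
    """Plug-in spectral kurtosis from an STFT (spectrogram):

        SK(f) = E[|X(t,f)|^4] / E[|X(t,f)|^2]^2 - 2

    where X(t,f) are Hann-windowed, non-overlapping STFT frames."""
    win = np.hanning(nwin)
    frames = [x[i:i+nwin] * win for i in range(0, len(x) - nwin + 1, nwin)]
    X = np.fft.rfft(frames, axis=1)          # one row per time frame
    p2 = np.mean(np.abs(X)**2, axis=0)       # second moment per bin
    p4 = np.mean(np.abs(X)**4, axis=0)       # fourth moment per bin
    return p4 / p2**2 - 2.0
```

    The -2 offset is calibrated to complex Gaussian STFT coefficients, which is why stationary noise maps to roughly zero.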

  19. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  20. Ranking as parameter estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    2009-01-01

    Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

  1. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  2. Estimation of global solar radiation by means of sunshine duration

    Energy Technology Data Exchange (ETDEWEB)

    Luis, Mazorra Aguiar; Felipe, Diaz Reyes [Electrical Engineering Dept., Las Palmas de Gran Canaria Univ. (U.L.P.G.C.), Campus Univ. Tafira (Spain); Pilar, Navarro Rivero [Canary Islands Technological Inst. (I.T.C.), Gran Canaria (Spain)

    2008-07-01

    linear equation obtained from the limit condition data reproduces good results working with daily and monthly TMY series. The root mean square error (%rms) observed is around 6% at each location. With all daily data, on the other hand, the exponential model obtained the best results; furthermore, this proposed model reproduces a similar %rms to the linear one when estimating TMY series. In the northern location of the island the results were not as good, probably due to the high cloudiness of this area compared to the south. In the close range of the TMY series, both the linear and the exponential model estimate global solar irradiation from sunshine duration satisfactorily, whereas using all daily data the exponential model reproduces the best results. A general conclusion is that the exponential model proposed in this paper is the most adequate to estimate global solar irradiation from sunshine duration. (orig.)
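
    Sunshine-duration models of this family descend from the Angström-Prescott relation, which regresses the clearness index on relative sunshine duration. The sketch below uses the linear form with the commonly quoted generic coefficients (a ≈ 0.25, b ≈ 0.50), not the calibration from this record; the exponential variant the record proposes would replace the linear right-hand side.

```python
def angstrom_prescott(sunshine_hours, daylength_hours, h0, a=0.25, b=0.50):
    """Linear Angstrom-Prescott model:

        H / H0 = a + b * (S / S0)

    H0 is daily extraterrestrial irradiation, S/S0 the relative sunshine
    duration. Default a, b are widely used generic values, not local fits.
    """
    s_rel = sunshine_hours / daylength_hours
    return h0 * (a + b * s_rel)
```

    With these defaults, a fully overcast day (S = 0) still yields a*H0 of diffuse irradiation, and a cloud-free day yields (a + b)*H0.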

  3. Estimation of clearness index using neural network with meteorological forecast; Kisho yoho wo nyuryoku toshita neural network ni yoru seiten shisu no yosoku

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, S; Kenmoku, Y; Sakakibara, T [Toyohashi University of Technology, Aichi (Japan); Nakagawa, S [Maizuru National College of Technology, Kyoto (Japan); Kawamoto, T [Shizuoka University, Shizuoka (Japan)

    1997-11-25

    Discussions were given on the estimation of the clearness index in order to operate a solar energy utilizing system stably. The all-sky insolation amount varies not only with changes in the weather, but also with the seasonal change in the sun's altitude. Therefore, a clearness index (the ratio of all-sky insolation to out-of-atmosphere insolation) was used; the larger the value, the higher the solar ray permeability. The all-sky insolation amount is a measured value, while the out-of-atmosphere insolation amount is a calculated value. Although the clearness index may be roughly estimated from a weather forecast, it varies largely even for the same forecast, especially on cloudy days, since a weather forecast actually contains error. Therefore, discussions were given on estimating the clearness index by using a neural network whose inputs are meteorological information such as air temperatures and precipitation probabilities. Using multiple items of meteorological forecast information simultaneously reduced the average square error to 49% of that obtained using the weather forecast alone. The estimation accuracy depends on the accuracy of the meteorological forecast, but using multiple items of forecast information can improve it. 6 refs., 7 figs., 1 tab.
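
    The "calculated value" in the denominator of the clearness index, the out-of-atmosphere insolation, follows from standard solar geometry. The sketch below uses textbook formulas (solar constant 1367 W/m², Cooper's declination, the sunset hour angle), which are not taken from this record.

```python
import math

def extraterrestrial_daily(lat_deg, day_of_year):
    """Daily out-of-atmosphere irradiation on a horizontal surface, MJ/m^2.

    Standard solar-geometry formulas: solar constant 1367 W/m^2, Cooper's
    declination formula, eccentricity correction, and sunset hour angle.
    """
    gsc = 1367.0                                      # solar constant, W/m^2
    phi = math.radians(lat_deg)
    delta = math.radians(23.45 * math.sin(2*math.pi*(284 + day_of_year)/365))
    e0 = 1 + 0.033 * math.cos(2*math.pi*day_of_year/365)  # eccentricity factor
    ws = math.acos(max(-1.0, min(1.0, -math.tan(phi)*math.tan(delta))))
    h0 = (24*3600/math.pi) * gsc * e0 * (
        math.cos(phi)*math.cos(delta)*math.sin(ws)
        + ws*math.sin(phi)*math.sin(delta))
    return h0 / 1e6                                   # J/m^2 -> MJ/m^2

def clearness_index(h_measured, lat_deg, day_of_year):
    """Clearness index: measured all-sky insolation over the calculated
    out-of-atmosphere insolation for the same day and latitude."""
    return h_measured / extraterrestrial_daily(lat_deg, day_of_year)
```

    For a mid-latitude site the denominator swings strongly with season (roughly 2.5x between solstices at 35°), which is why the ratio, rather than the raw insolation, is the useful target for prediction.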

  4. Single snapshot DOA estimation

    Science.gov (United States)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
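
    For a single source, the deterministic maximum likelihood estimator the study favors reduces, with one snapshot, to scanning the array steering vector and maximizing the matched-filter output. The sketch below assumes a uniform linear array with half-wavelength spacing; these details are illustrative and not taken from the record.

```python
import numpy as np

def steering(theta_deg, n_sensors):
    """ULA steering vector for half-wavelength element spacing."""
    k = np.arange(n_sensors)
    return np.exp(1j * np.pi * k * np.sin(np.radians(theta_deg)))

def dml_single_source(x, n_sensors, grid=np.arange(-90.0, 90.5, 0.5)):
    """Single-snapshot, single-source deterministic ML DOA estimate:
    maximize |a(theta)^H x|^2 / ||a(theta)||^2 over a grid of angles."""
    scores = [abs(steering(th, n_sensors).conj() @ x)**2 / n_sensors
              for th in grid]
    return float(grid[int(np.argmax(scores))])
```

    With multiple targets the cost becomes a joint search over several angles, which is where the single-snapshot difficulties discussed in the record (correlated targets, large power differences) appear.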

  5. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
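
    Volume-Based Thermodynamics, mentioned above, correlates lattice energy with the formula-unit volume. A minimal sketch, assuming the commonly quoted Glasser-Jenkins form U_POT = 2I(alpha/Vm^(1/3) + beta) with coefficients often cited for 1:1 salts; the constants here are illustrative and should be checked against the original papers before use.

```python
def lattice_energy_vbt(v_m_nm3, ionic_strength_factor=1, alpha=117.3, beta=51.9):
    """Volume-Based Thermodynamics lattice-energy estimate in kJ/mol.

    U_POT = 2 * I * (alpha / Vm^(1/3) + beta), where Vm is the formula-unit
    volume in nm^3 and I the ionic-strength factor (1 for a 1:1 salt).
    alpha (kJ/mol/nm) and beta (kJ/mol) are assumed illustrative values.
    """
    return 2 * ionic_strength_factor * (alpha / v_m_nm3 ** (1.0 / 3.0) + beta)

# Example: NaCl, with formula-unit volume ~0.0448 nm^3, gives a lattice
# energy in the high-700s kJ/mol, close to the accepted ~787 kJ/mol.
u_nacl = lattice_energy_vbt(0.0448)
```
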

  6. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland)]

    1996-12-31

    The load research project has produced statistical information in the form of load models to convert figures of annual energy consumption to hourly load values. The reliability of load models is limited in any specific network because many local circumstances differ from utility to utility and from time to time. Therefore there is a need to improve the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply, as it utilises information that is already available in SCADA systems

  7. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th

  8. Digital Quantum Estimation

    Science.gov (United States)

    Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo

    2017-11-01

    Quantum metrology calculates the ultimate precision of all estimation strategies, measuring their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in the quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.

  9. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland)]

    1997-12-31

    The load research project has produced statistical information in the form of load models to convert figures of annual energy consumption to hourly load values. The reliability of load models is limited in any specific network because many local circumstances differ from utility to utility and from time to time. Therefore there is a need to improve the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply, as it utilises information that is already available in SCADA systems

  10. Estimating Delays In ASIC's

    Science.gov (United States)

    Burke, Gary; Nesheiwat, Jeffrey; Su, Ling

    1994-01-01

    Verification is an important aspect of the process of designing an application-specific integrated circuit (ASIC). The design must not only be functionally accurate, but must also maintain correct timing. IFA, the Intelligent Front Annotation program, assists in verifying the timing of an ASIC early in the design process. This program speeds the design-and-verification cycle by estimating delays before layouts are completed. Written in the C language.

  11. Organizational flexibility estimation

    OpenAIRE

    Komarynets, Sofia

    2013-01-01

    With the help of parametric estimation, an evaluation scale of organizational flexibility and its parameters was formed. Definite degrees of organizational flexibility and its parameters were determined for enterprises of the Lviv region. Grouping of the enterprises under the existing scale was carried out. Specific recommendations to correct the enterprises' behaviour were given.

  12. On Functional Calculus Estimates

    NARCIS (Netherlands)

    Schwenninger, F.L.

    2015-01-01

    This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm

  13. Estimation of vector velocity

    DEFF Research Database (Denmark)

    2000-01-01

    Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...

  14. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  15. Numerical Estimation in Preschoolers

    Science.gov (United States)

    Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco

    2010-01-01

    Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…

  16. Estimating Gender Wage Gaps

    Science.gov (United States)

    McDonald, Judith A.; Thornton, Robert J.

    2011-01-01

    Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…

  17. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...

  18. On Gnostical Estimates

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2017-01-01

    Roč. 56, č. 2 (2017), s. 125-132 ISSN 0973-1377 Institutional support: RVO:67985807 Keywords : gnostic theory * statistics * robust estimates Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability http://www.ceser.in/ceserp/index.php/ijamas/article/view/4707

  19. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by either stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those that were summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10

  20. Origin of freshwater and polynya water in the Arctic Ocean halocline in summer 2007

    NARCIS (Netherlands)

    Bauch, D.; Rutgers van der Loeff, M.; Andersen, N.; Torres-Valdes, S.; Bakker, K.; Abrahamsen, E.Povl

    2011-01-01

    Extremely low summer sea-ice coverage in the Arctic Ocean in 2007 allowed extensive sampling and a wide quasi-synoptic hydrographic and delta O-18 dataset could be collected in the Eurasian Basin and the Makarov Basin up to the Alpha Ridge and the East Siberian continental margin. With the aim of

  1. Observed thinning of Totten Glacier is linked to coastal polynya variability

    NARCIS (Netherlands)

    Khazendar, A.; Schodlok, M.P.; Fenty, I.; Ligtenberg, S.R.M.; Rignot, Eric; van den Broeke, M.R.

    2013-01-01

    Analysis of ICESat-1 data (2003–2008) shows significant surface lowering of Totten Glacier, the glacier discharging the largest volume of ice in East Antarctica, and less change on nearby Moscow University Glacier. After accounting for firn compaction anomalies, the thinning appears to coincide with

  2. Phytoplankton biomass and pigment responses to Fe amendments in the Pine Island and Amundsen polynyas

    NARCIS (Netherlands)

    Mills, M.M.; Alderkamp, A.C.; Thuróczy, C.E.; van Dijken, G.L.; Laan, P.; de Baar, H.J.W.; Arrigo, K.R.

    2012-01-01

    Nutrient addition experiments were performed during the austral summer in the Amundsen Sea (Southern Ocean) to investigate the availability of organically bound iron (Fe) to the phytoplankton communities, as well as assess their response to Fe amendment. Changes in autotrophic biomass, pigment

  3. Some oceanographic observations in the polynya and along a section in the southwest Indian/ Antarctic Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Naqvi, S.W.A.

    which reduces the surface salinity. A sub-surface oxygen maximum is observed in January associated with a maximum in primary production. Oxygen concentrations at all depths exhibit decreases from January to February in conjunction with increases...

  4. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  5. Spatio-temporal reconstruction of air temperature maps and their application to estimate rice growing season heat accumulation using multi-temporal MODIS data.

    Science.gov (United States)

    Zhang, Li-wen; Huang, Jing-feng; Guo, Rui-fang; Li, Xin-xing; Sun, Wen-bo; Wang, Xiu-zhen

    2013-02-01

    The accumulation of thermal time usually represents the local heat resources to drive crop growth. Maps of temperature-based agro-meteorological indices are commonly generated by the spatial interpolation of data collected from meteorological stations with coarse geographic continuity. To solve the critical problems of estimating air temperature (T(a)) and filling in missing pixels due to cloudy and low-quality images in growing degree days (GDDs) calculation from remotely sensed data, a novel spatio-temporal algorithm for T(a) estimation from Terra and Aqua moderate resolution imaging spectroradiometer (MODIS) data was proposed. This is a preliminary study to calculate heat accumulation, expressed in accumulative growing degree days (AGDDs) above 10 °C, from reconstructed T(a) based on MODIS land surface temperature (LST) data. Verification of maximum T(a), minimum T(a), GDD, and AGDD derived from MODIS data against meteorological calculations showed high correlations, all significant at the 0.01 level. Overall, MODIS-derived AGDD was slightly underestimated, with almost 10% relative error. However, the feasibility of employing AGDD anomaly maps to characterize the 2001-2010 spatio-temporal variability of heat accumulation and estimating the 2011 heat accumulation distribution using only MODIS data was finally demonstrated in the current paper. Our study may supply a novel way to calculate AGDD in heat-related studies concerning crop growth monitoring, agricultural climatic regionalization, and agro-meteorological disaster detection at the regional scale.
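
    The GDD and AGDD bookkeeping underlying the abstract is simple arithmetic. A minimal sketch using the base temperature of 10 °C from the abstract and the common daily-averaging method (the averaging method is an assumption of this sketch):

```python
def growing_degree_days(t_max, t_min, base=10.0):
    """GDD for one day from daily max/min air temperature (deg C):
    the excess of the daily mean temperature over the base, floored at 0."""
    return max(0.0, (t_max + t_min) / 2.0 - base)

def accumulated_gdd(daily_max_min, base=10.0):
    """AGDD: running sum of daily GDD over a growing season.

    `daily_max_min` is a sequence of (t_max, t_min) pairs, one per day.
    """
    total, series = 0.0, []
    for t_max, t_min in daily_max_min:
        total += growing_degree_days(t_max, t_min, base)
        series.append(total)
    return series
```

    For example, three days with (30, 20), (28, 18), and (12, 4) °C contribute 15, 13, and 0 degree-days, so the AGDD series is [15, 28, 28]; a cold day adds nothing rather than subtracting.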

  6. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  7. Distribution load estimation (DLE)

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A.; Lehtonen, M. [VTT Energy, Espoo (Finland)]

    1998-08-01

    The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented

  8. Estimating ISABELLE shielding requirements

    International Nuclear Information System (INIS)

    Stevens, A.J.; Thorndike, A.M.

    1976-01-01

    Estimates were made of the shielding thicknesses required at various points around the ISABELLE ring. Both hadron and muon requirements are considered. Radiation levels at the outside of the shield and at the BNL site boundary are kept at or below 1000 mrem per year and 5 mrem/year respectively. Muon requirements are based on the Wang formula for pion spectra, and the hadron requirements on the hadron cascade program CYLKAZ of Ranft. A muon shield thickness of 77 meters of sand is indicated outside the ring in one area, and hadron shields equivalent to from 2.7 to 5.6 meters in thickness of sand above the ring. The suggested safety allowance would increase these values to 86 meters and 4.0 to 7.2 meters respectively. There are many uncertainties in such estimates, but these last figures are considered to be rather conservative

  9. Variance Function Estimation. Revision.

    Science.gov (United States)

    1987-03-01


  10. Estimating Risk Parameters

    OpenAIRE

    Aswath Damodaran

    1999-01-01

    Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...

  11. Estimating Venezuela's Latent Inflation

    OpenAIRE

    Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo

    2011-01-01

    Percent variation of the consumer price index (CPI) is the inflation indicator most widely used. This indicator, however, has some drawbacks. In addition to measurement errors of the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase of prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest inflation country in Latin America. Late...

  12. Chernobyl source term estimation

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Harvey, T.F.; Lange, R.

    1990-09-01

    The Chernobyl source term available for long-range transport was estimated by integration of radiological measurements with atmospheric dispersion modeling and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. The model simulations revealed that the radioactive cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. By optimizing the agreement between the observed cloud arrival times and duration of peak concentrations measured over Europe, Japan, Kuwait, and the US with the model predicted concentrations, it was possible to derive source term estimates for those radionuclides measured in airborne radioactivity. This was extended to radionuclides that were largely unmeasured in the environment by performing a reactor core radionuclide inventory analysis to obtain release fractions for the various chemical transport groups. These analyses indicated that essentially all of the noble gases, 60% of the radioiodines, 40% of the radiocesium, 10% of the tellurium and about 1% or less of the more refractory elements were released. These estimates are in excellent agreement with those obtained on the basis of worldwide deposition measurements. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents. However, the 137Cs from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests, while the 131I and 90Sr released by the Chernobyl accident was only about 0.1% of that released by the weapon tests. 13 refs., 2 figs., 7 tabs

  13. Estimating Corporate Yield Curves

    OpenAIRE

    Antionio Diaz; Frank Skinner

    2001-01-01

    This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...

  14. Estimation of inspection effort

    International Nuclear Information System (INIS)

    Mullen, M.F.; Wincek, M.A.

    1979-06-01

    An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation
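
    The effectiveness model above rests on detection probabilities for sampled material. A minimal sketch of one standard ingredient, the attributes-sampling (hypergeometric) probability of catching at least one diverted item in a random sample; this is an assumed illustration of the idea, not the paper's full model.

```python
from math import comb

def detection_probability(population, diverted, sample):
    """Probability that a random sample of `sample` items from a population
    of `population` items contains at least one of `diverted` defectives.

    P(detect) = 1 - C(N - d, n) / C(N, n)  (hypergeometric, zero-defect miss).
    """
    if diverted == 0:
        return 0.0
    return 1.0 - comb(population - diverted, sample) / comb(population, sample)
```

    For instance, inspecting 10 items out of 100 when 10 have been diverted detects the diversion with probability about 0.67; such curves are what link sampling plans to inspection effort.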

  15. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article attempts to present the concept of qualitative robustness as forwarded by its first proponents and its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  16. Estimating directional epistasis

    Science.gov (United States)

    Le Rouzic, Arnaud

    2014-01-01

    Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828

  17. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  18. Estimation of Lung Ventilation

    Science.gov (United States)

    Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.

    Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to this data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing them to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy. This chapter reviews the basic measures for estimating regional ventilation from image registration of CT images, their comparison to the existing gold standard, and their application in radiation therapy.

  19. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments...... that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...

  20. Estimating NHL Scoring Rates

    OpenAIRE

    Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research

    2011-01-01

    The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...
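
    A minimal sketch of the scoring-rate consequence of such a Poisson model: given two teams' estimated rates, the probability that one outscores the other in regulation follows from independent Poisson counts. The rates and function names here are illustrative; the paper's model additionally conditions the rates on home-ice advantage and manpower situation.

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def win_probability(lam_home, lam_away, max_goals=15):
    """P(home outscores away in regulation) when the two teams' goal
    totals are independent Poisson counts with the given rates.
    Truncating at max_goals=15 loses negligible probability mass."""
    p = 0.0
    for h in range(max_goals + 1):
        for a in range(h):  # away strictly fewer goals than home
            p += poisson_pmf(h, lam_home) * poisson_pmf(a, lam_away)
    return p
```

    With equal rates the win probability falls below one half because ties carry positive probability; the remainder is split between overtime outcomes, which this sketch does not model.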

  1. Risk estimation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, R.A.D.

    1982-10-01

    Risk assessment involves subjectivity, which makes objective decision making difficult in the nuclear power debate. The author reviews the process and uncertainties of estimating risks as well as the potential for misinterpretation and misuse. Risk data from a variety of aspects cannot be summed because the significance of different risks is not comparable. A method for including political, social, moral, psychological, and economic factors, environmental impacts, catastrophes, and benefits in the evaluation process could involve a broad base of lay and technical consultants, who would explain and argue their evaluation positions. 15 references. (DCK)

  2. Estimating Gear Teeth Stiffness

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

The estimation of gear stiffness is important for determining the load distribution between the gear teeth when two sets of teeth are in contact. Two factors have a major influence on the stiffness; firstly the boundary condition through the gear rim size included in the stiffness calculation...... and secondly the size of the contact. In the FE calculation the true gear tooth root profile is applied. The meshing stiffnesses of gears are highly non-linear; it is, however, found that the stiffness of an individual tooth can be expressed in a linear form assuming that the contact length is constant....

  3. Mixtures Estimation and Applications

    CERN Document Server

    Mengersen, Kerrie; Titterington, Mike

    2011-01-01

    This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions along with MCMC computational methods, together with a range of detailed discussions covering the applications of the methods and features chapters from the leading experts on the subject
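The EM algorithm the book is built around can be sketched for the simplest case: a two-component univariate Gaussian mixture. This is an illustrative stand-alone implementation, not code from the book; the E-step computes each point's responsibility under component 1, and the M-step re-estimates weights, means, and standard deviations.

```python
import math
import random

def em_two_gaussians(x, iters=50):
    """EM for a two-component univariate Gaussian mixture."""
    mu1, mu2 = min(x), max(x)                      # crude initialisation
    s1 = s2 = (max(x) - min(x)) / 4 or 1.0
    w = 0.5                                        # mixing weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for xi in x:
            p1 = w * math.exp(-(xi - mu1) ** 2 / (2 * s1 ** 2)) / s1
            p2 = (1 - w) * math.exp(-(xi - mu2) ** 2 / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means and standard deviations
        n1 = sum(r)
        n2 = len(x) - n1
        w = n1 / len(x)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / n1
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / n2
        s1 = math.sqrt(sum(ri * (xi - mu1) ** 2 for ri, xi in zip(r, x)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (xi - mu2) ** 2 for ri, xi in zip(r, x)) / n2) or 1e-6
    return w, (mu1, s1), (mu2, s2)

rng = random.Random(0)
data = ([rng.gauss(0.0, 1.0) for _ in range(300)] +
        [rng.gauss(5.0, 1.0) for _ in range(300)])
w, c1, c2 = em_two_gaussians(data)   # recovers means near 0 and 5
```

The same E/M alternation generalizes to multivariate and discrete component distributions, and the MCMC methods covered in the book replace the point estimates with full posterior sampling.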

  4. Robust Wave Resource Estimation

    DEFF Research Database (Denmark)

    Lavelle, John; Kofoed, Jens Peter

    2013-01-01

    density estimates of the PDF as a function both of Hm0 and Tp, and Hm0 and T0;2, together with the mean wave power per unit crest length, Pw, as a function of Hm0 and T0;2. The wave elevation parameters, from which the wave parameters are calculated, are filtered to correct or remove spurious data....... An overview is given of the methods used to do this, and a method for identifying outliers of the wave elevation data, based on the joint distribution of wave elevations and accelerations, is presented. The limitations of using a JONSWAP spectrum to model the measured wave spectra as a function of Hm0 and T0......;2 or Hm0 and Tp for the Hanstholm site data are demonstrated. As an alternative, the non-parametric loess method, which does not rely on any assumptions about the shape of the wave elevation spectra, is used to accurately estimate Pw as a function of Hm0 and T0;2....

  5. Estimations of actual availability

    International Nuclear Information System (INIS)

    Molan, M.; Molan, G.

    2001-01-01

Adaptation of the working environment (social, organizational and physical) should assure a higher level of workers' availability and consequently a higher level of workers' performance. A special theoretical model describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of human actual availability in the real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. On a large sample of workers (2,000), standardized values and critical limits for an availability questionnaire were defined. The standardized method was used to identify the most important impacts of environmental factors. Identified problems were eliminated by investments in the organization, in modification of selection and training procedures, and in humanization of the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for identification of impacts. In combination with behavioural analyses and mathematical analyses of connections, it offers the possibility to keep an adequate level of human availability and fitness for duty in each real working situation. The model should be a tool for achieving an adequate level of nuclear safety by keeping an adequate level of workers' availability and fitness for duty. For each individual worker, estimation of the actual level of fitness for duty is possible. Effects of prolonged work and additional tasks can be evaluated. Evaluations of health status effects and ageing are possible on the individual level. (author)

  6. The Cloudy Crystal Ball: Detecting and Disrupting Homegrown Violent Extremism

    Science.gov (United States)

    2018-03-01

investigators that Mateen had been watching videos of the radical cleric Anwar al-Awlaki. This led to a third interview with Mateen, but there was still...Mateen’s viewing of radical videos online was only confirmed during the follow-up investigation. One official said emphatically, “We don’t have a...change over time, the risk assessment can likewise adapt. The Sexual Violence Risk (SVR-20) instrument is designed to detect a sex offender’s risk

  7. Analysis of Ozone in Cloudy Versus Clear Sky Conditions

    Science.gov (United States)

    Strode, Sarah; Douglass, Anne; Ziemke, Jerald

    2016-01-01

    Convection impacts ozone concentrations by transporting ozone vertically and by lofting ozone precursors from the surface, while the clouds and lighting associated with convection affect ozone chemistry. Observations of the above-cloud ozone column (Ziemke et al., 2009) derived from the OMI instrument show geographic variability, and comparison of the above-cloud ozone with all-sky tropospheric ozone columns from OMI indicates important regional differences. We use two global models of atmospheric chemistry, the GMI chemical transport model (CTM) and the GEOS-5 chemistry climate model, to diagnose the contributions of transport and chemistry to observed differences in ozone between areas with and without deep convection, as well as differences in clean versus polluted convective regions. We also investigate how the above-cloud tropospheric ozone from OMI can provide constraints on the relationship between ozone and convection in a free-running climate simulation as well as a CTM.

  8. Student Data Privacy Is Cloudy Today, Clearer Tomorrow

    Science.gov (United States)

    Trainor, Sonja

    2015-01-01

    An introduction to the big picture conversation on student data privacy and the norms that are coming out of it. The author looks at the current state of federal law, and ahead to proposed legislation at the federal level. The intent is to help educators become familiar with the key issues regarding student data privacy in education so as to…

  9. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    Science.gov (United States)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  10. Modules for cloudy days; Module fuer truebe Tage

    Energy Technology Data Exchange (ETDEWEB)

    Rentzing, Sascha

    2011-04-15

Thin-film systems are commonly assumed to be unproductive because of their low efficiency. Those who use them, however, have found that the opposite may be the case: owing to its better weak-light and temperature characteristics, this technology may be more efficient in central European climates than thick-film systems.

  11. Recent variations of cloudiness over Russia from surface daytime observations

    International Nuclear Information System (INIS)

    Chernokulsky, A V; Mokhov, I I; Bulygina, O N

    2011-01-01

    Changes of total and low cloud fraction and the occurrence of different cloud types over Russia were assessed. The analysis was based on visual observations from more than 1600 meteorological stations. Differences between the 2001-10 and 1991-2000 year ranges were evaluated. In general, cloud fraction has tended to increase during recent years. A major increase of total cloud fraction and a decrease of the number of days without clouds are revealed in spring and autumn mostly due to an increase of the occurrence of convective and non-precipitating stratiform clouds. In contrast, the occurrence of nimbostratus clouds has tended to decrease. In general, the ratio between the occurrence of cumulonimbus and nimbostratus clouds has increased for the period 2001-10 relative to 1991-2000. Over particular regions, a decrease of total cloud fraction and an increase of the number of days without clouds are noted.

  12. Cloudy Skies over AGN: Observations with Simbol-X

    Science.gov (United States)

    Salvati, M.; Risaliti, G.

    2009-05-01

Recent time-resolved spectroscopic X-ray studies of bright obscured AGN show that column density variability on time scales of hours/days may be common, at least for sources with NH > 10^23 cm^-2. This opens new opportunities in the analysis of the structure of the circumnuclear medium and of the X-ray source: resolving the variations due to single clouds covering/uncovering the X-ray source provides tight constraints on the source size, the clouds' size and distance, and their average number, density and column density. We show how Simbol-X will provide a breakthrough in this field, thanks to its broad band coverage, allowing (a) to precisely disentangle the continuum and NH variations, and (b) to extend the NH variability analysis to column densities > 10^23 cm^-2.

  13. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  14. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  15. Estimating Discount Rates

    Directory of Open Access Journals (Sweden)

    Laurence Booth

    2015-04-01

    Full Text Available Discount rates are essential to applied finance, especially in setting prices for regulated utilities and valuing the liabilities of insurance companies and defined benefit pension plans. This paper reviews the basic building blocks for estimating discount rates. It also examines market risk premiums, as well as what constitutes a benchmark fair or required rate of return, in the aftermath of the financial crisis and the U.S. Federal Reserve’s bond-buying program. Some of the results are disconcerting. In Canada, utilities and pension regulators responded to the crash in different ways. Utilities regulators haven’t passed on the full impact of low interest rates, so that consumers face higher prices than they should whereas pension regulators have done the opposite, and forced some contributors to pay more. In both cases this is opposite to the desired effect of monetary policy which is to stimulate aggregate demand. A comprehensive survey of global finance professionals carried out last year provides some clues as to where adjustments are needed. In the U.S., the average equity market required return was estimated at 8.0 per cent; Canada’s is 7.40 per cent, due to the lower market risk premium and the lower risk-free rate. This paper adds a wealth of historic and survey data to conclude that the ideal base long-term interest rate used in risk premium models should be 4.0 per cent, producing an overall expected market return of 9-10.0 per cent. The same data indicate that allowed returns to utilities are currently too high, while the use of current bond yields in solvency valuations of pension plans and life insurers is unhelpful unless there is a realistic expectation that the plans will soon be terminated.
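The risk-premium build-up the paper describes can be reproduced as a one-line calculation. The 4.0 per cent base long-term rate comes from the abstract; the 5-6 per cent market risk premium range is an assumption chosen here only to show how it reproduces the 9-10 per cent overall expected market return quoted above.

```python
# Hedged arithmetic sketch: expected market return = base long rate + market risk premium.
base_long_rate = 0.04          # from the abstract
for mrp in (0.05, 0.06):       # assumed premium range, not stated in the abstract
    expected_market_return = base_long_rate + mrp
    print(f"MRP {mrp:.0%} -> expected market return {expected_market_return:.1%}")
```

The same additive structure underlies the allowed-return discussion for utilities: change the premium or the base rate and the required return moves one-for-one.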

  16. Improved Estimates of Clear Sky Longwave Flux and Application to the Tropical Greenhouse Effect

    Science.gov (United States)

    Collins, W. D.

    1997-01-01

The first objective of this investigation is to eliminate the clear-sky offset introduced by the scene-identification procedures developed for the Earth Radiation Budget Experiment (ERBE). Estimates of this systematic bias range from 10 to as high as 30 W/sq m. The initial version of the ScaRaB data is being processed with the original ERBE algorithm. Since the ERBE procedure for scene identification is based upon zonal flux averages, clear scenes with longwave emission well below the zonal mean value are mistakenly classified as cloudy. The erroneous classification is more frequent in regions with deep convection and enhanced mid- and upper-tropospheric humidity. We will develop scene identification parameters with zonal and/or time dependence to reduce or eliminate the bias in the clear-sky data. The modified scene identification procedure could be used for the ScaRaB-specific version of the Earth-radiation products. The second objective is to investigate changes in the clear-sky Outgoing Longwave Radiation (OLR) associated with decadal variations in the tropical and subtropical climate. There is considerable evidence for a shift in the climate state starting in approximately 1977. The shift is accompanied by higher SSTs in the equatorial Pacific, increased tropical convection, and higher values of atmospheric humidity. Other evidence indicates that the humidity in the tropical troposphere has been steadily increasing over the last 30 years. It is not known whether the atmospheric greenhouse effect has increased during this period in response to these changes in SST and precipitable water. We will investigate the decadal-scale fluctuations in the greenhouse effect using Nimbus-7, ERBE, and ScaRaB measurements spanning 1979 to the present. The data from the different satellites will be intercalibrated by comparison with model calculations based upon ship radiosonde observations.
The fluxes calculated from the radiation model will also be used for validation of the

  17. Method for estimation of the spectral distribution that influence electric power of PV module; Taiyo denchi shutsuryoku ni eikyo wo ataeru bunko nissha bunpu no suiteiho

    Energy Technology Data Exchange (ETDEWEB)

    Yamagami, Y.; Tani, T. [Science University of Tokyo, Tokyo (Japan)

    1997-11-25

A method was proposed for estimating the spectral distribution using air mass, precipitable water, and clearness indexes, which are generally obtainable, and a comparative study was made between the spectral distribution obtained by this method and the measured data, using output power of PV modules, etc. as indexes. When solar light enters the atmosphere, it is attenuated by scattering and absorption by various gases and aerosols. The direct and scattered light components that arrive at the earth's surface are functions of air mass and precipitable water. The wavelength distribution of scattered light under a cloudy sky does not depend on air mass, but is strongly affected by the water-vapor absorption bands of clouds. Using relational equations that account for these effects, output power and short-circuit current of PV modules are obtained for comparison with the measured data. As a result, it was found that this method estimates the spectral distribution accurately. Further, seasonal changes in the spectral distribution were well reproduced. A simulation of module output in Sapporo and Okinawa showed that the output in Okinawa is 1.93% larger than in Sapporo. 5 refs., 5 figs., 6 tabs.
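The air-mass input to this kind of spectral-distribution estimate is itself easy to compute. A common sketch is the Kasten and Young (1989) relative air-mass approximation from the solar zenith angle; note this generic formula is an illustration, not necessarily the expression used by the paper above.

```python
import math

def air_mass(zenith_deg):
    """Relative optical air mass from the solar zenith angle in degrees
    (Kasten & Young 1989 empirical fit; valid up to the horizon)."""
    z = math.radians(zenith_deg)
    return 1.0 / (math.cos(z) + 0.50572 * (96.07995 - zenith_deg) ** -1.6364)

print(air_mass(0.0))    # sun overhead: air mass ~ 1
print(air_mass(60.0))   # 60 deg zenith: air mass ~ 2 (roughly 1/cos z)
```

Near the zenith the formula reduces to the familiar 1/cos(z), while the correction term keeps it finite at the horizon, where the plane-parallel secant law diverges.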

  18. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  19. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

Probability sampling methods for estimating recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
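The simplest of the estimators this report covers can be sketched in a few lines: for a single-access site, sample days at random, count visitors on the sampled days, and expand by the inverse sampling fraction. The site, counts, and season length below are hypothetical.

```python
# Design-based expansion estimator under simple random sampling of days
# (hypothetical single-access site; counts are made-up illustration data).
season_days = 120
sampled_counts = [14, 9, 22, 11, 17, 8]   # visitors counted on 6 sampled days

mean_daily_use = sum(sampled_counts) / len(sampled_counts)   # 13.5
use_estimate = season_days * mean_daily_use                  # 1620 visitor-days
```

Two-stage designs extend this by first sampling days and then sampling time periods or access points within each day, with an expansion factor at each stage.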

  20. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
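The least squares family of variogram estimators that the proposed framework subsumes starts from the empirical (method-of-moments, or Matheron) variogram. The sketch below computes that empirical estimate on synthetic data; it illustrates the standard estimator, not the paper's estimating-equation method or its weight matrix.

```python
import numpy as np

def empirical_variogram(coords, values, lags, tol):
    """Matheron estimator: gamma(h) = mean of 0.5*(Z(s_i)-Z(s_j))^2
    over pairs whose separation distance falls within tol of each lag."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol) & (d > 0)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))
values = rng.normal(size=200)            # spatially uncorrelated field
lags = np.array([1.0, 2.0, 3.0, 4.0])
gam = empirical_variogram(coords, values, lags, tol=0.5)
# For an uncorrelated unit-variance field the variogram is flat near 1.
```

Least squares variogram fitting then minimizes a (possibly weighted) distance between these empirical values and a parametric model such as the exponential or spherical variogram.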

  2. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
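The idea of estimating heat of vaporization from easily measured quantities can be illustrated with the classic rule-of-thumb version. Trouton's rule below is a generic approximation, not the three-parameter parabolic correlation the record describes.

```python
# Trouton's rule sketch: for many nonpolar liquids the molar entropy of
# vaporization at the normal boiling point is roughly constant,
# so Hvap ~ 88 J/(mol*K) * Tb. Illustrative only; the technique in the
# abstract uses a fitted parabolic equation instead.
TROUTON_CONSTANT = 88.0  # J/(mol*K), approximate

def hvap_estimate(tb_kelvin):
    """Rough molar heat of vaporization (J/mol) from the normal boiling point."""
    return TROUTON_CONSTANT * tb_kelvin

print(hvap_estimate(353.2))  # benzene, Tb = 353.2 K -> about 31.1 kJ/mol
```

Inverting the same relation gives a boiling-point estimate from a known heat of vaporization, which is the two-way use the abstract highlights for its more refined correlation.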

  3. State estimation in networked systems

    NARCIS (Netherlands)

    Sijs, J.

    2012-01-01

    This thesis considers state estimation strategies for networked systems. State estimation refers to a method for computing the unknown state of a dynamic process by combining sensor measurements with predictions from a process model. The most well known method for state estimation is the Kalman

  4. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  5. Uveal melanoma: Estimating prognosis

    Directory of Open Access Journals (Sweden)

    Swathi Kaliki

    2015-01-01

Full Text Available Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic, and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q gain, as well as tumors classified as Class II by gene expression profiling, are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms "uvea," "iris," "ciliary body," "choroid," "melanoma," "uveal melanoma" and "prognosis," "metastasis," "genetic testing," "gene expression profiling." Relevant English language articles were extracted, reviewed, and referenced appropriately.

  6. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  7. Water Mass Classification on a Highly Variable Arctic Shelf Region: Origin of Laptev Sea Water Masses and Implications for the Nutrient Budget

    Science.gov (United States)

    Bauch, D.; Cherniavskaia, E.

    2018-03-01

Large gradients and interannual variations on the Laptev Sea shelf prevent the use of uniform property ranges for a classification of major water masses. The central Laptev Sea is dominated by predominantly marine waters, locally formed polynya waters and riverine summer surface waters. Marine waters enter the central Laptev Sea from the northwestern Laptev Sea shelf and originate from the Kara Sea or the Arctic Ocean halocline. Local polynya waters are formed in the Laptev Sea coastal polynyas. Riverine summer surface waters are formed from Lena river discharge and local melt. We use a principal component analysis (PCA) in order to assess the distribution and importance of water masses within the Laptev Sea. This mathematical method is applied to hydro-chemical summer data sets from the Laptev Sea from five years and allows us to define water types based on objective and statistically significant criteria. We argue that the PCA-derived water types are consistent with the Laptev Sea hydrography and indeed represent the major water masses on the central Laptev Sea shelf. Budgets estimated for the major Laptev Sea water masses thus defined indicate that freshwater inflow from the western Laptev Sea is about half of, or of the same order of magnitude as, the freshwater stored in locally formed polynya waters. Imported water dominates the nutrient budget in the central Laptev Sea; only in years with enhanced local polynya activity is the nutrient budget of the locally formed water of the same order as that of imported nutrients.
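The PCA step described above can be sketched on synthetic stand-ins for hydro-chemical profiles. The variable names and correlation structure below are invented for illustration; the point is only that PCA on standardised data separates a shared signal (here a common driver of "salinity" and "temperature") from uncorrelated variables.

```python
import numpy as np

def pca_scores(X, k=2):
    """PCA via SVD of the column-standardised data matrix.
    Returns the first k component scores and their explained-variance ratios."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    evr = S ** 2 / (S ** 2).sum()
    return Xs @ Vt[:k].T, evr[:k]

# Synthetic stand-in for station profiles: two variables driven by one
# shared factor t, one independent variable (all hypothetical).
rng = np.random.default_rng(0)
t = rng.normal(size=500)
X = np.column_stack([t + 0.1 * rng.normal(size=500),     # "salinity"
                     -t + 0.1 * rng.normal(size=500),    # "temperature"
                     rng.normal(size=500)])              # "nitrate"
scores, evr = pca_scores(X)   # PC1 captures the shared factor
```

In the study, clustering stations by their leading component scores is what yields water types that can be defended statistically rather than by hand-picked property ranges.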

  8. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...... to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  9. A new estimator for vector velocity estimation [medical ultrasonics

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...
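The modified autocorrelation approach mentioned above builds on the classic one-dimensional (axial) lag-one autocorrelation estimator. The sketch below implements that classic estimator on synthetic baseband data, not the paper's two-dimensional transverse-modulation extension; the transducer parameters are hypothetical.

```python
import cmath
import math

def kasai_velocity(iq, f0=5e6, fprf=5e3, c=1540.0):
    """Classic lag-one autocorrelation axial velocity estimate from complex
    baseband samples taken once per pulse repetition period:
    v = c * fprf * angle(R(1)) / (4 * pi * f0)."""
    r1 = sum(b * a.conjugate() for a, b in zip(iq[:-1], iq[1:]))
    return c * fprf * cmath.phase(r1) / (4 * math.pi * f0)

# Synthetic echoes from a scatterer moving at 0.1 m/s along the beam axis.
v_true, f0, fprf, c = 0.1, 5e6, 5e3, 1540.0
fd = 2 * v_true * f0 / c                                  # Doppler shift, ~649 Hz
iq = [cmath.exp(2j * math.pi * fd * n / fprf) for n in range(32)]
print(kasai_velocity(iq))   # recovers ~0.1 m/s
```

This estimator only sees the axial velocity component; probing with a transversely modulated field, as in the paper, is what makes the lateral component observable as well.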

  10. Estimating Net Primary Productivity Beneath Snowpack Using Snowpack Radiative Transfer Modeling and Global Satellite Data

    Science.gov (United States)

    Barber, D. E.; Peterson, M. C.

    2002-05-01

    will present PAR levels beneath the snowpack for the Northern Hemisphere during spring for both cloudy and clear sky conditions for 1983-1987, describe our methods, and provide the first estimate of the NPP for Cladonia species beneath Northern Hemisphere snowpack. This analysis synthesizes 5 years of data; the variability during this period will be used to discuss the influence of global warming on Arctic plant growth prior to snowmelt.

  11. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    Science.gov (United States)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
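The core sampling-bias calculation described above, comparing a fully sampled reanalysis field with the subset of scenes the sounder successfully retrieves, can be sketched with synthetic numbers. Everything here (temperatures, cloud coupling, retrieval rule) is a hypothetical illustration of the bookkeeping, not the AIRS/MERRA processing.

```python
import numpy as np

# "Truth" field standing in for reanalysis temperatures (K).
rng = np.random.default_rng(2)
truth = 280 + 5 * rng.standard_normal(10_000)

# Hypothetical retrieval rule: most scenes succeed at random, but warm
# (less cloudy) scenes always succeed, so failures correlate with temperature.
retrieved = rng.uniform(0, 1, truth.size) < 0.7
retrieved |= truth > 282

# Instrumental sampling bias: mean of retrieved scenes minus true mean.
sampling_bias = truth[retrieved].mean() - truth.mean()   # warm-biased here
```

Because the failures are cold-correlated, the retrieved-scene climatology comes out warm of the truth, which is the same mechanism behind the cold/dry biases over storm tracks and deep convection reported above (there, cloudy cold scenes are the ones lost).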

  12. Estimation of Water Quality

    International Nuclear Information System (INIS)

    Vetrinskaya, N.I.; Manasbayeva, A.B.

    1998-01-01

Water has a particular ecological function and is an indicator of the general state of the biosphere. In this connection, the toxicological evaluation of water by biological testing methods is highly topical. The peculiarity of biological testing information is that it integrally reflects all the properties of the examined environment as perceived by living objects. Rapid integral evaluation of the anthropogenic situation is the basic aim of biological testing. If this evaluation deviates from the normal state, detailed analysis and identification of dangerous components can be conducted later. The quality of water from the Degelen gallery, where nuclear explosions were conducted, was investigated by bio-testing methods. Micro-organisms (Micrococcus luteus, Candida krusei, Pseudomonas alcaligenes) and the water plant elodea (Elodea canadensis Rich.) were used as test objects. It is known that the transport functions of the cell membranes of living organisms are the first to be disrupted under extreme conditions by various influences. Therefore, the ion permeability of elodea and micro-organism cells contained in the examined water with toxicants was used as the test function. Alteration of membrane permeability was estimated by measuring the electrical conductivity of electrolytes released from the cells of living objects into distilled water. The index of water toxicity is the ratio of the electrical conductivity in the experiment to that in the control. Observations of the general state of plants incubated in toxic water were also made. (The chronic experiment was conducted for 60 days.) The plants were incubated in water samples collected from the gallery in 1996 and 1997. The incubation time was 1-10 days. The results showed that the ion permeability of elodea and micro-organism cells changed greatly under the influence of the radionuclides contained in the tested water.
Changes are taking place even in

  13. WAYS HIERARCHY OF ACCOUNTING ESTIMATES

    Directory of Open Access Journals (Sweden)

    ŞERBAN CLAUDIU VALENTIN

    2015-03-01

Full Text Available Starting on the one hand from the premise that an estimate is an approximate evaluation, together with the fact that the term "estimate" is increasingly common and used in a variety of both theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are in fact dealing with estimates, and in our case with accounting estimates. Completing this, on the other hand, with the phrase "estimated value", which implies a value obtained from an evaluation process whose magnitude is not exact but approximate, i.e. close to the actual size, it becomes obvious that the hierarchical relationship between evaluation and estimate must be delimited while considering the context in which the evaluation activity is carried out at the entity level.

  14. Spring Small Grains Area Estimation

    Science.gov (United States)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

SSG3 automatically estimates the acreage of spring small grains from Landsat data. The report describes the development and testing of a computerized technique that uses Landsat multispectral scanner (MSS) data to estimate the acreage of spring small grains (wheat, barley, and oats). Application of the technique to four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  15. Parameter estimation in plasmonic QED

    Science.gov (United States)

    Jahromi, H. Rangani

    2018-03-01

We address the problem of parameter estimation in the presence of plasmonic modes that manipulate the emitted light via localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, which measures the fraction of emitted energy captured by the waveguide surface plasmons. The best strategy for obtaining the most accurate estimate of the parameter, in terms of the initial state of the probes and the different control parameters, is investigated. In particular, for two-qubit estimation, it is found that, although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product state. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), so the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.

  16. Quantity Estimation Of The Interactions

    International Nuclear Information System (INIS)

    Gorana, Agim; Malkaj, Partizan; Muda, Valbona

    2007-01-01

In this paper we present some considerations about quantitative estimates regarding the range of interactions and the conservation laws in various types of interactions. Our estimates are made from both the classical and the quantum points of view and concern the interaction carriers, the radius, the range of influence, and the intensity of the interactions.

  17. CONDITIONS FOR EXACT CAVALIERI ESTIMATION

    Directory of Open Access Journals (Sweden)

    Mónica Tinajero-Bravo

    2014-03-01

    Full Text Available Exact Cavalieri estimation amounts to zero variance estimation of an integral with systematic observations along a sampling axis. A sufficient condition is given, both in the continuous and the discrete cases, for exact Cavalieri sampling. The conclusions suggest improvements on the current stereological application of fractionator-type sampling.
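The systematic-sampling estimator the record refers to can be sketched in a few lines: sample the integrand at a fixed period starting from a random offset, and weight each observation by the period. The function and period below are invented for illustration; averaging over random offsets shows the estimator is unbiased.

```python
import numpy as np

def cavalieri_estimate(f, period, start, a, b):
    """Systematic (Cavalieri) estimate of the integral of f over [a, b]:
    sample f at start, start + period, ... and weight each value by the period."""
    xs = np.arange(start, b, period)
    xs = xs[xs >= a]
    return period * f(xs).sum()

rng = np.random.default_rng(0)
T = 0.05
f = lambda x: np.sin(np.pi * x)          # integral over [0, 1] is 2/pi

# Average over random starting offsets in [0, T): the estimator is unbiased.
estimates = [cavalieri_estimate(f, T, u, 0.0, 1.0)
             for u in rng.uniform(0.0, T, size=2000)]
print(np.mean(estimates))                # close to 2/pi ≈ 0.6366
```

Exact (zero-variance) Cavalieri estimation, as discussed in the record, corresponds to the special case where every offset yields the same value, so a single systematic sample already equals the integral.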

  18. Optimization of Barron density estimates

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2001-01-01

    Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others:Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords : Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001

  19. Stochastic Estimation via Polynomial Chaos

    Science.gov (United States)

    2015-10-01

AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos. Douglas V. Nance, Air Force Research... Dates covered (from - to): 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic...

  20. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

Full Text Available Abstract Background The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and for rare haplotypes. Results This paper proposes a Bayesian estimator of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distance between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view of the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs needed to fully exploit the power of genome-wide association studies.
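The small-sample bias described above can be reproduced with the standard plug-in (maximum likelihood) estimator of Lewontin's D'. The two-SNP simulation below is a hypothetical illustration, not the paper's MCMC estimator: even when the population is in perfect equilibrium (D' = 0), small samples with rare haplotypes push |D'| toward 1.

```python
import numpy as np

def d_prime(p_ab, p_a, p_b):
    """Lewontin's D' from haplotype frequency p_ab and allele frequencies p_a, p_b."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

# Two loci in perfect equilibrium (true D' = 0), with rare minor alleles ...
p_a, p_b = 0.9, 0.9
probs = [p_a * p_b, p_a * (1 - p_b), (1 - p_a) * p_b, (1 - p_a) * (1 - p_b)]

# ... yet the plug-in estimate from small haplotype samples is biased toward 1.
rng = np.random.default_rng(1)
est = []
for _ in range(2000):
    counts = rng.multinomial(30, probs)            # 30 haplotypes per sample
    freqs = counts / counts.sum()                  # order: AB, Ab, aB, ab
    est.append(abs(d_prime(freqs[0], freqs[0] + freqs[1], freqs[0] + freqs[2])))
print(np.mean(est))   # well above 0 despite true D' = 0
```

Whenever the rare-rare haplotype is absent from a sample (which happens often at n = 30), the estimate hits |D'| = 1 exactly, which is the mechanism behind the bias the record describes.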

  1. Reactivity estimation using digital nonlinear H∞ estimator for VHTRC experiment

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Nabeshima, Kunihiko; Yamane, Tsuyoshi

    2003-01-01

On-line, real-time estimation of time-varying reactivity in a nuclear reactor is necessary for early detection of reactivity anomalies and safe operation. Using a digital nonlinear H∞ estimator, an experiment on real-time dynamic reactivity estimation was carried out in the Very High Temperature Reactor Critical Assembly (VHTRC) of the Japan Atomic Energy Research Institute. Some technical issues of the experiment are described, such as reactivity insertion, data sampling frequency, the anti-aliasing filter, the experimental circuit, and the digitalisation of the nonlinear H∞ reactivity estimator. We then discuss the experimental results obtained by the digital nonlinear H∞ estimator with sampled data of the nuclear instrumentation signal for the power responses under various reactivity insertions. Good performance of the estimated reactivity was observed, with almost no delay relative to the true reactivity and sufficient accuracy, between 0.05 cent and 0.1 cent. The experiment shows that real-time reactivity estimation with a data sampling period of 10 ms can certainly be realized. From the results of the experiment, it is concluded that the digital nonlinear H∞ reactivity estimator can be applied as an on-line real-time reactivity meter for actual nuclear plants. (author)
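The record above concerns an H∞ estimator. As a simpler classical point of comparison, the sketch below recovers a step reactivity insertion from a simulated neutron-density signal by one-delayed-group inverse point kinetics. All constants (β, λ, Λ) are illustrative values chosen for this sketch, not VHTRC data, and the scheme is plain Euler integration rather than the paper's filter.

```python
# One-delayed-group point-kinetics constants (illustrative, not VHTRC's)
beta, lam, Lam = 0.0065, 0.08, 1.0e-3   # delayed fraction, decay const (1/s), generation time (s)
rho0 = 0.1 * beta                        # inserted step reactivity: 10 cents
dt, steps = 1.0e-4, 50_000               # 5 s of simulated signal

# Forward simulation: neutron density n and precursor concentration C
n, C = 1.0, beta / (Lam * lam)           # start at equilibrium
ns = [n]
for _ in range(steps):
    n, C = (n + dt * ((rho0 - beta) / Lam * n + lam * C),
            C + dt * (beta / Lam * n - lam * C))
    ns.append(n)

# Inverse kinetics: recover rho(t) from the recorded n(t) alone, rebuilding
# the precursor concentration from the same neutron-density history
C_rec, rho_est = beta / (Lam * lam), 0.0
for k in range(steps):
    rho_est = beta + Lam * (ns[k + 1] - ns[k]) / dt / ns[k] - Lam * lam * C_rec / ns[k]
    C_rec += dt * (beta / Lam * ns[k] - lam * C_rec)
print(rho_est / beta)   # ≈ 0.1, i.e. the 10-cent insertion is recovered
```

Because the inverse step is an exact rearrangement of the forward difference equation, the recovery here is essentially exact; with a real instrumentation signal, noise and the anti-aliasing filter mentioned in the record become the dominant concerns.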

  2. Age estimation in the living

    DEFF Research Database (Denmark)

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels

    2015-01-01

A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores...... are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA...... in the living on a full set of third molar scores. A cross sectional sample of 854 panoramic radiographs, homogeneously distributed by sex and age (15.0-24.0 years), were randomly split in two; a reference sample for obtaining age estimates including a 95% prediction interval according to TA; and a validation...

  3. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in IRd. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a Boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature for the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (the specific connectivity of a population of sintered bronze particles) is given.

  4. Laser cost experience and estimation

    International Nuclear Information System (INIS)

    Shofner, F.M.; Hoglund, R.L.

    1977-01-01

This report addresses the question of estimating the capital and operating costs of LIS (Laser Isotope Separation) lasers, whose performance requirements are well beyond the state of the mature art. This question is seen from different perspectives by political leaders, ERDA administrators, scientists, and engineers concerned with reducing LIS to economically successful commercial practice on a timely basis. Accordingly, this report attempts to provide ''ballpark'' estimators for capital and operating costs and useful design and operating information for lasers based on mature technology, and for their LIS analogs. It is written at a basic level and is intended to respond about equally to the perspectives of administrators, scientists, and engineers. Its major contributions are establishing the track record of current, mature, industrialized lasers (including capital and operating cost estimators, reliability, types of application, etc.) and, especially, the evolution of generalized estimating procedures for the capital and operating costs of new laser designs.

  5. Improvements to TOVS retrievals over sea ice and applications to estimating Arctic energy fluxes

    Science.gov (United States)

    Francis, Jennifer A.

    1994-01-01

Modeling studies suggest that polar regions play a major role in modulating the Earth's climate and that they may be more sensitive than lower latitudes to climate change. Until recently, however, data from meteorological stations poleward of 70 degrees have been sparse, and consequently our understanding of air-sea-ice interaction processes is relatively poor. Satellite-borne sensors now offer a promising opportunity to observe polar regions and ultimately to improve parameterizations of energy transfer processes in climate models. This study focuses on the application of the TIROS-N operational vertical sounder (TOVS) to sea-ice-covered regions in the nonmelt season. TOVS radiances are processed with the improved initialization inversion ('3I') algorithm, providing estimates of layer-average temperature and moisture, cloud conditions, and surface characteristics at a horizontal resolution of approximately 100 km x 100 km. Although TOVS has flown continuously on polar-orbiting satellites since 1978, its potential has not been realized in high latitudes because the quality of retrievals is often significantly lower over sea ice and snow than over other surfaces. The recent availability of three Arctic data sets has provided an opportunity to validate TOVS retrievals: the first from the Coordinated Eastern Arctic Experiment (CEAREX) in winter 1988/1989, the second from the LeadEx field program in spring 1992, and the third from Russian drifting ice stations. Comparisons with these data reveal deficiencies in TOVS retrievals over sea ice during the cold season; e.g., ice surface temperature is often 5 to 15 K too warm, microwave emissivity is approximately 15% too low at large view angles, clear/cloudy scenes are sometimes misidentified, and low-level inversions are often not captured. In this study, methods to reduce these errors are investigated. Improvements to the ice surface temperature retrieval have reduced rms errors from approximately 7 K to 3 K; correction of

  6. Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.

  7. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
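The idea of directly controlling the condition number can be sketched by shrinking the sample-covariance eigenvalues into an interval. This is a simplified illustration only: the truncation level below is a crude heuristic, whereas the paper derives the likelihood-optimal level.

```python
import numpy as np

def cond_regularized_cov(X, kappa_max=10.0):
    """Shrink sample-covariance eigenvalues into [tau, kappa_max * tau] so the
    estimate's condition number is at most kappa_max. Here tau is a simple
    heuristic (mean eigenvalue / kappa_max); the paper instead derives the
    likelihood-optimal truncation level."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    tau = max(w.mean() / kappa_max, 1e-12)
    w_clipped = np.clip(w, tau, kappa_max * tau)
    return V @ np.diag(w_clipped) @ V.T

rng = np.random.default_rng(2)
p, n = 50, 25                      # "large p, small n": sample covariance is singular
X = rng.standard_normal((n, p))
S_hat = cond_regularized_cov(X, kappa_max=10.0)
w = np.linalg.eigvalsh(S_hat)
print(w.max() / w.min())           # capped near kappa_max = 10: invertible and well-conditioned
```

Note that the raw sample covariance here has rank at most n - 1 = 24 < p = 50, so it is singular; the eigenvalue floor tau is what restores invertibility.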

  8. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required. PMID:23730197

  9. Radiation dose estimates for radiopharmaceuticals

    International Nuclear Information System (INIS)

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.

    1996-04-01

Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs and several other organs of interest. The dose estimates were calculated using the MIRD technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to the organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series differs from the MIRD 5, or Reference Man, phantom in several respects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but they are not likely to significantly affect dose estimates calculated with the two phantoms. The assumptions that support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.

  10. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306

  11. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).

  12. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score) and can be computed efficiently in many cases, and they cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  13. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development, based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.

  14. Likelihood estimators for multivariate extremes

    KAUST Repository

Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  15. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  16. Analytical estimates of structural behavior

    CERN Document Server

    Dym, Clive L

    2012-01-01

    Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys

  17. Phase estimation in optical interferometry

    CERN Document Server

    Rastogi, Pramod

    2014-01-01

    Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre

  18. An Analytical Cost Estimation Procedure

    National Research Council Canada - National Science Library

    Jayachandran, Toke

    1999-01-01

    Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...

  19. Spectral unmixing: estimating partial abundances

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-01-01

Full Text Available techniques is complicated when considering very similar spectral signatures. Iron-bearing oxide/hydroxide/sulfate minerals have similar spectral signatures. The study focuses on how estimates of abundances of spectrally similar iron-bearing oxide...

  20. 50th Percentile Rent Estimates

    Data.gov (United States)

    Department of Housing and Urban Development — Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment...

  1. LPS Catch and Effort Estimation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...

  2. Exploratory shaft liner corrosion estimate

    International Nuclear Information System (INIS)

    Duncan, D.R.

    1985-10-01

An estimate of the expected corrosion degradation during the 100-year design life of the Exploratory Shaft (ES) is presented. The basis for the estimate is a brief literature survey of corrosion data, in addition to data taken by the Basalt Waste Isolation Project. The scope of the study covers the expected corrosion environment of the ES and the corrosion modes of general corrosion, pitting and crevice corrosion, dissimilar metal corrosion, and environmentally assisted cracking. The expected internal and external environment of the shaft liner is described in detail, and the estimated effects of each corrosion mode are given. The maximum general corrosion degradation was estimated to be 70 mils at the exterior and 48 mils at the interior, at the shaft bottom. Corrosion at welds or mechanical joints could be significant, depending on the design. After a final corrosion allowance has been determined by the project, it will be added to the design criteria. 10 refs., 6 figs., 5 tabs

  3. Project Cost Estimation for Planning

    Science.gov (United States)

    2010-02-26

    For Nevada Department of Transportation (NDOT), there are far too many projects that ultimately cost much more than initially planned. Because project nominations are linked to estimates of future funding and the analysis of system needs, the inaccur...

  4. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we give statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of the sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  5. Estimating Emissions from Railway Traffic

    DEFF Research Database (Denmark)

    Jørgensen, Morten W.; Sorenson, Spencer C.

    1998-01-01

    Several parameters of importance for estimating emissions from railway traffic are discussed, and typical results presented. Typical emissions factors from diesel engines and electrical power generation are presented, and the effect of differences in national electrical generation sources...

  6. Travel time estimation using Bluetooth.

    Science.gov (United States)

    2015-06-01

    The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to : estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the : ...

  7. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

    Full Text Available frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006. ...

  8. Estimating solar radiation in Ghana

    International Nuclear Information System (INIS)

    Anane-Fenin, K.

    1986-04-01

    The estimates of global radiation on a horizontal surface for 9 towns in Ghana, West Africa, are deduced from their sunshine data using two methods developed by Angstrom and Sabbagh. An appropriate regional parameter is determined with the first method and used to predict solar irradiation in all the 9 stations with an accuracy better than 15%. Estimates of diffuse solar irradiation using the correlations of Page, Lin and Jordan, and three other authors are made and the results examined. (author)
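An Angstrom-type sunshine correlation like the one above has a simple closed form. A minimal sketch follows; the coefficients a and b are the widely quoted generic defaults, not the regional parameter derived in the paper:

```python
def angstrom_prescott(H0, n, N, a=0.25, b=0.50):
    """Estimate daily global radiation H on a horizontal surface from
    sunshine data: H/H0 = a + b * (n/N), where H0 is the extraterrestrial
    radiation, n the measured sunshine hours, and N the maximum possible
    sunshine hours. a and b are regional coefficients; 0.25/0.50 are
    illustrative defaults, not the values fitted for Ghana."""
    return H0 * (a + b * (n / N))

# Example: H0 = 35 MJ/m^2/day, 8 h of sunshine out of a possible 12 h.
H = angstrom_prescott(35.0, 8.0, 12.0)
```

Fitting a and b to local measurements (as the paper does for its regional parameter) is a simple linear regression of H/H0 on n/N.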

  9. The Psychology of Cost Estimating

    Science.gov (United States)

    Price, Andy

    2016-01-01

    Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings about how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception, rather than facts and data. These built-in biases in our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.

  10. Estimating emissions from railway traffic

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, M.W.; Sorenson, C.

    1997-07-01

    The report discusses methods that can be used to estimate the emissions from various kinds of railway traffic. The methods are based on the estimation of the energy consumption of the train, so that comparisons can be made between electric and diesel driven trains. Typical values are given for the necessary traffic parameters, emission factors, and train loading. Detailed models for train energy consumption are presented, as well as empirically based methods using average train speed and distance between stops. (au)
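The basic accounting described (estimated energy consumption multiplied by per-kWh emission factors, compared across traction types) can be sketched as follows. The factor values are placeholders for illustration, not figures from the report:

```python
# Hypothetical emission factors in g of pollutant per kWh; real values
# depend on the engine type and the national electricity generation mix.
DIESEL_G_PER_KWH = {"NOx": 12.0, "CO2": 700.0}
ELECTRIC_G_PER_KWH = {"NOx": 1.5, "CO2": 450.0}

def train_emissions(energy_kwh, factors):
    """Emissions for a trip = estimated energy consumption x emission
    factor, computed per pollutant species."""
    return {species: energy_kwh * g for species, g in factors.items()}

# Compare the two traction types for the same 2000 kWh trip.
diesel = train_emissions(2000.0, DIESEL_G_PER_KWH)
electric = train_emissions(2000.0, ELECTRIC_G_PER_KWH)
```

The interesting modelling work, as the report notes, is in estimating the energy consumption itself from traffic parameters such as train loading, average speed, and distance between stops.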

  11. Efficient, Differentially Private Point Estimators

    OpenAIRE

    Smith, Adam

    2008-01-01

    Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...
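The paper's construction is parametric and more involved; as background, a minimal sketch of a standard differentially private point estimator (the textbook Laplace-mechanism mean, not the estimator constructed in the paper) looks like this:

```python
import random

def dp_mean(data, lo, hi, epsilon):
    """Epsilon-differentially private mean via the Laplace mechanism.

    Each record is clamped to [lo, hi], so changing one record moves the
    mean by at most (hi - lo) / n (the sensitivity); Laplace noise with
    scale sensitivity / epsilon then masks any single record's influence.
    """
    n = len(data)
    clamped = [min(max(x, lo), hi) for x in data]
    sensitivity = (hi - lo) / n
    b = sensitivity / epsilon
    # A Laplace(b) sample is the difference of two Exponential(1/b) samples.
    noise = random.expovariate(1.0 / b) - random.expovariate(1.0 / b)
    return sum(clamped) / n + noise
```

As in the paper's asymptotic result, the noise scale shrinks like 1/n, so for large samples the private estimate converges to the non-private one.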

  12. Computer-Aided Parts Estimation

    OpenAIRE

    Cunningham, Adam; Smart, Robert

    1993-01-01

    In 1991, Ford Motor Company began deployment of CAPE (computer-aided parts estimating system), a highly advanced knowledge-based system designed to generate, evaluate, and cost automotive part manufacturing plans. cape is engineered on an innovative, extensible, declarative process-planning and estimating knowledge representation language, which underpins the cape kernel architecture. Many manufacturing processes have been modeled to date, but eventually every significant process in motor veh...

  13. Guideline to Estimate Decommissioning Costs

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Taesik; Kim, Younggook; Oh, Jaeyoung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The primary objective of this work is to provide guidelines for estimating the decommissioning cost, and to give stakeholders plausible information for understanding decommissioning activities in a reasonable manner, which eventually contributes to acquiring public acceptance for the nuclear power industry. Although several decommissioning cost estimates have been made for a few commercial nuclear power plants, the different technical, site-specific, and economic assumptions used make it difficult to interpret those cost estimates and compare them with that of a relevant plant. Trustworthy cost estimates are crucial to planning a safe and economic decommissioning project. The typical approach is to break down the decommissioning project into a series of discrete and measurable work activities. Although plant-specific differences arising from the economic and technical assumptions make it difficult for a licensee to estimate reliable decommissioning costs, cost estimation is among the most crucial processes, since it encompasses the full spectrum of activities from planning to the final evaluation of whether a decommissioning project has proceeded successfully from the safety and economic points of view. Hence, it is clear that tenacious efforts are needed to perform the decommissioning project successfully.

  14. Estimation of Land Surface Temperature through Blending MODIS and AMSR-E Data with the Bayesian Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Xiaokang Kou

    2016-01-01

    Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs inversed from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of merged LSTs reaches 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0~5 cm soil temperatures. The results showed that the accuracy of merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further studies.
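BME itself is well beyond a few lines, but the underlying fusion idea (fine TIR values where available, coarse PMW values in cloud gaps, and a blend where both exist) can be sketched per pixel with a much simpler precision-weighted rule. The variances below are illustrative, not values from the study:

```python
def fuse_lst(tir, pmw, var_tir=1.0, var_pmw=9.0):
    """Precision-weighted fusion of a fine-resolution TIR LST and a
    coarse PMW LST for one pixel; None marks a cloud gap. This is a far
    simpler rule than BME, shown only to illustrate how PMW values fill
    TIR gaps while TIR dominates where both observations exist."""
    if tir is None and pmw is None:
        return None          # no observation at all
    if tir is None:
        return pmw           # cloud gap: fall back to the PMW estimate
    if pmw is None:
        return tir
    w_tir, w_pmw = 1.0 / var_tir, 1.0 / var_pmw
    return (w_tir * tir + w_pmw * pmw) / (w_tir + w_pmw)
```

Unlike this pixel-wise rule, BME also exploits the spatial covariance of LST, which is why the paper's merged fields show better spatial continuity than direct gap filling.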

  15. Short-term light and leaf photosynthetic dynamics affect estimates of daily understory photosynthesis in four tree species.

    Science.gov (United States)

    Naumburg, Elke; Ellsworth, David S

    2002-04-01

    Instantaneous measurements of photosynthesis are often implicitly or explicitly scaled to longer time frames to provide an understanding of plant performance in a given environment. For plants growing in a forest understory, results from photosynthetic light response curves in conjunction with diurnal light data are frequently extrapolated to daily photosynthesis (A(day)), ignoring dynamic photosynthetic responses to light. In this study, we evaluated the importance of two factors on A(day) estimates: dynamic physiological responses to photosynthetic photon flux density (PPFD); and time-resolution of the PPFD data used for modeling. We used a dynamic photosynthesis model to investigate how these factors interact with species-specific photosynthetic traits, forest type, and sky conditions to affect the accuracy of A(day) predictions. Increasing time-averaging of PPFD significantly increased the relative overestimation of A(day) similarly for all study species because of the nonlinear response of photosynthesis to PPFD (15% with 5-min PPFD means). Depending on the light environment characteristics and species-specific dynamic responses to PPFD, understory tree A(day) can be overestimated by 6-42% for the study species by ignoring these dynamics. Although these overestimates decrease under cloudy conditions where direct sunlight and consequently understory sunfleck radiation is reduced, they are still significant. Within a species, overestimation of A(day) as a result of ignoring dynamic responses was highly dependent on daily sunfleck PPFD and the frequency and irradiance of sunflecks. Overall, large overestimates of A(day) in understory trees may cause misleading inferences concerning species growth and competition in forest understories with sunlight. We conclude that comparisons of A(day) among co-occurring understory species in deep shade will be enhanced by consideration of sunflecks by using high-resolution PPFD data and understanding the physiological
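The core effect described above (time-averaging PPFD before applying a concave light response inflates estimated assimilation, by Jensen's inequality) can be reproduced with a toy rectangular-hyperbola light response. The parameters and the sunfleck regime are illustrative, not fitted to the study species:

```python
def assim(ppfd, a_max=10.0, k=300.0):
    """Steady-state rectangular-hyperbola light response (illustrative
    parameters); ignores the dynamic induction limitations the paper's
    model accounts for."""
    return a_max * ppfd / (ppfd + k)

# Sunfleck-like regime: mostly deep shade with brief high-light spikes.
ppfd_1s = [20.0] * 50 + [1500.0] * 10       # 1-s PPFD samples
mean_ppfd = sum(ppfd_1s) / len(ppfd_1s)

a_true = sum(assim(q) for q in ppfd_1s) / len(ppfd_1s)  # from 1-s data
a_avg = assim(mean_ppfd)                                 # from averaged PPFD
# a_avg > a_true: applying the concave response to averaged PPFD
# overestimates assimilation, as the paper reports for coarse PPFD means.
```

The gap between `a_avg` and `a_true` grows with the irradiance contrast between sunflecks and background shade, which is consistent with the paper's finding that overestimation depends on sunfleck frequency and irradiance.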

  16. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
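One of the methods compared above, the nearest neighbor estimator, is compact enough to sketch in one dimension (this sketch assumes the k-th neighbour distance is nonzero, i.e. no k-fold ties at the query point):

```python
def knn_density(x, sample, k=5):
    """Nearest-neighbor density estimate in 1-D: f(x) ~ k / (2 n r_k),
    where r_k is the distance from x to its k-th nearest sample point.
    Small k tracks the data closely but is noisy; large k smooths."""
    n = len(sample)
    r_k = sorted(abs(s - x) for s in sample)[k - 1]
    return k / (2.0 * n * r_k)
```

The sensitivity to the choice of k is exactly the kind of parameter dependence the survey examines for small samples.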

  17. Weldon Spring historical dose estimate

    International Nuclear Information System (INIS)

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr

  18. Weldon Spring historical dose estimate

    Energy Technology Data Exchange (ETDEWEB)

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.

  19. An improved estimation and focusing scheme for vector velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1999-01-01

    to reduce spatial velocity dispersion. Examples of different velocity vector conditions are shown using the Field II simulation program. A relative accuracy of 10.1 % is obtained for the lateral velocity estimates for a parabolic velocity profile for a flow perpendicular to the ultrasound beam and a signal...

  20. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...

  1. Estimating formwork striking time for concrete mixes

    African Journals Online (AJOL)

    eobe

    In this study, we estimated the time for strength development in concrete cured up to 56 days. Water. In this .... regression analysis using MS Excel 2016 Software performed on the ..... [1] Abolfazl, K. R, Peroti S. and Rahemi L 'The Effect of.

  2. Using a field radiometer to estimate instantaneous sky clearness Radiômetro de campo para cálculo da clareza instantânea do céu

    Directory of Open Access Journals (Sweden)

    Eduardo G. Souza

    2006-06-01

    Full Text Available Reflectance measurements of crop plants and canopies show promise for guiding within-season, variable-rate nitrogen (N) application. Most research results have been obtained around solar noon with clear skies. However, for practical application, the system must work under cloudy skies or away from solar noon. The objective of this work was to assess the effect of cloud conditions on reflectance measurements of a corn canopy. The approach was to estimate an instantaneous sky clearness index (ICI) which could be used to correct field radiometer data for variations in cloud cover, such that the same reflectance reading would be obtained (and the same N recommendation made) for the same plants regardless of cloud conditions. Readings were taken from morning until night over 11 days with a range of sky conditions (sunny, overcast, partly cloudy). Data from clear days were used to estimate the theoretical expected spectral global radiation incident on a horizontal surface. The ICI was calculated as the ratio between the actual spectral global radiation and the corresponding theoretical global radiation. Analysis of the ICI for each band showed that the influence of cloudiness was different for each band. Thus, the cloud effect could not be compensated by the use of a band ratio or vegetation index.
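The ICI computation described is a simple ratio of measured to theoretical clear-sky radiation. A sketch follows; the per-band correction is included only to illustrate the idea the authors tested, since their result is that a single-band ratio does not fully compensate the band-specific cloud effect:

```python
def instantaneous_clearness_index(measured, clear_sky):
    """ICI = measured spectral global radiation / theoretical clear-sky
    value for the same band, site, and solar time. ICI is near 1 under
    clear sky and drops below 1 as cloud cover increases."""
    return measured / clear_sky

def correct_band_reading(raw_reading, ici):
    """Candidate normalisation of a radiometer band reading by the ICI.
    Illustrative only: the paper found the cloud influence differs by
    band, so this single-band correction cannot fully remove it."""
    return raw_reading / ici
```

In practice the theoretical clear-sky term would come from a fitted clear-day model for each band, as the authors derived from their 11-day dataset.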

  3. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discuss prediction error methods for identification of linear models tailored for model predictive control....

  4. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  5. Estimation of effective wind speed

    Science.gov (United States)

    Østergaard, K. Z.; Brath, P.; Stoustrup, J.

    2007-07-01

    The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed forward. Unfortunately, no accurate online measurement of the effective wind speed is available from direct measurements, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
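The final step (inverting the static aerodynamic model for wind speed, given the observer's torque and rotor speed estimates) can be sketched with root finding. The Cp curve below is made up to be monotone so bisection is valid, and it ignores the pitch angle; a real implementation would interpolate the turbine's tabulated Cp(lambda, pitch):

```python
import math

RHO, R = 1.225, 40.0   # air density (kg/m^3), rotor radius (m); illustrative

def cp(lam):
    """Made-up monotone power-coefficient curve (capped at 0.45)."""
    return min(0.45, 2.0 / lam)

def aero_torque(v, omega):
    """Static model: Q = 0.5 rho pi R^2 v^3 Cp(lambda) / omega,
    with tip-speed ratio lambda = omega * R / v."""
    lam = omega * R / v
    return 0.5 * RHO * math.pi * R ** 2 * v ** 3 * cp(lam) / omega

def effective_wind_speed(q_est, omega, lo=3.0, hi=25.0):
    """Invert the static model by bisection: find v with Q(v) = q_est,
    where q_est and omega would come from the combined state-and-input
    observer described in the abstract."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if aero_torque(mid, omega) < q_est:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the toy Cp makes torque strictly increasing in wind speed at fixed rotor speed, the inversion recovers the wind speed that produced a given torque.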

  6. Estimation and valuation in accounting

    Directory of Open Access Journals (Sweden)

    Cicilia Ionescu

    2014-03-01

    Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. International Financial Reporting Standards IAS/IFRS aim to ensure the comparability and relevance of the accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a process used continually, in order to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear: they are recorded in the contracts with third parties, in the supporting documents, etc. However, the uncertainties in which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items composing the financial statements must be determined by using estimates.

  7. Integral Criticality Estimators in MCATK

    Energy Technology Data Exchange (ETDEWEB)

    Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory

    2016-06-14

    The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the keff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK’s usefulness in estimating these important quantities.
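The keff quantity MCATK estimates is the dominant eigenvalue of the neutron fission operator. On a deterministic toy problem it can be illustrated with power iteration on a small fission matrix (this shows only the eigenvalue being sought, not the toolkit's Monte Carlo estimators):

```python
def k_eigenvalue(fission_matrix, n_iter=200):
    """Power iteration for the dominant eigenvalue of a small
    nonnegative fission matrix: repeatedly apply the matrix to a
    normalised source vector; the source sum after each application
    converges to the dominant (k-effective-like) eigenvalue."""
    n = len(fission_matrix)
    source = [1.0 / n] * n        # fission source, normalised to sum 1
    k = 0.0
    for _ in range(n_iter):
        new = [sum(fission_matrix[i][j] * source[j] for j in range(n))
               for i in range(n)]
        k = sum(new)              # eigenvalue estimate (source summed to 1)
        source = [s / k for s in new]
    return k
```

Monte Carlo keff iteration follows the same fixed-point structure, with particle histories replacing the explicit matrix-vector product.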

  8. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co...

  9. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    . In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...
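The starting point for the approaches compared above is the empirical semivariogram, to which a model is then fitted by least squares or maximum likelihood. A method-of-moments sketch in one dimension:

```python
def empirical_semivariogram(points, values, lag, tol):
    """Method-of-moments estimate: gamma(h) = mean of (z_i - z_j)^2 / 2
    over all pairs of 1-D locations whose separation lies within
    lag +/- tol. Returns NaN for empty lag bins."""
    num, count = 0.0, 0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            h = abs(points[i] - points[j])
            if abs(h - lag) <= tol:
                num += (values[i] - values[j]) ** 2
                count += 1
    return num / (2 * count) if count else float("nan")
```

Evaluating this over a sequence of lag bins yields the point cloud to which a semivariogram model (spherical, exponential, etc.) is fitted, e.g. for salinity over the Kattegat sampling stations.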

  10. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
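The adjustment described (dividing image intensity by the shading predicted from estimated surface orientation) can be sketched under a Lambertian model. Here the surface normals are supplied directly for illustration, whereas the paper estimates orientation from the image:

```python
def reflectivity(intensity, normal, light_dir):
    """Adjust an image intensity for surface orientation under a
    Lambertian model: I = rho * max(0, n . l), so rho = I / (n . l).
    Vectors are assumed unit length; unlit pixels are unrecoverable."""
    ndotl = sum(a * b for a, b in zip(normal, light_dir))
    if ndotl <= 1e-6:
        return None
    return intensity / ndotl

# Two pixels of the same albedo (0.8) on a curved surface appear
# different because their normals differ...
light = (0.0, 0.0, 1.0)
n1, n2 = (0.0, 0.0, 1.0), (0.6, 0.0, 0.8)
i1, i2 = 0.8 * 1.0, 0.8 * 0.8      # shaded intensities
r1 = reflectivity(i1, n1, light)
r2 = reflectivity(i2, n2, light)
# ...but the adjusted values coincide, so histogram-based segmentation
# groups them into one region of constant reflectivity.
```

This is why histogram segmentation applied to the adjusted values, rather than the raw intensities, recovers regions of constant scene reflectivity on curved surfaces.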

  11. Estimation of strong ground motion

    International Nuclear Information System (INIS)

    Watabe, Makoto

    1993-01-01

    A fault model has been developed to estimate strong ground motion in consideration of the characteristics of the seismic source and the propagation path of seismic waves. There are two different approaches in the model. The first is a theoretical approach, while the second is a semi-empirical approach. Though the latter is more practical than the former for estimating input motions, it requires at least the small-event records, the value of the seismic moment of the small event, and the fault model of the large event

  12. Multicollinearity and maximum entropy leuven estimator

    OpenAIRE

    Sudhanshu Mishra

    2004-01-01

    Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that MMEL estimator performs significantly better than OLS as well as MEL estimators.

  13. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of the unrecorded consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of estimation methods for unrecorded alcohol consumption.

  14. Collider Scaling and Cost Estimation

    International Nuclear Information System (INIS)

    Palmer, R.B.

    1986-01-01

    This paper deals with collider cost and scaling. The main points of the discussion are the following ones: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power consideration, average power consumption; 2) cost optimization; 3) Bremsstrahlung considerations; 4) Focusing optics: conventional, laser focusing or super disruption. 13 refs

  15. Helicopter Toy and Lift Estimation

    Science.gov (United States)

    Shakerin, Said

    2013-01-01

    A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)
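The estimate the article describes uses the standard lift formula. The numbers below are illustrative stand-ins for the student measurements of the toy, not values from the article:

```python
def lift(rho, v, area, cl):
    """Standard lift equation: L = 0.5 * rho * v^2 * A * C_L."""
    return 0.5 * rho * v * v * area * cl

# Illustrative values: air density 1.2 kg/m^3, representative blade
# speed 10 m/s, total blade area 0.002 m^2, lift coefficient 0.8.
L = lift(1.2, 10.0, 0.002, 0.8)          # lift in newtons

# Compare with the weight of a hypothetical 8 g toy (g = 9.81 m/s^2):
weight = 0.008 * 9.81
# L > weight: the estimated lift is sufficient to overcome the weight.
```

The pedagogical point survives any reasonable choice of numbers: once the measured quantities are plugged in, the computed lift should exceed the toy's weight for it to climb.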

  16. Estimation of potential uranium resources

    International Nuclear Information System (INIS)

    Curry, D.L.

    1977-09-01

    Potential estimates, like reserves, are limited by the information on hand at the time and are not intended to indicate the ultimate resources. Potential estimates are based on geologic judgement, so their reliability is dependent on the quality and extent of geologic knowledge. Reliability differs for each of the three potential resource classes. It is greatest for probable potential resources because of the greater knowledge base resulting from the advanced stage of exploration and development in established producing districts where most of the resources in this class are located. Reliability is least for speculative potential resources because no significant deposits are known, and favorability is inferred from limited geologic data. Estimates of potential resources are revised as new geologic concepts are postulated, as new types of uranium ore bodies are discovered, and as improved geophysical and geochemical techniques are developed and applied. Advances in technology that permit the exploitation of deep or low-grade deposits, or the processing of ores of previously uneconomic metallurgical types, also will affect the estimates

  17. An Improved Cluster Richness Estimator

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²(ln L_X) = (0.86 ± 0.02)² to σ²(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.

  18. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
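The crude Monte Carlo idea referenced in this record can be sketched in a few lines. The limit-state function and distribution parameters below are purely illustrative stand-ins, not the bridge reliability or deterioration model of the paper:

```python
import random

def reliability_mc(g, sample, n=100_000, seed=1):
    """Crude Monte Carlo estimate of reliability P(g(X) > 0)."""
    rng = random.Random(seed)
    safe = sum(1 for _ in range(n) if g(sample(rng)) > 0)
    return safe / n

# Hypothetical limit state: resistance R minus load effect S.
def sample(rng):
    R = rng.lognormvariate(1.0, 0.1)   # resistance (illustrative)
    S = rng.gauss(1.5, 0.4)            # load effect (illustrative)
    return R, S

rel = reliability_mc(lambda x: x[0] - x[1], sample)
```

Evaluating the same estimator at a sequence of time steps, with a deterioration model shrinking R, would trace out a reliability distribution over time in the sense discussed above.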

  19. Estimation of Motion Vector Fields

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1993-01-01

    This paper presents an approach to the estimation of 2-D motion vector fields from time-varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample fields by means of stochastic relaxation implemented via the Gibbs sampler.

  20. Multispacecraft current estimates at swarm

    DEFF Research Database (Denmark)

    Dunlop, M. W.; Yang, Y.-Y.; Yang, J.-Y.

    2015-01-01

    During the first several months of the three-spacecraft Swarm mission all three spacecraft came repeatedly into close alignment, providing an ideal opportunity for validating the proposed dual-spacecraft method for estimating current density from the Swarm magnetic field data. Two of the Swarm...

  1. Estimating Swedish biomass energy supply

    International Nuclear Information System (INIS)

    Johansson, J.; Lundqvist, U.

    1999-01-01

    Biomass is suggested to supply an increasing amount of energy in Sweden. There have been several studies estimating the potential supply of biomass energy, including that of the Swedish Energy Commission in 1995. The Energy Commission based its estimates of biomass supply on five other analyses which presented a wide variation in estimated future supply, in large part due to differing assumptions regarding important factors. In this paper, these studies are assessed, and the estimated potential biomass energy supplies are discussed with regard to prices, technical progress and energy policy. The supply of logging residues depends on the demand for wood products and is limited by ecological, technological, and economic restrictions. The supply of stemwood from early thinning for energy and of straw from cereal and oil seed production is mainly dependent upon economic considerations. One major factor for the supply of willow and reed canary grass is the size of arable land projected to be not needed for food and fodder production. Future supply of biomass energy depends on energy prices and technical progress, both of which are driven by energy policy priorities. Biomass energy has to compete with other energy sources as well as with alternative uses of biomass such as forest products and food production. Technical progress may decrease the costs of biomass energy and thus increase its competitiveness. Economic instruments, including carbon taxes and subsidies, and allocation of research and development resources, are driven by energy policy goals and can change the competitiveness of biomass energy

  2. Estimates of wildland fire emissions

    Science.gov (United States)

    Yongqiang Liu; John J. Qu; Wanting Wang; Xianjun Hao

    2013-01-01

    Wildland fire emissions can significantly affect regional and global air quality, radiation, climate, and the carbon cycle. A fundamental and yet challenging prerequisite to understanding the environmental effects is to accurately estimate fire emissions. This chapter describes and analyzes fire emission calculations. Various techniques (field measurements, empirical...

  3. State Estimation for Tensegrity Robots

    Science.gov (United States)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.

  4. Fuel Estimation Using Dynamic Response

    National Research Council Canada - National Science Library

    Hines, Michael S

    2007-01-01

    ...'s simulated satellite (SimSAT) to known control inputs. With an iterative process, the moment of inertia of SimSAT about the yaw axis was estimated by matching a model of SimSAT to the measured angular rates...

  5. Empirical estimates of the NAIRU

    DEFF Research Database (Denmark)

    Madsen, Jakob Brøchner

    2005-01-01

    equations. In this paper it is shown that a high proportion of the constant term is a statistical artefact and suggests a new method which yields approximately unbiased estimates of the time-invariant NAIRU. Using data for OECD countries it is shown that the constant-term correction lowers the unadjusted...

  6. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. It is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity resolutions, obtained from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, using a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, the method can be implemented as a parallel procedure. The velocity is thus estimated without the high noise of differentiators or the drifting bias of integration, and with less delay, making it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. The method also allows velocity sensing with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We compare this method with Kalman and Butterworth filters in terms of stability and delay, and benchmark them by long-time integration of the estimated velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
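The WCE itself is wavelet-based, but the complementary principle it builds on (integrated acceleration supplying the high-frequency content, differentiated position supplying the low-frequency content) can be illustrated with a simple first-order blend; the constant `alpha` and the test signal are illustrative, not from the paper:

```python
import math

def complementary_velocity(pos, acc, dt, alpha=0.98):
    """First-order complementary velocity estimate: blend integrated
    acceleration (high-frequency path) with differentiated position
    (low-frequency path). alpha sets the crossover (assumption)."""
    v, prev, out = 0.0, pos[0], []
    for p, a in zip(pos, acc):
        v_int = v + a * dt           # integrate acceleration
        v_diff = (p - prev) / dt     # differentiate position
        v = alpha * v_int + (1 - alpha) * v_diff
        out.append(v)
        prev = p
    return out

# Synthetic signal: x = sin(t), so the true velocity is cos(t)
dt = 0.001
t = [i * dt for i in range(5000)]
pos = [math.sin(x) for x in t]
acc = [-math.sin(x) for x in t]
vel = complementary_velocity(pos, acc, dt)
```

The wavelet filter banks of the WCE play the role of `alpha` here, splitting the two sensor paths by resolution instead of by a single crossover frequency.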

  7. Load Estimation from Modal Parameters

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández

    2007-01-01

    In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...

  8. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
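For reference, the standard plug-in (mean-difference) Gini estimator whose behaviour the paper studies under infinite variance can be written as follows; the sanity-check inputs are illustrative:

```python
def gini(x):
    """Plug-in estimator of the Gini coefficient:
    G = sum_i (2i - n - 1) x_(i) / (n * sum(x)), i = 1..n over sorted x."""
    xs = sorted(x)
    n = len(xs)
    num = sum((2 * i - n - 1) * v for i, v in enumerate(xs, start=1))
    return num / (n * sum(xs))

print(gini([1, 1, 1, 1]))     # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))   # extreme concentration, n=4 -> 0.75
```

Under a fat-tailed generating process with tail index α∈(1,2), this estimator remains consistent but converges slowly and is downward-biased in finite samples, which is the regime the record describes.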

  9. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  10. Correlation Dimension Estimation for Classification

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2006-01-01

    Roč. 1, č. 3 (2006), s. 547-557 ISSN 1895-8648 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation dimension * probability density estimation * classification * UCI MLR Subject RIV: BA - General Mathematics

  11. Molecular pathology and age estimation.

    Science.gov (United States)

    Meissner, Christoph; Ritz-Timme, Stefanie

    2010-12-15

    Over the course of our lifetime a stochastic process leads to gradual alterations of biomolecules on the molecular level, a process that is called ageing. Important changes are observed at the DNA level as well as at the protein level and are the cause and/or consequence of our 'molecular clock', influenced by genetic as well as environmental parameters. These alterations on the molecular level may aid in forensic medicine to estimate the age of a living person, a dead body or even skeletal remains for identification purposes. Four such important alterations have become the focus of molecular age estimation in the forensic community over the last two decades. The age-dependent accumulation of the 4977 bp deletion of mitochondrial DNA and the attrition of telomeres along with ageing are two important processes at the DNA level. Among a variety of protein alterations, the racemisation of aspartic acid and advanced glycation end-products have already been tested for forensic applications. At the moment the racemisation of aspartic acid represents the pinnacle of molecular age estimation for three reasons: an excellent standardization of sampling and methods, an evaluation of different variables in many published studies and the highest accuracy of results. The three other mentioned alterations often lack standardized procedures, published data are sparse and often have the character of pilot studies. Nevertheless it is important to evaluate molecular methods for their suitability in forensic age estimation, because supplementary methods will help to extend and refine accuracy and reliability of such estimates. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
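Aspartic acid racemisation is typically turned into an age estimate by inverting a linear calibration between age and the logarithmic D/L transform. The sketch below shows only that inversion step; the slope and intercept are hypothetical placeholders, since real values are tissue-specific and come from published calibration studies:

```python
import math

def racemization_index(d_over_l):
    """ln[(1 + D/L) / (1 - D/L)], the usual linearising transform."""
    r = d_over_l
    return math.log((1 + r) / (1 - r))

def estimate_age(d_over_l, slope, intercept):
    """Invert a calibration line: index = slope * age + intercept.
    slope/intercept below are purely illustrative, NOT reference values."""
    return (racemization_index(d_over_l) - intercept) / slope

# Hypothetical calibration constants for illustration only
age = estimate_age(0.045, slope=0.001, intercept=0.02)
```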

  12. Assessment of four methods to estimate surface UV radiation using satellite data, by comparison with ground measurements from four stations in Europe

    Science.gov (United States)

    Arola, Antti; Kalliskota, S.; den Outer, P. N.; Edvardsen, K.; Hansen, G.; Koskela, T.; Martin, T. J.; Matthijsen, J.; Meerkoetter, R.; Peeters, P.; Seckmeyer, G.; Simon, P. C.; Slaper, H.; Taalas, P.; Verdebout, J.

    2002-08-01

    Four different satellite-UV mapping methods are assessed by comparing them against ground-based measurements. The study includes most of the variability found in geographical, meteorological and atmospheric conditions. Three of the methods did not show any significant systematic bias, except during snow cover. The mean difference (bias) in daily doses for the Rijksinstituut voor Volksgezondheid en Milieu (RIVM) and Joint Research Centre (JRC) methods was found to be less than 10% with a RMS difference of the order of 30%. The Deutsches Zentrum für Luft- und Raumfahrt (DLR) method was assessed for a few selected months, and the accuracy was similar to the RIVM and JRC methods. It was additionally used to demonstrate how spatial averaging of high-resolution cloud data improves the estimation of UV daily doses. For the Institut d'Aéronomie Spatiale de Belgique (IASB) method the differences were somewhat higher, because of their original cloud algorithm. The mean difference in daily doses for IASB was about 30% or more, depending on the station, while the RMS difference was about 60%. The cloud algorithm of IASB has been replaced recently, and as a result the accuracy of the IASB method has improved. Evidence is found that further research and development should focus on the improvement of the cloud parameterization. Estimation of daily exposures is likely to be improved if additional time-resolved cloudiness information is available for the satellite-based methods. It is also demonstrated that further development work should be carried out on the treatment of albedo of snow-covered surfaces.

  13. Estimating Roof Solar Energy Potential in the Downtown Area Using a GPU-Accelerated Solar Radiation Model and Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Yan Huang

    2015-12-01

    Full Text Available Solar energy, as a clean and renewable resource, is becoming increasingly important in the global context of climate change and energy crisis. Utilization of solar energy in urban areas is of great importance in urban energy planning, environmental conservation, and sustainable development. However, available spaces for solar panel installation in cities are quite limited except for building roofs. Furthermore, complex urban 3D morphology greatly affects sunlit patterns on building roofs, especially in downtown areas, which makes the determination of roof solar energy potential a challenging task. The objective of this study is to estimate the solar radiation on building roofs in an urban area in Shanghai, China, and select suitable spaces for installing solar panels that can effectively utilize solar energy. A Graphics Processing Unit (GPU)-based solar radiation model named SHORTWAVE-C, simulating direct and non-direct solar radiation intensity, was developed by adding the capability of considering cloud influence to the previous SHORTWAVE model. Airborne Light Detection and Ranging (LiDAR) data was used as the input of the SHORTWAVE-C model and to investigate the morphological characteristics of the study area. The results show that the SHORTWAVE-C model can accurately estimate the solar radiation intensity in a complex urban environment under cloudy conditions, and the GPU acceleration method can reduce the computation time by up to 46%. Two sites with different building densities and rooftop structures were selected to illustrate the influence of urban morphology on the solar radiation and solar illumination duration. Based on the findings, an object-based method was implemented to identify suitable places for rooftop solar panel installation that can fully utilize the solar energy potential. Our study provides useful strategic guidelines for the selection and assessment of roof solar energy potential for urban energy planning.

  14. 23 CFR 635.115 - Agreement estimate.

    Science.gov (United States)

    2010-04-01

    ... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award of contract, an agreement estimate based on the contract unit prices and estimated quantities shall be...

  15. On semiautomatic estimation of surface area

    DEFF Research Database (Denmark)

    Dvorak, J.; Jensen, Eva B. Vedel

    2013-01-01

    A semiautomatic procedure for estimating particle surface area is proposed. If the segmentation is correct the estimate is computed automatically, otherwise the expert performs the necessary measurements manually. In case of convex particles we suggest to base the semiautomatic estimation on the so-called flower estimator, a new local stereological estimator of particle surface area. For convex particles, the estimator is equal to four times the area of the support set (flower set) of the particle transect. We study the statistical properties of the flower estimator and compare its performance to that of two discretizations of the flower estimator, namely the pivotal estimator and the surfactor. For ellipsoidal particles, it is shown that the flower estimator is equal to the pivotal estimator based on support function measurements along four perpendicular rays. This result makes the pivotal estimator a powerful approximation to the flower estimator. In a simulation study of prolate ...

  16. Estimating sediment discharge: Appendix D

    Science.gov (United States)

    Gray, John R.; Simões, Francisco J. M.

    2008-01-01

    Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with

  17. Estimation of available water capacity components of two-layered soils using crop model inversion: Effect of crop type and water regime

    Science.gov (United States)

    Sreelash, K.; Buis, Samuel; Sekhar, M.; Ruiz, Laurent; Kumar Tomer, Sat; Guérif, Martine

    2017-03-01

    Characterization of the soil water reservoir is critical for understanding the interactions between crops and their environment and the impacts of land use and environmental changes on the hydrology of agricultural catchments, especially in a tropical context. Recent studies have shown that inversion of crop models is a powerful tool for retrieving information on root zone properties. The increasing availability of remotely sensed soil and vegetation observations makes it well suited for large-scale applications. The potential of this methodology has, however, never been properly evaluated on extensive experimental datasets, and previous studies suggested that the quality of estimation of soil hydraulic properties may vary depending on agro-environmental situations. The objective of this study was to evaluate this approach on an extensive field experiment. The dataset covered four crops (sunflower, sorghum, turmeric, maize) grown on different soils over several years in South India. The components of AWC (available water capacity), namely soil water content at field capacity and wilting point, and soil depth of two-layered soils, were estimated by inversion of the crop model STICS with the GLUE (generalized likelihood uncertainty estimation) approach using observations of surface soil moisture (SSM; typically from 0 to 10 cm deep) and leaf area index (LAI), which are attainable from radar remote sensing in tropical regions with frequent cloudy conditions. The results showed that the quality of parameter estimation largely depends on the hydric regime and its interaction with crop type. A mean relative absolute error of 5% for field capacity of the surface layer, 10% for field capacity of the root zone, 15% for wilting point of the surface layer and root zone, and 20% for soil depth can be obtained in favorable conditions. A few observations of SSM (during wet and dry soil moisture periods) and LAI (within water stress periods) were sufficient to significantly improve the estimation of AWC

  18. Estimating Foreign Exchange Reserve Adequacy

    Directory of Open Access Journals (Sweden)

    Abdul Hakim

    2013-04-01

    Full Text Available Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomic variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for import, it uses the total reserves-to-import ratio (TRM). The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre- and post-1997 Asian financial crisis periods. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR), with the help of the Glosten-Jagannathan-Runkle (GJR) model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that short- and long-run volatilities are evident, with additional evidence of asymmetric effects of negative and positive past shocks. The VaR estimates, which are calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.
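The GJR-plus-VaR machinery referenced above can be sketched as a one-pass conditional-variance recursion followed by a normal-quantile VaR. The parameter values and return series below are assumptions for illustration; in the paper they would come from estimation on TRM data:

```python
import math

def gjr_variance(returns, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1) conditional-variance recursion:
    h_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}^2 + beta * h_{t-1}.
    The gamma term gives negative shocks extra impact (the asymmetry)."""
    h = [omega / (1 - alpha - gamma / 2 - beta)]   # start at unconditional variance
    for r in returns[:-1]:
        shock = (alpha + (gamma if r < 0 else 0.0)) * r * r
        h.append(omega + shock + beta * h[-1])
    return h

def normal_var(h_t, z=1.645):
    """One-sided 95% VaR under conditional normality (in return units)."""
    return z * math.sqrt(h_t)

rets = [0.01, -0.03, 0.005, -0.02, 0.015]               # illustrative returns
h = gjr_variance(rets, omega=1e-5, alpha=0.05, gamma=0.1, beta=0.85)
var95 = normal_var(h[-1])
```

A backtest then counts "violations": periods where the realised value fell beyond the VaR band, as in the 2005 and 2008 episodes the paper reports.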

  19. Organ volume estimation using SPECT

    CERN Document Server

    Zaidi, H

    1996-01-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed, and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) the reconstruction filter was investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
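The voxel-summation step with a fixed threshold reduces to counting voxels above a cutoff and multiplying by the voxel volume. A minimal sketch, with a toy image and an illustrative threshold fraction (the appropriate value depends on the system and reconstruction, which is the point of the phantom studies):

```python
def volume_by_threshold(image, voxel_volume_ml, threshold_frac=0.42):
    """Fixed-threshold volume estimate: count voxels above a fraction
    of the image maximum, times the voxel volume. threshold_frac is
    illustrative, not a recommended calibration value."""
    flat = [v for plane in image for row in plane for v in row]
    cutoff = threshold_frac * max(flat)
    return sum(1 for v in flat if v > cutoff) * voxel_volume_ml

# 2x2x2 toy "image": four hot voxels (100) over background (10)
img = [[[100, 100], [10, 10]], [[100, 100], [10, 10]]]
vol = volume_by_threshold(img, voxel_volume_ml=0.5)
```

The GLH variant mentioned in the record replaces the fixed `threshold_frac` with a cutoff derived from the gray-level histogram of the reconstructed image.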

  20. Comments on mutagenesis risk estimation

    International Nuclear Information System (INIS)

    Russell, W.L.

    1976-01-01

    Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements

  1. Bayesian estimation in homodyne interferometry

    International Nuclear Information System (INIS)

    Olivares, Stefano; Paris, Matteo G A

    2009-01-01

    We address phase-shift estimation by means of a squeezed vacuum probe and homodyne detection. We analyse the Bayesian estimator, which is known to asymptotically saturate the classical Cramer-Rao bound on the variance, and discuss convergence by looking at the a posteriori distribution as the number of measurements increases. We also suggest two feasible adaptive methods, acting on the squeezing parameter and/or the homodyne local oscillator phase, which allow us to optimize homodyne detection and approach the ultimate bound on precision imposed by the quantum Cramer-Rao theorem. The performances of our two-step methods are investigated by means of Monte Carlo simulated experiments with a small number of homodyne data, thus giving a quantitative meaning to the notion of asymptotic optimality.
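The Bayesian machinery here is a posterior update over the unknown phase. The grid-based sketch below uses a Gaussian likelihood as a stand-in: the true homodyne likelihood depends on the squeezed-probe statistics and the local oscillator phase, so the data model and noise level are assumptions:

```python
import math

def bayes_phase_posterior(data, sigma, grid_n=1000):
    """Grid posterior for a phase-like parameter on [0, pi) with a flat
    prior and a Gaussian likelihood (stand-in for homodyne statistics)."""
    grid = [i * math.pi / grid_n for i in range(grid_n)]
    logpost = [sum(-(x - phi) ** 2 / (2 * sigma ** 2) for x in data)
               for phi in grid]
    m = max(logpost)                         # stabilise the exponentials
    w = [math.exp(l - m) for l in logpost]
    z = sum(w)
    mean = sum(p * wi for p, wi in zip(grid, w)) / z
    return grid, [wi / z for wi in w], mean

data = [0.52, 0.47, 0.55, 0.49, 0.51]        # illustrative homodyne-derived samples
_, post, est = bayes_phase_posterior(data, sigma=0.1)
```

As more data arrive the posterior narrows around the true phase, which is the convergence behaviour the abstract discusses; the adaptive schemes then re-tune the measurement settings between batches.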

  2. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  3. Cost Estimates and Investment Decisions

    International Nuclear Information System (INIS)

    Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter

    2001-08-01

    When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)
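The median-versus-mean bias described above is easy to quantify for an asymmetric cost distribution. Assuming a hypothetical lognormal CAPEX (the parameters below are illustrative, not from the paper), the 50/50 estimate understates the expected cost:

```python
import math

# Hypothetical lognormal CAPEX distribution (monetary units arbitrary)
mu, sigma = math.log(100.0), 0.4

median_capex = math.exp(mu)                   # the 50/50 estimate
expected_capex = math.exp(mu + sigma**2 / 2)  # the expected-value estimate
```

Here the expected CAPEX exceeds the median by the factor exp(sigma²/2) ≈ 8%, so discounting the median cost overstates net present value and can flip marginal projects from negative to apparently positive NPV, which is exactly the valuation error the article warns about.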

  4. Location Estimation using Delayed Measurements

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus

    1998-01-01

    When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported...
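One common option for handling a delayed measurement (the record compares several; this is not necessarily the paper's new technique) is to roll the filter back to the measurement's true timestamp, fuse it there, and re-filter forward. A scalar sketch with assumed noise variances:

```python
class DelayedKalman1D:
    """Scalar random-walk Kalman filter that handles a delayed measurement
    by rollback and re-filtering. Simplification: measurements received
    during the replayed window are not re-applied. q, r are assumed
    process/measurement noise variances."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.q, self.r = q, r
        self.hist = [(x0, p0)]            # state history for rollback

    def _step(self, x, p, z):
        p += self.q                        # predict (random-walk model)
        if z is not None:                  # update if a measurement exists
            k = p / (p + self.r)
            x, p = x + k * (z - x), (1 - k) * p
        return x, p

    def update(self, z, delay=0):
        """Apply measurement z taken `delay` steps ago."""
        k0 = len(self.hist) - 1 - delay
        x, p = self.hist[k0]
        x, p = self._step(x, p, z)         # fuse at its true time
        self.hist = self.hist[:k0 + 1] + [(x, p)]
        for _ in range(delay):             # re-filter forward to "now"
            x, p = self._step(x, p, None)
            self.hist.append((x, p))
        return x
```

Re-filtering is exact but costs memory and computation proportional to the delay, which is why the literature also considers cheaper approximations such as extrapolating the delayed innovation.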

  5. Prior information in structure estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka

    2003-01-01

    Roč. 150, č. 6 (2003), s. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords : prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf

  6. Radiation in space: risk estimates

    International Nuclear Information System (INIS)

    Fry, R.J.M.

    2002-01-01

    The complexity of radiation environments in space makes estimation of risks more difficult than for the protection of terrestrial populations. In deep space the duration of the mission, position in the solar cycle, number and size of solar particle events (SPE) and the spacecraft shielding are the major determinants of risk. In low-earth orbit missions there are the added factors of altitude and orbital inclination. Different radiation qualities, such as protons and heavy ions, and secondary radiations inside the spacecraft, such as neutrons of various energies, have to be considered. Radiation dose rates in space are low except for short periods during very large SPEs. Risk estimation for space activities is based on the human experience of exposure to gamma rays and, to a lesser extent, X rays. The doses of protons, heavy ions and neutrons are adjusted to take into account the relative biological effectiveness (RBE) of the different radiation types and thus derive equivalent doses. RBE values and factors to adjust for the effect of dose rate have to be obtained from experimental data. The influence of age and gender on the cancer risk is estimated from the data from atomic bomb survivors. Because of the large number of variables the uncertainties in the probability of the effects are large. Information needed to improve the risk estimates includes: (1) risk of cancer induction by protons, heavy ions and neutrons; (2) influence of dose rate and protraction, particularly on potential tissue effects such as reduced fertility and cataracts; and (3) possible effects of heavy ions on the central nervous system. Risk cannot be eliminated and thus there must be a consensus on what level of risk is acceptable. (author)

  7. Properties of estimated characteristic roots

    OpenAIRE

    Bent Nielsen; Heino Bohn Nielsen

    2008-01-01

    Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies non-differentiability so that the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples ...
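The mapping from AR coefficients to characteristic roots, whose non-smoothness near multiple roots drives the problem described above, can be made concrete for an AR(2). The coefficients below are illustrative; note how close the two roots are, which is exactly the near-multiple-root regime where estimated roots become noisy:

```python
import cmath

def ar2_roots(phi1, phi2):
    """Roots of the characteristic polynomial z^2 - phi1*z - phi2
    of an AR(2) model y_t = phi1*y_{t-1} + phi2*y_{t-2} + e_t."""
    d = cmath.sqrt(phi1 * phi1 + 4 * phi2)
    return (phi1 + d) / 2, (phi1 - d) / 2

# Illustrative AR(2) with nearly coincident roots 0.8 and 0.7
r1, r2 = ar2_roots(1.5, -0.56)
```

Small perturbations of (phi1, phi2) around such a point can push the discriminant through zero, turning two nearby real roots into a complex pair, which is why the root estimates inherit a non-normal distribution.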

  8. Recent estimates of capital flight

    OpenAIRE

    Claessens, Stijn; Naude, David

    1993-01-01

    Researchers and policymakers have in recent years paid considerable attention to the phenomenon of capital flight. Researchers have focused on four questions: What concept should be used to measure capital flight? What figure for capital flight will emerge, using this measure? Can the occurrence and magnitude of capital flight be explained by certain (economic) variables? What policy changes can be useful to reverse capital flight? The authors focus strictly on presenting estimates of capital...

  9. Effort Estimation in BPMS Migration

    OpenAIRE

    Drews, Christopher; Lantow, Birger

    2018-01-01

    Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...

  10. Reactor core performance estimating device

    International Nuclear Information System (INIS)

    Tanabe, Akira; Yamamoto, Toru; Shinpuku, Kimihiro; Chuzen, Takuji; Nishide, Fusayo.

    1995-01-01

The present invention can autonomously simplify a neural net model, thereby making it possible to conveniently estimate, by a simple calculation in a short period of time, various quantities that represent reactor core performance. Namely, a reactor core performance estimation device comprises a neural circuit net which divides the reactor core into a large number of spatial regions, receives various physical quantities for each region as input signals for input nerve cells, and outputs estimated values of each quantity representing the reactor core performance as output signals of output nerve cells. In this case, the neural circuit net (1) has the structure of an extended multi-layered model with direct coupling from an upstream layer to each of the downstream layers, (2) has a forgetting constant q in the correction equation for a joined load value ω using an inverse error propagation method, (3) learns various quantities representing reactor core performance determined using physical models as teacher signals, (4) sets the joined load value ω to '0' when it decreases to less than a predetermined value during the learning described above, and (5) eliminates elements of the neural circuit net having all of their joined load values decreased to 0. As a result, the neural net model comprises an autonomously simplifying means. (I.S.)
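The forgetting-and-pruning mechanism in items (2), (4) and (5) can be sketched as follows; the update rule, constants, and helper names here are illustrative inventions, not the patent's exact equations:

```python
# Hedged sketch of the pruning idea: a weight-decay ("forgetting") term q*w is
# subtracted at every update, and weights that shrink below a threshold are
# clamped to zero so the corresponding connections can be eliminated.
def update_weight(w, grad_step, q=0.01, threshold=1e-3):
    w = w + grad_step - q * w          # backprop step plus forgetting term
    return 0.0 if abs(w) < threshold else w

def prune_dead_units(weights_per_unit):
    # Drop units whose incoming weights have all decayed to zero.
    return [ws for ws in weights_per_unit if any(w != 0.0 for w in ws)]

units = [[0.5, -0.2], [0.0, 0.0], [1e-4, 0.0]]   # illustrative weight vectors
units = [[update_weight(w, 0.0) for w in ws] for ws in units]
print(prune_dead_units(units))   # only the first unit survives
```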

  11. Contact Estimation in Robot Interaction

    Directory of Open Access Journals (Sweden)

    Filippo D'Ippolito

    2014-07-01

    Full Text Available In the paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During the task execution, the human should be able to physically interact with the robot, and in this case an estimation algorithm for both interaction forces and a contact point is proposed in order to guarantee safety conditions. The method, starting from residual joint torque estimation, allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all the unintended contacts must be avoided, and a suitable post-collision strategy is considered to move the robot away from the collision area or else to reduce impact effects. Proper experimental tests have demonstrated the applicability in practice of both the post-impact strategy and the estimation algorithms; furthermore, experiments demonstrate the different behaviour resulting from the adaptation of the contact point as opposed to direct calculation.
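The estimation principle, residual joint torques mapped back to a contact force through the manipulator Jacobian, can be sketched for a hypothetical two-link planar arm; the paper's adaptive contact-point scheme is not reproduced here:

```python
import math

# Hedged sketch: for a known contact point (here the tip of a 2-link planar
# arm), residual torques tau = J(q)^T F are inverted for the contact force F.
def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def solve2(A, b):
    # Cramer's rule for a 2x2 linear system A x = b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)

def estimate_tip_force(q1, q2, tau_residual):
    J = jacobian_2link(q1, q2)
    Jt = [[J[0][0], J[1][0]], [J[0][1], J[1][1]]]   # J^T
    return solve2(Jt, tau_residual)

# Simulated check: a known tip force produces residual torques tau = J^T F,
# and the estimator recovers it.
q1, q2 = 0.3, 0.8
F = (2.0, -1.0)
J = jacobian_2link(q1, q2)
tau = (J[0][0] * F[0] + J[1][0] * F[1], J[0][1] * F[0] + J[1][1] * F[1])
print(estimate_tip_force(q1, q2, tau))  # ~(2.0, -1.0)
```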

  12. Statistical estimation of process holdup

    International Nuclear Information System (INIS)

    Harris, S.P.

    1988-01-01

Estimates of potential process holdup and their random and systematic error variances are derived to improve the inventory difference (ID) estimate and its associated measure of uncertainty for a new process at the Savannah River Plant. Since the process is in a start-up phase, data have not yet accumulated for statistical modelling. The material produced in the facility will be a very pure, highly enriched 235U with very small isotopic variability. Therefore, data published in LANL's unclassified report Estimation Methods for Process Holdup of Special Nuclear Materials were used as a starting point for the modelling process. LANL's data were gathered through a series of designed measurements of special nuclear material (SNM) holdup at two of their materials-processing facilities. They had also taken steps to improve the quality of the data through controlled, larger-scale experiments outside of LANL at highly enriched uranium processing facilities. The data they have accumulated are on an equipment-component basis. Our modelling has been restricted to the wet chemistry area. We have developed predictive models for each of our process components based on the LANL data. 43 figs

  13. Abundance estimation and conservation biology

    Science.gov (United States)

    Nichols, J.D.; MacKenzie, D.I.

    2004-01-01

    Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention

  14. Abundance estimation and Conservation Biology

    Directory of Open Access Journals (Sweden)

    Nichols, J. D.

    2004-06-01

Full Text Available Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al., 1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that

  15. Validation Test Report for the Arctic Cap Nowcast/Forecast System as a Fractures/Leads and Polynyas Product

    Science.gov (United States)

    2015-05-26

and Lipscomb, 2004) to describe the ice dynamics and compute strain rates. It incorporates the standard ridging scheme of Thorndike et al. (1975) ... Forecast System (ACNFS). NRL/MR/7320--10-9287, Naval Research Laboratory, Stennis Space Center, MS, 55 pp. Thorndike, A.S., D.A. Rothrock, G.A. Maykut, and

  16. Estimating the Costs of Preventive Interventions

    Science.gov (United States)

    Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin

    2007-01-01

    The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…

  17. Thermodynamics and life span estimation

    International Nuclear Information System (INIS)

    Kuddusi, Lütfullah

    2015-01-01

In this study, the life span of people living in the seven regions of Turkey is estimated by applying the first and second laws of thermodynamics to the human body. People living in different regions of Turkey have different food habits. The first and second laws of thermodynamics are used to calculate the entropy generation rate per unit mass of a human due to these food habits. The lifetime entropy generation per unit mass of a human was previously found statistically. The two quantities, lifetime entropy generation and entropy generation rate, enable one to determine the life span of people living in the seven regions of Turkey with different food habits. In order to estimate the life span, statistics from the Turkish Statistical Institute regarding the food habits of the people living in the seven regions of Turkey are used. The life spans of people who live in the Central Anatolia and Eastern Anatolia regions are the longest and shortest, respectively. Generally, the following inequality regarding the life span of people living in the seven regions of Turkey is found: Eastern Anatolia < Southeast Anatolia < Black Sea < Mediterranean < Marmara < Aegean < Central Anatolia. - Highlights: • The first and second laws of thermodynamics are applied to the human body. • The entropy generation of a human due to his food habits is determined. • The life span of Turks is estimated by using the entropy generation method. • Food habits of a human have an effect on his life span
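The core arithmetic is a single division of the two entropy quantities. A sketch with made-up numbers (they are not the paper's values):

```python
# Hedged arithmetic sketch: life span follows from dividing a (statistically
# obtained) lifetime entropy generation per unit body mass by the annual
# entropy generation rate implied by a region's food habits.
def life_span_years(lifetime_entropy, annual_rate):
    """Both arguments per unit body mass, e.g. kJ/(kg*K) and kJ/(kg*K*year)."""
    return lifetime_entropy / annual_rate

# Illustrative numbers only: a richer diet (higher rate) shortens the estimate.
print(life_span_years(10000.0, 125.0))  # → 80.0 years
print(life_span_years(10000.0, 140.0))
```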

  18. The estimation of genetic divergence

    Science.gov (United States)

    Holmquist, R.; Conroy, T.

    1981-01-01

    Consideration is given to the criticism of Nei and Tateno (1978) of the REH (random evolutionary hits) theory of genetic divergence in nucleic acids and proteins, and to their proposed alternative estimator of total fixed mutations designated X2. It is argued that the assumption of nonuniform amino acid or nucleotide substitution will necessarily increase REH estimates relative to those made for a model where each locus has an equal likelihood of fixing mutations, thus the resulting value will not be an overestimation. The relative values of X2 and measures calculated on the basis of the PAM and REH theories for the number of nucleotide substitutions necessary to explain a given number of observed amino acid differences between two homologous proteins are compared, and the smaller values of X2 are attributed to (1) a mathematical model based on the incorrect assumption that an entire structural gene is free to fix mutations and (2) the assumptions of different numbers of variable codons for the X2 and REH calculations. Results of a repeat of the computer simulations of Nei and Tateno are presented which, in contrast to the original results, confirm the REH theory. It is pointed out that while a negative correlation is observed between estimations of the fixation intensity per varion and the number of varions for a given pair of sequences, the correlation between the two fixation intensities and varion numbers of two different pairs of sequences need not be negative. Finally, REH theory is used to resolve a paradox concerning the high rate of covarion turnover and the nature of general function sites as permanent covarions.

  19. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions-in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.

  20. Dose estimation by biological methods

    International Nuclear Information System (INIS)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.

    1997-01-01

The human being is exposed to strong artificial radiation sources in mainly two ways: the first concerns the occupationally exposed personnel (POE), and the second, persons who require radiological treatment. A third, less common way is through accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of the research to validate the fluorescence in situ hybridization (FISH) technique, which allows analysis of aberrations in the chromosomes. (Author)

  1. Stochastic estimation of electricity consumption

    International Nuclear Information System (INIS)

    Kapetanovic, I.; Konjic, T.; Zahirovic, Z.

    1999-01-01

Electricity consumption forecasting is part of the stable functioning of the power system. It is very important for rationality and increased efficiency of the control process, and for development planning in all aspects of society. On a scientific basis, forecasting is a possible way to solve such problems. Among the different models that have been used in the area of forecasting, the stochastic aspect, as a part of quantitative models, takes a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, have been treated together for electricity consumption forecasting. Therefore, the main aim of this paper is to present the stochastic forecasting aspect using short time series. (author)
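A minimal sketch of one of the stochastic estimators mentioned: a scalar Kalman filter for a local-level model of a short consumption series. The noise variances and data below are illustrative tuning choices, not values from the paper:

```python
# Hedged sketch: scalar (local-level) Kalman filter. The level is modeled as a
# random walk with process variance q, observed with measurement variance r.
def kalman_local_level(observations, q=1.0, r=4.0):
    x, p = observations[0], 1.0      # initial state estimate and variance
    estimates = [x]
    for z in observations[1:]:
        p = p + q                    # predict: level follows a random walk
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

data = [100, 104, 98, 103, 107, 105, 110]  # illustrative consumption series
print(kalman_local_level(data))
```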

  2. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

  3. Location Estimation of Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kamil ŽIDEK

    2009-06-01

Full Text Available This contribution describes a mathematical model (kinematics) for a mobile robot carriage. The mathematical model is fully parametric and is designed universally for a three- or four-wheeled carriage of any dimensions, under the following conditions: the back wheels are the driving wheels, and the front wheels set the angle of the robot's turning. The position of the front wheel gives the actual position of the robot, which is described by the coordinates x and y and by the angle α of the front wheel relative to a reference position. The main reason for implementing the model is indoor navigation: we need some estimate of the robot's position, especially after the robot turns. A further use is outdoor navigation, especially for refining GPS information.
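The carriage kinematics can be sketched with the standard bicycle approximation (rear-wheel drive, front-wheel steering); the wheelbase, speeds, and integration step below are my own illustrative choices, not the paper's parameters:

```python
import math

# Hedged sketch of dead-reckoning with bicycle-model kinematics:
# pose (x, y, heading theta), speed v, steering angle alpha, wheelbase L.
def step(x, y, theta, v, alpha, dt, L=0.5):
    """Advance the pose by one Euler integration step."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(alpha) * dt
    return x, y, theta

# Straight segment, then a gentle turn.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = step(*pose, v=0.2, alpha=0.0, dt=0.05)   # ~1 m straight
for _ in range(100):
    pose = step(*pose, v=0.2, alpha=0.3, dt=0.05)   # turning arc
print(pose)
```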

  4. Estimation of the energy needs

    International Nuclear Information System (INIS)

    Ailleret

    1955-01-01

The present report draws up a balance of present energy consumption and of the consumption estimated for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and, essentially, hydraulic electric energy. The market development stems essentially from the development of industrial activity and from new applications dependent on the cost and the distribution of electric energy. To this effect, atomic energy offers good industrial prospects, complementing the present energy resources in order to meet the new needs. (M.B.) [fr

  5. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  6. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  7. Development of estimation method for crop yield using MODIS satellite imagery data and process-based model for corn and soybean in US Corn-Belt region

    Science.gov (United States)

    Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.

    2012-12-01

Crop productivity is associated with food security and hence several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP and NPP using an algorithm based on the LUE model and a simplified respiration model. The states of Iowa and Illinois were chosen as the study site for estimating crop yield for a period covering 5 years (2006-2010), as they form the main Corn-Belt area in the US. The present study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and on yield mapping using satellite remote sensing data. We utilized 10 km spatial resolution daily meteorological data from WRF to provide meteorological variables on cloudy days, but on clear-sky days MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test model-predicted crop yield. The estimated input meteorological variables from MODIS and WRF showed good agreement with the ground observations from 6 Ameriflux tower sites in 2006. For example, correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg; from 0.68 to 0.85 for daytime mean VPD; and from 0.85 to 0.96 for daily shortwave radiation. We developed a county-specific crop conversion coefficient, i.e. the ratio of yield to biomass on DOY 260, and then validated the estimated county-level crop yield with the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 gm-2 y-1 to 1393 gm-2 y-1 and from 213 gm-2 y-1 to 421 gm-2 y-1, respectively. The county-specific yield estimation mostly showed errors less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280
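The GPP-to-yield chain described above can be sketched as follows; all constants, function names, and the crude season loop are invented for illustration and are not the calibrated corn/soybean parameters:

```python
# Hedged sketch of a light-use-efficiency (LUE) chain: GPP from absorbed
# radiation down-regulated by temperature and VPD scalars, NPP as GPP minus a
# respiration fraction, and yield as biomass times a crop-specific
# conversion (harvest) coefficient.
def gpp(apar, eps_max, t_scalar, vpd_scalar):
    """gC m-2 d-1 from APAR (MJ m-2 d-1) and 0-1 stress scalars."""
    return eps_max * t_scalar * vpd_scalar * apar

def npp(gpp_value, respiration_fraction=0.45):
    return gpp_value * (1.0 - respiration_fraction)

def yield_estimate(biomass, conversion_coefficient):
    return biomass * conversion_coefficient

daily_gpp = gpp(apar=8.0, eps_max=3.0, t_scalar=0.9, vpd_scalar=0.8)
season_biomass = sum(npp(daily_gpp) for _ in range(120))   # crude fixed season
print(yield_estimate(season_biomass, conversion_coefficient=0.5))
```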

  8. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
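The Gauss-Newton method for algebraic models listed above can be sketched on a toy exponential model y = a·exp(b·x); the data, starting values, and the plain (undamped) iteration are illustrative choices, not the book's worked examples:

```python
import math

# Hedged sketch of Gauss-Newton for the algebraic model y = a*exp(b*x):
# build the Jacobian, solve the 2x2 normal equations, update the parameters.
def gauss_newton_exp(xs, ys, a, b, iters=50):
    for _ in range(iters):
        s_aa = s_ab = s_bb = g_a = g_b = 0.0
        for x, y in zip(xs, ys):
            f = a * math.exp(b * x)
            r = y - f                                   # residual
            ja = math.exp(b * x)                        # df/da
            jb = a * x * math.exp(b * x)                # df/db
            s_aa += ja * ja; s_ab += ja * jb; s_bb += jb * jb
            g_a += ja * r;   g_b += jb * r
        det = s_aa * s_bb - s_ab * s_ab
        da = (s_bb * g_a - s_ab * g_b) / det            # solve J^T J d = J^T r
        db = (s_aa * g_b - s_ab * g_a) / det
        a, b = a + da, b + db
    return a, b

xs = [0.5 * i for i in range(11)]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]             # noiseless synthetic data
print(gauss_newton_exp(xs, ys, a=1.5, b=-0.4))          # converges to ~(2.0, -0.5)
```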

  9. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.

  10. Note on demographic estimates 1979.

    Science.gov (United States)

    1979-01-01

Based on UN projections, national projections, and the South Pacific Commission data, the ESCAP Population Division has compiled estimates of the 1979 population and demographic figures for the 38 member countries and associate members. The 1979 population is estimated at 2,400 million, 55% of the world total of 4,336 million. China comprises 39% of the region, India, 28%. China, India, Indonesia, Japan, Bangladesh, and Pakistan comprise 6 of the 10 largest countries in the world. China and India are growing at the rate of 1 million people per month. Between 1978-9 Hong Kong experienced the highest rate of growth, 6.2%, Niue the lowest, 4.5%. Life expectancy at birth is 58.7 years in the ESCAP region, but is over 70 in Japan, Hong Kong, Australia, New Zealand, and Singapore. At 75.2 years life expectancy in Japan is highest in the world. By world standards, a high percentage of females aged 16-64 are economically active. More than half the women aged 15-64 are in the labor force in 10 of the ESCAP countries. The region is still 73% rural. By the end of the 20th century the population of the ESCAP region is projected at 3,272 million, a 36% increase over the 1979 total.

  11. Practical global oceanic state estimation

    Science.gov (United States)

    Wunsch, Carl; Heimbach, Patrick

    2007-06-01

The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.

  12. LOD estimation from DORIS observations

    Science.gov (United States)

    Stepanek, Petr; Filler, Vratislav; Buday, Michal; Hugentobler, Urs

    2016-04-01

The difference between the astronomically determined duration of the day and 86400 seconds is called length of day (LOD). The LOD can also be understood as the daily rate of the difference between Universal Time UT1, based on the Earth's rotation, and International Atomic Time TAI. The LOD is estimated using various satellite geodesy techniques such as GNSS and SLR, while the absolute UT1-TAI difference is precisely determined by VLBI. Contrary to other IERS techniques, LOD estimation using DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) measurements did not achieve geodetic accuracy in the past, reaching a precision at the level of several ms per day. However, recent experiments performed by the IDS (International DORIS Service) analysis centre at Geodetic Observatory Pecny show a possibility to reach an accuracy of around 0.1 ms per day when not adjusting the cross-track harmonics in the satellite orbit model. The paper presents the long-term LOD series determined from the DORIS solutions. The series are compared with C04 as the reference. Results are discussed in the context of the accuracy achieved with GNSS and SLR. Besides the multi-satellite DORIS solutions, the LOD series from the individual DORIS satellite solutions are also analysed.

  13. CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE

    Directory of Open Access Journals (Sweden)

    Nino Serdarevic

    2012-10-01

Full Text Available This paper presents research results on the quality of financial reporting by firms in B&H, utilizing the empirical relation between accounting conservatism, generated by critical accounting policy choices, and management's abilities in estimation and the predictive power of domestic private-sector accounting. The primary research is conducted on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergences to US GAAP, especially in harmonizing with FAS 130 Reporting comprehensive income (in revised IAS 1) and FAS 157 Fair value measurement. The CAPCBIH variable, which resulted in very poor performance, indicates a considerable lack of recognition of environment specifics. Furthermore, I underline the importance of the revised ISAE and the re-enforced role of auditors in assessing the relevance of management estimates.

  14. The need to estimate risks

    International Nuclear Information System (INIS)

    Pochin, E.E.

    1980-01-01

In an increasing number of situations it is becoming possible to obtain and compare numerical estimates of the biological risks involved in different alternative courses of action. In some cases these risks are similar in kind, as for example when the risk of inducing fatal cancer of the breast or stomach by x-ray screening of a population at risk is compared with the risk of such cancers proving fatal if not detected by a screening programme. In other cases in which it is important to attempt a comparison, the risks are dissimilar in type, as when the safety of occupations involving exposure to radiation or chemical carcinogens is compared with that of occupations in which the major risks are from lung disease or from accidental injury and death. Similar problems of assessing the relative severity of unlike effects occur in any attempt to compare the total biological harm associated with a given output of electricity derived from different primary fuel sources, with its contributions both of occupational and of public harm. In none of these instances is the numerical frequency of harmful effects alone an adequate measure of total biological detriment, nor is such detriment the only factor which should influence decisions. Estimation of risks appears important, however, since otherwise public health decisions are likely to be made on more arbitrary grounds, and public opinion will continue to be affected predominantly by the type rather than also by the size of risk. (author)

  15. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
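    The record's modified likelihood method is specific to the program, but the underlying model -- within-set variance related to mean response by a power function, var = a * mean**theta -- can be sketched with a simple moment-based, log-log least-squares fit. The simulated data, set sizes, and fitting shortcut below are illustrative assumptions, not the program's algorithm:

```python
import numpy as np

def fit_variance_function(replicate_sets):
    """Fit var = a * mean**theta by log-log least squares on the
    within-set means and variances of many small sets of repeats."""
    means = np.array([np.mean(s) for s in replicate_sets])
    variances = np.array([np.var(s, ddof=1) for s in replicate_sets])
    ok = (means > 0) & (variances > 0)          # log transform needs positives
    X = np.column_stack([np.ones(ok.sum()), np.log(means[ok])])
    coef, *_ = np.linalg.lstsq(X, np.log(variances[ok]), rcond=None)
    return np.exp(coef[0]), coef[1]             # a, theta

# simulated assay responses with a constant coefficient of variation,
# for which theta should come out near 2
rng = np.random.default_rng(0)
sets = [rng.normal(mu, 0.1 * mu, size=4) for mu in np.linspace(5, 100, 30)]
a, theta = fit_variance_function(sets)
```

    Sets whose ratio of actual to fitted standard deviation is extreme would then be candidates for the outlier screening that the program's test-of-fit provides.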

  16. Information and crystal structure estimation

    International Nuclear Information System (INIS)

    Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.

    1984-01-01

    The conceptual foundations of a general information-theoretic based approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure for information as the criterion for inference are discussed and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, while the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)

  17. PHAZE, Parametric Hazard Function Estimation

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogeneous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
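    As a sketch of one of the three models, the maximum likelihood estimators for a power-law (Weibull-type) hazard with time-censored data have a well-known closed form for a non-homogeneous Poisson process. The failure times below are hypothetical, and this is the standard NHPP result, not PHAZE's own code:

```python
import math

def weibull_nhpp_mle(failure_times, T):
    """MLEs for a power-law (Weibull-type) NHPP intensity
    u(t) = lam * beta * t**(beta - 1), with time-censored data:
    observation window (0, T] fixed, number of failures random."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T**beta
    return lam, beta

# hypothetical repair-and-restore failure times (hours), observed for 3000 h
times = [55, 166, 205, 341, 488, 567, 731, 1308, 2050, 2453]
lam, beta = weibull_nhpp_mle(times, T=3000.0)
# beta < 1 suggests a decreasing hazard, beta > 1 an increasing one
```

    The fitted intensity integrates to the observed failure count over the window, a quick sanity check on the estimates.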

  18. Bayesian estimation methods in metrology

    International Nuclear Information System (INIS)

    Cox, M.G.; Forbes, A.B.; Harris, P.M.

    2004-01-01

    In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization (ISO) in 1995 published the Guide to the Expression of Uncertainty in Measurement, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods.
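    A minimal sketch of one common evaluation of key comparison data is the inverse-variance weighted mean as a candidate reference value, which a Gaussian Bayesian treatment with weak priors also reproduces. The laboratory values below are hypothetical:

```python
import math

def weighted_mean_krv(values, uncertainties):
    """Inverse-variance weighted mean as a candidate key comparison
    reference value, with its standard uncertainty.  A Gaussian
    Bayesian analysis with weak priors leads to the same estimate."""
    w = [1.0 / u**2 for u in uncertainties]
    sw = sum(w)
    xref = sum(wi * xi for wi, xi in zip(w, values)) / sw
    return xref, math.sqrt(1.0 / sw)

x = [10.1, 9.9, 10.3]   # hypothetical laboratory results for one artefact
u = [0.2, 0.1, 0.3]     # their reported standard uncertainties
xref, uref = weighted_mean_krv(x, u)
```

    Degrees of equivalence then follow as each laboratory's deviation from the reference value, with an uncertainty accounting for the correlation with it.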

  19. Residual risk over-estimated

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The way nuclear power plants are built practically excludes accidents with serious consequences. This is ensured by careful selection of materials, control of fabrication and regular retesting, as well as by several safety systems working independently. But the remaining risk, a 'hypothetical' uncontrollable incident with catastrophic effects, is the main subject of the discussion on the peaceful utilization of nuclear power. This year's 'Annual Meeting on Nuclear Engineering' in Mannheim and the meeting 'Reactor Safety Research' in Cologne showed that risk studies so far were too pessimistic. 'Best estimate' calculations suggest that core melt-down accidents only occur if almost all safety systems fail, that accidents take place much more slowly, and that the release of radioactive fission products is several orders of magnitude lower than assumed until now. (orig.) [de

  20. Neutron background estimates in GESA

    Directory of Open Access Journals (Sweden)

    Fernandes A.C.

    2014-01-01

    Full Text Available The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10^−8 cm^−2 s^−1, it itself contributes via radio-impurities. Additional shielding of these is similar, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements which have led to a reduction in the extrinsic neutron background to ∼ 5 × 10^−3 evts/kgd. The calculated event rate induced by the neutron background is ∼ 0.3 evts/kgd, with a dominant contribution from the detector container.

  1. Mergers as an Omega estimator

    International Nuclear Information System (INIS)

    Carlberg, R.G.

    1990-01-01

    The redshift dependence of the fraction of galaxies which are merging or strongly interacting is a steep function of Omega and depends on the ratio of the cutoff velocity for interactions to the pairwise velocity dispersion. For typical galaxies the merger rate is shown to vary as (1 + z)^m, where m is about 4.51 Omega^0.42, for Omega near 1 and a CDM-like cosmology. The index m has a relatively weak dependence on the maximum merger velocity, the mass of the galaxy, and the background cosmology, for small variations around a cosmology with a low redshift, z of about 2, of galaxy formation. Estimates of m from optical and IRAS galaxies have found that m is about 3-4, but with very large uncertainties. If quasar evolution follows the evolution of galaxy merging and m for quasars is greater than 4, then Omega is greater than 0.8. 21 refs
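    The quoted scaling can be turned into a two-line calculator. This only reproduces the approximate formula from the abstract, within its stated range of validity (Omega near 1, CDM-like cosmology):

```python
def merger_index(omega):
    """Approximate index m in merger rate ∝ (1 + z)**m, using the
    scaling m ≈ 4.51 * Omega**0.42 quoted in the abstract."""
    return 4.51 * omega**0.42

def relative_merger_rate(z, omega):
    """Merger/interaction rate at redshift z relative to z = 0."""
    return (1.0 + z) ** merger_index(omega)
```

    For Omega = 1 this gives m = 4.51, i.e. a merger rate at z = 1 roughly 2**4.51 ≈ 23 times the local one, which is why the observed index constrains Omega so steeply.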

  2. 2007 Estimated International Energy Flows

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C A; Belles, R D; Simon, A J

    2011-03-10

    An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.

  3. Data Handling and Parameter Estimation

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist

    2016-01-01

    Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal... The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii)... For the models selected to interpret the experimental data, this chapter uses available models from the literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). ...engineers, and professionals. However, it is also expected that they will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject.

  4. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
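    A minimal sketch of the accounting the model describes follows: main roads as line sources (flow × length × emission factor), plus distributed area sources proportional to local population density and to feeder-traffic density. All numbers and factors below are hypothetical, not values from the Athens study:

```python
def line_source_emissions(roads, ef_line):
    """Main roads as line sources of constant traffic in the interval
    considered: emissions = sum of flow * length * emission factor.
    roads: (vehicles/h, km) pairs; ef_line: g pollutant per vehicle-km."""
    return sum(flow * length * ef_line for flow, length in roads)

def area_source_emissions(pop_density, feeder_traffic, k_pop, k_traffic):
    """Distributed small (non-line/area) sources, assumed proportional to
    local population density and to traffic density toward main arteries."""
    return k_pop * pop_density + k_traffic * feeder_traffic

roads = [(1200, 3.5), (800, 2.0)]              # hypothetical network
g_per_hour = line_source_emissions(roads, ef_line=2.5)
```

    Summing both contributions over grid cells and time intervals yields the spatial and temporal emission fields the model evaluates.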

  5. Effort Estimation in BPMS Migration

    Directory of Open Access Journals (Sweden)

    Christopher Drews

    2018-04-01

    Full Text Available Usually Business Process Management Systems (BPMS are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.

  6. Supplemental report on cost estimates

    International Nuclear Information System (INIS)

    1992-01-01

    The Office of Management and Budget (OMB) and the U.S. Army Corps of Engineers have completed an analysis of the Department of Energy's (DOE) Fiscal Year (FY) 1993 budget request for its Environmental Restoration and Waste Management (ERWM) program. The results were presented to an interagency review group (IAG) of senior Administration officials for their consideration in the budget process. This analysis included evaluations of the underlying legal requirements and cost estimates on which the ERWM budget request was based. The major conclusions are contained in a separate report entitled, ''Interagency Review of the Department of Energy Environmental Restoration and Waste Management Program.'' This Corps supplemental report provides greater detail on the cost analysis.

  7. Age Estimation in Forensic Sciences

    Science.gov (United States)

    Alkass, Kanar; Buchholz, Bruce A.; Ohtani, Susumu; Yamamoto, Toshiharu; Druid, Henrik; Spalding, Kirsty L.

    2010-01-01

    Age determination of unknown human bodies is important in the setting of a crime investigation or a mass disaster because the age at death, birth date, and year of death as well as gender can guide investigators to the correct identity among a large number of possible matches. Traditional morphological methods used by anthropologists to determine age are often imprecise, whereas chemical analysis of tooth dentin, such as aspartic acid racemization, has shown reproducible and more precise results. In this study, we analyzed teeth from Swedish individuals using both aspartic acid racemization and radiocarbon methodologies. The rationale behind using radiocarbon analysis is that aboveground testing of nuclear weapons during the cold war (1955–1963) caused an extreme increase in global levels of carbon-14 (14C), which has been carefully recorded over time. Forty-four teeth from 41 individuals were analyzed using aspartic acid racemization analysis of tooth crown dentin or radiocarbon analysis of enamel, and 10 of these were split and subjected to both radiocarbon and racemization analysis. Combined analysis showed that the two methods correlated well (R2 = 0.66, p < ...). Aspartic acid racemization also showed a good precision with an overall absolute error of 5.4 ± 4.2 years. Whereas radiocarbon analysis gives an estimated year of birth, racemization analysis indicates the chronological age of the individual at the time of death. We show how these methods in combination can also assist in the estimation of date of death of an unidentified victim. This strategy can be of significant assistance in forensic casework involving dead victim identification. PMID:19965905

  8. Runoff estimation in residential area

    Directory of Open Access Journals (Sweden)

    Meire Regina de Almeida Siqueira

    2013-12-01

    Full Text Available This study aimed to estimate the watershed runoff caused by extreme events that often result in the flooding of urban areas. The runoff of a residential area in the city of Guaratinguetá, São Paulo, Brazil was estimated using the Curve-Number method proposed by USDA-NRCS. The study also investigated current land use and land cover conditions, impermeable areas with pasture and indications of the reforestation of those areas. Maps and satellite images of Residential Riverside I Neighborhood were used to characterize the area. In addition to characterizing land use and land cover, the definition of the soil type infiltration capacity, the maximum local rainfall, and the type and quality of the drainage system were also investigated. The study showed that this neighborhood, developed in 1974, has an area of 792,700 m², a population of 1361 inhabitants, and a sloping area covered with degraded pasture (Guaratinguetá-Piagui Peak located in front of the residential area. The residential area is located in a flat area near the Paraiba do Sul River, and has a poor drainage system with concrete pipes, mostly 0.60 m in diameter, with several openings that capture water and sediments from the adjacent sloping area. The Low Impact Development (LID system appears to be a viable solution for this neighborhood drainage system. It can be concluded that the drainage system of the Guaratinguetá Riverside I Neighborhood has all of the conditions and characteristics that make it suitable for the implementation of a low impact urban drainage system. Reforestation of Guaratinguetá-Piagui Peak can reduce the basin’s runoff by 50% and minimize flooding problems in the Beira Rio neighborhood.
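    The USDA-NRCS Curve-Number method used in the study follows a standard closed form. The storm depth and CN values below are hypothetical choices for degraded pasture versus reforested cover, not the study's calibrated values:

```python
def scs_runoff_mm(P, CN):
    """USDA-NRCS Curve-Number direct runoff (mm) for a storm depth P (mm),
    with potential maximum retention S and the standard initial
    abstraction Ia = 0.2 * S."""
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    if P <= Ia:
        return 0.0          # all rainfall absorbed before runoff begins
    return (P - Ia) ** 2 / (P - Ia + S)

# hypothetical 90 mm storm: degraded pasture (CN ~ 86) vs reforested (CN ~ 70)
q_pasture = scs_runoff_mm(90.0, 86)
q_forest = scs_runoff_mm(90.0, 70)
```

    With these assumed curve numbers the reforested slope produces roughly half the runoff depth of the degraded pasture, consistent in spirit with the ~50% reduction the study attributes to reforesting Guaratinguetá-Piagui Peak.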

  9. Estimated status 2006-2015

    International Nuclear Information System (INIS)

    2003-01-01

    According to article 6 of the French law from February 10, 2000 relative to the modernization and development of the electric public utility, the manager of the public power transportation grid (RTE) has to produce, at least every two years and under the control of the French government, a pluri-annual estimated status. Then, the energy ministry uses this status to prepare the pluri-annual planning of power production investments. The estimated status aims at establishing a medium- and long-term diagnosis of the balance between power supply and demand and at evaluating the new production capacity needs to ensure a durable security of power supplies. The hypotheses relative to the power consumption and to the evolution of the power production means and trades are presented in chapters 2 to 4. Chapter 5 details the methodology and modeling principles retained for the supply-demand balance simulations. Chapter 6 presents the probabilistic simulation results at the 2006, 2010 and 2015 prospects and indicates the volumes of reinforcement of the production parks which would warrant an acceptable level of security. Chapter 7 develops the critical problem of winter demand peaks and evokes the possibilities linked with demand reduction, market resources and use of the existing park. Finally, chapter 8 makes a synthesis of the technical conclusions and recalls the determining hypotheses that have been retained. The particular situations of western France, of the Mediterranean and Paris region, and of Corsica and overseas territories are examined in chapter 9. The simulation results for all consumption-production scenarios and the wind-power production data are presented in appendixes. (J.S.)

  10. Estimating location without external cues.

    Directory of Open Access Journals (Sweden)

    Allen Cheung

    2014-10-01

    Full Text Available The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.

  11. Estimation of Poverty in Small Areas

    Directory of Open Access Journals (Sweden)

    Agne Bikauskaite

    2014-12-01

    Full Text Available Qualitative techniques of poverty estimation are needed to better implement, monitor and determine national areas where support is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates in strata deteriorates (i.e. the precision decreases when the standard deviation increases) if the sample size is smaller. In these cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.

  12. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using...
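    The paper's minimum-variance filters are not reproduced here, but the geometric core -- converting inter-channel phase differences at the harmonic frequencies into a TDOA and then a DOA -- can be sketched for a two-microphone far-field case, using unweighted least squares and hypothetical numbers:

```python
import math

def doa_from_phase_diffs(phase_diffs, freqs_hz, d, c=343.0):
    """Two-microphone, far-field sketch: each inter-channel phase
    difference at a harmonic equals omega * tau, so the TDOA tau is a
    least-squares slope through the origin; the DOA then follows from
    tau = d * sin(theta) / c.  (Unweighted here; the paper designs
    minimum-variance filters on the phase estimates instead.)"""
    omegas = [2.0 * math.pi * f for f in freqs_hz]
    tau = sum(w * p for w, p in zip(omegas, phase_diffs)) / sum(w * w for w in omegas)
    return math.asin(max(-1.0, min(1.0, c * tau / d)))

# hypothetical: harmonics of 200 Hz, mics 5 cm apart, source at 30 degrees
d, theta_true = 0.05, math.radians(30.0)
tau_true = d * math.sin(theta_true) / 343.0
freqs = [200.0, 400.0, 600.0]
phis = [2.0 * math.pi * f * tau_true for f in freqs]
theta_hat = doa_from_phase_diffs(phis, freqs, d)
```

    With noiseless phases the estimate recovers the true angle exactly; in colored noise, weighting the harmonics (as the paper does) reduces the variance of the estimate.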

  13. Mapping Congo Basin vegetation types from 300 m and 1 km multi-sensor time series for carbon stocks and forest areas estimation

    Directory of Open Access Journals (Sweden)

    A. Verhegghen

    2012-12-01

    Full Text Available This study aims to contribute to the understanding of the Congo Basin forests by delivering a detailed map of vegetation types with an improved spatial discrimination and coherence for the whole Congo Basin region. A total of 20 land cover classes were described with the standardized Land Cover Classification System (LCCS) developed by the FAO. Based on a semi-automatic processing chain, the Congo Basin vegetation types map was produced by combining 19 months of observations from the Envisat MERIS full resolution products (300 m) and 8 yr of daily SPOT VEGETATION (VGT) reflectances (1 km). Four zones (north, south and two central) were delineated and processed separately according to their seasonal and cloud cover specificities. The discrimination between different vegetation types (e.g. forest and savannas) was significantly improved thanks to the MERIS sharp spatial resolution. A better discrimination was achieved in cloudy areas by taking advantage of the temporal consistency of the SPOT VGT observations. This resulted in a precise delineation of the spatial extent of the rural complex in the countries situated along the Atlantic coast. Based on this new map, more accurate estimates of the surface areas of forest types were produced for each country of the Congo Basin. Carbon stocks of the Basin were evaluated at a total of 49 360 million metric tons. The regional scale of the map was an opportunity to investigate what could be an appropriate tree cover threshold for a forest class definition in the Congo Basin countries. A 30% tree cover threshold was suggested. Furthermore, the phenology of the different vegetation types was illustrated systematically with EVI temporal profiles. This Congo Basin forest types map reached a satisfactory overall accuracy of 71.5% and even 78.9% when some classes are aggregated. The Cohen's kappa coefficient values, respectively 0.64 and 0.76, indicate a result significantly better than random.

  14. On the relation between S-Estimators and M-Estimators of multivariate location and covariance

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1987-01-01

    We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of

  15. Estimation of the energy needs; Estimation des besoins energetiques

    Energy Technology Data Exchange (ETDEWEB)

    Ailleret, [Electricite de France (EDF), Dir. General des Etudes de Recherches, 75 - Paris (France)

    1955-07-01

    The present report draws up the balance of present energy consumption and estimates for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and electric energy, essentially hydraulic. Market development stems essentially from the growth of industrial activity and from new applications dependent on the cost and the distribution of electric energy. To this effect, atomic energy offers good industrial prospects in complement to present energy resources in order to answer the new needs. (M.B.) [French] Le present rapport dresse le bilan sur la consommation energetique actuelle et previsionnelle pour les vingt prochaines annees. L'energie actuelle provient principalement consommation de charbon, de produits petroliers et d'energie electrique essentiellement hydraulique. l'evolution du marche provient essentielement du developpement l'activite industriel et de nouvelles applications tributaire du cout et de la distribution de l'energie electrique. A cet effet, l'energie atomique offre de bonne perspectives industrielles en complement des sources actuelles energetiques afin de repondre aux nouveaux besoins. (M.B.)

  16. How Valid are Estimates of Occupational Illness?

    Science.gov (United States)

    Hilaski, Harvey J.; Wang, Chao Ling

    1982-01-01

    Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)

  17. State estimation for a hexapod robot

    CSIR Research Space (South Africa)

    Lubbe, Estelle

    2015-09-01

    Full Text Available This paper introduces a state estimation methodology for a hexapod robot that makes use of proprioceptive sensors and a kinematic model of the robot. The methodology focuses on providing reliable full pose state estimation for a commercially...

  18. Access Based Cost Estimation for Beddown Analysis

    National Research Council Canada - National Science Library

    Pennington, Jasper E

    2006-01-01

    The purpose of this research is to develop an automated web-enabled beddown estimation application for Air Mobility Command in order to increase the effectiveness and enhance the robustness of beddown estimates...

  19. Estimated annual economic loss from organ condemnation ...

    African Journals Online (AJOL)

    as a basis for the analysis of estimation of the economic significance of bovine .... percent involvement of each organ were used in the estimation of the financial loss from organ .... DVM thesis, Addis Ababa University, Faculty of Veterinary.

  20. Velocity Estimate Following Air Data System Failure

    National Research Council Canada - National Science Library

    McLaren, Scott A

    2008-01-01

    .... A velocity estimator (VEST) algorithm was developed to combine the inertial and wind velocities to provide an estimate of the aircraft's current true velocity to be used for command path gain scheduling and for display in the cockpit...

  1. On Estimating Quantiles Using Auxiliary Information

    Directory of Open Access Journals (Sweden)

    Berger Yves G.

    2015-03-01

    Full Text Available We propose a transformation-based approach for estimating quantiles using auxiliary information. The proposed estimators can be easily implemented using a regression estimator. We show that the proposed estimators are consistent and asymptotically unbiased. The main advantage of the proposed estimators is their simplicity. Despite the fact that the proposed estimators are not necessarily more efficient than their competitors, they offer a good compromise between accuracy and simplicity. They can be used under single and multistage sampling designs with unequal selection probabilities. A simulation study supports our findings and shows that the proposed estimators are robust and of an acceptable accuracy compared to alternative estimators, which can be more computationally intensive.

  2. On Estimation and Testing for Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Jordanova, P.; Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2013-01-01

    Roč. 22, č. 1 (2013), s. 89-108 ISSN 0204-9805 Institutional support: RVO:67985807 Keywords : testing against heavy tails * asymptotic properties of estimators * point estimation Subject RIV: BB - Applied Statistics, Operational Research

  3. Estimating the NIH efficient frontier.

    Directory of Open Access Journals (Sweden)

    Dimitrios Bisias

    Full Text Available BACKGROUND: The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  4. Estimating the NIH efficient frontier.

    Science.gov (United States)

    Bisias, Dimitrios; Lo, Andrew W; Watkins, James F

    2012-01-01

    The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of
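    The mean-variance machinery behind such an "efficient frontier" can be sketched on toy data. The expected returns and covariance matrix below are invented for illustration (they are not the NIH institute data); the weights come from the standard Lagrangian solution for the minimum-variance portfolio at a target expected return.

```python
import numpy as np

# Hypothetical expected returns and covariance matrix (NOT the NIH data).
mu = np.array([0.08, 0.12, 0.15])
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.20]])

def frontier_weights(target):
    """Minimum-variance weights achieving a target expected return
    (standard two-constraint Lagrangian solution, weights sum to one)."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B ** 2
    lam = (C - B * target) / D
    gam = (A * target - B) / D
    return inv @ (lam * ones + gam * mu)

w = frontier_weights(0.10)
print(w, w.sum(), w @ mu)   # weights sum to 1 and hit the target return
```

    Sweeping `target` over a grid traces out the frontier; the risk at each point is `sqrt(w @ Sigma @ w)`.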

  5. Estimating the NIH Efficient Frontier

    Science.gov (United States)

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  6. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
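    The classical ratio estimator that such generalized estimators build on can be illustrated with made-up sample values; the auxiliary-variable population mean X_bar is assumed known, as in the abstract's setting.

```python
# Invented systematic sample; X_bar is the known population mean of x.
y = [12.0, 15.0, 11.0, 14.0, 13.0]   # study variable
x = [6.0, 7.5, 5.5, 7.0, 6.5]        # auxiliary variable
X_bar = 6.8

y_bar = sum(y) / len(y)
x_bar = sum(x) / len(x)
ratio_estimate = y_bar * X_bar / x_bar   # classical ratio estimator
print(ratio_estimate)
```

    The generalized versions in the paper replace the simple ratio with functions of the coefficients of correlation, kurtosis, and variation.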

  7. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

    Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...... a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online....
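    The autocorrelation-based baseline the abstract refers to can be sketched as follows; the sample rate, tone frequency, and search band are illustrative choices, not the paper's.

```python
import math

# Synthetic 200 Hz tone; sample rate and search band are arbitrary choices.
fs = 8000
f0 = 200.0
n = 800
sig = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]

def autocorr(lag):
    """Unnormalized autocorrelation of the signal at the given lag."""
    return sum(sig[i] * sig[i + lag] for i in range(n - lag))

# Pick the lag (candidates between 80 Hz and 400 Hz) with the highest
# autocorrelation; the fundamental is the sample rate over that lag.
lags = range(fs // 400, fs // 80 + 1)
best = max(lags, key=autocorr)
estimate = fs / best
print(estimate)   # recovers the 200 Hz fundamental
```

    The parametric methods the paper accelerates fit an explicit harmonic model instead, trading this simple peak search for higher accuracy.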

  8. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
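    One of the conventional bandwidth estimators such studies benchmark is Silverman's rule of thumb; below is a minimal sketch on invented data (the rule and the Gaussian-kernel density estimate are standard, the numbers are not from the paper).

```python
import math

# Invented one-dimensional sample (not from the paper's benchmark tasks).
data = [1.2, 1.9, 2.3, 2.7, 3.1, 3.4, 4.0, 4.6]
n = len(data)
mean = sum(data) / n
std = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))

# Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
h = 1.06 * std * n ** (-1 / 5)

def kde(x):
    """Gaussian kernel density estimate at x with bandwidth h."""
    return sum(math.exp(-0.5 * ((x - v) / h) ** 2)
               for v in data) / (n * h * math.sqrt(2 * math.pi))

print(h, kde(mean))
```

    The resulting `kde` integrates to one by construction; rule-of-thumb bandwidths like this tend to oversmooth multimodal, high-dimensional data, which is the behaviour such comparative studies probe.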

  9. Developing first time-series of land surface temperature from AATSR with uncertainty estimates

    Science.gov (United States)

    Ghent, Darren; Remedios, John

    2013-04-01

    Land surface temperature (LST) is the radiative skin temperature of the land, and is one of the key parameters in the physics of land-surface processes on regional and global scales. Earth Observation satellites provide the opportunity to obtain global coverage of LST approximately every 3 days or less. One such source of satellite retrieved LST has been the Advanced Along-Track Scanning Radiometer (AATSR); with LST retrieval being implemented in the AATSR Instrument Processing Facility in March 2004. Here we present first regional and global time-series of LST data from AATSR with estimates of uncertainty. Mean changes in temperature over the last decade will be discussed along with regional patterns. Although time-series across all three ATSR missions have previously been constructed (Kogler et al., 2012), the use of low resolution auxiliary data in the retrieval algorithm and non-optimal cloud masking resulted in time-series artefacts. As such, considerable ESA supported development has been carried out on the AATSR data to address these concerns. This includes the integration of high resolution auxiliary data into the retrieval algorithm and subsequent generation of coefficients and tuning parameters, plus the development of an improved cloud mask based on the simulation of clear sky conditions from radiance transfer modelling (Ghent et al., in prep.). Any inference from this LST record is, however, of limited value without an accompanying uncertainty estimate; the Joint Committee for Guides in Metrology defines an uncertainty as "a parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand", the measurand being the value of the particular quantity to be measured. Furthermore, pixel level uncertainty fields are a mandatory requirement in the on-going preparation of the LST product for the upcoming Sea and Land Surface Temperature (SLSTR) instrument on-board Sentinel-3

  10. Development of Numerical Estimation in Young Children

    Science.gov (United States)

    Siegler, Robert S.; Booth, Julie L.

    2004-01-01

    Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…

  11. Carleman estimates for some elliptic systems

    International Nuclear Information System (INIS)

    Eller, M

    2008-01-01

    A Carleman estimate for a certain first order elliptic system is proved. The proof is elementary and does not rely on pseudo-differential calculus. This estimate is used to prove Carleman estimates for the isotropic Lamé system as well as for the isotropic Maxwell system with C¹ coefficients

  12. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  13. Estimating uncertainty of data limited stock assessments

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro

    2017-01-01

    -limited. Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock...

  14. Another look at the Grubbs estimators

    KAUST Repository

    Lombard, F.; Potgieter, C.J.

    2012-01-01

    of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit

  15. Load Estimation by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune

    2007-01-01

    When performing operational modal analysis the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated by ...
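    The inverse-filtering step described above can be sketched for a single-degree-of-freedom system; the mass, damping, stiffness, and load values below are arbitrary illustrations, not from the paper.

```python
# Single-DOF oscillator with assumed mass, damping and stiffness.
m, c, k = 2.0, 0.5, 800.0

def frf(w):
    """Receptance (displacement/force) FRF at angular frequency w."""
    return 1.0 / complex(k - m * w * w, c * w)

w = 15.0                      # excitation frequency (rad/s), arbitrary
F_true = complex(3.0, -1.0)   # "unknown" load spectrum value, invented
X = frf(w) * F_true           # simulated measured response spectrum
F_est = X / frf(w)            # inverse filtering recovers the load
print(F_est)
```

    In practice the transfer matrix comes from the identified modal model and the division is done at every frequency line; measurement noise is amplified wherever the FRF is small, which is why the paper's estimates are studied in the frequency domain.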

  16. Non-Parametric Estimation of Correlation Functions

    DEFF Research Database (Denmark)

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...

  17. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  18. Estimation of exposed dose, 1

    International Nuclear Information System (INIS)

    Okajima, Shunzo

    1976-01-01

    Radioactive atomic fallouts in Nishiyama district of Nagasaki Prefecture are reported on the basis of the survey conducted since 1969. In 1969, the amount of 137Cs in the body of 50 inhabitants in Nishiyama district was measured using a human counter, and was compared with that of a non-exposed group. The average value of 137Cs (pCi/kg) was higher in inhabitants in Nishiyama district (38.5 in men and 24.9 in females) than in the controls (25.5 in men and 14.9 in females). The resurvey in 1971 showed that the amount of 137Cs had decreased to 76% in men and 60% in females. When the amount of 137Cs in the body was calculated from the chemical analysis of urine, it was 29.0 ± 8.2 in men and 29.4 ± 26.2 in females in Nishiyama district, and 29.9 ± 8.2 in men and 29.4 ± 11.7 in females in the controls. The content of 137Cs in soils and crops (potato etc.) was higher in Nishiyama district than in the controls. When the internal exposure dose per year was calculated from the amount of 137Cs in the body in 1969, it was 0.29 mrad/year in men and 0.19 mrad/year in females. Finally, the internal exposure dose immediately after the explosion was estimated. (Serizawa, K.)

  19. Inflation and cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, J.

    2007-05-15

    In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

  20. Quantum rewinding via phase estimation

    Science.gov (United States)

    Tabia, Gelo Noel

    2015-03-01

    In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask if classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to showing the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewound to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input, but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique in the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.

  1. Global Warming Estimation from MSU

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, Robert, Jr.

    1999-01-01

    In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 ± 0.06 K/decade during 1980-97.
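    The underlying trend-plus-standard-error computation can be sketched with ordinary least squares on a synthetic series; the numbers below are invented, not MSU observations.

```python
# Synthetic "global temperature anomaly" series (invented numbers):
# a 0.01 K/yr trend plus a small repeating perturbation.
years = list(range(1980, 1998))
temps = [0.01 * (y - 1980) + (0.03 if y % 3 == 0 else -0.02) for y in years]

n = len(years)
xm = sum(years) / n
ym = sum(temps) / n
sxx = sum((x - xm) ** 2 for x in years)
slope = sum((x - xm) * (t - ym) for x, t in zip(years, temps)) / sxx
resid = [t - (ym + slope * (x - xm)) for x, t in zip(years, temps)]
se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
print(slope * 10, se * 10)   # trend and 1-sigma error in K per decade
```

    The abstract's extra step is folding the systematic (orbital-drift) error into `se`, since that component does not shrink with more data the way the residual scatter does.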

  2. Estimates of LLEA officer availability

    International Nuclear Information System (INIS)

    Berkbigler, K.P.

    1978-05-01

    One element in the Physical Protection of Nuclear Material in Transit Program is a determination of the number of local law enforcement agency (LLEA) officers available to respond to an attack upon a special nuclear material (SNM) carrying convoy. A computer model, COPS, has been developed at Sandia Laboratories to address this problem. Its purposes are to help identify to the SNM shipper areas along a route which may have relatively low police coverage and to aid in the comparison of alternate routes to the same location. Data bases used in COPS include population data from the Bureau of Census and police data published by the FBI. Police are assumed to be distributed in proportion to the population, with adjustable weighting factors. Example results illustrating the model's capabilities are presented for two routes between Los Angeles, CA, and Denver, CO, and for two routes between Columbia, SC, and Syracuse, NY. The estimated police distribution at points along the route is presented. Police availability as a function of time is modeled based on the time-dependent characteristics of a trip. An example demonstrating the effects of jurisdictional restrictions on the size of the response force is given. Alternate routes between two locations are compared by means of cumulative plots

  3. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions, which can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  4. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  5. Indirect estimators in US federal programs

    CERN Document Server

    1996-01-01

    In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight reports which describe the use of indirect estimators and they are based on case studies from a variety of federal programs. As a result, many researchers will find this book provides a valuable survey of how indirect estimators are used in practice and which addresses some of the pitfalls of these methods.

  6. Parameter Estimation in Continuous Time Domain

    Directory of Open Access Journals (Sweden)

    Gabriela M. ATANASIU

    2016-12-01

    Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in an area of high seismic risk are considered, for which the structural parameters of mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.

  7. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques, or kriging, to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed, and the estimator is then applied to two groundwater aquifer systems and also used to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly

  8. A Gaussian IV estimator of cointegrating relations

    DEFF Research Database (Denmark)

    Bårdsen, Gunnar; Haldrup, Niels

    2006-01-01

    In static single equation cointegration regression models the OLS estimator will have a non-standard distribution unless regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially by the use of semi-nonparametric estimators. ... in cointegrating regressions. These instruments are almost ideal and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples....

  9. Optimal estimation of the optomechanical coupling strength

    Science.gov (United States)

    Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André

    2018-06-01

    We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.

  10. Bayesian estimation and tracking a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

    A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation

  11. Budget estimates. Fiscal year 1998

    International Nuclear Information System (INIS)

    1997-02-01

    The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000, shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury

  12. Budget estimates. Fiscal year 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000, shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury.

  13. Optimal estimations of random fields using kriging

    International Nuclear Information System (INIS)

    Barua, G.

    2004-01-01

    Kriging is a statistical procedure for estimating the best weights of a linear estimator. Suppose there is a point, an area, or a volume of ground over which we do not know a hydrological variable and wish to estimate it. In order to produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators for which the weights sum up to one; the problem is how to determine the best weights, for which the estimation variance is the least. The resulting system of equations is generally known as the kriging system, and the estimator produced is the kriging estimator. The variance of the kriging estimator can be found by substituting the weights into the general estimation variance equation. We assume here a linear model for the semi-variogram. Applying the model to the equation, we obtain a set of kriging equations. By solving these equations, we obtain the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance
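    A minimal ordinary-kriging system of the kind described, for one estimation point in one dimension with the linear semi-variogram mentioned in the abstract (the sample locations are invented):

```python
import numpy as np

# Three 1-D sample locations (invented) and one estimation point.
xs = np.array([0.0, 1.0, 3.0])
x0 = 2.0

def gamma(h):
    """Linear semi-variogram model, gamma(h) = |h|."""
    return np.abs(h)

# Ordinary kriging system: the variogram matrix bordered by the
# unbiasedness constraint (weights sum to one) and its Lagrange multiplier.
n = len(xs)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = gamma(xs[:, None] - xs[None, :])
A[:n, n] = 1.0
A[n, :n] = 1.0
b = np.append(gamma(xs - x0), 1.0)
sol = np.linalg.solve(A, b)
weights, lagrange = sol[:n], sol[n]
print(weights, weights.sum())   # weights sum to one
```

    Here the estimation point lies midway between the two nearest samples, which share the weight equally under the linear model; the kriging variance follows as `weights @ gamma(xs - x0) + lagrange`.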

  14. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Building on these results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and also works well on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international world stock market indices, estimating the tail exponent over the two separate periods 2002-2005 and 2006-2009.
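    The Hill estimator that the study starts from can be sketched on a deterministic Pareto-like sample (tail exponent α = 2 by construction; the quantile-grid sample stands in for random data, so there is no Monte Carlo variability here):

```python
import math

# Deterministic Pareto-like sample with known tail exponent alpha = 2,
# built by inverse transform on a fixed quantile grid (no randomness).
alpha = 2.0
u = [(i + 0.5) / 500 for i in range(500)]
sample = sorted(((1.0 / (1 - q)) ** (1 / alpha) for q in u), reverse=True)

# Hill estimator from the k largest order statistics.
k = 50
hill = k / sum(math.log(sample[i] / sample[k]) for i in range(k))
print(hill)   # close to the true tail exponent of 2
```

    On genuinely random heavy-tailed samples the estimate varies strongly with `k` (the tail size), which is the sensitivity the paper's Monte Carlo-based method is designed to avoid.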

  15. Robust bearing estimation for 3-component stations

    International Nuclear Information System (INIS)

    CLAASSEN, JOHN P.

    2000-01-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings

  16. Iterative Estimation in Turbo Equalization Process

    Directory of Open Access Journals (Sweden)

    MORGOS Lucian

    2014-05-01

Full Text Available This paper presents iterative estimation in the turbo equalization process. Turbo equalization is a reception process in which equalization and decoding are performed jointly rather than as separate steps. For the equalizer to work properly, it must receive, before equalization, accurate information about the channel impulse response. The channel impulse response is estimated by transmitting a training sequence known at the receiver. Knowing both the transmitted and received sequences, the channel impulse response can be estimated using one of the well-known estimation algorithms. The estimate can also be recalculated iteratively, based on the data sequence available at the channel output and the estimated data sequence coming from the turbo equalizer output, thereby refining the obtained results.
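As a concrete illustration of the training-sequence step, here is a least-squares channel-impulse-response estimate; the channel taps, training length, and noise level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

h_true = np.array([1.0, 0.5, -0.2])        # unknown FIR channel taps
train = rng.choice([-1.0, 1.0], size=64)   # training sequence known at the receiver

# Channel output for the training sequence, plus a little noise
received = np.convolve(train, h_true)[:len(train)]
received += 0.01 * rng.standard_normal(len(train))

# Least squares: column k of X is the training sequence delayed by k samples
L = 3                                       # assumed channel length
X = np.column_stack([
    np.concatenate([np.zeros(k), train[:len(train) - k]]) for k in range(L)
])
h_hat, *_ = np.linalg.lstsq(X, received, rcond=None)
```

In a turbo receiver, the same least-squares step would then be repeated with soft decisions from the decoder in place of the training sequence.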

  17. Weighted conditional least-squares estimation

    International Nuclear Information System (INIS)

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered

  18. Estimates of Free-tropospheric NO2 Abundance from the Aura Ozone Monitoring Instrument (OMI) Using Cloud Slicing Technique

    Science.gov (United States)

    Choi, S.; Joiner, J.; Krotkov, N. A.; Choi, Y.; Duncan, B. N.; Celarier, E. A.; Bucsela, E. J.; Vasilkov, A. P.; Strahan, S. E.; Veefkind, J. P.; Cohen, R. C.; Weinheimer, A. J.; Pickering, K. E.

    2013-12-01

Total column measurements of NO2 from space-based sensors are of interest to the atmospheric chemistry and air quality communities; the relatively short lifetime of near-surface NO2 produces satellite-observed hot-spots near pollution sources including power plants and urban areas. However, estimates of NO2 concentrations in the free-troposphere, where lifetimes are longer and the radiative impact through ozone formation is larger, are severely lacking. Such information is critical to evaluate chemistry-climate and air quality models that are used for prediction of the evolution of tropospheric ozone and its impact on climate and air quality. Here, we retrieve free-tropospheric NO2 volume mixing ratio (VMR) using the cloud slicing technique. We use cloud optical centroid pressures (OCPs) as well as collocated above-cloud vertical NO2 columns (defined as the NO2 column from top of the atmosphere to the cloud OCP) from the Ozone Monitoring Instrument (OMI). The above-cloud NO2 vertical columns used in our study are retrieved independent of a priori NO2 profile information. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud optical centroid pressure is proportional to the NO2 volume mixing ratio (VMR) for a given pressure (altitude) range. We retrieve NO2 volume mixing ratios and compare the obtained NO2 VMRs with in-situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is good when proper data screening is applied. In addition, the OMI cloud slicing reports a high NO2 VMR where the aircraft reported lightning NOx during the Deep Convection Clouds and Chemistry (DC3) campaign in 2012. We also provide a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in the free troposphere commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the
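The regression at the heart of cloud slicing can be sketched as follows; the pressures, columns, and mixing ratio are idealized noise-free values, and 2.12e22 molecules cm⁻² hPa⁻¹ is the approximate number of air molecules per unit column per unit pressure:

```python
import numpy as np

# Idealized collocated observations: cloud optical centroid pressures (hPa)
# and above-cloud NO2 vertical columns (molecules per cm^2)
ocp_hpa = np.array([450.0, 500.0, 550.0, 600.0, 650.0, 700.0])
AIR_COLUMN_PER_HPA = 2.12e22        # approx. air molecules per cm^2 per hPa
true_vmr = 50e-12                   # 50 pptv, assumed constant in the layer
col_no2 = true_vmr * AIR_COLUMN_PER_HPA * ocp_hpa

# Cloud slicing: the slope of above-cloud column vs. cloud pressure is
# proportional to the volume mixing ratio in the sampled pressure range
slope, _intercept = np.polyfit(ocp_hpa, col_no2, 1)
vmr_hat = slope / AIR_COLUMN_PER_HPA
```

Real retrievals fit noisy columns over many cloud scenes, so the slope comes with substantial uncertainty; the screening mentioned in the abstract is what makes the fit usable.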

  19. COVARIANCE ASSISTED SCREENING AND ESTIMATION.

    Science.gov (United States)

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-11-01

    Consider a linear model Y = X β + z , where X = X n,p and z ~ N (0, I n ). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X ' X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage , which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening , and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.

  20. Comparison of hourly surface downwelling solar radiation estimated from MSG-SEVIRI and forecast by the RAMS model with pyranometers over Italy

    Science.gov (United States)

    Federico, Stefano; Torcasio, Rosa Claudia; Sanò, Paolo; Casella, Daniele; Campanelli, Monica; Fokke Meirink, Jan; Wang, Ping; Vergari, Stefania; Diémoz, Henri; Dietrich, Stefano

    2017-06-01

In this paper, we evaluate the performance of two global horizontal solar irradiance (GHI) estimates, one derived from Meteosat Second Generation (MSG) and another from the 1-day forecast of the Regional Atmospheric Modeling System (RAMS) mesoscale model. The horizontal resolution of the MSG-GHI is 3 × 5 km2 over Italy, which is the focus area of this study. For this paper, RAMS has the horizontal resolution of 4 km. The performances of the MSG-GHI estimate and RAMS-GHI 1-day forecast are evaluated for 1 year (1 June 2013-31 May 2014) against data of 12 ground-based pyranometers over Italy spanning a range of climatic conditions, i.e. from maritime Mediterranean to Alpine climate. Statistics for hourly GHI and daily integrated GHI are presented for the four seasons and the whole year for all the measurement sites. Different sky conditions are considered in the analysis. Results for hourly data show an evident dependence on the sky conditions, with the root mean square error (RMSE) increasing from clear to cloudy conditions. The RMSE is substantially higher for Alpine stations in all the seasons, mainly because of the increase of the cloud coverage for these stations, which is not well represented at the satellite and model resolutions. Considering the yearly statistics computed from hourly data for the RAMS model, the RMSE ranges from 152 W m-2 (31 %) obtained for Cozzo Spadaro, a maritime station, to 287 W m-2 (82 %) for Aosta, an Alpine site. Considering the yearly statistics computed from hourly data for MSG-GHI, the minimum RMSE is for Cozzo Spadaro (71 W m-2, 14 %), while the maximum is for Aosta (181 W m-2, 51 %). The mean bias error (MBE) shows the tendency of RAMS to over-forecast the GHI, while no specific behaviour is found for MSG-GHI. Results for daily integrated GHI show a lower RMSE compared to hourly GHI evaluation for both RAMS-GHI 1-day forecast and MSG-GHI estimate. Considering the yearly evaluation, the RMSE of daily integrated GHI is at least 9
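The two verification scores used throughout, RMSE and MBE (with bias taken as estimate minus observation), can be computed as below; the GHI values are invented for illustration:

```python
import math

def rmse_and_mbe(estimates, observations):
    """Root mean square error and mean bias error (estimate minus observation)."""
    diffs = [e - o for e, o in zip(estimates, observations)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    mbe = sum(diffs) / len(diffs)
    return rmse, mbe

# Illustrative hourly GHI values in W m^-2
obs = [0.0, 120.0, 340.0, 510.0, 480.0, 260.0]
est = [0.0, 150.0, 360.0, 500.0, 520.0, 300.0]
rmse, mbe = rmse_and_mbe(est, obs)   # positive MBE indicates over-forecasting
```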

  1. Atmospheric Turbulence Estimates from a Pulsed Lidar

    Science.gov (United States)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  2. Cosmochemical Estimates of Mantle Composition

    Science.gov (United States)

    Palme, H.; O'Neill, H. St. C.

    2003-12-01

    , and a crust. Both Daubrée and Boisse also expected that the Earth was composed of a similar sequence of concentric layers (see Burke, 1986; Marvin, 1996).At the beginning of the twentieth century Harkins at the University of Chicago thought that meteorites would provide a better estimate for the bulk composition of the Earth than the terrestrial rocks collected at the surface as we have only access to the "mere skin" of the Earth. Harkins made an attempt to reconstruct the composition of the hypothetical meteorite planet by compiling compositional data for 125 stony and 318 iron meteorites, and mixing the two components in ratios based on the observed falls of stones and irons. The results confirmed his prediction that elements with even atomic numbers are more abundant and therefore more stable than those with odd atomic numbers and he concluded that the elemental abundances in the bulk meteorite planet are determined by nucleosynthetic processes. For his meteorite planet Harkins calculated Mg/Si, Al/Si, and Fe/Si atomic ratios of 0.86, 0.079, and 0.83, very closely resembling corresponding ratios of the average solar system based on presently known element abundances in the Sun and in CI-meteorites (see Burke, 1986).If the Earth were similar compositionally to the meteorite planet, it should have a similarly high iron content, which requires that the major fraction of iron is concentrated in the interior of the Earth. The presence of a central metallic core to the Earth was suggested by Wiechert in 1897. The existence of the core was firmly established using the study of seismic wave propagation by Oldham in 1906 with the outer boundary of the core accurately located at a depth of 2,900km by Beno Gutenberg in 1913. In 1926 the fluidity of the outer core was finally accepted. 
The high density of the core and the high abundance of iron and nickel in meteorites led very early to the suggestion that iron and nickel are the dominant elements in the Earth's core (Brush

  3. Entropy estimates of small data sets

    Energy Technology Data Exchange (ETDEWEB)

    Bonachela, Juan A; Munoz, Miguel A [Departamento de Electromagnetismo y Fisica de la Materia and Instituto de Fisica Teorica y Computacional Carlos I, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain); Hinrichsen, Haye [Fakultaet fuer Physik und Astronomie, Universitaet Wuerzburg, Am Hubland, 97074 Wuerzburg (Germany)

    2008-05-23

Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)
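The paper's balanced estimator is not reproduced here, but the naive plug-in estimator it improves on, together with the classical Miller-Madow bias correction, can be sketched as:

```python
import math
from collections import Counter

def naive_entropy(sample):
    """Plug-in (maximum-likelihood) Shannon entropy estimate, in nats.
    Known to be biased low for short data series."""
    n = len(sample)
    return -sum((c / n) * math.log(c / n) for c in Counter(sample).values())

def miller_madow_entropy(sample):
    """Naive estimate plus the first-order Miller-Madow bias correction
    (k - 1) / (2n), where k is the number of observed symbols."""
    k = len(set(sample))
    return naive_entropy(sample) + (k - 1) / (2 * len(sample))
```

For example, `naive_entropy(list("aabb"))` returns log 2 nats, the exact entropy of a fair binary source, while on short samples of a skewed source the naive estimate falls below the truth and the correction pushes it back up.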

  4. Entropy estimates of small data sets

    International Nuclear Information System (INIS)

    Bonachela, Juan A; Munoz, Miguel A; Hinrichsen, Haye

    2008-01-01

Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)

  5. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation parameter and translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm by fusing gyroscope sensor and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.

  6. Nondestructive, stereological estimation of canopy surface area

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.

    2010-01-01

We describe a stereological procedure to estimate the total leaf surface area of a plant canopy in vivo, and address the problem of how to predict the variance of the corresponding estimator. The procedure involves three nested systematic uniform random sampling stages: (i) selection of plants from a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both the time required and the precision of the estimator. Furthermore, we compare the precision of point counting for three different grid intensities with that of several standard leaf area measurement techniques. Results showed that the precision of the plant leaf area estimator based on point counting...

  7. Resilient Distributed Estimation Through Adversary Detection

    Science.gov (United States)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2018-05-01

This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator (FRDE) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The FRDE algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under FRDE, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of FRDE.
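A consensus+innovations update of the kind FRDE builds on can be sketched as follows; the gains, measurement matrices, and ring network are hypothetical, there are no adversaries, and the paper's flag-raising detection logic is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([1.0, -2.0])                  # unknown parameter

# Four agents on a ring; no single agent is locally observable,
# but the network is globally observable
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]),
     np.array([[1.0, 1.0]]), np.array([[1.0, -1.0]])]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

alpha, beta = 0.05, 0.1                         # innovation / consensus gains
x = [np.zeros(2) for _ in range(4)]
for _ in range(2000):
    # fresh noisy scalar measurement at each agent
    y = [H[i] @ theta + 0.1 * rng.standard_normal(1) for i in range(4)]
    x = [x[i]
         - beta * sum(x[i] - x[j] for j in neighbors[i])   # consensus term
         + alpha * (H[i].T @ (y[i] - H[i] @ x[i]))          # innovations term
         for i in range(4)]
```

With constant gains the estimates settle in a small neighborhood of the true parameter; decaying gains would give the almost-sure consistency the paper proves.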

  8. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE), and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
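Two of the compared estimators for the two-parameter (location-scale) exponential distribution are simple enough to sketch directly; the sample values below are invented:

```python
def exp2_mle(sample):
    """Maximum-likelihood estimates for the two-parameter exponential:
    location = smallest observation, scale = mean excess over it."""
    loc = min(sample)
    scale = sum(sample) / len(sample) - loc
    return loc, scale

def exp2_mme(sample):
    """Moment estimators: match mean = loc + scale and sd = scale."""
    n = len(sample)
    mean = sum(sample) / n
    sd = (sum((x - mean) ** 2 for x in sample) / n) ** 0.5
    return mean - sd, sd

loc_hat, scale_hat = exp2_mle([2.1, 2.5, 3.0, 4.2])
```

The MLE of the location parameter is biased upward (the sample minimum always exceeds the true threshold), which is what the "modified" variants in the paper correct for.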

  9. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...

  10. Science yield estimation for AFTA coronagraphs

    Science.gov (United States)

    Traub, Wesley A.; Belikov, Ruslan; Guyon, Olivier; Kasdin, N. Jeremy; Krist, John; Macintosh, Bruce; Mennesson, Bertrand; Savransky, Dmitry; Shao, Michael; Serabyn, Eugene; Trauger, John

    2014-08-01

    We describe the algorithms and results of an estimation of the science yield for five candidate coronagraph designs for the WFIRST-AFTA space mission. The targets considered are of three types, known radial-velocity planets, expected but as yet undiscovered exoplanets, and debris disks, all around nearby stars. The results of the original estimation are given, as well as those from subsequently updated designs that take advantage of experience from the initial estimates.

  11. Estimating Elevation Angles From SAR Crosstalk

    Science.gov (United States)

    Freeman, Anthony

    1994-01-01

    Scheme for processing polarimetric synthetic-aperture-radar (SAR) image data yields estimates of elevation angles along radar beam to target resolution cells. By use of estimated elevation angles, measured distances along radar beam to targets (slant ranges), and measured altitude of aircraft carrying SAR equipment, one can estimate height of target terrain in each resolution cell. Monopulselike scheme yields low-resolution topographical data.

  12. Robust motion estimation using connected operators

    OpenAIRE

    Salembier Clairon, Philippe Jean; Sanson, H

    1997-01-01

This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied on the entire frame or on individual connected c...

  13. Application of spreadsheet to estimate infiltration parameters

    OpenAIRE

    Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed

    2016-01-01

Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and designing of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach ...

  14. Dynamic Diffusion Estimation in Exponential Family Models

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords: diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf

  15. State energy data report 1994: Consumption estimates

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included.

  16. Self-learning estimation of quantum states

    International Nuclear Information System (INIS)

    Hannemann, Th.; Reiss, D.; Balzer, Ch.; Neuhauser, W.; Toschek, P.E.; Wunderlich, Ch.

    2002-01-01

We report the experimental estimation of arbitrary qubit states using a succession of N measurements on individual qubits, where the measurement basis is changed during the estimation procedure conditioned on the outcome of previous measurements (self-learning estimation). Two hyperfine states of a single trapped ¹⁷¹Yb⁺ ion serve as a qubit. It is demonstrated that the difference in fidelity between this adaptive strategy and passive strategies increases in the presence of decoherence.

  17. Estimation of Correlation Functions by Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique, estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement...... and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique....

  18. State energy data report 1994: Consumption estimates

    International Nuclear Information System (INIS)

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included

  19. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

Autonomous unmanned aerial vehicle (UAV) systems depend on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One of the sensor configurations used in UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  20. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  1. Outer planet probe cost estimates: First impressions

    Science.gov (United States)

    Niehoff, J.

    1974-01-01

    An examination was made of early estimates of outer planetary atmospheric probe cost by comparing the estimates with past planetary projects. Of particular interest is identification of project elements which are likely cost drivers for future probe missions. Data are divided into two parts: first, the description of a cost model developed by SAI for the Planetary Programs Office of NASA, and second, use of this model and its data base to evaluate estimates of probe costs. Several observations are offered in conclusion regarding the credibility of current estimates and specific areas of the outer planet probe concept most vulnerable to cost escalation.

  2. Application of spreadsheet to estimate infiltration parameters

    Directory of Open Access Journals (Sweden)

    Mohammad Zakwan

    2016-09-01

Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and designing of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rate available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than the Horton and Philip models in most of the cases, based on both approaches of parameter estimation.
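The nonlinear fit that a GRG-type solver performs can be imitated with a plain gradient descent; the infiltration data below are invented, and the Kostiakov rate form f(t) = k·t^(−a) is assumed:

```python
import math

# Hypothetical infiltration observations: time (h) and rate f (mm/h)
t_obs = [0.25, 0.5, 1.0, 2.0, 4.0]
f_obs = [40.0, 28.5, 20.0, 14.2, 10.0]

# Starting values from the graphical method: ordinary least squares
# on the linearized model log f = log k - a log t
n = len(t_obs)
lt = [math.log(t) for t in t_obs]
lf = [math.log(f) for f in f_obs]
mt, mf = sum(lt) / n, sum(lf) / n
a = -sum((u - mt) * (v - mf) for u, v in zip(lt, lf)) / sum((u - mt) ** 2 for u in lt)
k = math.exp(mf + a * mt)

# Solver-style refinement: minimize the sum of squared errors in the
# original (untransformed) space by gradient descent
for _ in range(5000):
    gk = ga = 0.0
    for t, f in zip(t_obs, f_obs):
        err = k * t ** (-a) - f
        gk += 2 * err * t ** (-a)
        ga += 2 * err * (-k * math.log(t) * t ** (-a))
    k -= 1e-4 * gk
    a -= 1e-4 * ga
```

The difference between the two stages mirrors the paper's comparison: the log-log fit minimizes error in transformed space, while the refinement, like GRG, minimizes error in the measured infiltration rates themselves.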

  3. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
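The link between quantile estimation and a trainable objective is the pinball (check) loss; a scalar version of the idea, with subgradient descent standing in for a neural network, can be sketched as:

```python
def pinball_loss(y, q, tau):
    """Check loss whose expected value is minimized at the tau-quantile."""
    return tau * (y - q) if y >= q else (tau - 1) * (y - q)

def fit_constant_quantile(ys, tau, lr=0.1, steps=10_000):
    """Estimate the tau-quantile of a sample by subgradient descent on
    the average pinball loss (a scalar stand-in for a quantile network)."""
    q = sum(ys) / len(ys)
    for _ in range(steps):
        grad = sum(-tau if y > q else 1 - tau for y in ys) / len(ys)
        q -= lr * grad
    return q

# Illustrative run: the 0.9-quantile of the integers 1..100
q90 = fit_constant_quantile([float(i) for i in range(1, 101)], tau=0.9)
```

A quantile network replaces the constant q with a function of the inputs and minimizes the same loss, which is why consistency arguments carry over from kernel estimation.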

  4. Track length estimation applied to point detectors

    International Nuclear Information System (INIS)

    Rief, H.; Dubi, A.; Elperin, T.

    1984-01-01

    The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward

  5. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R., E-mail: michaeltdh@physics.ucsb.edu, E-mail: cgwinn@physics.ucsb.edu [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2013-03-10

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  6. Linear Covariance Analysis and Epoch State Estimators

    Science.gov (United States)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
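
    One simple special case of the batch/sequential equivalence discussed above: for a constant state with no process noise, a sequential (Kalman) filter started from a diffuse prior reproduces the batch least-squares estimate. The scalar setup below is an assumed illustration, not the paper's generalized covariance analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = 3.0
R = 0.25                                  # measurement noise variance
z = x_true + rng.normal(0.0, 0.5, 20)     # batch of 20 measurements

# Batch least squares with H = 1 for every measurement: the sample mean
x_batch = z.mean()

# Sequential Kalman filter with a diffuse prior and zero process noise
x_hat, P = 0.0, 1e9
for zk in z:
    K = P / (P + R)                       # Kalman gain
    x_hat += K * (zk - x_hat)             # measurement update
    P *= (1.0 - K)                        # covariance update
```

    With nonzero process noise this equivalence breaks, which is exactly why the paper's treatment of process noise in the epoch state estimator's gains matters.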

  7. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....

  8. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    2015-01-01

    From Crofton’s formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....

  9. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    International Nuclear Information System (INIS)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R.

    2013-01-01

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  10. Load Estimation from Natural input Modal Analysis

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Canteli, Alfonso Fernández

    2005-01-01

    One application of Natural Input Modal Analysis consists in estimating the unknown load acting on structures such as wind loads, wave loads, traffic loads, etc. In this paper, a procedure to determine loading from a truncated modal model, as well as the results of an experimental testing programme...... estimation. In the experimental program a small structure subjected to vibration was used to estimate the loading from the measurements and the experimental modal space. The modal parameters were estimated by Natural Input Modal Analysis and the scaling factors of the mode shapes obtained by the mass change...

  11. Towards Greater Harmonisation of Decommissioning Cost Estimates

    International Nuclear Information System (INIS)

    O'Sullivan, Patrick; ); Laraia, Michele; ); LaGuardia, Thomas S.

    2010-01-01

    The NEA Decommissioning Cost Estimation Group (DCEG), in collaboration with the IAEA Waste Technology Section and the EC Directorate-General for Energy and Transport, has recently studied cost estimation practices in 12 countries - Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Slovakia, Spain, Sweden, the United Kingdom and the United States. Its findings are to be published in an OECD/NEA report entitled Cost Estimation for Decommissioning: An International Overview of Cost Elements, Estimation Practices and Reporting Requirements. This booklet highlights the findings contained in the full report. (authors)

  12. Detection of Irrigated Crops from Sentinel-1 and Sentinel-2 Data to Estimate Seasonal Groundwater Use in South India

    Directory of Open Access Journals (Sweden)

    Sylvain Ferrant

    2017-11-01

    Indian agriculture relies on monsoon rainfall and on irrigation from surface water and groundwater. The interannual variability of monsoon rainfall is high, which forces South Indian farmers to adapt their irrigated areas to local water availability. In this study, we have developed and tested a methodology for monitoring these spatiotemporal variations using Sentinel-1 and -2 observations over the Kudaliar catchment, Telangana State (~1000 km2). These free radar and optical data have been acquired since 2015 on a weekly basis over continental areas, at a high spatial resolution (10–20 m) that is well adapted to the small field sizes of South Indian crops. A machine learning algorithm, the Random Forest method, was used over three growing seasons (January to March and July to November 2016, and January to March 2017) to classify small patches of inundated rice paddy, maize, and other irrigated crops, as well as surface water stored in the small reservoirs scattered across the landscape. Crop production comprises only irrigated crops (less than 20% of the area) during the dry season (Rabi, December to March); rain-fed cotton is added during the monsoon season (Kharif, June to November), bringing the total to 60% of the area. Sentinel-1 radar backscatter provides useful observations during the cloudy monsoon season. The lowest irrigated-area totals were found during Rabi 2016 and Kharif 2016, accounting for 3.5% and 5% respectively, with moderate classification confusion. This confusion decreases with the larger irrigated areas of Rabi 2017; during that season, 16% rice and 6% other irrigated crops were detected after the exceptional rainfalls observed in September. Surface water in small reservoirs reached 3% of the total area, which is a high value. The use of both Sentinel datasets improves the method's accuracy and strengthens our confidence in the resulting maps. This methodology shows the potential of automatically monitoring, in near...

  13. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p … for Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in the hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
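
    The simplest of the three methods, linear arc distance, can be sketched directly: great-circle distance between two coordinates converted to a drive time at an assumed average speed. Both the 40 km/h default and the function name below are illustrative assumptions, not values from the study.

```python
import math

def linear_arc_minutes(lat1, lon1, lat2, lon2, speed_kmh=40.0):
    """Great-circle ("linear arc") distance via the haversine formula,
    converted to minutes at an assumed average transport speed."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist_km = 2.0 * r * math.asin(math.sqrt(a))
    return 60.0 * dist_km / speed_kmh
```

    Because the arc ignores the road network, this method's error is the largest of the three in the study, which is consistent with its geometry: straight-line distance is a lower bound on route distance.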

  14. Cost Estimating Handbook for Environmental Restoration

    International Nuclear Information System (INIS)

    1993-01-01

    Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: an entirely new set of specialized expressions and terminology; a higher than normal exposure to cost and schedule risk, compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work; a higher than normal percentage of indirect costs in the total estimated cost, due primarily to record keeping, special training, liability, and indemnification; and more than one estimate per project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices, and in the coming years the EM program will be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements, since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: the basis for accurate, consistent, and traceable baselines; sound methodologies, guidelines, and estimating formats; and sources of cost data/databases and estimating tools and techniques available to DOE cost professionals.

  15. L’estime de soi : un cas particulier d’estime sociale ?

    OpenAIRE

    Santarelli, Matteo

    2016-01-01

    One of the most original features of Axel Honneth's intersubjective theory of recognition is the way it treats the relation between social esteem and self-esteem. In particular, Honneth presents self-esteem as a reflection of social esteem at the individual level. In this article, I discuss this conception by posing the following question: is self-esteem a particular case of social esteem? To do so, I focus on two crucial problem...

  16. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  17. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…

  18. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
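
    The RDD technique itself is simple to sketch: average the response segments that start whenever the signal up-crosses a trigger level; the resulting signature decays like the autocorrelation function. The AR(2) surrogate for the white-noise-loaded single-degree-of-freedom system, the one-standard-deviation trigger, and the segment length below are all illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

# Simulate a lightly damped response as an AR(2) process driven by white noise
rng = np.random.default_rng(3)
n, a1, a2 = 100_000, 1.8, -0.9
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]

# Random Decrement signature: average segments starting at up-crossings
# of a trigger level (here one standard deviation of the response)
level, seg_len = y.std(), 50
starts = np.where((y[:-1] < level) & (y[1:] >= level))[0] + 1
starts = starts[starts + seg_len < n]
rdd = np.mean([y[s:s + seg_len] for s in starts], axis=0)
```

    The signature starts near the trigger level and decays toward zero; the paper's empirical study concerns how accurately spectra estimated from such signatures match the true ones.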

  19. Spectral Estimation by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, J. Laigaard; Krenk, S.

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...

  20. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
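
    In the simplest setting, a single variance component with a known cofactor matrix, LS-VCE reduces to the classical variance-of-unit-weight estimator: for y = A x + e with cov(e) = sigma2 * Q, sigma2_hat = e_hat' Q^{-1} e_hat / (m - n). The sketch below illustrates that special case under assumed data (Q taken as the identity); the general multi-component LS-VCE machinery is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n_par, sigma2 = 500, 3, 2.0

# linear model y = A x + e with cov(e) = sigma2 * Q, Q known
A = rng.standard_normal((m, n_par))
Q = np.eye(m)
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + rng.normal(0.0, np.sqrt(sigma2), m)

# ordinary least squares, then the variance-of-unit-weight estimate
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
e_hat = y - A @ x_hat
sigma2_hat = e_hat @ np.linalg.solve(Q, e_hat) / (m - n_par)
```

    Dividing by the redundancy m - n rather than m makes the estimate unbiased, which is the "well-known principle of LS" flexibility the abstract alludes to.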