WorldWideScience

Sample records for maximum temperatures approaching

  1. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
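
    As a rough illustration of the energy-balance argument above, the following Python sketch solves a steady-state surface energy balance for the surface temperature using the quoted absorbed shortwave flux, screen air temperature and soil conductivity; the emissivity, transfer coefficient, downwelling longwave and sub-surface terms are assumed values chosen only for illustration, not taken from the paper.

```python
# A minimal, illustrative sketch (not the paper's model): solve a steady-state
# surface energy balance for the surface temperature T_s,
#   S_abs + L_down = eps*sigma*T_s**4 + h*(T_s - T_air) + (k/d)*(T_s - T_deep)
# Values other than S_abs, T_air and k are assumptions chosen for illustration.
from scipy.optimize import brentq

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(S_abs=1000.0,          # absorbed shortwave flux, W m^-2 (from the abstract)
                        T_air=55 + 273.15,     # screen air temperature, K (from the abstract)
                        k=0.15,                # soil thermal conductivity, W m^-1 K^-1 (abstract range)
                        L_down=400.0,          # assumed downwelling longwave, W m^-2
                        eps=0.95,              # assumed surface emissivity
                        h=10.0,                # assumed sensible-heat transfer coefficient, W m^-2 K^-1
                        d=0.05,                # assumed depth scale of soil heat conduction, m
                        T_deep=45 + 273.15):   # assumed sub-surface temperature, K
    def residual(T_s):
        return (S_abs + L_down
                - eps * SIGMA * T_s**4
                - h * (T_s - T_air)
                - (k / d) * (T_s - T_deep))
    return brentq(residual, 250.0, 450.0)      # root of the balance in kelvin

if __name__ == "__main__":
    T_s = surface_temperature()
    print(f"Illustrative surface temperature: {T_s - 273.15:.1f} degC")  # roughly 90 degC
```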

  2. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system that detects the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature (PTAT) sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  3. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  4. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41°C to 76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation, while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and estimating past cabin temperatures for use in forensic analyses.
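
    The abstract's first predictive model regresses maximum cabin temperature on maximum ambient air temperature and average daily solar radiation. The sketch below shows one hedged way such a model could be fitted and applied; the data arrays and the resulting coefficients are placeholders, not the Athens, GA observations or the published model.

```python
# Illustrative sketch only: a multiple linear regression of daily maximum cabin
# temperature on maximum ambient air temperature and mean daily solar radiation.
import numpy as np

# Placeholder daily observations (not the study's data).
t_air_max = np.array([24.0, 28.5, 31.0, 33.5, 35.0])    # deg C
solar_rad = np.array([180., 260., 310., 150., 330.])     # W m^-2 daily mean
t_cabin_max = np.array([48.0, 58.0, 65.0, 52.0, 71.0])   # deg C

# Design matrix with an intercept column, fitted by least squares.
X = np.column_stack([np.ones_like(t_air_max), t_air_max, solar_rad])
coef, *_ = np.linalg.lstsq(X, t_cabin_max, rcond=None)
b0, b1, b2 = coef

def predict_cabin_max(t_air, rad):
    """Predicted maximum cabin temperature (deg C) for given inputs."""
    return b0 + b1 * t_air + b2 * rad

print(round(predict_cabin_max(32.0, 280.0), 1))
```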

  5. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  6. New results on the mid-latitude midnight temperature maximum

    Science.gov (United States)

    Mesquita, Rafael L. A.; Meriwether, John W.; Makela, Jonathan J.; Fisher, Daniel J.; Harding, Brian J.; Sanders, Samuel C.; Tesema, Fasil; Ridley, Aaron J.

    2018-04-01

    Fabry-Perot interferometer (FPI) measurements of thermospheric temperatures and winds show the detection and successful determination of the latitudinal distribution of the midnight temperature maximum (MTM) in the continental mid-eastern United States. These results were obtained through the operation of the five FPI observatories in the North American Thermosphere Ionosphere Observing Network (NATION) located at the Pisgah Astronomical Research Institute (PAR) (35.2° N, 82.8° W), Virginia Tech (VTI) (37.2° N, 80.4° W), Eastern Kentucky University (EKU) (37.8° N, 84.3° W), Urbana-Champaign (UAO) (40.2° N, 88.2° W), and Ann Arbor (ANN) (42.3° N, 83.8° W). A new approach for analyzing the MTM phenomenon is developed, which combines harmonic thermal background removal with a 2-D inversion algorithm to generate sequential 2-D temperature residual maps at 30 min intervals. The simultaneous study of the temperature data from these FPI stations represents a novel analysis of the MTM and its large-scale latitudinal and longitudinal structure. The major finding in examining these maps is the frequent detection of a secondary MTM peak occurring during the early evening hours, nearly 4.5 h prior to the timing of the primary MTM peak that generally appears after midnight. The analysis of these observations shows a strong night-to-night variability for this double-peaked MTM structure. A statistical study of the behavior of the MTM events was carried out to determine the extent of this variability with regard to seasonal and latitudinal dependence. The results show the presence of the MTM peak(s) in 106 (22 %) of the 472 determinable nights (nights for which the presence or absence of the MTM can be established with certainty in the data set) selected for analysis out of the total of 846 nights available. The MTM feature is seen to appear slightly more often during the summer (27 %), followed by fall (22 %), winter (20 %), and spring

  7. Mid-depth temperature maximum in an estuarine lake

    Science.gov (United States)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

    The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to this case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer that is sharp enough to prevent the temperature increase with depth from causing convective mixing and double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum ‘pumping’, resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above listed mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as a shallow mixed layer, weak wind and cloudless weather. We exemplify the effect of mixed-layer depth on TeM with a set of selected lakes.

  8. Device for determining the maximum temperature of an environment

    International Nuclear Information System (INIS)

    Cartier, Louis.

    1976-01-01

    This invention concerns a device for determining the maximum temperature of an environment. Its main characteristic is a central cylindrical rod on which two identical tubes can slide; their facing ends are placed end to end and their far ends are shaped to provide sliding friction along the rod. The rod and tubes are made of materials with different coefficients of linear expansion. The far ends are formed as tongs whose fingers, fitted with claws, bear on the central rod. Because of this arrangement, the two tubes, placed end to end when fitted, can expand under the effect of a rise in the temperature of the environment into which the device is introduced, so that the distance between the two far ends increases. This distance is greatest when the device reaches its highest temperature. The far ends are shaped to allow the tubes to slide under the effect of expansion but to prevent sliding in the opposite direction when the device is taken back into the open air and the temperature drops back to ambient. The tubes therefore tend to return to their initial length, and the ends that were placed end to end when fitted now have a gap between them. Measurement of this gap makes it possible to determine the maximum temperature sought [fr]

  9. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  10. Future changes over the Himalayas: Maximum and minimum temperature

    Science.gov (United States)

    Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.

    2018-03-01

    An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment - South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and its long-term trend under different RCPs, along with the elevation dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trend and probability distribution function, are carried out to detect the signals of climate change. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space, time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all the seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing under a range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreements in the magnitude of trend between different models describe the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at the higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. The combined effect of rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) portrays an increasing trend across the entire area with

  11. Maximum surface level and temperature histories for Hanford waste tanks

    International Nuclear Information System (INIS)

    Flanagan, B.D.; Ha, N.D.; Huisingh, J.S.

    1994-01-01

    Radioactive defense waste resulting from the chemical processing of spent nuclear fuel has been accumulating at the Hanford Site since 1944. This waste is stored in underground waste-storage tanks. The Hanford Site Tank Farm Facilities Interim Safety Basis (ISB) provides a ready reference to the safety envelope for applicable tank farm facilities and installations. During preparation of the ISB, tank structural integrity concerns were identified as a key element in defining the safety envelope. These concerns, along with several deficiencies in the technical bases associated with the structural integrity issues and the corresponding operational limits/controls specified for the conduct of normal tank farm operations, are documented in the ISB. Consequently, a plan was initiated to upgrade the safety envelope technical bases by conducting Accelerated Safety Analyses-Phase 1 (ASA-Phase 1) sensitivity studies and additional structural evaluations. The purpose of this report is to facilitate the ASA-Phase 1 studies and future analyses of the single-shell tanks (SSTs) and double-shell tanks (DSTs) by compiling a quantitative summary of some of the past operating conditions the tanks have experienced during their existence. This report documents the available summaries of recorded maximum surface levels and maximum waste temperatures and references other sources for more specific data

  12. Impact of soil moisture on extreme maximum temperatures in Europe

    Directory of Open Access Journals (Sweden)

    Kirien Whan

    2015-09-01

    Land-atmosphere interactions play an important role in hot temperature extremes in Europe. Dry soils may amplify such extremes through feedbacks with evapotranspiration. While previous observational studies generally focused on the relationship between precipitation deficits and the number of hot days, we investigate here the influence of soil moisture (SM) on summer monthly maximum temperatures (TXx) using water balance model-based SM estimates (driven with observations) and temperature observations. Generalized extreme value distributions are fitted to TXx using SM as a covariate. We identify a negative relationship between SM and TXx, whereby a 100 mm decrease in model-based SM is associated with a 1.6 °C increase in TXx in Southern-Central and Southeastern Europe. Dry SM conditions result in a 2–4 °C increase in the 20-year return value of TXx compared to wet conditions in these two regions. In contrast with SM impacts on the number of hot days (NHD), where low and high surface-moisture conditions lead to different variability, we find a mostly linear dependency of the 20-year return value on surface-moisture conditions. We attribute this difference to the non-linear relationship between TXx and NHD that stems from the threshold-based calculation of NHD. Furthermore, the employed SM data and the Standardized Precipitation Index (SPI) are only weakly correlated in the investigated regions, highlighting the importance of evapotranspiration and runoff for the resulting SM. Finally, in a case study of the hot 2003 summer, we illustrate that if 2003 spring conditions in Southern-Central Europe had been as dry as in the more recent 2011 event, temperature extremes in summer would have been higher by about 1 °C, further enhancing the already extreme conditions which prevailed in that year.
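
    The following sketch illustrates, under stated assumptions, the kind of covariate GEV fit described above: the location parameter is modelled as a linear function of soil moisture and all parameters are estimated by maximum likelihood. The data are synthetic placeholders and the code is not the authors' implementation.

```python
# A minimal sketch: fit a GEV to monthly maximum temperature TXx with the
# location parameter a linear function of a soil-moisture covariate,
# mu(SM) = mu0 + mu1*SM, by numerical maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# Synthetic placeholder data: 60 summer months of soil moisture (mm) and TXx (degC).
rng = np.random.default_rng(0)
sm = rng.uniform(50.0, 250.0, size=60)
txx = 38.0 - 0.016 * sm + genextreme.rvs(c=0.1, scale=1.5, size=60, random_state=1)

def neg_log_lik(params, x, cov):
    mu0, mu1, log_sigma, c = params
    mu = mu0 + mu1 * cov                   # covariate-dependent location
    sigma = np.exp(log_sigma)              # keep the scale positive
    ll = genextreme.logpdf(x, c, loc=mu, scale=sigma)
    return -np.sum(ll) if np.all(np.isfinite(ll)) else 1e10

res = minimize(neg_log_lik, x0=[np.mean(txx), 0.0, 0.0, 0.1],
               args=(txx, sm), method="Nelder-Mead", options={"maxiter": 5000})
mu0, mu1, log_sigma, c = res.x
print(f"location sensitivity to SM: {100 * mu1:.2f} degC per 100 mm")
```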

  13. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and achieve the highest F-score for the fine-grained English All-Words subtask.

  14. Operational forecasting of daily temperatures in the Valencia Region. Part I: maximum temperatures in summer.

    Science.gov (United States)

    Gómez, I.; Estrela, M.

    2009-09-01

    Extreme temperature events have a great impact on human society. Knowledge of summer maximum temperatures is very useful for both the general public and organisations whose workers have to operate in the open, e.g. railways, roadways, tourism, etc. Moreover, summer maximum daily temperatures are considered a parameter of interest and concern since persistent heat-waves can affect areas as diverse as public health, energy consumption, etc. Thus, accurate forecasting of these temperatures could help predict heat-wave conditions and permit the implementation of strategies aimed at minimizing the negative effects that high temperatures have on human health. The aim of this work is to evaluate the skill of the RAMS model in determining daily maximum temperatures during summer over the Valencia Region. For this, we have used the real-time configuration of this model currently running at the CEAM Foundation. To carry out the model verification process, we have analysed not only the global behaviour of the model for the whole Valencia Region, but also its behaviour for the individual stations distributed within this area. The study has been performed for the summer forecast period of 1 June - 30 September, 2007. The results obtained are encouraging and indicate a good agreement between the observed and simulated maximum temperatures. Moreover, the model captures quite well the temperatures in the extreme heat episodes. Acknowledgement. This work was supported by "GRACCIE" (CSD2007-00067, Programa Consolider-Ingenio 2010), by the Spanish Ministerio de Educación y Ciencia, contract number CGL2005-03386/CLI, and by the Regional Government of Valencia Conselleria de Sanitat, contract "Simulación de las olas de calor e invasiones de frío y su regionalización en la Comunidad Valenciana" ("Heat wave and cold invasion simulation and their regionalization at Valencia Region"). The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (Valencia, Spain).

  15. The Maximum Cross-Correlation approach to detecting translational motions from sequential remote-sensing images

    Science.gov (United States)

    Gao, J.; Lythe, M. B.

    1996-06-01

    This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
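
    A much-simplified NumPy sketch of the MCC idea follows (the original implementation is the C program mentioned above): for a template window in the first image, the displacement is taken as the lag that maximizes the normalized cross-correlation with the second image.

```python
# Simplified sketch of the Maximum Cross-Correlation (MCC) approach: search a
# neighbourhood of the second image for the shift of a template window that
# maximizes the normalized cross-correlation; that shift is the displacement.
import numpy as np

def mcc_displacement(img1, img2, y0, x0, win=16, search=8):
    """Return (dy, dx) displacement of the window at (y0, x0) from img1 to img2."""
    tpl = img1[y0:y0 + win, x0:x0 + win].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sub = img2[y0 + dy:y0 + dy + win, x0 + dx:x0 + dx + win].astype(float)
            if sub.shape != tpl.shape:
                continue  # window fell outside the image
            sub = (sub - sub.mean()) / (sub.std() + 1e-12)
            corr = np.mean(tpl * sub)
            if corr > best:
                best, best_shift = corr, (dy, dx)
    return best_shift

# Toy example: a feature shifted by (2, 3) pixels between frames.
a = np.zeros((64, 64)); a[20:30, 20:30] = 1.0
b = np.roll(np.roll(a, 2, axis=0), 3, axis=1)
print(mcc_displacement(a, b, 18, 18))   # expected (2, 3)
```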

  16. Statistical assessment of changes in extreme maximum temperatures over Saudi Arabia, 1985-2014

    Science.gov (United States)

    Raggad, Bechir

    2018-05-01

    In this study, two statistical approaches were adopted in the analysis of observed maximum temperature data collected from fifteen stations over Saudi Arabia during the period 1985-2014. In the first step, the behavior of extreme temperatures was analyzed and their changes were quantified with respect to the Expert Team on Climate Change Detection, Monitoring and Indices indices. The results showed a general warming trend over most stations in maximum temperature-related indices during the period of analysis. In the second step, stationary and non-stationary extreme-value analyses were conducted for the temperature data. The results revealed that a non-stationary model with an increasing linear trend in its location parameter outperforms the other models for two-thirds of the stations. Additionally, the 10-, 50-, and 100-year return levels were found to change considerably with time, so that a given maximum temperature may recur with a different T-year return period at most stations. This analysis shows the importance of taking into account the change over time in the estimation of return levels and therefore justifies the use of the non-stationary generalized extreme value distribution model to describe most of the data. Furthermore, these findings are in line with the significant warming trends found in the climate indices analyses.
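
    To make the return-level statement concrete, the sketch below computes T-year return levels from a GEV whose location parameter carries a linear time trend, and compares them at the start and end of a record. The parameter values are hypothetical, not the fitted values from the study.

```python
# Hedged illustration: T-year return levels of a non-stationary GEV with a
# linearly trending location parameter mu(t) = mu0 + trend * t.
from scipy.stats import genextreme

def return_level(T, mu, sigma, c):
    """Level exceeded on average once every T years (scipy shape convention c)."""
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=mu, scale=sigma)

# Hypothetical parameter values for one station (deg C); trend in degC per year.
mu0, trend, sigma, c = 43.0, 0.04, 1.2, 0.1

for T in (10, 50, 100):
    z_start = return_level(T, mu0, sigma, c)
    z_end = return_level(T, mu0 + trend * 29, sigma, c)   # 30-year record: t = 0..29
    print(f"{T:>3}-yr return level: {z_start:.1f} -> {z_end:.1f} degC")
```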

  17. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  18. A Maximum Entropy Approach to Loss Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marco Bee

    2013-03-01

    In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insights into the usefulness of the method for modelling, estimating and simulating loss distributions.
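
    A minimal sketch of the maximum-entropy construction (not the paper's AIS algorithm) is given below: the ME density subject to constraints on the first two moments is obtained by minimizing the dual objective log Z(λ) - λ·m on a bounded grid. The support, target moments and two-moment choice are assumptions made for illustration.

```python
# Sketch under simplifying assumptions: maximum-entropy density on a bounded
# support subject to first- and second-moment constraints, found by minimizing
# the dual objective log Z(lam) - lam . m. The solution has the form
# p(x) = exp(lam1*x + lam2*x**2) / Z(lam).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 20.0, 2001)          # assumed bounded support for losses
dx = x[1] - x[0]
m_target = np.array([4.0, 22.0])          # target E[X], E[X^2] (placeholder values)
F = np.vstack([x, x**2])                  # moment constraint functions

def dual(lam):
    """Dual objective log Z(lam) - lam . m, minimized at the ME solution."""
    u = lam @ F
    umax = u.max()                        # numerical stabilisation of the exponent
    log_z = umax + np.log(np.sum(np.exp(u - umax)) * dx)
    return log_z - lam @ m_target

res = minimize(dual, x0=np.array([0.1, -0.05]), method="Nelder-Mead",
               options={"maxiter": 2000})
u = res.x @ F
p = np.exp(u - u.max())
p /= p.sum() * dx                         # normalized ME density on the grid
print("fitted moments:", F @ p * dx)      # should be approximately m_target
```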

  19. Investigation on maximum transition temperature of phonon mediated superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Fusui, L; Yi, S; Yinlong, S [Physics Department, Beijing University (CN)]

    1989-05-01

    Three model effective phonon spectra are proposed to obtain plots of T_c-ω and λ-ω. It can be concluded that there is no maximum limit of T_c in phonon-mediated superconductivity for reasonable values of λ. The importance of the high-frequency LO phonon is also emphasized. Some discussions on high T_c are given.

  20. Maximum weight of greenhouse effect to global temperature variation

    International Nuclear Information System (INIS)

    Sun, Xian; Jiang, Chuangye

    2007-01-01

    The global average temperature has risen by 0.74 °C since the late 19th century. Many studies have concluded that the observed warming in the last 50 years may be attributed to increasing concentrations of anthropogenic greenhouse gases, but some scientists hold a different point of view. Global climate change is affected not only by anthropogenic activities but also by natural factors within the climate system. How much does the greenhouse effect of CO2 contribute to global temperature variation? Will the global climate continue to warm or begin to cool in the next 20 years? These are two central questions in global climate change research. A multi-timescale analysis method, empirical mode decomposition (EMD), is used to diagnose the global annual mean land surface air temperature dataset provided by the IPCC and the atmospheric CO2 content provided by the Carbon Dioxide Information Analysis Center (CDIAC) during 1881-2002. The results show that global temperature variation contains quasi-periodic oscillations on four timescales (3 yr, 6 yr, 20 yr and 60 yr, respectively) and a century-scale warming trend. The variance contributions of IMF1-IMF4 and the trend are 17.55%, 11.34%, 6.77%, 24.15% and 40.19%, respectively. The trend and the quasi-60 yr oscillation of temperature variation are the most prominent; CO2's greenhouse effect on global temperature variation appears mainly in the century-scale trend. The contribution of CO2 concentration to global temperature variability is not more than 40.19%, whereas 59.81% of the variation is attributable to non-greenhouse effects. Therefore, it is necessary to re-study the dominant factors that induce global climate change. It has also been noticed that, on the basis of the 20 yr and 60 yr oscillations, global temperature may begin to decrease in the next 20 years. If the present CO2 concentration is maintained, the greenhouse effect will be too small to countercheck the natural variation in global climate cooling in the next 20

  1. Decadal trends in Red Sea maximum surface temperature.

    Science.gov (United States)

    Chaidez, V; Dreano, D; Agusti, S; Duarte, C M; Hoteit, I

    2017-08-15

    Ocean warming is a major consequence of climate change, with the surface of the ocean having warmed by 0.11 °C decade⁻¹ over the last 50 years and is estimated to continue to warm by an additional 0.6 - 2.0 °C before the end of the century¹. However, there is considerable variability in the rates experienced by different ocean regions, so understanding regional trends is important to inform on possible stresses for marine organisms, particularly in warm seas where organisms may be already operating in the high end of their thermal tolerance. Although the Red Sea is one of the warmest ecosystems on earth, its historical warming trends and thermal evolution remain largely understudied. We characterized the Red Sea's thermal regimes at the basin scale, with a focus on the spatial distribution and changes over time of sea surface temperature maxima, using remotely sensed sea surface temperature data from 1982 - 2015. The overall rate of warming for the Red Sea is 0.17 ± 0.07 °C decade⁻¹, while the northern Red Sea is warming between 0.40 and 0.45 °C decade⁻¹, all exceeding the global rate. Our findings show that the Red Sea is fast warming, which may in the future challenge its organisms and communities.

  2. Decadal trends in Red Sea maximum surface temperature

    KAUST Repository

    Chaidez, Veronica

    2017-08-09

    Ocean warming is a major consequence of climate change, with the surface of the ocean having warmed by 0.11 °C decade⁻¹ over the last 50 years and is estimated to continue to warm by an additional 0.6 - 2.0 °C before the end of the century¹. However, there is considerable variability in the rates experienced by different ocean regions, so understanding regional trends is important to inform on possible stresses for marine organisms, particularly in warm seas where organisms may be already operating in the high end of their thermal tolerance. Although the Red Sea is one of the warmest ecosystems on earth, its historical warming trends and thermal evolution remain largely understudied. We characterized the Red Sea's thermal regimes at the basin scale, with a focus on the spatial distribution and changes over time of sea surface temperature maxima, using remotely sensed sea surface temperature data from 1982 - 2015. The overall rate of warming for the Red Sea is 0.17 ± 0.07 °C decade⁻¹, while the northern Red Sea is warming between 0.40 and 0.45 °C decade⁻¹, all exceeding the global rate. Our findings show that the Red Sea is fast warming, which may in the future challenge its organisms and communities.

  3. Decadal trends in Red Sea maximum surface temperature

    KAUST Repository

    Chaidez, Veronica; Dreano, Denis; Agusti, Susana; Duarte, Carlos M.; Hoteit, Ibrahim

    2017-01-01

    Ocean warming is a major consequence of climate change, with the surface of the ocean having warmed by 0.11 °C decade⁻¹ over the last 50 years and is estimated to continue to warm by an additional 0.6 - 2.0 °C before the end of the century¹. However, there is considerable variability in the rates experienced by different ocean regions, so understanding regional trends is important to inform on possible stresses for marine organisms, particularly in warm seas where organisms may be already operating in the high end of their thermal tolerance. Although the Red Sea is one of the warmest ecosystems on earth, its historical warming trends and thermal evolution remain largely understudied. We characterized the Red Sea's thermal regimes at the basin scale, with a focus on the spatial distribution and changes over time of sea surface temperature maxima, using remotely sensed sea surface temperature data from 1982 - 2015. The overall rate of warming for the Red Sea is 0.17 ± 0.07 °C decade⁻¹, while the northern Red Sea is warming between 0.40 and 0.45 °C decade⁻¹, all exceeding the global rate. Our findings show that the Red Sea is fast warming, which may in the future challenge its organisms and communities.

  4. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process in which volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, OU and Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its mean first-passage time. The approach has some predictive power for the amplitude of future returns given only the current volatility. The assumed models do not consider long-range volatility autocorrelation or the asymmetric return-volatility cross-correlation, but the method still yields these two important stylized facts very naturally. We apply the method to different market indices with good performance in all cases. (paper)

  5. Assessment of extreme value distributions for maximum temperature in the Mediterranean area

    Science.gov (United States)

    Beck, Alexander; Hertig, Elke; Jacobeit, Jucundus

    2015-04-01

    Extreme maximum temperatures highly affect the natural as well as the societal environment. Heat stress has great effects on flora, fauna and humans and culminates in heat-related morbidity and mortality. Agriculture and various industries are severely affected by extreme air temperatures. Under climate change conditions it is even more necessary to detect potential hazards which arise from changes in the distributional parameters of extreme values, and this is especially relevant for the Mediterranean region, which is characterized as a climate change hot spot. Therefore statistical approaches are developed to estimate these parameters with a focus on non-stationarities emerging in the relationship between regional climate variables and their large-scale predictors such as sea level pressure, geopotential heights, atmospheric temperatures and relative humidity. Gridded maximum temperature data from the daily E-OBS dataset (Haylock et al., 2008) with a spatial resolution of 0.25° x 0.25° from January 1950 until December 2012 are the predictands for the present analyses. An s-mode principal component analysis (PCA) has been performed in order to reduce data dimension and to retain different regions of similar maximum temperature variability. The grid box with the highest PC loading represents the corresponding principal component. A central part of the analyses is the model development for temperature extremes using extreme value statistics. A combined model is derived consisting of a Generalized Pareto Distribution (GPD) model and a quantile regression (QR) model which determines the GPD location parameters. The QR model as well as the scale parameters of the GPD model are conditioned on various large-scale predictor variables. In order to account for potential non-stationarities in the predictor-temperature relationships, a special calibration and validation scheme is applied. Haylock, M. R., N. Hofstra, A. M. G. Klein Tank, E. J. Klok, P

  6. A new global reconstruction of temperature changes at the Last Glacial Maximum

    Directory of Open Access Journals (Sweden)

    J. D. Annan

    2013-02-01

    Some recent compilations of proxy data both on land and ocean (MARGO Project Members, 2009; Bartlein et al., 2011; Shakun et al., 2012) have provided a new opportunity for an improved assessment of the overall climatic state of the Last Glacial Maximum. In this paper, we combine these proxy data with the ensemble of structurally diverse state-of-the-art climate models which participated in the PMIP2 project (Braconnot et al., 2007) to generate a spatially complete reconstruction of surface air (and sea surface) temperatures. We test a variety of approaches, and show that multiple linear regression performs well for this application. Our reconstruction is significantly different from, and more accurate than, previous approaches, and we obtain an estimated global mean cooling of 4.0 ± 0.8 °C (95% CI).
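
    Schematically, the multiple linear regression step can be pictured as below: model-simulated LGM anomalies sampled at the proxy sites serve as predictors for the proxy values, and the fitted weights are then applied to the full model fields to fill the map. Dimensions and data are placeholders; this is not the authors' code or the PMIP2 data.

```python
# Schematic sketch of a multilinear combination of model fields constrained by
# proxy data (placeholder data, not the study's implementation).
import numpy as np

n_models, n_sites, n_grid = 7, 100, 5000                         # placeholder dimensions
rng = np.random.default_rng(42)
model_fields = rng.normal(-4.0, 2.0, size=(n_models, n_grid))    # model LGM anomalies
site_index = rng.choice(n_grid, size=n_sites, replace=False)     # grid cells with proxies
proxy = model_fields[:, site_index].mean(axis=0) + rng.normal(0, 0.5, n_sites)

# Regress proxy values on the model anomalies sampled at the proxy sites.
X = np.column_stack([np.ones(n_sites), model_fields[:, site_index].T])
w, *_ = np.linalg.lstsq(X, proxy, rcond=None)

# Apply the same weights everywhere to obtain a spatially complete field.
reconstruction = w[0] + w[1:] @ model_fields
print("global-mean LGM cooling estimate:", reconstruction.mean().round(2), "degC")
```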

  7. The Hengill geothermal area, Iceland: Variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G. R.

    1995-04-01

    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The likely strain rate calculated from thermal and tectonic considerations is 10⁻¹⁵ s⁻¹, and temperature measurements from four drill sites within the area indicate average, near-surface geothermal gradients of up to 150 °C km⁻¹ throughout the upper 2 km. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes located highly accurately by performing a simultaneous inversion for three-dimensional structure and hypocentral parameters. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. Beneath the high-temperature part of the geothermal area, the maximum depth of earthquakes may be as shallow as 4 km. The geothermal gradient below drilling depths in various parts of the area ranges from 84 ± 9 °C km⁻¹ within the low-temperature geothermal area of the transform zone to 138 ± 15 °C km⁻¹ below the centre of the high-temperature geothermal area. Shallow maximum depths of earthquakes and therefore high average geothermal gradients tend to correlate with the intensity of the geothermal area and not with the location of the currently active spreading axis.

  8. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, while this is not true for the minimum temperature series, so the two series are modelled separately. The possible SARIMA model has been chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard error of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through ACF, PACF, IACF, and p values of the Ljung-Box test statistic of residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and Q-Q plot. Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years have been produced with the help of the selected model.
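
    Assuming the statsmodels package, the named SARIMA (1, 0, 0) × (0, 1, 1)₁₂ specification can be sketched as below on a synthetic log-transformed monthly series; the data and the resulting forecasts are placeholders, not the Indian records.

```python
# Minimal sketch: fit a SARIMA(1,0,0)x(0,1,1)_12 model to a log-transformed
# monthly maximum-temperature series and forecast the next 36 months.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly series with an annual cycle, 1981-2015 (420 months).
months = np.arange(420)
tmax = 30 + 6 * np.sin(2 * np.pi * months / 12) \
         + np.random.default_rng(1).normal(0, 0.8, 420)

log_tmax = np.log(tmax)
model = SARIMAX(log_tmax, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)

forecast_log = fit.forecast(steps=36)          # next 3 years, on the log scale
forecast = np.exp(forecast_log)                # back-transform to deg C
print(fit.aic, fit.bic)                        # information criteria for model choice
print(forecast[:12].round(1))
```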

  9. New England observed and predicted Julian day of maximum growing season stream/river temperature points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted Julian day of maximum growing season stream/river temperatures in New England based on a spatial...

  10. New England observed and predicted growing season maximum stream/river temperature points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted growing season maximum stream/river temperatures in New England based on a spatial statistical...

  11. New England observed and predicted August stream/river temperature maximum daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted August stream/river temperature maximum negative rate of change in New England based on a...

  12. Measurement of the temperature of density maximum of water solutions using a convective flow technique

    OpenAIRE

    Cawley, M.F.; McGlynn, D.; Mooney, P.A.

    2006-01-01

    A technique is described which yields an accurate measurement of the temperature of density maximum of fluids which exhibit such anomalous behaviour. The method relies on the detection of changes in convective flow in a rectangular cavity containing the test fluid. The normal single-cell convection which occurs in the presence of a horizontal temperature gradient changes to a double-cell configuration in the vicinity of the density maximum, and this transition manifests itself in changes in th...

  13. The Hengill geothermal area, Iceland: variation of temperature gradients deduced from the maximum depth of seismogenesis

    Science.gov (United States)

    Foulger, G.R.

    1995-01-01

    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50 °C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. -from Author

  14. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Directory of Open Access Journals (Sweden)

    Mroczka Janusz

    2014-12-01

    Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. Under uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. For an irregularly illuminated photovoltaic panel, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions; an appropriate strategy for tracking the maximum power point is then chosen by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under realistic conditions of illumination, temperature and shading.
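
    The decision structure described above can be sketched generically as follows; this is not the authors' temperature-based algorithm, and measure_power() and uniform_insolation() are hypothetical hardware hooks standing in for the real measurements.

```python
# Generic sketch of a hybrid MPPT decision structure: plain perturb-and-observe
# (P&O) under uniform insolation, otherwise a coarse global voltage scan
# followed by local P&O refinement.

def perturb_and_observe(measure_power, v, step=0.5, iterations=50):
    """Climb the local power hill around the starting voltage v."""
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = measure_power(v)
        if p < p_prev:
            direction = -direction      # wrong way: reverse the perturbation
        p_prev = p
    return v

def hybrid_mppt(measure_power, uniform_insolation, v_min=0.0, v_max=40.0):
    if uniform_insolation():
        # Single maximum expected: start P&O from the middle of the range.
        return perturb_and_observe(measure_power, 0.5 * (v_min + v_max))
    # Partial shading: coarse global scan, then local refinement.
    candidates = [v_min + i * (v_max - v_min) / 20 for i in range(21)]
    v_best = max(candidates, key=measure_power)
    return perturb_and_observe(measure_power, v_best)

# Toy two-peak power curve standing in for a partially shaded panel.
def toy_power(v):
    return max(0.0, 120 - (v - 12) ** 2) + max(0.0, 200 - (v - 30) ** 2)

print(round(hybrid_mppt(toy_power, lambda: False), 1))   # should settle near 30 V
```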

  15. Maximum Smoke Temperature in Non-Smoke Model Evacuation Region for Semi-Transverse Tunnel Fire

    OpenAIRE

    B. Lou; Y. Qiu; X. Long

    2017-01-01

    Smoke temperature distributions in the non-smoke-evacuation region under different mechanical smoke exhaust rates in a semi-transverse tunnel fire were studied by FDS numerical simulation in this paper. The effects of fire heat release rate (10 MW, 20 MW and 30 MW) and exhaust rate (from 0 to 160 m³/s) on the maximum smoke temperature in the non-smoke-evacuation region were discussed. Results show that the maximum smoke temperature in the non-smoke-evacuation region decreased with smoke exhaust rate. Plug-holing was obse...

  16. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology. It should be especially emphasized that all the results and conclusions are based on classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers or constant inlet temperature ratio of the heat reservoirs will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the view points of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions

  17. Maximum entropy method approach to the θ term

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2004-01-01

    In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes a flattening phenomenon of the free energy f(θ), which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among the various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error. (author)

  18. Maximum temperature accounts for annual soil CO2 efflux in temperate forests of Northern China

    Science.gov (United States)

    Zhou, Zhiyong; Xu, Meili; Kang, Fengfeng; Jianxin Sun, Osbert

    2015-01-01

    Exploring the correlations of soil respiration with different properties of soil temperature helps to assess how representative a single soil temperature measure is. Soil temperature at 10 cm depth was logged hourly over twelve months. Based on the measured soil temperature, soil respiration at different temporal scales was calculated using empirical functions for temperate forests. On the monthly scale, soil respiration correlated significantly with maximum, minimum, mean and accumulated effective soil temperatures. Annual soil respiration varied from 409 g C m−2 in coniferous forest to 570 g C m−2 in mixed forest and to 692 g C m−2 in broadleaved forest, and was markedly explained by the mean soil temperatures of the warmest day, July and summer, separately. These three soil temperatures reflect the maximum values on diurnal, monthly and annual scales. In accordance with their higher temperatures, summer soil respiration accounted for 51% of annual soil respiration across forest types, and broadleaved forest also had higher soil organic carbon content (SOC) and soil microbial biomass carbon content (SMBC), but a lower contribution of SMBC to SOC. This adds proof to the finding that maximum soil temperature may accelerate the transformation of SOC to CO2-C by stimulating the activity of soil microorganisms. PMID:26179467

  19. Influence of aliphatic amides on the temperature of maximum density of water

    International Nuclear Information System (INIS)

    Torres, Andrés Felipe; Romero, Carmen M.

    2017-01-01

    Highlights: • The addition of amides decreases the temperature of maximum density of water, suggesting a disruptive effect on water structure. • The amides in aqueous solution do not follow the Despretz equation in the concentration range considered. • The temperature shift Δθ as a function of molality is represented by a second-order equation. • The Despretz constants were determined considering the dilute concentration region for each amide solution. • The solute disrupting effect of amides becomes smaller as their hydrophobic character increases. - Abstract: The influence of dissolved substances on the temperature of maximum density of water has been studied in relation to their effect on water structure, as they can change the equilibrium between structured and unstructured species of water. However, most work has been performed using salts, and studies with small organic solutes such as amides are scarce. In this work, the effect of acetamide, propionamide and butyramide on the temperature of maximum density of water was determined from density measurements using a magnetic float densimeter. Densities of aqueous solutions were measured within the temperature range from T = (275.65–278.65) K at intervals of 0.50 K in the concentration range between (0.10000 and 0.80000) mol·kg⁻¹. The temperature of maximum density was determined from the experimental results. The effect of the three amides is to decrease the temperature of maximum density of water, and the change does not follow the Despretz equation. The results are discussed in terms of solute-water interactions and the disrupting effect of amides on water structure.

  20. Trends in mean maximum temperature, mean minimum temperature and mean relative humidity for Lautoka, Fiji during 2003 – 2013

    Directory of Open Access Journals (Sweden)

    Syed S. Ghani

    2017-12-01

    The current work examines the trends in Lautoka's temperature and relative humidity during the period 2003 - 2013, analyzed using recently updated data obtained from the Fiji Meteorological Services (FMS). Four elements are investigated: mean maximum temperature, mean minimum temperature, diurnal temperature range (DTR) and mean relative humidity. From 2003 to 2013, the annual mean temperature has increased by between 0.02 and 0.08 °C. The warming is greater in minimum temperature than in maximum temperature, resulting in a decrease of the diurnal temperature range. The statistically significant increase was mostly seen during the summer months of December and January. Mean relative humidity has also increased from 3% to 8%. The bases of abnormal climate conditions are also studied. These were defined by temperature or humidity anomalies in their appropriate time sequences. They corroborate the observed findings and show that the climate has been becoming gradually damper and hotter throughout Lautoka during this period. While we are only at an initial phase of the probable temperature changes, ecological responses to recent climate change are already clearly noticeable. It is therefore proposed that it would be easier to identify climate alteration in a small island nation like Fiji.

  1. Large temperature variability in the southern African tropics since the Last Glacial Maximum

    NARCIS (Netherlands)

    Powers, L.A.; Johnson, T.C.; Werne, J.P.; Castañeda, I.S.; Hopmans, E.; Sinninghe Damsté, J.S.; Schouten, S.

    2005-01-01

    The role of the tropics in global climate change is actively debated, particularly in regard to the timing and magnitude of thermal and hydrological response. Continuous, high-resolution temperature records through the Last Glacial Maximum (LGM) from tropical oceans have provided much insight

  2. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    Science.gov (United States)

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017)10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.

  3. Extreme maximum and minimum air temperature in Mediterranean coasts in Turkey

    Directory of Open Access Journals (Sweden)

    Barbaros Gönençgil

    2016-01-01

    In this study, we determined extreme maximum and minimum temperatures in both the summer and winter seasons at stations in the Mediterranean coastal areas of Turkey. Daily maximum and minimum temperature data from 24 meteorological stations for the period 1970-2010 were used. From this database, a set of four extreme temperature indices was applied: warm (TX90) and cold (TN10) days, and warm spell (WSDI) and cold spell (CSDI) duration. Threshold values were calculated for each station to determine the temperatures that were above and below the seasonal norms in winter and summer. The TX90 index displays a positive, statistically significant trend, while TN10 displays a negative, nonsignificant trend. The occurrence of warm spells shows a statistically significant increasing trend, while that of cold spells shows a significantly decreasing trend over the Mediterranean coastline in Turkey.

  4. Maximum And Minimum Temperature Trends In Mexico For The Last 31 Years

    Science.gov (United States)

    Romero-Centeno, R.; Zavala-Hidalgo, J.; Allende Arandia, M. E.; Carrasco-Mijarez, N.; Calderon-Bustamante, O.

    2013-05-01

    Based on high-resolution (1') daily maps of the maximum and minimum temperatures in Mexico, an analysis of the trends over the last 31 years is performed. The maps were generated using all the available information from more than 5,000 stations of the Mexican Weather Service (Servicio Meteorológico Nacional, SMN) for the period 1979-2009, along with data from the North American Regional Reanalysis (NARR). The data processing procedure includes a quality control step, in order to eliminate erroneous daily data, and makes use of a high-resolution digital elevation model (from GEBCO), the relationship between air temperature and elevation by means of the average environmental lapse rate, and interpolation algorithms (linear and inverse-distance weighting). Based on the monthly gridded maps for this period, the maximum and minimum temperature trends calculated by least-squares linear regression and their statistical significance are obtained and discussed.
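
    The gridding steps mentioned above (lapse-rate reduction to sea level, inverse-distance weighting, and re-application of the lapse rate at the grid elevation) can be sketched as follows; the lapse-rate constant and station values are illustrative placeholders, not the SMN/NARR inputs.

```python
# Illustrative sketch of lapse-rate-corrected inverse-distance weighting.
import numpy as np

LAPSE_RATE = 6.5e-3   # average environmental lapse rate, degC per metre (assumed)

def idw_with_lapse(st_lon, st_lat, st_elev, st_temp,
                   grid_lon, grid_lat, grid_elev, power=2.0):
    """Interpolate station temperatures to a grid point, correcting for elevation."""
    # 1) Reduce station temperatures to sea level with the lapse rate.
    t_sea = st_temp + LAPSE_RATE * st_elev
    # 2) Inverse-distance weighting in (lon, lat) space.
    d = np.hypot(st_lon - grid_lon, st_lat - grid_lat)
    if np.any(d < 1e-9):
        t0 = t_sea[np.argmin(d)]          # grid point coincides with a station
    else:
        w = 1.0 / d**power
        t0 = np.sum(w * t_sea) / np.sum(w)
    # 3) Bring the interpolated value back to the grid-cell elevation.
    return t0 - LAPSE_RATE * grid_elev

# Toy example with three hypothetical stations.
lon = np.array([-99.1, -98.8, -99.4]); lat = np.array([19.4, 19.6, 19.2])
elev = np.array([2240., 2600., 1800.]); tmax = np.array([24.0, 21.5, 27.0])
print(round(idw_with_lapse(lon, lat, elev, tmax, -99.0, 19.5, 2300.0), 1))
```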

  5. Trends in Mean Annual Minimum and Maximum Near Surface Temperature in Nairobi City, Kenya

    Directory of Open Access Journals (Sweden)

    George Lukoye Makokha

    2010-01-01

    This paper examines the long-term urban modification of mean annual conditions of near-surface temperature in Nairobi City. Data from four weather stations situated in Nairobi were collected from the Kenya Meteorological Department for the period from 1966 to 1999 inclusive. The data included mean annual maximum and minimum temperatures, and were first subjected to a homogeneity test before analysis. Both linear regression and the Mann-Kendall rank test were used to discern the mean annual trends. Results show that the change of temperature over the thirty-four-year study period is larger for minimum temperature than for maximum temperature. The warming trends began earlier and are more significant at the urban stations than at the sub-urban stations, an indication of the spread of urbanisation from the built-up Central Business District (CBD) to the suburbs. The established significant warming trends in minimum temperature, which are likely to reach higher proportions in future, pose serious challenges for climate and urban planning of the city. In particular, the effect of increased minimum temperature on human physiological comfort, building and urban design, wind circulation and air pollution needs to be incorporated in future urban planning programmes of the city.

  6. Dynamic Performance of Maximum Power Point Trackers in TEG Systems Under Rapidly Changing Temperature Conditions

    Science.gov (United States)

    Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.

    2016-03-01

    Characterization of thermoelectric generators (TEG) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated in several applications were evaluated; the results showed temperature variation up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady-state for different input temperatures and a maximum temperature of 401°C. By using electrical data from characterization of the oxide module, a solar array simulator was emulated to perform as a TEG. A trapezoidal temperature profile with different gradients was used on the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty accurately tracking under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG
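The perturb and observe (P&O) algorithm evaluated above is, in its textbook form, a small state machine. The sketch below drives it against a simple Thevenin model of a TEG (open-circuit voltage with internal resistance); the step size, the 2.5 Hz loop rate and the TEG parameters are illustrative assumptions, not the authors' settings.

```python
# Textbook P&O step against a Thevenin-equivalent TEG model.
V_OC, R_INT = 8.0, 2.0          # hypothetical TEG parameters (volts, ohms)

def teg_current(v):
    """Current drawn from the Thevenin-equivalent TEG at terminal voltage v."""
    return max((V_OC - v) / R_INT, 0.0)

def perturb_and_observe(v, i, v_prev, p_prev, step=0.05):
    """One P&O iteration: return the new voltage reference plus the stored state."""
    p = v * i
    moved_up = v >= v_prev
    if p >= p_prev:
        v_ref = v + step if moved_up else v - step   # power rose: keep direction
    else:
        v_ref = v - step if moved_up else v + step   # power fell: reverse direction
    return v_ref, v, p

# Run the loop (2.5 Hz in a real controller); the reference settles near
# V_OC / 2 = 4 V, the maximum power point of a resistive-source model.
v_ref, v_prev, p_prev = 3.0, 0.0, 0.0
for _ in range(50):
    i_meas = teg_current(v_ref)
    v_ref, v_prev, p_prev = perturb_and_observe(v_ref, i_meas, v_prev, p_prev)
print(f"settled reference: {v_ref:.2f} V")
```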

  7. Uninterrupted thermoelectric energy harvesting using temperature-sensor-based maximum power point tracking system

    International Nuclear Information System (INIS)

    Park, Jae-Do; Lee, Hohyun; Bond, Matthew

    2014-01-01

    Highlights: • Feedforward MPPT scheme for uninterrupted TEG energy harvesting is suggested. • Temperature sensors are used to avoid current measurement or source disconnection. • MPP voltage reference is generated based on OCV vs. temperature differential model. • Optimal operating condition is maintained using hysteresis controller. • Any type of power converter can be used in the proposed scheme. - Abstract: In this paper, a thermoelectric generator (TEG) energy harvesting system with a temperature-sensor-based maximum power point tracking (MPPT) method is presented. Conventional MPPT algorithms for photovoltaic cells may not be suitable for thermoelectric power generation because a significant amount of time is required for TEG systems to reach a steady state. Moreover, complexity and additional power consumption in conventional circuits and periodic disconnection of power source are not desirable for low-power energy harvesting applications. The proposed system can track the varying maximum power point (MPP) with a simple and inexpensive temperature-sensor-based circuit without instantaneous power measurement or TEG disconnection. This system uses TEG’s open circuit voltage (OCV) characteristic with respect to temperature gradient to generate a proper reference voltage signal, i.e., half of the TEG’s OCV. The power converter controller maintains the TEG output voltage at the reference level so that the maximum power can be extracted for the given temperature condition. This feedforward MPPT scheme is inherently stable and can be implemented without any complex microcontroller circuit. The proposed system has been validated analytically and experimentally, and shows a maximum power tracking error of 1.15%
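The feedforward idea in the abstract, a reference voltage set to half the open-circuit voltage implied by the measured temperature difference and held by a hysteresis controller, can be sketched as below. The linear OCV model and its coefficient, the hysteresis band and the class interface are illustrative assumptions, not the paper's hardware.

```python
# Sketch of a temperature-sensor-based feedforward MPPT reference generator.
class FeedforwardMppt:
    def __init__(self, seebeck_v_per_k=0.05, hysteresis_band=0.05):
        self.seebeck = seebeck_v_per_k      # assumed OCV slope vs. temperature difference
        self.band = hysteresis_band         # volts
        self.switch_closed = False          # whether the converter is drawing current

    def reference(self, delta_t):
        """MPP voltage reference: half of the modelled open-circuit voltage."""
        v_oc = self.seebeck * delta_t
        return 0.5 * v_oc

    def update(self, v_teg, delta_t):
        """Hysteresis controller: decide whether the converter should draw current."""
        v_ref = self.reference(delta_t)
        if v_teg > v_ref + self.band:
            self.switch_closed = True       # voltage too high -> load the TEG more
        elif v_teg < v_ref - self.band:
            self.switch_closed = False      # voltage too low -> unload the TEG
        return self.switch_closed

# Example: a 100 K gradient gives V_OC = 5 V, so the controller regulates around 2.5 V.
ctrl = FeedforwardMppt()
print(ctrl.reference(100.0), ctrl.update(2.7, 100.0), ctrl.update(2.3, 100.0))
```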

  8. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of the solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel......, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...

  9. Maximum Power Tracking by VSAS approach for Wind Turbine, Renewable Energy Sources

    Directory of Open Access Journals (Sweden)

    Nacer Kouider Msirdi

    2015-08-01

Full Text Available This paper gives a review of the most efficient algorithms designed to track the maximum power point (MPP) for catching the maximum wind power with a variable speed wind turbine (VSWT). We then design a new maximum power point tracking (MPPT) algorithm using the Variable Structure Automatic Systems (VSAS) approach. The proposed approach leads to efficient algorithms, as shown in this paper by the analysis and simulations.

  10. Effects of fasting on maximum thermogenesis in temperature-acclimated rats

    Science.gov (United States)

    Wang, L. C. H.

    1981-09-01

To further investigate the limiting effect of substrates on maximum thermogenesis in acute cold exposure, the present study examined the prevalence of this effect at different thermogenic capabilities consequent to cold- or warm-acclimation. Male Sprague-Dawley rats (n=11) were acclimated to 6, 16 and 26°C in succession; their thermogenic capabilities after each acclimation temperature were measured under helium-oxygen (21% oxygen, balance helium) at -10°C after overnight fasting or feeding. Regardless of feeding conditions, both maximum and total heat production were significantly greater in the 6>16>26°C-acclimated conditions. In the fed state, the total heat production was significantly greater than that in the fasted state at all acclimation temperatures, but the maximum thermogenesis was significantly greater only in the 6 and 16°C-acclimated states. The results indicate that the limiting effect of substrates on maximum and total thermogenesis is independent of the magnitude of thermogenic capability, suggesting a substrate-dependent component in restricting the effective expression of existing aerobic metabolic capability even under severe stress.

  11. Temperature dependence of attitude sensor coalignments on the Solar Maximum Mission (SMM)

    Science.gov (United States)

    Pitone, D. S.; Eudell, A. H.; Patt, F. S.

    1990-01-01

    The temperature correlation of the relative coalignment between the fine-pointing sun sensor and fixed-head star trackers measured on the Solar Maximum Mission (SMM) is analyzed. An overview of the SMM, including mission history and configuration, is given. Possible causes of the misalignment variation are discussed, with focus placed on spacecraft bending due to solar-radiation pressure, electronic or mechanical changes in the sensors, uncertainty in the attitude solutions, and mounting-plate expansion and contraction due to thermal effects. Yaw misalignment variation from the temperature profile is assessed, and suggestions for spacecraft operations are presented, involving methods to incorporate flight measurements of the temperature-versus-alignment function and its variance in operational procedures and the spacecraft structure temperatures in the attitude telemetry record.

  12. Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia

    International Nuclear Information System (INIS)

    Nordin, Muhamad Asyraf bin Che; Hassan, Husna

    2015-01-01

The Markov chain’s first-order principle has been widely used to model various meteorological fields for prediction purposes. In this study, 14 years (2000-2013) of daily maximum temperature data from Bayan Lepas were used. Earlier studies showed that the outdoor thermal comfort range (TCR), based on the physiologically equivalent temperature (PET) index, is less than 34°C in Malaysia; thus the data obtained were classified into two states: a normal state (within the thermal comfort range) and a hot state (above the thermal comfort range). The long-run results show that the probability of the daily temperature exceeding the TCR is only 2.2%, while the probability of the daily temperature staying within the TCR is 97.8%.
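A two-state first-order Markov chain of this kind is estimated from the daily sequence of states and summarized by its stationary (long-run) distribution. The sketch below uses the 34°C comfort threshold from the abstract but synthetic temperature data.

```python
# Two-state (normal/hot) first-order Markov chain: estimate the transition
# matrix from a daily Tmax sequence and compute the long-run probabilities.
import numpy as np

def transition_matrix(states, n_states=2):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def stationary_distribution(P):
    """Left eigenvector of P with eigenvalue 1, normalised to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

rng = np.random.default_rng(2)
tmax = rng.normal(31.5, 2.0, size=14 * 365)        # synthetic daily Tmax (degC)
states = (tmax > 34.0).astype(int)                 # 0 = within comfort range, 1 = hot

P = transition_matrix(states)
pi = stationary_distribution(P)
print("transition matrix:\n", P.round(3))
print("long-run P(hot) =", round(pi[1], 3))
```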

  13. THE MAXIMUM EFFECT OF DEEP LAKES ON TEMPERATURE PROFILES – DETERMINATION OF THE GEOTHERMAL GRADIENT

    Directory of Open Access Journals (Sweden)

    Eppelbaum L. V.

    2009-07-01

Full Text Available Understanding climate change processes on the basis of geothermal observations in boreholes is an important and at the same time highly intricate problem. Many non-climatic effects can cause changes in ground surface temperatures. In this study we investigate the effects of deep lakes on borehole temperature profiles observed within or in the vicinity of the lakes. We propose a method based on the Laplace equation with nonuniform boundary conditions. The proposed method makes it possible to estimate the maximum effect of deep lakes (here the term "deep lake" means that the long-term mean annual temperature of the bottom sediments can be considered a constant value) on the borehole temperature profiles. This method also allows one to estimate the accuracy of the determination of the geothermal gradient.

  14. Evaluation of empirical relationships between extreme rainfall and daily maximum temperature in Australia

    Science.gov (United States)

    Herath, Sujeewa Malwila; Sarukkalige, Ranjan; Nguyen, Van Thanh Van

    2018-01-01

Understanding the relationships between extreme daily and sub-daily rainfall events and their governing factors is important in order to analyse the properties of extreme rainfall events in a changing climate. Atmospheric temperature is one of the dominant climate variables with a strong relationship to extreme rainfall events. In this study, a temperature-rainfall binning technique is used to evaluate the dependency of extreme rainfall on daily maximum temperature. The Clausius-Clapeyron (C-C) relation was found to describe the relationship between daily maximum temperature and a range of rainfall durations from 6 min up to 24 h for seven Australian weather stations, located in Adelaide, Brisbane, Canberra, Darwin, Melbourne, Perth and Sydney. The analysis shows that the rainfall-temperature scaling varies with location, temperature and rainfall duration. The Darwin Airport station shows a negative scaling relationship, while the other six stations show a positive relationship. To identify the trend in the scaling relationship over time, the same analysis was conducted using data covering 10-year periods. Results indicate that the dependency of extreme rainfall on temperature also varies with the analysis period. Further, this dependency shows an increasing trend for more extreme short-duration rainfall and a decreasing trend for average long-duration rainfall events at most stations. Seasonal variations of the scaling trends were analysed by grouping the summer and autumn seasons together and the winter and spring seasons together. The 99th percentile of the 6 min, 1 h and 24 h rainfall durations at the Perth, Melbourne and Sydney stations mostly shows an increasing trend for both groups, while Adelaide and Darwin show a decreasing trend. Furthermore, the majority of the 50th percentile scaling trends are decreasing for both groups.
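The binning technique works by pooling rainfall events into temperature bins, taking a high percentile of rainfall in each bin, and reading the scaling rate off an exponential fit, which is then compared with the Clausius-Clapeyron rate of roughly 7% per °C. The sketch below illustrates that procedure; the bin width, percentile, minimum bin count and synthetic data are illustrative choices, not the study's configuration.

```python
# Temperature-rainfall binning and exponential scaling fit (illustrative sketch).
import numpy as np

def scaling_rate(tmax, rainfall, bin_width=2.0, percentile=99):
    """Percent change in the chosen rainfall percentile per degC of Tmax."""
    bins = np.arange(tmax.min(), tmax.max() + bin_width, bin_width)
    centres, extremes = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (tmax >= lo) & (tmax < hi)
        if in_bin.sum() >= 50:                      # skip sparsely populated bins
            centres.append(0.5 * (lo + hi))
            extremes.append(np.percentile(rainfall[in_bin], percentile))
    slope, _ = np.polyfit(centres, np.log(extremes), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Synthetic wet-event sample whose extremes grow ~7% per degC by construction.
rng = np.random.default_rng(3)
t = rng.uniform(10, 35, 20000)
rain = rng.gamma(shape=2.0, scale=np.exp(0.068 * t), size=t.size)

print(f"estimated scaling: {scaling_rate(t, rain):.1f} % per degC (C-C is ~7%)")
```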

  15. Subtropical Arctic Ocean temperatures during the Palaeocene/Eocene thermal maximum

    Science.gov (United States)

    Sluijs, A.; Schouten, S.; Pagani, M.; Woltering, M.; Brinkhuis, H.; Damste, J.S.S.; Dickens, G.R.; Huber, M.; Reichart, G.-J.; Stein, R.; Matthiessen, J.; Lourens, L.J.; Pedentchouk, N.; Backman, J.; Moran, K.; Clemens, S.; Cronin, T.; Eynaud, F.; Gattacceca, J.; Jakobsson, M.; Jordan, R.; Kaminski, M.; King, J.; Koc, N.; Martinez, N.C.; McInroy, D.; Moore, T.C.; O'Regan, M.; Onodera, J.; Palike, H.; Rea, B.; Rio, D.; Sakamoto, T.; Smith, D.C.; St John, K.E.K.; Suto, I.; Suzuki, N.; Takahashi, K.; Watanabe, M. E.; Yamamoto, M.

    2006-01-01

The Palaeocene/Eocene thermal maximum, ~55 million years ago, was a brief period of widespread, extreme climatic warming that was associated with massive atmospheric greenhouse gas input. Although aspects of the resulting environmental changes are well documented at low latitudes, no data were available to quantify simultaneous changes in the Arctic region. Here we identify the Palaeocene/Eocene thermal maximum in a marine sedimentary sequence obtained during the Arctic Coring Expedition. We show that sea surface temperatures near the North Pole increased from ~18°C to over 23°C during this event. Such warm values imply the absence of ice and thus exclude the influence of ice-albedo feedbacks on this Arctic warming. At the same time, sea level rose while anoxic and euxinic conditions developed in the ocean's bottom waters and photic zone, respectively. Increasing temperature and sea level match expectations based on palaeoclimate model simulations, but the absolute polar temperatures that we derive before, during and after the event are more than 10°C warmer than those predicted by the models. This suggests that higher-than-modern greenhouse gas concentrations must have operated in conjunction with other feedback mechanisms (perhaps polar stratospheric clouds or hurricane-induced ocean mixing) to amplify early Palaeogene polar temperatures. © 2006 Nature Publishing Group.

  16. Probing Ionic Liquid Aqueous Solutions Using Temperature of Maximum Density Isotope Effects

    Directory of Open Access Journals (Sweden)

    Mohammad Tariq

    2013-03-01

Full Text Available This work is a new development of an extensive research program that is investigating, for the first time, shifts in the temperature of maximum density (TMD) of aqueous solutions caused by ionic liquid solutes. In the present case we have compared the shifts caused by three ionic liquid solutes with a common cation, 1-ethyl-3-methylimidazolium, coupled with acetate, ethylsulfate and tetracyanoborate anions, in normal and deuterated water solutions. The observed differences are discussed in terms of the nature of the corresponding anion-water interactions.

  17. Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008

    Directory of Open Access Journals (Sweden)

    S. Federico

    2011-02-01

Full Text Available Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).

Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while a spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.

Two case studies, the first one with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered
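The Optimal Interpolation update used for the verifying analyses has the standard form x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b), where x_b is the background field, y the station observations, B and R the background and observation error covariances, and H the observation operator. The sketch below shows that update in one dimension; the Gaussian covariance, its length scale and the error variances are illustrative assumptions, not the operational RAMS/CRATI settings.

```python
# Minimal 1-D Optimal Interpolation analysis: correct a background temperature
# field on a set of grid points with a few station observations.
import numpy as np

def oi_analysis(grid, x_b, obs_loc, y, sigma_b=1.5, sigma_o=0.8, length=25.0):
    """Return the analysed field on `grid` given background x_b and observations y."""
    def gauss_cov(a, b):
        d = a[:, None] - b[None, :]
        return sigma_b**2 * np.exp(-0.5 * (d / length) ** 2)

    B_go = gauss_cov(grid, obs_loc)              # grid-to-obs covariances (B H^T)
    B_oo = gauss_cov(obs_loc, obs_loc)           # obs-to-obs covariances (H B H^T)
    R = sigma_o**2 * np.eye(len(obs_loc))        # observation error covariance
    x_b_at_obs = np.interp(obs_loc, grid, x_b)   # simple observation operator H
    gain = B_go @ np.linalg.solve(B_oo + R, y - x_b_at_obs)
    return x_b + gain

# Illustrative example: a uniform 20 degC background corrected by three stations.
grid = np.linspace(0.0, 100.0, 101)
background = np.full_like(grid, 20.0)
stations = np.array([20.0, 50.0, 80.0])
observations = np.array([22.0, 21.0, 19.0])
analysis = oi_analysis(grid, background, stations, observations)
print(analysis[[20, 50, 80]].round(2))           # analysed values at the station locations
```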

  18. Global view of F-region electron density and temperature at solar maximum

    International Nuclear Information System (INIS)

    Brace, L.H.; Theis, R.F.; Hoegy, W.R.

    1982-01-01

Dynamics Explorer-2 is permitting the first measurements of the global structure of the F-region at very high levels of solar activity (S>200). Selected full orbits of Langmuir probe measurements of electron temperature, T_e, and density, N_e, are shown to illustrate this global structure and some of the ionospheric features that are the topic of other papers in this issue. The ionospheric thermal structure is of particular interest because T_e is a sensitive indicator of the coupling of magnetospheric energy into the upper atmosphere. A comparison of these heating effects with those observed at solar minimum shows that the magnetospheric sources are more important at solar maximum, as might have been expected. Heating at the cusp, the auroral oval and the plasmapause is generally both greater and more variable. Electron cooling rate calculations employing low latitude measurements indicate that solar extreme ultraviolet heating of the F region at solar maximum is enhanced by a factor that is greater than the increase in solar flux. Some of this enhanced electron heating arises from the increase in electron heating efficiency at the higher N_e of solar maximum, but this appears insufficient to completely resolve the discrepancy

  19. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper the combinations of maximum entropy method and Bayesian inference for reliability assessment of deteriorating system is proposed. Due to various uncertainties, less data and incomplete information, system parameters usually cannot be determined precisely. These uncertainty parameters can be modeled by fuzzy sets theory and the Bayesian inference which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  20. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

Abstract Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...

  1. Temperature of maximum density and excess thermodynamics of aqueous mixtures of methanol

    Energy Technology Data Exchange (ETDEWEB)

    González-Salgado, D.; Zemánková, K. [Departamento de Física Aplicada, Universidad de Vigo, Campus del Agua, Edificio Manuel Martínez-Risco, E-32004 Ourense (Spain); Noya, E. G.; Lomba, E. [Instituto de Química Física Rocasolano, CSIC, Calle Serrano 119, E-28006 Madrid (Spain)

    2016-05-14

In this work, we present a study of representative excess thermodynamic properties of aqueous mixtures of methanol over the complete concentration range, based on extensive computer simulation calculations. In addition to testing various existing united-atom model potentials, we have developed a new force-field which accurately reproduces the excess thermodynamics of this system. Moreover, we have paid particular attention to the behavior of the temperature of maximum density (TMD) in dilute methanol mixtures. The presence of a temperature of maximum density is one of the essential anomalies exhibited by water. This anomalous behavior is modified in a non-monotonous fashion by the presence of fully miscible solutes that partly disrupt the hydrogen bond network of water, such as methanol (and other short chain alcohols). In order to obtain a better insight into the phenomenology of the changes in the TMD of water induced by small amounts of methanol, we have performed a new series of experimental measurements and computer simulations using various force fields. We observe that none of the force-fields tested capture the non-monotonous concentration dependence of the TMD for highly diluted methanol solutions.

  2. Relationship between plants in Europe and surface temperatures of the Atlantic Ocean during the glacial maximum

    Energy Technology Data Exchange (ETDEWEB)

    Van Campo, M

    1984-01-01

In Europe and North America, the deciduous forest, whether or not mixed with conifers, prevails within boundaries which coincide with the 12 and 18°C isotherms of ocean surface temperatures in August; within Europe this forest points to the limit of the Atlantic influence and bevels out as it is squeezed between coniferous forest to the NE (thermal boundary) and steppe to the SE (hydric boundary). During the glacial age this forest disappeared from its main European area and remained only in mountain refuges. Thus, the temperature of the eastern Atlantic surface waters, off Europe, controls the nature of its vegetation. Variations in the pollen curves of pines, birches, Artemisia, Chenopodiaceae and Ephedra are accounted for by the climatic variations in southern Europe before 13,000 yr BP. It is seen that a very arid climate culminated at about 15,000 yr BP. It corresponds to the most active iceberg calving, which considerably lowered the ocean surface temperature far to the south. In spite of the increasing summer temperatures, this temperature remained as cold as it was during the glacial maximum. The result is the lowest evaporation from the ocean, hence a minimum of clouds and a minimum of rain. The end of the first phase of the deglaciation at about 13,000 yr BP corresponds to a warming up of the ocean surface bringing about increased evaporation, hence rains over the continent. The evolution of the vegetation in Europe at the end of the glacial times, from south of the ice sheet down to the Mediterranean, depends as much, if not more, on rains as on temperatures.

  3. Comparison of candidate solar array maximum power utilization approaches. [for spacecraft propulsion

    Science.gov (United States)

    Costogue, E. N.; Lindena, S.

    1976-01-01

    A study was made of five potential approaches that can be utilized to detect the maximum power point of a solar array while sustaining operations at or near maximum power and without endangering stability or causing array voltage collapse. The approaches studied included: (1) dynamic impedance comparator, (2) reference array measurement, (3) onset of solar array voltage collapse detection, (4) parallel tracker, and (5) direct measurement. The study analyzed the feasibility and adaptability of these approaches to a future solar electric propulsion (SEP) mission, and, specifically, to a comet rendezvous mission. Such missions presented the most challenging requirements to a spacecraft power subsystem in terms of power management over large solar intensity ranges of 1.0 to 3.5 AU. The dynamic impedance approach was found to have the highest figure of merit, and the reference array approach followed closely behind. The results are applicable to terrestrial solar power systems as well as to other than SEP space missions.

  4. Estimation of daily maximum and minimum air temperatures in urban landscapes using MODIS time series satellite data

    Science.gov (United States)

    Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.

    2018-03-01

Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time-series of land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS), with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs and Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R2) and Root Mean Square Error (RMSE) from 10-fold cross-validation. The best schemes produced R2 of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R2 of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
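The core of such a scheme is a random forest regression from LST and auxiliary predictors to station air temperature, scored by 10-fold cross-validation with R2 and RMSE. The sketch below shows that workflow with a synthetic training table and placeholder feature names; it is not the authors' feature set or data.

```python
# Random forest estimation of Tmax from LST-like predictors, scored by 10-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([
    rng.normal(35, 5, n),      # daytime LST (degC), synthetic
    rng.normal(20, 4, n),      # night-time LST (degC), synthetic
    rng.uniform(0, 500, n),    # elevation (m)
    rng.uniform(0, 1, n),      # NDVI
    rng.uniform(0, 100, n),    # impervious surface fraction (%)
])
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] - 0.004 * X[:, 2] + rng.normal(0, 1.0, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
pred = cross_val_predict(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

ss_res = np.sum((y - pred) ** 2)
r2 = 1.0 - ss_res / np.sum((y - y.mean()) ** 2)
rmse = np.sqrt(ss_res / y.size)
print(f"10-fold CV: R^2 = {r2:.3f}, RMSE = {rmse:.2f} degC")
```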

  5. Finite temperature approach to confinement

    International Nuclear Information System (INIS)

    Gave, E.; Jengo, R.; Omero, C.

    1980-06-01

The finite temperature treatment of gauge theories, formulated in terms of a gauge invariant variable as in the Polyakov method, is used as a device for obtaining an effective theory where the confinement test takes the form of a correlation function. The formalism is discussed for the abelian CP^(n-1) model in various dimensionalities and for the pure Yang-Mills theory in the limit of zero temperature. In the latter case a class of vortex-like configurations of the effective theory which induce confinement correspond in particular to the instanton solutions. (author)

  6. Effect of glycine, DL-alanine and DL-2-aminobutyric acid on the temperature of maximum density of water

    International Nuclear Information System (INIS)

    Romero, Carmen M.; Torres, Andres Felipe

    2015-01-01

Highlights: • Effect of α-amino acids on the temperature of maximum density of water is presented. • The addition of α-amino acids decreases the temperature of maximum density of water. • Despretz constants suggest that the amino acids behave as water structure breakers. • Despretz constants decrease as the number of CH2 groups of the amino acid increases. • The solute's disrupting effect becomes smaller as its hydrophobic character increases. - Abstract: The effect of glycine, DL-alanine and DL-2-aminobutyric acid on the temperature of maximum density of water was determined from density measurements using a magnetic float densimeter. Densities of aqueous solutions were measured within the temperature range from T = (275.65 to 278.65) K at intervals of T = 0.50 K over the concentration range between (0.0300 and 0.1000) mol·kg⁻¹. A linear relationship between density and concentration was obtained for all the systems in the temperature range considered. The temperature of maximum density was determined from the experimental results. The effect of the three amino acids is to decrease the temperature of maximum density of water, and the decrease is proportional to molality according to the Despretz equation. The effect of the amino acids on the temperature of maximum density decreases as the number of methylene groups of the alkyl chain becomes larger. The results are discussed in terms of (solute + water) interactions and the effect of amino acids on water structure
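One simple way to see how such numbers arise: locate the temperature of maximum density (TMD) as the vertex of a quadratic fitted to density-temperature data, then take the slope of the TMD shift against molality as the Despretz constant. The quadratic-fit approach and all values in the sketch below are illustrative, not the magnetic-float measurements of the paper.

```python
# TMD from a quadratic density fit, and a Despretz constant from the TMD shifts.
import numpy as np

def tmd_from_densities(temperatures_k, densities):
    """Fit rho(T) with a quadratic and return the temperature of its maximum."""
    a, b, _ = np.polyfit(temperatures_k, densities, 2)
    return -b / (2.0 * a)

# Synthetic density curves: pure water peaks near 277.13 K; each solution's
# peak is shifted down in proportion to molality (Despretz-like behaviour).
T = np.arange(275.65, 278.66, 0.5)
molalities = np.array([0.0, 0.03, 0.06, 0.10])      # mol/kg
true_shift_per_molal = -8.0                          # K per (mol/kg), assumed value

tmds = []
for m in molalities:
    peak = 277.13 + true_shift_per_molal * m
    rho = 999.97 - 5e-3 * (T - peak) ** 2            # idealised parabolic density curve
    tmds.append(tmd_from_densities(T, rho))

despretz_constant = np.polyfit(molalities, np.array(tmds) - tmds[0], 1)[0]
print(f"recovered Despretz constant: {despretz_constant:.2f} K kg/mol")
```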

  7. Maximum Evaporation Rates of Water Droplets Approaching Obstacles in the Atmosphere Under Icing Conditions

    Science.gov (United States)

    Lowell, H. H.

    1953-01-01

When a closed body or a duct envelope moves through the atmosphere, air pressure and temperature rises occur ahead of the body or, under ram conditions, within the duct. If cloud water droplets are encountered, droplet evaporation will result because of the air-temperature rise and the relative velocity between the droplet and stagnating air. It is shown that the solution of the steady-state psychrometric equation provides evaporation rates which are the maximum possible when droplets are entrained in air moving along stagnation lines under such conditions. Calculations are made for a wide variety of water droplet diameters, ambient conditions, and flight Mach numbers. Droplet diameter, body size, and Mach number effects are found to predominate, whereas wide variations in ambient conditions are of relatively small significance in the determination of evaporation rates. The results are essentially exact for the case of movement of droplets having diameters smaller than about 30 microns along relatively long ducts (length at least several feet) or toward large obstacles (wings), since disequilibrium effects are then of little significance. Mass losses in the case of movement within ducts will often be significant fractions (one-fifth to one-half) of original droplet masses, while very small droplets within ducts will often disappear even though the entraining air is not fully stagnated. Wing-approach evaporation losses will usually be of the order of several percent of original droplet masses. Two numerical examples are given of the determination of local evaporation rates and total mass losses in cases involving cloud droplets approaching circular cylinders along stagnation lines. The cylinders chosen were of 3.95-inch (10.0+ cm) diameter and 39.5-inch (100+ cm) diameter. The smaller is representative of icing-rate measurement cylinders, while with the larger will be associated an air-flow field similar to that ahead of an airfoil having a leading-edge radius

  8. New England observed and predicted August stream/river temperature maximum positive daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted August stream/river temperature maximum positive daily rate of change in New England based on a...

  9. New England observed and predicted July stream/river temperature maximum positive daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted July stream/river temperature maximum positive daily rate of change in New England based on a...

  10. New England observed and predicted July maximum negative stream/river temperature daily rate of change points

    Data.gov (United States)

    U.S. Environmental Protection Agency — The shapefile contains points with associated observed and predicted July stream/river temperature maximum negative daily rate of change in New England based on a...

  11. Impacts of Land Cover and Seasonal Variation on Maximum Air Temperature Estimation Using MODIS Imagery

    Directory of Open Access Journals (Sweden)

    Yulin Cai

    2017-03-01

Full Text Available Daily maximum surface air temperature (Tamax) is a crucial factor for understanding complex land surface processes under rapid climate change. Remote detection of Tamax has widely relied on the empirical relationship between air temperature and land surface temperature (LST), a product derived from remote sensing. However, little is known about how such a relationship is affected by the high heterogeneity in landscapes and the dynamics of seasonality. This study aims to advance our understanding of the roles of land cover and seasonal variation in the estimation of Tamax using the MODIS (Moderate Resolution Imaging Spectroradiometer) LST product. We developed statistical models to link Tamax and LST in the middle and lower reaches of the Yangtze River in China for six major land-cover types (i.e., forest, shrub, water, impervious surface, cropland, and grassland) and two seasons (i.e., growing season and non-growing season). Results show that the performance of modeling the Tamax-LST relationship was highly dependent on land cover and seasonal variation. Estimating Tamax over grasslands and water bodies achieved superior performance, while uncertainties were high over forested lands that contained extensive heterogeneity in species types, plant structure, and topography. We further found that all the land-cover-specific models developed for the plant non-growing season outperformed the corresponding models developed for the growing season. Discrepancies in model performance mainly occurred in the vegetated areas (forest, cropland, and shrub), suggesting an important role of plant phenology in defining the statistical relationship between Tamax and LST. For impervious surfaces, the challenge of capturing the high spatial heterogeneity in urban settings using the low-resolution MODIS data made Tamax estimation a difficult task, which was especially true in the growing season.

  12. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

Full Text Available In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, limited data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
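As a compact illustration of the maximum entropy step only (not the paper's full Bayesian reliability model): on a bounded parameter range with only the mean known, the maximum entropy density is a truncated exponential p(x) proportional to exp(λx), and the single multiplier λ is found numerically so the density reproduces the known mean. The bounds and mean below are made-up values.

```python
# Maximum entropy density on [lower, upper] subject to a known mean.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def max_entropy_density(lower, upper, mean):
    """Return p(x) with maximum entropy on [lower, upper] given E[X] = mean."""
    def mean_of(lam):
        z = quad(lambda x: np.exp(lam * x), lower, upper)[0]
        m = quad(lambda x: x * np.exp(lam * x), lower, upper)[0]
        return m / z
    width = upper - lower
    lam = brentq(lambda l: mean_of(l) - mean, -50.0 / width, 50.0 / width)
    z = quad(lambda x: np.exp(lam * x), lower, upper)[0]
    return lambda x: np.exp(lam * x) / z

# Example: a parameter known to lie in [0, 1] with mean 0.3 (hypothetical values).
p = max_entropy_density(0.0, 1.0, 0.3)
check_mean = quad(lambda x: x * p(x), 0.0, 1.0)[0]
print(f"reconstructed mean: {check_mean:.3f}")
```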

  13. Evaluation of parameters effect on the maximum fuel temperature in the core thermal and hydraulic design of HTTR

    International Nuclear Information System (INIS)

    Fujimoto, Nozomu; Maruyama, Soh; Sudo, Yukio; Fujii, Sadao; Niguma, Yoshinori.

    1988-10-01

This report presents the results of a quantitative evaluation of the effects of the dominant parameters on the maximum fuel temperature in the core thermal and hydraulic design of the High Temperature Engineering Test Reactor (HTTR), which has a thermal power of 30 MW, a reactor outlet coolant temperature of 950 deg C and a coolant pressure of 40 kg/cm² G. The dominant parameters investigated are 1) gap conductance, 2) effect of eccentricity of fuel compacts in the graphite sleeve, 3) effect of spacer ribs on heat transfer coefficients, 4) contact probability of fuel compact and graphite sleeve, 5) validity of uniform radial power density in the fuel compacts, 6) effect of impurity gas on gap conductance, and 7) effect of FP gas on gap conductance. The effects of these items on the maximum fuel temperature were quantitatively identified as hot spot factors. The probability of the appearance of the maximum fuel temperature was also evaluated in this report. (author)

  14. New results on equatorial thermospheric winds and the midnight temperature maximum

    Directory of Open Access Journals (Sweden)

    J. Meriwether

    2008-03-01

Full Text Available Optical observations of thermospheric winds and temperatures determined with high resolution measurements of Doppler shifts and Doppler widths of the OI 630-nm equatorial nightglow emission have been made with improved accuracy at Arequipa, Peru (16.4° S, 71.4° W) with an imaging Fabry-Perot interferometer. An observing procedure previously used at Arecibo Observatory was applied to achieve increased spatial and temporal sampling of the thermospheric wind and temperature through the selection of eight azimuthal directions, equally spaced from 0 to 360°, at a zenith angle of 60°. By assuming the equivalence of longitude and local time, the data obtained using this technique are analyzed to determine the mean neutral wind speeds and mean horizontal gradients of the wind field in the zonal and meridional directions. The new temperature measurements obtained with the improved instrumental accuracy clearly show the midnight temperature maximum (MTM) peak, with amplitudes of 25 to 200 K in all directions observed, for most nights. The horizontal wind field maps calculated from the mean winds and gradients show that the MTM peak is always preceded by an equatorward wind surge lasting 1–2 h. The results also show, for winter events, a meridional wind abatement after the MTM peak. On one occasion, near the September equinox, a reversal was observed during the poleward transit of the MTM over Arequipa. Analysis inferring vertical winds from the observed convergence yielded inconsistent results, calling into question the validity of this calculation for the MTM structure at equatorial latitudes during solar minimum. Comparison of the observations with the predictions of the NCAR general circulation model indicates that the model fails to reproduce the observed amplitude by a factor of 5 or more. This is attributed in part to the lack of adequate spatial resolution in the model, as the MTM phenomenon takes place within a scale of 300–500 km and ~45 min in

  15. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  16. The maximum allowable temperature of zircaloy-2 fuel cladding under dry storage conditions

    International Nuclear Information System (INIS)

    Mayuzumi, M.; Yoshiki, S.; Yasuda, T.; Nakatsuka, M.

    1990-09-01

    Japan plans to reprocess and reutilise the spent nuclear fuel from nuclear power generation. However, the temporary storage of spent fuel is assuming increasing importance as a means of ensuring flexibility in the nuclear fuel cycle. Our investigations of various methods of storage have shown that casks are the most suitable means of storing small quantities of spent fuel of around 500 t, and research and development are in progress to establish dry storage technology for such casks. The soundness of fuel cladding is being investigated. The most important factor in evaluating soundness in storage under inert gas as currently envisaged is creep deformation and rupture, and a number of investigations have been made of the creep behaviour of cladding. The present study was conducted on the basis of existing in-house results in collaboration with Nippon Kakunenryo Kaihatsu KK (Nippon Nuclear Fuel Department Co.), which has hot lab facilities. Tests were run on the creep deformation behaviour of irradiated cladding, and the maximum allowable temperature during dry storage was investigated. (author)

  17. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    Science.gov (United States)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

Optical sensors aboard Earth orbiting satellites, such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
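A generic way to weight a quadratic fit when both variables carry noise is the effective-variance scheme, in which the DN noise is folded into the radiance uncertainty through the local slope of the fitted curve. The sketch below shows that scheme, in the spirit of the weighting discussed above; the noise levels, iteration count and simulated sweep are assumptions, and it does not reproduce the actual VIIRS calibration procedure.

```python
# Iterative effective-variance weighted fit of L = c0 + c1*DN + c2*DN^2.
import numpy as np

def weighted_quadratic_fit(dn, radiance, sigma_l, sigma_dn, n_iter=5):
    """Return (c2, c1, c0) from an effective-variance weighted least-squares fit."""
    coeffs = np.polyfit(dn, radiance, 2)              # unweighted first guess
    for _ in range(n_iter):
        slope = np.polyval(np.polyder(coeffs), dn)    # local dL/dDN at each point
        sigma_eff = np.sqrt(sigma_l**2 + (slope * sigma_dn) ** 2)
        coeffs = np.polyfit(dn, radiance, 2, w=1.0 / sigma_eff)
    return coeffs

# Simulated calibration sweep with noise on both DN and radiance (made-up values).
rng = np.random.default_rng(5)
true_c = (2.0e-6, 0.02, 0.5)                          # (c2, c1, c0)
dn_true = np.linspace(100, 4000, 40)
dn = dn_true + rng.normal(0, 1.0, dn_true.size)       # digitization-like noise
L = np.polyval(true_c, dn_true) + rng.normal(0, 0.05, dn_true.size)

print(weighted_quadratic_fit(dn, L, sigma_l=0.05, sigma_dn=1.0))
```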

  18. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M⁻¹). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  19. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise......-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed

  20. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    Science.gov (United States)

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and is useful for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data, allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI.

  1. Effect of temperature dependent properties on MHD convection of water near its density maximum in a square cavity

    International Nuclear Information System (INIS)

    Sivasankaran, S.; Hoa, C.J.

    2008-01-01

Natural convection of water near its density maximum in the presence of a magnetic field in a cavity with temperature dependent properties is studied numerically. The viscosity and thermal conductivity of the water are varied with the reference temperature and calculated by a cubic polynomial. The finite volume method is used to solve the governing equations. The results are presented graphically in the form of streamlines, isotherms and velocity vectors and are discussed for various combinations of the reference temperature parameter, Rayleigh number, density inversion parameter and Hartmann number. It is observed that the flow and temperature fields are affected significantly by changing the reference temperature parameter in the cases of temperature dependent thermal conductivity and of both temperature dependent viscosity and thermal conductivity. There is no significant effect on the fluid flow and temperature distributions in the temperature dependent viscosity case when changing the values of the reference temperature parameter. The average heat transfer rate obtained when considering temperature-dependent viscosity is higher than when considering temperature-dependent thermal conductivity, or both temperature-dependent viscosity and thermal conductivity. The average Nusselt number decreases with an increase of the Hartmann number. It is observed that the density inversion of water has a strong effect on fluid flow and heat transfer due to the formation of a bi-cellular structure. The heat transfer rate behaves non-linearly with the density inversion parameter. The direction of the external magnetic field also affects the fluid flow and heat transfer. (authors)

  2. BMRC: A Bitmap-Based Maximum Range Counting Approach for Temporal Data in Sensor Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Bin Cao

    2017-09-01

Full Text Available Due to the rapid development of the Internet of Things (IoT), many feasible deployments of sensor monitoring networks have been made to capture events in the physical world, such as human diseases, weather disasters and traffic accidents, which generate large-scale temporal data. Generally, the particular time interval that contains the highest incidence of a severe event is of significance for society. For example, there may exist an interval that covers the maximum number of people who have the same unusual symptoms, and knowing this interval can help doctors to locate the reason behind this phenomenon. As far as we know, there is no approach available for solving this problem efficiently. In this paper, we propose the Bitmap-based Maximum Range Counting (BMRC) approach for temporal data generated in sensor monitoring networks. Since sensor nodes can update their temporal data at high frequency, we present a scalable strategy to support real-time insert and delete operations. The experimental results show that BMRC outperforms the baseline algorithm in terms of efficiency.
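In its simplest form, the counting problem targeted above is: given event timestamps, find the fixed-length time window that covers the most events. The plain two-pointer sweep below is only a baseline illustration of that problem; it does not reproduce the bitmap index (BMRC) proposed in the paper.

```python
# Baseline maximum range counting: densest fixed-length window over timestamps.
from bisect import bisect_right

def densest_window(timestamps, window_length):
    """Return (start_time, count) of the window [t, t + window_length] covering the most events."""
    times = sorted(timestamps)
    best_start, best_count = times[0], 0
    for i, start in enumerate(times):
        # Number of events with start <= t <= start + window_length.
        j = bisect_right(times, start + window_length)
        if j - i > best_count:
            best_start, best_count = start, j - i
    return best_start, best_count

# Example: symptom-report times in hours; find the busiest 24-hour interval.
reports = [1, 2, 3, 30, 31, 31.5, 32, 33, 70, 71]
print(densest_window(reports, 24))   # -> (30, 5)
```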

  3. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    Science.gov (United States)

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation

  4. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    Science.gov (United States)

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our

  5. Predicting the Maximum Dynamic Strength in Bench Press: The High Precision of the Bar Velocity Approach.

    Science.gov (United States)

    Loturco, Irineu; Kobal, Ronaldo; Moraes, José E; Kitamura, Katia; Cal Abad, César C; Pereira, Lucas A; Nakamura, Fábio Y

    2017-04-01

    Loturco, I, Kobal, R, Moraes, JE, Kitamura, K, Cal Abad, CC, Pereira, LA, and Nakamura, FY. Predicting the maximum dynamic strength in bench press: the high precision of the bar velocity approach. J Strength Cond Res 31(4): 1127-1131, 2017-The aim of this study was to determine the force-velocity relationship and test the possibility of determining the 1 repetition maximum (1RM) in "free weight" and Smith machine bench presses. Thirty-six male top-level athletes from 3 different sports were submitted to a standardized 1RM bench press assessment (free weight or Smith machine, in randomized order), following standard procedures encompassing lifts performed at 40-100% of 1RM. The mean propulsive velocity (MPV) was measured in all attempts. A linear regression was performed to establish the relationships between bar velocities and 1RM percentages. The actual and predicted 1RM for each exercise were compared using a paired t-test. Although the Smith machine 1RM was higher (10% difference) than the free weight 1RM, in both cases the actual and predicted values did not differ. In addition, the linear relationship between MPV and percentage of 1RM (coefficient of determination ≥95%) allow determination of training intensity based on the bar velocity. The linear relationships between the MPVs and the relative percentages of 1RM throughout the entire range of loads enable coaches to use the MPV to accurately monitor their athletes on a daily basis and accurately determine their actual 1RM without the need to perform standard maximum dynamic strength assessments.
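One common way to exploit the linear load-velocity relationship described above is to fit an individual profile from submaximal sets and extrapolate it to the velocity expected at 1RM. The 0.17 m/s minimal-velocity value and the example data below are assumptions for illustration, not the study's protocol or measurements.

```python
# Sketch of 1RM prediction from an individual load-velocity (MPV) profile.
import numpy as np

def predict_1rm(loads_kg, mpv_ms, velocity_at_1rm=0.17):
    """Extrapolate the linear load-velocity profile to the assumed 1RM velocity."""
    slope, intercept = np.polyfit(mpv_ms, loads_kg, 1)
    return slope * velocity_at_1rm + intercept

# Submaximal bench-press sets: load (kg) and measured MPV (m/s), illustrative only.
loads = np.array([40, 60, 80, 90])
velocities = np.array([1.05, 0.78, 0.50, 0.36])

print(f"predicted 1RM: {predict_1rm(loads, velocities):.1f} kg")
```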

  6. Dynamic Performance of Maximum Power Point Trackers in TEG Systems Under Rapidly Changing Temperature Conditions

    DEFF Research Database (Denmark)

    Man, E. A.; Sera, D.; Mathe, L.

    2016-01-01

    of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated...

  7. Influence of maximum water temperature on occurrence of Lahontan cutthroat trout within streams

    Science.gov (United States)

    J. Dunham; R. Schroeter; B. Rieman

    2003-01-01

    We measured water temperature at 87 sites in six streams in two different years (1998 and 1999) to test for association with the occurrence of Lahontan cutthroat trout Oncorhynchus clarki henshawi. Because laboratory studies suggest that Lahontan cutthroat trout begin to show signs of acute stress at warm (>22°C) temperatures, we focused on the...

  8. An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach

    Science.gov (United States)

    Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani

    2013-02-01

    The TL-moments approach has been used to determine the best-fitting distributions to represent the annual maximum streamflow series at 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. Boxplots are used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. In most cases, the results show that TL-moments with the single smallest value trimmed from the conceptual sample (TL-moments (1,0)), applied to the GPA distribution, were the most appropriate for describing the annual maximum streamflow series at the majority of the stations in Terengganu, Malaysia.
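
    For readers unfamiliar with TL-moments, the sample estimator underlying the TL-moments (1,0) variant can be written compactly. The sketch below follows the direct estimator of Elamir and Seheult (2003) and uses hypothetical streamflow values, so it is illustrative rather than a reproduction of the study's computations; fitting the GPA parameters from these moments would then use the published TL-moment relations (not shown).

        from math import comb

        def sample_tl_moment(data, r, t1=0, t2=0):
            """Unbiased sample TL-moment of order r with trimming (t1, t2).

            With t1 = t2 = 0 this reduces to the ordinary sample L-moment.
            """
            x = sorted(data)
            n = len(x)
            denom = comb(n, r + t1 + t2)
            acc = 0.0
            for i, xi in enumerate(x, start=1):
                w = sum((-1) ** k * comb(r - 1, k) * comb(i - 1, r + t1 - 1 - k)
                        * comb(n - i, t2 + k) for k in range(r))
                acc += w * xi
            return acc / (r * denom)

        # Hypothetical annual maximum streamflows (m3/s):
        ams = [312.0, 455.0, 198.0, 523.0, 287.0, 610.0, 344.0, 402.0, 276.0, 389.0]
        print(sample_tl_moment(ams, r=1, t1=1),   # TL(1,0) location
              sample_tl_moment(ams, r=2, t1=1))   # TL(1,0) scale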

  9. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used to identify the best-fitting distributions to represent the annual maximum streamflow series at seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments (4, 0)), applied to the LN3 distribution, were the most appropriate for the annual maximum streamflow series at most of the stations in Johor, Malaysia.

  10. Climate Prediction Center (CPC) U.S. Daily Maximum Air Temperature Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Observational reports of daily air temperature (1200 UTC to 1200 UTC) are made by members of the NWS Automated Surface Observing Systems (ASOS) network; NWS...

  11. Maximum Efficiency of Thermoelectric Heat Conversion in High-Temperature Power Devices

    Directory of Open Access Journals (Sweden)

    V. I. Khvesyuk

    2016-01-01

    Full Text Available Modern trends in the development of aircraft engineering go with the development of vehicles of the fifth generation. The features of aircraft of the fifth generation are a motivation to use new high-performance systems of onboard power supply. The operating temperature of the outer walls of engines is 800–1000 K. This corresponds to a radiation heat flux of 10 kW/m². The thermal energy, including radiation from the engine wall, may potentially be converted into electricity. The main objective of this paper is to analyze whether high-efficiency thermoelectric conversion of heat into electricity is possible in this setting. The paper considers issues such as working processes, choice of materials, and optimization of thermoelectric conversion. It presents the analysis results of operating conditions of the thermoelectric generator (TEG) used in advanced high-temperature power devices. A high-temperature heat source is a favorable factor for the thermoelectric conversion of heat. It is shown that for existing thermoelectric materials a theoretical conversion efficiency can reach the level of 15–20% at temperatures up to 1500 K and available values of the Ioffe parameter ZT = 2–3 (Z is the figure of merit, T is temperature). To ensure the required temperature regime and high-efficiency thermoelectric conversion simultaneously, it is necessary to have a certain match between TEG power, the temperatures of the hot and cold surfaces, and the heat transfer coefficient of the cooling system. The paper discusses a concept of a radiation absorber on the TEG hot surface. The analysis has demonstrated a number of potentialities for highly efficient conversion through using the TEG in high-temperature power devices. This work has been implemented under support of the Ministry of Education and Science of the Russian Federation; project No. 1145 (the programme “Organization of Research Engineering Activities”).
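
    The 15–20% figure can be checked with the standard expression for the maximum efficiency of a thermoelectric generator; the cold-side temperature used below is an assumption for illustration, not a value taken from the paper.

        import math

        def teg_max_efficiency(t_hot, t_cold, zt):
            """Ideal maximum TEG efficiency for a device-average figure of merit ZT."""
            carnot = (t_hot - t_cold) / t_hot
            root = math.sqrt(1.0 + zt)
            return carnot * (root - 1.0) / (root + t_cold / t_hot)

        # Hot side at 1500 K (upper temperature quoted above), assumed cold side 800 K.
        for zt in (2.0, 3.0):
            print(f"ZT = {zt:.0f}: eta_max = {teg_max_efficiency(1500.0, 800.0, zt):.1%}")

    With these assumed temperatures the formula returns roughly 15% for ZT = 2 and 18% for ZT = 3, consistent with the range quoted in the abstract.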

  12. Experimental program to determine maximum temperatures for dry storage of spent fuel

    International Nuclear Information System (INIS)

    Knox, C.A.; Gilbert, E.R.; White, G.D.

    1985-02-01

    Although air is used as a cover gas in some dry storage facilities, other facilities use inert cover gases which must be monitored to assure inertness of the atmosphere. Thus qualifying air as a cover gas is attractive for the dry storage of spent fuels. At sufficiently high temperatures, air can react with spent fuel (UO2) at the site of cladding breaches that formed during reactor irradiation or during dry storage. The reaction rate is temperature dependent; hence the rates can be maintained at acceptable levels if temperatures are low. Tests with spent fuel are being conducted at Pacific Northwest Laboratory (PNL) to determine the allowable temperatures for storage of spent fuel in air. Tests performed with nonirradiated UO2 pellets indicated that moisture, surface condition, gamma radiation, gadolinia content of the fuel pellet, and temperature are important variables. Tests were then initiated on spent fuel to develop design data under simulated dry storage conditions. Tests have been conducted at 200 and 230 °C on spent fuel in air and 275 °C in moist nitrogen. The results for nonirradiated UO2 and published data for irradiated fuel indicate that above 230 °C, oxidation rates are unacceptably high for extended storage in air. The tests with spent fuel will be continued for approximately three years to enable reliable extrapolations to be made for extended storage in air and inert gases with oxidizing constituents. 6 refs., 6 figs., 3 tabs

  13. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    Science.gov (United States)

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences that are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission in both the bivariate model and the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence that energy consumption is a stimulus to carbon emissions.

  14. THE MAXIMUM EFFECT OF DEEP LAKES ON TEMPERATURE PROFILES – DETERMINATION OF THE GEOTHERMAL GRADIENT

    OpenAIRE

    Eppelbaum L. V.; Kutasov I. M.; Balobaev V. T.

    2009-01-01

    Understanding climate change processes on the basis of geothermal observations in boreholes is an important and at the same time highly intricate problem. Many non-climatic effects can cause changes in ground surface temperatures. In this study we investigate the effects of deep lakes on the borehole temperature profiles observed within or in the vicinity of the lakes. We propose a method based on utilization of the Laplace equation with nonuniform boundary conditions. The proposed method make...

  15. Temperature of the Icelandic crust: Inferred from electrical conductivity, temperature surface gradient, and maximum depth of earthquakes

    Science.gov (United States)

    Björnsson, Axel

    2008-02-01

    Two different models of the structure of the Icelandic crust have been presented. One is the thin-crust model with a 10-15 km thick crust beneath the axial rift zones, with an intermediate layer of partially molten basalt at the base of the crust and on the top of an up-domed asthenosphere. The thick-crust model assumes a 40 km thick and relatively cold crust beneath central Iceland. The most important and crucial parameter to distinguish between these different models is the temperature distribution with depth. Three methods are used to estimate the temperature distribution with depth. First, the surface temperature gradient measured in shallow wells drilled outside geothermal areas. Second, the thickness of the seismogenic zone which is associated with a 750 °C isothermal surface. Third, the depth to a layer with high electrical conductivity which is associated with partially molten basalt with temperature around 1100 °C at the base of the crust. Combination of these data shows that the temperature gradient can be assumed to be nearly linear from the surface down to the base of the crust. These results are strongly in favour of the thin-crust model. The scattered deep seismic reflectors interpreted as Moho in the thick-crust model could be caused by phase transitions or reflections from melt pockets in the mantle.
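
    The argument can be illustrated with simple arithmetic: if the temperature profile is nearly linear, the depth of an isotherm is just its temperature excess divided by the gradient. The gradient values and surface temperature below are illustrative assumptions consistent with the thin-crust reading of the abstract, not data from the paper.

        def depth_to_isotherm(t_iso, t_surface, gradient):
            """Depth (km) of an isotherm for a linear profile; gradient in degC/km."""
            return (t_iso - t_surface) / gradient

        # If ~1100 degC partially molten basalt sits at the base of a 10-15 km crust,
        # the implied near-linear gradient is roughly 75-110 degC/km; the 750 degC
        # isotherm (base of the seismogenic zone) then falls at:
        for grad in (75.0, 110.0):
            print(f"{grad:.0f} degC/km -> {depth_to_isotherm(750.0, 5.0, grad):.1f} km")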

  16. County-Level Climate Uncertainty for Risk Assessments: Volume 4 Appendix C - Historical Maximum Near-Surface Air Temperature.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  17. Experimental determination of a critical temperature for maximum anaerobic digester biogas production

    CSIR Research Space (South Africa)

    Sichilalu, S

    2017-08-01

    Full Text Available fission of methanogenic bacteria. The temperature was varied over several days and the biogas production was recorded every 24 hours (1 day). Based on the experimental setup, the results show a higher biogas production proportional to the rise...

  18. Ion permeability of the cytoplasmic membrane limits the maximum growth temperature of bacteria and archaea

    NARCIS (Netherlands)

    van de Vossenberg, J.L C M; Ubbink-Kok, T.; Elferink, M.G.L.; Driessen, A.J.M.; Konings, W.N

    1995-01-01

    Protons and sodium ions are the most commonly used coupling ions in energy transduction in bacteria and archaea. At their growth temperature, the permeability of the cytoplasmic membrane of thermophilic bacteria to protons is high compared with that of sodium ions. In some thermophiles, sodium is

  19. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    Science.gov (United States)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidences. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process, and its composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with lagged effects of up to 15 weeks on the variation in dengue cases, under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  20. An entropy approach for evaluating the maximum information content achievable by an urban rainfall network

    Directory of Open Access Journals (Sweden)

    E. Ridolfi

    2011-07-01

    Full Text Available Hydrological models are the basis of operational flood-forecasting systems. The accuracy of these models is strongly dependent on the quality and quantity of the input information represented by rainfall height. Finer space-time rainfall resolution results in more accurate hazard forecasting. In this framework, an optimum raingauge network is essential in predicting flood events.

    This paper develops an entropy-based approach to evaluate the maximum information content achievable by a rainfall network for different sampling time intervals. The procedure is based on the determination of the coefficients of transferred and nontransferred information and on the relative isoinformation contours.

    The nontransferred information value achieved by the whole network is strictly dependent on the sampling time intervals considered. An empirical curve is defined, to assess the objective of the research: the nontransferred information value is plotted versus the associated sampling time on a semi-log scale. The curve has a linear trend.

    In this paper, the methodology is applied to the high-density raingauge network of the urban area of Rome.
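
    One way to read the coefficients of transferred and nontransferred information is as the mutual information between gauges and the residual (conditional) entropy, estimated from histograms of the rainfall series aggregated at each sampling interval. The sketch below uses synthetic series and is an interpretation of the approach, not the authors' exact formulation; repeating it over several aggregation intervals would produce the kind of semi-log curve described above.

        import numpy as np

        def information_split(x, y, bins=8):
            """Histogram estimates (nats) of H(X), transferred I(X;Y), nontransferred H(X|Y)."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
            nz = pxy > 0
            mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
            return hx, mi, hx - mi

        # Synthetic correlated "gauges" at one sampling interval.
        rng = np.random.default_rng(1)
        g1 = rng.gamma(2.0, 2.0, 2000)
        g2 = 0.7 * g1 + 0.3 * rng.gamma(2.0, 2.0, 2000)
        print(information_split(g1, g2))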

  1. A Maximum Entropy Approach to Assess Debonding in Honeycomb aluminum Plates

    Directory of Open Access Journals (Sweden)

    Viviana Meruane

    2014-05-01

    Full Text Available Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and the data are processed in a period of time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data from an aluminum honeycomb panel under different damage scenarios.

  2. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    Science.gov (United States)

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
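
    A minimal sketch of the canonical form described above: the distribution over a parameter grid is proportional to exp(-βE), with β fixed so that the expected error matches the specified constraint, and marginals obtained by summing over the other parameters. The grid, error surface, and target value below are hypothetical stand-ins, not the paper's geoacoustic models.

        import numpy as np
        from scipy.optimize import brentq

        ratios = np.linspace(0.98, 1.10, 61)          # sound-speed ratio grid
        levels = np.linspace(170.0, 190.0, 81)        # source-level grid (dB)
        R, L = np.meshgrid(ratios, levels, indexing="ij")

        # Hypothetical misfit surface E(theta); in practice this comes from
        # comparing model solutions with the measured acoustic data.
        E = (R - 1.04) ** 2 / 0.02 ** 2 + (L - 181.0) ** 2 / 4.0 ** 2

        def mean_error(beta):
            w = np.exp(-beta * (E - E.min()))         # shifted for numerical stability
            return float((E * w).sum() / w.sum())

        target = 2.0                                   # assumed expected error value
        beta = brentq(lambda b: mean_error(b) - target, 1e-6, 1e3)

        p = np.exp(-beta * (E - E.min()))
        p /= p.sum()
        marg_ratio = p.sum(axis=1)                     # marginal over source level
        print(beta, ratios[marg_ratio.argmax()])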

  3. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains which resemble realistic near-shore features. We investigate the accuracy of the analytical runup formulae under variations of the fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model using a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  4. Seasonal maximum temperature prediction skill over Southern Africa: 1- vs 2-tiered forecasting systems

    CSIR Research Space (South Africa)

    Lazenby, MJ

    2011-09-01

    Full Text Available Seasonal maximum temperature prediction skill over southern Africa: 1- vs. 2-tiered forecasting systems. Melissa J. Lazenby (University of Pretoria, Private Bag X20, Pretoria, 0028, South Africa) and Willem A. Landman (Council for Scientific and Industrial Research).

  5. Extended Kalman Filtering to estimate temperature and irradiation for maximum power point tracking of a photovoltaic module

    International Nuclear Information System (INIS)

    Docimo, D.J.; Ghanaatpishe, M.; Mamun, A.

    2017-01-01

    This paper develops an algorithm for estimating photovoltaic (PV) module temperature and effective irradiation level. The power output of a PV system depends directly on both of these states. Estimating the temperature and irradiation allows for improved state-based control methods while eliminating the need of additional sensors. Thermal models and irradiation estimators have been developed in the literature, but none incorporate feedback for estimation. This paper outlines an Extended Kalman Filter for temperature and irradiation estimation. These estimates are, in turn, used within a novel state-based controller that tracks the maximum power point of the PV system. Simulation results indicate this state-based controller provides up to an 8.5% increase in energy produced per day as compared to an impedance matching controller. A sensitivity analysis is provided to examine the impact state estimate errors have on the ability to find the optimal operating point of the PV system. - Highlights: • Developed a temperature and irradiation estimator for photovoltaic systems. • Designed an Extended Kalman Filter to handle model and measurement uncertainty. • Developed a state-based controller for maximum power point tracking (MPPT). • Validated combined estimator/controller algorithm for different weather conditions. • Algorithm increases energy captured up to 8.5% over traditional MPPT algorithms.
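
    The estimator is built on the standard extended Kalman filter recursion; a generic predict/update step is sketched below. The paper's specific thermal and irradiation models are not reproduced, so the state, measurement, and Jacobian functions here are placeholders supplied by the user.

        import numpy as np

        def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
            """One EKF cycle: x, P = state estimate and covariance; u = input; z = measurement."""
            # Predict through the nonlinear state model and its Jacobian.
            x_pred = f(x, u)
            F = F_jac(x, u)
            P_pred = F @ P @ F.T + Q
            # Update with the measurement model.
            H = H_jac(x_pred)
            y = z - h(x_pred)                     # innovation
            S = H @ P_pred @ H.T + R              # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

    In the setting above, the state would hold module temperature and effective irradiation, with measured PV voltage and current entering through the measurement model; that mapping is an assumption of this sketch rather than the paper's exact formulation.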

  6. Determination of maximum water temperature within the spent fuel pool of Angra Nuclear Power Plant - Unit 3

    Energy Technology Data Exchange (ETDEWEB)

    Werner, F.L., E-mail: fernanda.werner@poli.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Departamento de Engenharia Nuclear; Alves, A.S.M., E-mail: asergi@eletronuclear.gov.br [Eletrobras Termonuclear (Eletronuclear), Rio de Janeiro, RJ (Brazil); Frutuoso e Melo, P.F., E-mail: frutuoso@nuclear.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    In this paper, a mathematical model for the determination of the maximum water temperature within the spent fuel pool of Angra Nuclear Power Plant – Unit 3 was developed. The model was obtained from boundary layer analysis and the application of the Navier-Stokes equation to a vertical flat plate immersed in a water flow under a free convection regime. Both types of pressure loss coefficients through the flow channel were considered in the modeling: the form coefficient for the fuel assemblies (FAs) and the loss due to rod friction. The resulting equations enabled the determination of a mixed water temperature below the storage racks (High Density Storage Racks) as well as the estimation of a temperature gradient through the racks. The model was applied to the authorized operation of the plant (power operation, plant outage and upset condition) and faulted conditions (loss of coolant accidents and external events). The results obtained are in agreement with Brazilian and international standards. (author)

  7. Determination of maximum water temperature within the spent fuel pool of Angra Nuclear Power Plant - Unit 3

    International Nuclear Information System (INIS)

    Werner, F.L.; Frutuoso e Melo, P.F.

    2017-01-01

    In this paper, a mathematical model for the determination of the maximum water temperature within the spent fuel pool of Angra Nuclear Power Plant – Unit 3 was developed. The model was obtained from boundary layer analysis and the application of the Navier-Stokes equation to a vertical flat plate immersed in a water flow under a free convection regime. Both types of pressure loss coefficients through the flow channel were considered in the modeling: the form coefficient for the fuel assemblies (FAs) and the loss due to rod friction. The resulting equations enabled the determination of a mixed water temperature below the storage racks (High Density Storage Racks) as well as the estimation of a temperature gradient through the racks. The model was applied to the authorized operation of the plant (power operation, plant outage and upset condition) and faulted conditions (loss of coolant accidents and external events). The results obtained are in agreement with Brazilian and international standards. (author)

  8. Estimating daily minimum, maximum, and mean near surface air temperature using hybrid satellite models across Israel.

    Science.gov (United States)

    Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai

    2017-11-01

    Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (day time), 24h mean, and Ta Min (night time) on a fine 1 km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R² results for min 0.966, mean 0.986, and max 0.967; CV Terra R² results for min 0.965, mean 0.987, and max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data even across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Comparative Study of Regional Estimation Methods for Daily Maximum Temperature (A Case Study of the Isfahan Province

    Directory of Open Access Journals (Sweden)

    Ghamar Fadavi

    2016-02-01

    Full Text Available Introduction: Because the statistical time series are short and the meteorological stations are not well distributed in mountainous areas, determining climatic criteria is complex. Therefore, in recent years interpolation methods for establishing continuous climatic data have received attention. Continuous daily maximum temperature data are a key factor for climate-crop modeling, which is fundamental for water resources management, drought assessment, and optimal use of the climatic potential of different regions. The main objective of this study is to evaluate different interpolation methods for estimation of regional maximum temperature in the Isfahan province. Materials and Methods: Isfahan province covers about 937,105 square kilometers, between 30°43' and 34°27' north latitude and 49°36' and 55°31' east longitude. It is located in the center of Iran, and its western part extends to the eastern foothills of the Zagros mountain range. The elevations of the meteorological stations in the study area range from 845 to 2490 m. This study used daily maximum temperature data for the years 1992 and 2007 from the synoptic and climatology stations of the I.R. of Iran Meteorological Organization (IRIMO). In order to interpolate the temperature data, two years with different numbers of meteorological stations, 1992 and 2007, were selected: data from thirty meteorological stations (17 synoptic and 13 climatology stations) were used for 1992 and from fifty-four meteorological stations (31 synoptic and 23 climatology stations) for 2007, drawn from Isfahan province and the neighboring provinces. In order to regionalize the point data of daily maximum temperature, the interpolation methods inverse distance weighted (IDW), kriging, co-kriging, kriging-regression, multiple regression and spline were used. Therefore, for this allocated
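
    Of the interpolation methods listed, inverse distance weighting is the simplest to state; a minimal sketch is given below with hypothetical station coordinates and temperatures (kriging and the regression-based variants require a fitted variogram or regression model and are not shown).

        import numpy as np

        def idw(x0, y0, xs, ys, values, power=2.0):
            """Inverse-distance-weighted estimate at (x0, y0) from station data."""
            d = np.hypot(np.asarray(xs) - x0, np.asarray(ys) - y0)
            if np.any(d == 0):                    # point coincides with a station
                return float(np.asarray(values)[d == 0][0])
            w = 1.0 / d ** power
            return float(np.sum(w * np.asarray(values)) / np.sum(w))

        # Hypothetical stations (projected km coordinates) and daily maximum temperatures (degC).
        xs, ys = [10.0, 42.0, 75.0, 20.0], [15.0, 60.0, 30.0, 80.0]
        tmax = [34.1, 29.8, 36.4, 28.5]
        print(f"{idw(50.0, 50.0, xs, ys, tmax):.1f} degC")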

  10. Computed estimates of maximum temperature elevations in fetal tissues during transabdominal pulsed Doppler examinations.

    Science.gov (United States)

    Bly, S H; Vlahovich, S; Mabee, P R; Hussey, R G

    1992-01-01

    Measured characteristics of ultrasonic fields were obtained in submissions from manufacturers of diagnostic ultrasound equipment for devices operating in pulsed Doppler mode. Simple formulae were used with these data to generate upper limits to fetal temperature elevations, ΔTlim, during a transabdominal pulsed Doppler examination. A total of 236 items were analyzed, each item being a console/transducer/operating-mode/intended-use combination, for which the spatial-peak temporal-average intensity, ISPTA, was greater than 500 mW cm⁻². The largest calculated ΔTlim values were approximately 1.5, 7.1 and 8.7 degrees C for first-, second- and third-trimester examinations, respectively. The vast majority of items yielded ΔTlim values which were less than 1 degree C in the first trimester. For second- and third-trimester examinations, where heating of fetal bone determines ΔTlim, most ΔTlim values were less than 4 degrees C. The clinical significance of the results is discussed.

  11. An ecological function and services approach to total maximum daily load (TMDL) prioritization.

    Science.gov (United States)

    Hall, Robert K; Guiliano, David; Swanson, Sherman; Philbin, Michael J; Lin, John; Aron, Joan L; Schafer, Robin J; Heggem, Daniel T

    2014-04-01

    Prioritizing total maximum daily load (TMDL) development starts by considering the scope and severity of water pollution and risks to public health and aquatic life. Methodology using quantitative assessments of in-stream water quality is appropriate and effective for point source (PS) dominated discharge, but less so in watersheds with mostly nonpoint source (NPS) related impairments. For NPSs, prioritization in TMDL development and implementation of associated best management practices should focus on restoration of ecosystem physical functions, including how restoration effectiveness depends on design, maintenance and placement within the watershed. To refine the approach to TMDL development, regulators and stakeholders must first ask if the watershed, or ecosystem, is at risk of losing riparian or other ecologically based physical attributes and processes. If so, the next step is an assessment of the spatial arrangement of functionality with a focus on the at-risk areas that could be lost, or could, with some help, regain functions. Evaluating stream and wetland riparian function has advantages over the traditional means of water quality and biological assessments for NPS TMDL development. Understanding how an ecosystem functions enables stakeholders and regulators to determine the severity of problem(s), identify source(s) of impairment, and predict and avoid a decline in water quality. The Upper Reese River, Nevada, provides an example of water quality impairment caused by NPS pollution. In this river basin, stream and wetland riparian proper functioning condition (PFC) protocol, water quality data, and remote sensing imagery were used to identify sediment sources, transport, distribution, and its impact on water quality and aquatic resources. This study found that assessments of ecological function could be used to generate leading (early) indicators of water quality degradation for targeting pollution control measures, while traditional in-stream water

  12. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, and maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show that there is a negative relationship between rubber price and exchange rate for all selected countries.
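
    A two-component mixture fitted by maximum likelihood can be sketched with an off-the-shelf EM implementation. The data below are synthetic stand-ins for the paired price and exchange-rate series, and the bivariate Gaussian mixture form is an illustrative choice rather than the exact specification used in the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic stand-in for paired (rubber price, exchange rate) returns.
        regime1 = rng.multivariate_normal([0.002, -0.001], [[1e-4, -4e-5], [-4e-5, 9e-5]], 300)
        regime2 = rng.multivariate_normal([-0.004, 0.003], [[4e-4, -1e-4], [-1e-4, 2e-4]], 120)
        data = np.vstack([regime1, regime2])

        # Two-component mixture fitted by maximum likelihood (EM algorithm).
        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(data)
        for k in range(2):
            c = gmm.covariances_[k]
            print(f"weight {gmm.weights_[k]:.2f}, correlation {c[0, 1] / np.sqrt(c[0, 0] * c[1, 1]):.2f}")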

  13. Modelling the occurrence of heat waves in maximum and minimum temperatures over Spain and projections for the period 2031-60

    Science.gov (United States)

    Abaurrea, J.; Asín, J.; Cebrián, A. C.

    2018-02-01

    The occurrence of extreme heat events in maximum and minimum daily temperatures is modelled using a non-homogeneous common Poisson shock process. It is applied to five Spanish locations, representative of the most common climates over the Iberian Peninsula. The model is based on an excess over threshold approach and distinguishes three types of extreme events: only in maximum temperature, only in minimum temperature and in both of them (simultaneous events). It takes into account the dependence between the occurrence of extreme events in both temperatures, and its parameters are expressed as functions of time and temperature-related covariates. The fitted models allow us to characterize the occurrence of extreme heat events and to compare their evolution in the different climates during the observed period. This model is also a useful tool for obtaining local projections of the occurrence rate of extreme heat events under climate change conditions, using the future downscaled temperature trajectories generated by Earth System Models. The projections for 2031-60 under scenarios RCP4.5, RCP6.0 and RCP8.5 are obtained and analysed using the trajectories from four earth system models which have successfully passed a preliminary control analysis. Different graphical tools and summary measures of the projected daily intensities are used to quantify climate change on a local scale. A high increase in the occurrence of extreme heat events, mainly in July and August, is projected in all the locations, all types of event and in the three scenarios, although in 2051-60 the increase is higher under RCP8.5. However, relevant differences are found between the evolution in the different climates and the types of event, with an especially high increase in the simultaneous ones.
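
    A heavily simplified analogue of such an occurrence model can be fitted by discretizing time to days and regressing daily event counts on seasonal covariates with a Poisson GLM. The synthetic data, covariates, and software choice below are assumptions for illustration and omit the common-shock structure and threshold-exceedance machinery of the paper.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        days = np.arange(365 * 30)
        doy = days % 365
        # Synthetic daily counts of extreme-heat events with a summer peak.
        events = rng.poisson(np.exp(-6.0 + 3.0 * np.exp(-((doy - 200) / 30.0) ** 2)))

        # Covariates: intercept, annual harmonics, and a linear trend.
        X = np.column_stack([
            np.ones(len(days)),
            np.sin(2 * np.pi * doy / 365.0),
            np.cos(2 * np.pi * doy / 365.0),
            days / len(days),
        ])
        fit = sm.GLM(events, X, family=sm.families.Poisson()).fit()
        print(fit.params)    # coefficients of the log occurrence intensity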

  14. The Impacts of Maximum Temperature and Climate Change to Current and Future Pollen Distribution in Skopje, Republic of Macedonia

    Directory of Open Access Journals (Sweden)

    Vladimir Kendrovski

    2012-02-01

    Full Text Available BACKGROUND. The goal of the present paper was to assess the impact of the current and future burden of ambient temperature on pollen distributions in Skopje. METHODS. In the study we evaluated the correlation between the concentration of pollen grains in the atmosphere of Skopje and maximum temperature during the vegetation periods of 1996, 2003, 2007 and 2009, as the current burden in the context of climate change. For our analysis we selected 9 representatives of each phytoallergen group (trees, grasses, weeds). The concentration of pollen grains was monitored by a Lanzoni volumetric pollen trap. The correlation between the concentration of pollen grains in the atmosphere and the selected meteorological variable from weekly monitoring was studied with the help of linear regression and correlation coefficients. RESULTS. The prevalence of sensitization to standard pollen allergens in Skopje during the same period increased from 16.9% in 1996 to 19.8% in 2009. We detected differences in the onset of flowering and in the peak and end of the pollen seasons. Pollen distributions and risk increase in 3 main periods, early spring, spring and summer, which are the main cause of allergies during these seasons. The largest increase of air temperature due to climate change in Skopje is expected in the summer season. CONCLUSION. The impacts of climate change through increasing temperature in the next decades will very likely include impacts on pollen production and changes in the current pollen season. [TAF Prev Med Bull 2012; 11(1): 35-40]

  15. Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force

    Directory of Open Access Journals (Sweden)

    Davidek Pavel

    2018-03-01

    Full Text Available The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control groups. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, the maximum PF stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF and the DASH questionnaire were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly compared to the control group on maximum PF (p = .004; Cohen’s d = .85), but not on the DASH questionnaire score (p = .731) during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.

  16. Reconstructing temperatures in the Maritime Alps, Italy, since the Last Glacial Maximum using cosmogenic noble gas paleothermometry

    Science.gov (United States)

    Tremblay, Marissa; Spagnolo, Matteo; Ribolini, Adriano; Shuster, David

    2016-04-01

    The Gesso Valley, located in the southwestern-most, Maritime portion of the European Alps, contains an exceptionally well-preserved record of glacial advances during the late Pleistocene and Holocene. Detailed geomorphic mapping, geochronology of glacial deposits, and glacier reconstructions indicate that glaciers in this Mediterranean region responded to millennial scale climate variability differently than glaciers in the interior of the European Alps. This suggests that the Mediterranean Sea somehow modulated the climate of this region. However, since glaciers respond to changes in temperature and precipitation, both variables were potentially influenced by proximity to the Sea. To disentangle the competing effects of temperature and precipitation changes on glacier size, we are constraining past temperature variations in the Gesso Valley since the Last Glacial Maximum (LGM) using cosmogenic noble gas paleothermometry. The cosmogenic noble gases 3He and 21Ne experience diffusive loss from common minerals like quartz and feldspars at Earth surface temperatures. Cosmogenic noble gas paleothermometry utilizes this open-system behavior to quantitatively constrain thermal histories of rocks during exposure to cosmic ray particles at the Earth's surface. We will present measurements of cosmogenic 3He in quartz sampled from moraines in the Gesso Valley with LGM, Bühl stadial, and Younger Dryas ages. With these 3He measurements and experimental data quantifying the diffusion kinetics of 3He in quartz, we will provide a preliminary temperature reconstruction for the Gesso Valley since the LGM. Future work on samples from younger moraines in the valley system will be used to fill in details of the more recent temperature history.

  17. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Directory of Open Access Journals (Sweden)

    Junguo Hu

    Full Text Available Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  18. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Science.gov (United States)

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  19. Derivation of some new distributions in statistical mechanics using maximum entropy approach

    Directory of Open Access Journals (Sweden)

    Ray Amritansu

    2014-01-01

    Full Text Available The maximum entropy principle has earlier been used to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.) and Intermediate Statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of the knowledge of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from the other in the way in which the constraints are specified. In the present paper, we have derived some new distributions similar to the B.E. and F.D. distributions of statistical mechanics by using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.
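
    For context, the classical result the paper builds on can be recovered in a few lines: maximizing the entropy of a single mode of energy ε subject to normalization and to constraints on the mean energy and mean particle number gives grand-canonical weights, from which the B.E. and F.D. occupation numbers follow. The sketch below is the standard textbook derivation, not the paper's new distributions.

        % Maximizing S = -\sum_n p_n \ln p_n subject to \sum_n p_n = 1 and to fixed
        % mean energy and particle number yields p_n \propto e^{-\beta(\epsilon-\mu)n}.
        \bar{n}_{\mathrm{F.D.}}
            = \frac{\sum_{n=0}^{1} n\, e^{-\beta(\epsilon-\mu)n}}
                   {\sum_{n=0}^{1} e^{-\beta(\epsilon-\mu)n}}
            = \frac{1}{e^{\beta(\epsilon-\mu)} + 1},
        \qquad
        \bar{n}_{\mathrm{B.E.}}
            = \frac{\sum_{n=0}^{\infty} n\, e^{-\beta(\epsilon-\mu)n}}
                   {\sum_{n=0}^{\infty} e^{-\beta(\epsilon-\mu)n}}
            = \frac{1}{e^{\beta(\epsilon-\mu)} - 1}.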

  20. Multi-approach analysis of maximum riverbed scour depth above subway tunnel

    OpenAIRE

    Jun Chen; Hong-wu Tang; Zui-sen Li; Wen-hong Dai

    2010-01-01

    When subway tunnels are routed underneath rivers, riverbed scour may expose the structure, with potentially severe consequences. Thus, it is important to identify the maximum scour depth to ensure that the designed buried depth is adequate. There are a range of methods that may be applied to this problem, including the fluvial process analysis method, geological structure analysis method, scour formula method, scour model experiment method, and numerical simulation method. However, the applic...

  1. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  2. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  3. A new maximum power point method based on a sliding mode approach for solar energy harvesting

    International Nuclear Information System (INIS)

    Farhat, Maissa; Barambones, Oscar; Sbita, Lassaad

    2017-01-01

    Highlights: • Create a simple, easy-to-implement and accurate VMPP estimator. • Stability analysis of the proposed system based on Lyapunov’s theory. • A comparative study versus P&O highlights the good performance of the SMC. • Construct a new PS-SMC algorithm to include the partial shadow case. • Experimental validation of the SMC MPP tracker. - Abstract: This paper presents a photovoltaic (PV) system with a maximum power point tracking (MPPT) facility. The goal of this work is to maximize power extraction from the photovoltaic generator (PVG). This goal is achieved using a sliding mode controller (SMC) that drives a boost converter connected between the PVG and the load. The system is modeled and tested under the MATLAB/SIMULINK environment. In simulation, the sliding mode controller offers fast and accurate convergence to the maximum power operating point, outperforming the well-known perturbation and observation (P&O) method. The sliding mode controller performance is evaluated during steady state and against load variations and panel partial shadow (PS) disturbances. To confirm the above conclusion, a practical implementation of the sliding-mode-controller-based maximum power point tracker on a hardware setup is performed on a dSPACE real-time digital control platform. The data acquisition and the control system are built around the dSPACE 1104 controller board and its RTI environment. The experimental results demonstrate the validity of the proposed control scheme on a stand-alone real photovoltaic system.
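
    For reference, the perturbation and observation (P&O) baseline that the sliding mode controller is compared against can be written as a single update rule. The sketch below is that conventional baseline, not the authors' sliding-mode law, and the perturbation step size is a tuning assumption.

        def perturb_and_observe(v, p, v_prev, p_prev, v_ref, step=0.5):
            """One P&O update of the PV voltage reference (step in volts)."""
            dp, dv = p - p_prev, v - v_prev
            if dp == 0:
                return v_ref                 # power unchanged: hold the reference
            if (dp > 0) == (dv > 0):
                return v_ref + step          # still climbing toward the MPP
            return v_ref - step              # overshot the MPP: reverse direction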

  4. Maximum power tracking in WECS (Wind energy conversion systems) via numerical and stochastic approaches

    International Nuclear Information System (INIS)

    Elnaggar, M.; Abdel Fattah, H.A.; Elshafei, A.L.

    2014-01-01

    This paper presents a complete design of a two-level control system to capture maximum power in wind energy conversion systems. The upper level of the proposed control system adopts a modified line search optimization algorithm to determine a setpoint for the wind turbine speed. The calculated speed setpoint corresponds to the maximum power point at given operating conditions. The speed setpoint is fed to a generalized predictive controller at the lower level of the control system. A different formulation, which treats the aerodynamic torque as a disturbance, is postulated to derive the control law. The objective is to accurately track the setpoint while keeping the control action free from unacceptably fast or frequent variations. Simulation results based on a realistic model of a 1.5 MW wind turbine confirm the superiority of the proposed control scheme to the conventional ones. - Highlights: • The structure of an MPPT (maximum power point tracking) scheme is presented. • The scheme is divided into the optimization algorithm and the tracking controller. • The optimization algorithm is based on an online line search numerical algorithm. • The tracking controller treats the aerodynamic torque as a loop disturbance. • The control technique is simulated with stochastic wind speed by Simulink and FAST

  5. Multi-approach analysis of maximum riverbed scour depth above subway tunnel

    Directory of Open Access Journals (Sweden)

    Jun Chen

    2010-12-01

    Full Text Available When subway tunnels are routed underneath rivers, riverbed scour may expose the structure, with potentially severe consequences. Thus, it is important to identify the maximum scour depth to ensure that the designed buried depth is adequate. There are a range of methods that may be applied to this problem, including the fluvial process analysis method, geological structure analysis method, scour formula method, scour model experiment method, and numerical simulation method. However, the application ranges and forecasting precision of these methods vary considerably. In order to quantitatively analyze the characteristics of the different methods, a subway tunnel passing underneath a river was selected, and the aforementioned five methods were used to forecast the maximum scour depth. The fluvial process analysis method was used to characterize the river regime and evolution trend, which were the baseline for examination of the scour depth of the riverbed. The results obtained from the scour model experiment and the numerical simulation methods are reliable; these two methods are suitable for application to tunnel projects passing underneath rivers. The scour formula method was less accurate than the scour model experiment method; it is suitable for application to lower risk projects such as pipelines. The results of the geological structure analysis had low precision; the method is suitable for use as a secondary method to assist other research methods. To forecast the maximum scour depth of the riverbed above the subway tunnel, a combination of methods is suggested, and the appropriate analysis method should be chosen with respect to the local conditions.

  6. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Full Text Available Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.

  7. Optimisation of sea surface current retrieval using a maximum cross correlation technique on modelled sea surface temperature

    Science.gov (United States)

    Heuzé, Céline; Eriksson, Leif; Carvajal, Gisela

    2017-04-01

    Using sea surface temperature from satellite images to retrieve sea surface currents is not a new idea, but so far its operational near-real-time implementation has not been possible. Validation studies are too region-specific or uncertain, due to the errors induced by the images themselves. Moreover, the sensitivity of the most common retrieval method, the maximum cross correlation, to the three parameters that have to be set is unknown. Using model outputs instead of satellite images, biases induced by this method are assessed here for four different seas of Western Europe, and the best of nine settings and eight temporal resolutions are determined. For all regions, tracking a small 5 km pattern from the first image over a large 30 km region around its original location on a second image, separated from the first image by 6 to 9 hours, returned the most accurate results. Moreover, for all regions, the problem is not inaccurate results but missing results, where the velocity is too low to be picked up by the retrieval. The results are consistent both with limitations caused by ocean surface current dynamics and with the available satellite technology, indicating that automated sea surface current retrieval from sea surface temperature images is feasible now, for search and rescue operations, pollution confinement or even for more energy-efficient and comfortable ship navigation.
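    The following sketch shows the maximum cross correlation step in its simplest form, assuming two gridded SST fields and illustrative template and search-window sizes; it is not the optimised operational configuration discussed in the record.

```python
import numpy as np

def mcc_displacement(sst1, sst2, center, half_tpl=5, half_search=15):
    """Track a small SST pattern from image 1 inside a larger search window of image 2
    and return the pixel shift with the highest normalized cross correlation."""
    ci, cj = center
    tpl = sst1[ci-half_tpl:ci+half_tpl+1, cj-half_tpl:cj+half_tpl+1]
    tpl = (tpl - tpl.mean()) / tpl.std()
    best, best_shift = -np.inf, (0, 0)
    for di in range(-half_search, half_search + 1):
        for dj in range(-half_search, half_search + 1):
            win = sst2[ci+di-half_tpl:ci+di+half_tpl+1, cj+dj-half_tpl:cj+dj+half_tpl+1]
            win = (win - win.mean()) / win.std()
            corr = np.mean(tpl * win)          # normalized cross correlation
            if corr > best:
                best, best_shift = corr, (di, dj)
    return best_shift, best

# displacement (pixels) / time separation -> surface current; e.g. 1 km pixels, 6 h apart
# shift, corr = mcc_displacement(sst_t0, sst_t1, center=(100, 100))
# u = shift[1] * 1000 / (6 * 3600)   # m/s eastward, assuming a 1 km grid spacing
```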

  8. Apparent molal volumes of HMT and TATD in aqueous solutions around the temperature of maximum density of water

    International Nuclear Information System (INIS)

    Clavijo Penagos, J.A.; Blanco, L.H.

    2012-01-01

    Highlights: ► Vφ for HMT and TATD in aqueous solutions around the temperature of maximum density of water are reported. ► Vφ is linear in m from m = 0.025 for all the aqueous solutions investigated. ► The variation of the infinite-dilution partial molar volume with T obeys a second-degree polynomial trend. ► The solutes are classified as structure breakers according to Hepler's criterion. - Abstract: Apparent molal volumes Vφ have been determined from density measurements for several aqueous solutions of 1,3,5,7-tetraazatricyclo[3.3.1.1(3,7)]decane (HMT) and 1,3,6,8-tetraazatricyclo[4.4.1.1(3,8)]dodecane (TATD) at T = (275.15, 275.65, 276.15, 276.65, 277.15, 277.65 and 278.15) K as a function of composition. The infinite dilution partial molar volumes of the solutes in aqueous solution are evaluated through extrapolation. Interactions of the solutes with water are discussed in terms of the effect of temperature on the volumetric properties and the structure of the solutes. The results are interpreted in terms of the water structure-breaking or structure-forming character of the solutes.

  9. Estimation of Land Surface Temperature through Blending MODIS and AMSR-E Data with the Bayesian Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Xiaokang Kou

    2016-01-01

    Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs retrieved from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of merged LSTs reaches 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0-5 cm soil temperatures. The results showed that the accuracy of merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further studies.
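    For orientation, a much simpler baseline than BME (close in spirit to the direct gap-filling method the record compares against) can be sketched as below; the bias adjustment and the array names are assumptions made for the example.

```python
import numpy as np

def fill_lst_gaps(modis_lst, amsre_lst_upsampled):
    """Fill cloudy (NaN) MODIS LST pixels with bias-adjusted AMSR-E LST resampled to the MODIS grid."""
    valid = ~np.isnan(modis_lst) & ~np.isnan(amsre_lst_upsampled)
    bias = np.mean(modis_lst[valid] - amsre_lst_upsampled[valid])   # mean TIR-PMW offset
    return np.where(np.isnan(modis_lst), amsre_lst_upsampled + bias, modis_lst)

# modis = 0.01-degree LST with NaN under cloud; amsre = 0.25-degree LST repeated onto that grid
# merged = fill_lst_gaps(modis, amsre)
```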

  10. Spatial-temporal changes of maximum and minimum temperatures in the Wei River Basin, China: Changing patterns, causes and implications

    Science.gov (United States)

    Liu, Saiyan; Huang, Shengzhi; Xie, Yangyang; Huang, Qiang; Leng, Guoyong; Hou, Beibei; Zhang, Ying; Wei, Xiu

    2018-05-01

    Due to the important role of temperature in the global climate system and energy cycles, it is essential to investigate the spatial-temporal change patterns, causes and implications of annual maximum (Tmax) and minimum (Tmin) temperatures. In this study, the Cloud model was adopted to fully and accurately analyze the changing patterns of annual Tmax and Tmin from 1958 to 2008 by quantifying their mean, uniformity, and stability in the Wei River Basin (WRB), a typical arid and semi-arid region in China. Additionally, cross wavelet analysis was applied to explore the correlations among annual Tmax and Tmin and the yearly sunspot number, Arctic Oscillation, Pacific Decadal Oscillation, and soil moisture, with the aim of determining possible causes of annual Tmax and Tmin variations. Furthermore, temperature-related impacts on vegetation cover and precipitation extremes were also examined. Results indicated that: (1) the WRB is characterized by increasing trends in annual Tmax and Tmin, with a more evident increasing trend in annual Tmin, which has a higher dispersion degree and is less uniform and stable than annual Tmax; (2) the asymmetric variations of Tmax and Tmin can be generally explained by the stronger effects of solar activity (primarily), large-scale atmospheric circulation patterns, and soil moisture on annual Tmin than on annual Tmax; and (3) increasing annual Tmax and Tmin have exerted strong influences on local precipitation extremes, in terms of their duration, intensity, and frequency in the WRB. This study presents new analyses of Tmax and Tmin in the WRB, and the findings may help guide regional agricultural production and water resources management.

  11. Estimation of Road Vehicle Speed Using Two Omnidirectional Microphones: A Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    López-Valcarce Roberto

    2004-01-01

    Full Text Available We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified cross-correlators and is therefore well suited to DSP implementation, performing well with preliminary field data.

  12. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they have a higher spatial resolution compared to existing methods based on the ML criterion.
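    As a point of reference only, the single-source ML criterion for a ULA reduces to a grid search over the steering direction; the sketch below implements that baseline (not the SOS/SDP reformulation of the record), with the array geometry, SNR and source angle chosen arbitrarily.

```python
import numpy as np

def ml_doa_single_source(X, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Grid-search ML DOA for one source on a ULA.
    X: (num_sensors, num_snapshots) complex snapshots; d: element spacing in wavelengths."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                       # sample covariance
    rad = np.deg2rad(angles)
    # steering matrix: a_m(theta) = exp(-j*2*pi*d*m*sin(theta))
    A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(rad)))
    # for a single source, ML maximizes a^H R a / (a^H a)
    crit = np.real(np.sum(A.conj() * (R @ A), axis=0)) / M
    return angles[np.argmax(crit)]

# Example: 8-element ULA, one source at 20 degrees, moderate noise
rng = np.random.default_rng(0)
M, N, theta = 8, 50, np.deg2rad(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(a, s) + 0.5 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(ml_doa_single_source(X))   # should be close to 20
```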

  13. The MCE (Maximum Credible Earthquake) - an approach to reduction of seismic risk

    International Nuclear Information System (INIS)

    Asmis, G.J.K.; Atchison, R.J.

    1979-01-01

    It is the responsibility of the Regulatory Body (in Canada, the AECB) to ensure that radiological risks resulting from the effects of earthquakes on nuclear facilities do not exceed acceptable levels. In simplified numerical terms this means that the frequency of an unacceptable radiation dose must be kept below 10⁻⁶ per annum. Unfortunately, seismic events fall into the class of external events which are not well defined at these low frequency levels. Thus, design earthquakes have been chosen at the 10⁻³-10⁻⁴ frequency level, a level commensurate with the limits of statistical data. There exists, therefore, a need to define an additional level of earthquake. A seismic design explicitly and implicitly recognizes three levels of earthquake loading; one comfortably below yield, one at or about yield, and one at ultimate. The ultimate level earthquake, contrary to the first two, has been implicitly addressed by conscientious designers by choosing systems, materials and details compatible with postulated dynamic forces. It is the purpose of this paper to discuss the regulatory specifications required to quantify this third level, or Maximum Credible Earthquake (MCE). (orig.)

  14. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  15. A binary genetic programing model for teleconnection identification between global sea surface temperature and local maximum monthly rainfall events

    Science.gov (United States)

    Danandeh Mehr, Ali; Nourani, Vahid; Hrnjica, Bahrudin; Molajou, Amir

    2017-12-01

    The effectiveness of genetic programming (GP) for solving regression problems in hydrology has been recognized in recent studies. However, its capability to solve classification problems has not been sufficiently explored so far. This study develops and applies a novel classification-forecasting model, namely Binary GP (BGP), for teleconnection studies between sea surface temperature (SST) variations and maximum monthly rainfall (MMR) events. The BGP integrates certain types of data pre-processing and post-processing methods with a conventional GP engine to enhance its ability to solve both regression and classification problems simultaneously. The model was trained and tested using SST series of the Black Sea, Mediterranean Sea, and Red Sea as potential predictors as well as classified MMR events at two locations in Iran as predictand. Skill of the model was measured with regard to different rainfall thresholds and SST lags and compared to that of the hybrid decision tree-association rule (DTAR) model available in the literature. The results indicated that the proposed model can identify potential teleconnection signals of the surrounding seas beneficial to long-term forecasting of the occurrence of the classified MMR events.

  16. Scalable pumping approach for extracting the maximum TEM(00) solar laser power.

    Science.gov (United States)

    Liang, Dawei; Almeida, Joana; Vistas, Cláudia R

    2014-10-20

    A scalable TEM(00) solar laser pumping approach is composed of four pairs of first-stage Fresnel lens-folding mirror collectors, four fused-silica secondary concentrators with light guides of rectangular cross-section for radiation homogenization, four hollow two-dimensional compound parabolic concentrators for further concentration of uniform radiations from the light guides to a 3 mm diameter, 76 mm length Nd:YAG rod within four V-shaped pumping cavities. An asymmetric resonator ensures an efficient large-mode matching between pump light and oscillating laser light. Laser power of 59.1 W TEM(00) is calculated by ZEMAX and LASCAD numerical analysis, revealing 20 times improvement in brightness figure of merit.

  17. A maximum information utilization approach in X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Papp, T.; Maxwell, J.A.; Papp, A.T.

    2009-01-01

    X-ray fluorescence data bases have significant contradictions and inconsistencies. We have identified that the main source of the contradictions, after the human factors, is rooted in the signal processing approaches. We have developed signal processors to overcome many of the problems by maximizing the information available to the analyst. These non-paralyzable, fully digital signal processors have yielded improved resolution, line shape, tailing and pile-up recognition. The signal processors account for and register all events, sorting them into two spectra: one spectrum for the desirable or accepted events, and one spectrum for the rejected events. The information contained in the rejected spectrum is mandatory to have control over the measurement and to make a proper accounting and allocation of the events. It has established the basis for the application of the fundamental parameter method approach. A fundamental parameter program was also developed. The primary X-ray line shape (Lorentzian) is convoluted with a system line shape (Gaussian) and corrected for the sample material absorption, X-ray absorbers and detector efficiency. The peaks can also have lower- and upper-energy-side tailing, including long-range functions based on physical interactions. The program also employs a peak and continuum pile-up model and can handle layered samples of up to five layers. The application of a fundamental parameter method demands proper equipment characterization. We have also developed an inverse fundamental parameter method software package for equipment characterization. The program calculates the excitation function at the sample position and the detector efficiency, supplying an internally consistent system.
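    The Lorentzian-convolved-with-Gaussian line shape mentioned above is the Voigt profile; a minimal sketch (with purely illustrative line energy and widths, and without the tailing and pile-up terms of the record) is:

```python
import numpy as np
from scipy.special import voigt_profile

# Detector line shape: a Lorentzian natural line width convolved with a Gaussian system response.
energy = np.linspace(5.0, 7.0, 2000)          # keV grid around a hypothetical 5.9 keV line
line_energy = 5.9                              # e.g. Mn K-alpha, used only as an example
gamma = 0.0015                                 # Lorentzian half-width (keV), illustrative
sigma = 0.060                                  # Gaussian detector sigma (keV), illustrative
peak = voigt_profile(energy - line_energy, sigma, gamma)
print(energy[np.argmax(peak)])                 # maximum sits at the line energy, ~5.9 keV
```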

  18. The maximum temperature of a thermodynamic cycle effect on weight-dimensional characteristics of the NPP energy blocks with air cooling

    International Nuclear Information System (INIS)

    Bezborodov, Yu.A.; Bubnov, V.P.; Nesterenko, V.B.

    1982-01-01

    The effect of the maximum cycle temperature on the properties of individual apparatuses and on the overall characteristics of NPP energy blocks has been investigated. Air, nitrogen, helium, and the chemically reacting system N2O4 ⇌ 2NO + O2 have been considered as coolants. The investigations have shown that the maximum temperature of the thermodynamic cycle considerably affects both the weight-dimensional characteristics of individual NPP elements and the overall characteristics of the NPP energy block. NPP energy blocks with air cooling in which dissociating nitrogen tetroxide is used as the working fluid show better performance for the majority of characteristics in comparison with blocks with air, nitrogen and helium cooling. If technical restrictions are taken into account (thermal resistance of metals, coolant decomposition at high temperatures, etc.), then dissociating nitrogen tetroxide is recommended as the working fluid, with a maximum cycle temperature in the range of 500 to 600 °C.

  19. Estimating Daily Maximum and Minimum Land Air Surface Temperature Using MODIS Land Surface Temperature Data and Ground Truth Data in Northern Vietnam

    Directory of Open Access Journals (Sweden)

    Phan Thanh Noi

    2016-12-01

    Full Text Available This study aims to evaluate quantitatively the land surface temperature (LST) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No studies about Vietnam have yet integrated both TERRA and AQUA LST of daytime and nighttime for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective for describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared and principal component analysis (PCA), on 14 variables (including the LST products (four variables), NDVI, elevation, latitude, longitude, day length in hours, Julian day and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations are time and regional topography dependent. The best results for Ta-max and Ta-min estimation were achieved when we combined both LST daytime and nighttime of TERRA and AQUA and data from the topography analysis.
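    A bare-bones sketch of the regression step, using synthetic LST and elevation predictors rather than the study's MODIS and station data (all coefficients and variable choices here are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.uniform(10, 45, n),    # TERRA daytime LST (deg C)
    rng.uniform(0, 25, n),     # TERRA nighttime LST
    rng.uniform(10, 45, n),    # AQUA daytime LST
    rng.uniform(0, 25, n),     # AQUA nighttime LST
    rng.uniform(0, 1500, n),   # elevation (m)
])
# toy "true" relationship used only to generate example targets
ta_max = 0.6 * X[:, 0] + 0.2 * X[:, 2] - 0.003 * X[:, 4] + 3 + rng.normal(0, 1.5, n)

model = LinearRegression().fit(X, ta_max)
print(model.score(X, ta_max))                  # R^2 of the fitted Ta-max model
```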

  20. Novel maximum likelihood approach for passive detection and localisation of multiple emitters

    Science.gov (United States)

    Hernandez, Marcel

    2017-12-01

    In this paper, a novel target acquisition and localisation algorithm (TALA) is introduced that offers a capability for detecting and localising multiple targets using the intermittent "signals-of-opportunity" (e.g. acoustic impulses or radio frequency transmissions) they generate. The TALA is a batch estimator that addresses the complex multi-sensor/multi-target data association problem in order to estimate the locations of an unknown number of targets. The TALA is unique in that it does not require measurements to be of a specific type, and can be implemented for systems composed of either homogeneous or heterogeneous sensors. The performance of the TALA is demonstrated in simulated scenarios with a network of 20 sensors and up to 10 targets. The sensors generate angle-of-arrival (AOA), time-of-arrival (TOA), or hybrid AOA/TOA measurements. It is shown that the TALA is able to successfully detect 83-99% of the targets, with a negligible number of false targets declared. Furthermore, the localisation errors of the TALA are typically within 10% of the errors generated by a "genie" algorithm that is given the correct measurement-to-target associations. The TALA also performs well in comparison with an optimistic Cramér-Rao lower bound, with typical differences in performance of 10-20%, and differences in performance of 40-50% in the most difficult scenarios considered. The computational expense of the TALA is also controllable, which allows the TALA to maintain computational feasibility even in the most challenging scenarios considered. This allows the approach to be implemented in time-critical scenarios, such as in the localisation of artillery firing events. It is concluded that the TALA provides a powerful situational awareness aid for passive surveillance operations.

  1. Effects of the midnight temperature maximum observed in the thermosphere-ionosphere over the northeast of Brazil

    Science.gov (United States)

    Figueiredo, Cosme Alexandre O. B.; Buriti, Ricardo A.; Paulino, Igo; Meriwether, John W.; Makela, Jonathan J.; Batista, Inez S.; Barros, Diego; Medeiros, Amauri F.

    2017-08-01

    The midnight temperature maximum (MTM) has been observed in the lower thermosphere by two Fabry-Pérot interferometers (FPIs) at São João do Cariri (7.4° S, 36.5° W) and Cajazeiras (6.9° S, 38.6° W) during 2011, when the solar activity was moderate and the solar flux was between 90 and 155 SFU (1 SFU = 10⁻²² W m⁻² Hz⁻¹). The MTM is studied in detail using measurements of neutral temperature, wind and airglow relative intensity of OI630.0 nm (referred to as OI6300), and ionospheric parameters, such as virtual height (h'F), the peak height of the F2 region (hmF2), and critical frequency of the F region (foF2), which were measured by a Digisonde instrument (DPS) at Eusébio (3.9° S, 38.4° W; geomagnetic coordinates 7.31° S, 32.40° E for 2011). The MTM peak was observed mostly along the year, except in May, June, and August. The amplitudes of the MTM varied from 64 ± 46 K in April up to 144 ± 48 K in October. The monthly temperature average showed a phase shift in the MTM peak from around 0.25 h in September to 2.5 h in December before midnight. On the other hand, in February, March, and April the MTM peak occurred around midnight. The International Reference Ionosphere 2012 (IRI-2012) model was compared to the neutral temperature observations and failed to reproduce the MTM peaks. The zonal component of neutral wind flowed eastward the whole night; regardless of the month and the magnitude of the zonal wind, it was typically within the range of 50 to 150 m s⁻¹ during the early evening. The meridional component of the neutral wind changed its direction over the months: from November to February, the meridional wind in the early evening flowed equatorward with a magnitude between 25 and 100 m s⁻¹; in contrast, during the winter months, the meridional wind flowed to the pole within the range of 0 to -50 m s⁻¹. Our results indicate that the reversal (changes in equator to poleward flow) or abatement of the meridional winds is an important factor in

  2. Recurrence quantification analysis of extremes of maximum and minimum temperature patterns for different climate scenarios in the Mesochora catchment in Central-Western Greece

    Science.gov (United States)

    Panagoulia, Dionysia; Vlahogianni, Eleni I.

    2018-06-01

    A methodological framework based on nonlinear recurrence analysis is proposed to examine the historical evolution of extremes of maximum and minimum daily mean areal temperature patterns over time under different climate scenarios. The methodology is based on both historical data and atmospheric General Circulation Model (GCM) produced climate scenarios for the periods 1961-2000 and 2061-2100, which correspond to the 1 × CO2 and 2 × CO2 scenarios. Historical data were derived from the actual daily observations coupled with atmospheric circulation patterns (CPs). The temperature dynamics were reconstructed in phase space from the temperature time series. The statistical comparison of different temperature patterns was based on discriminating statistics obtained from Recurrence Quantification Analysis (RQA). Moreover, the bootstrap method of Schinkel et al. (2009) was adopted to calculate the confidence bounds of the RQA parameters based on structure-preserving resampling. The overall methodology was applied to the mountainous Mesochora catchment in Central-Western Greece. The results reveal substantial similarities between the historical maximum and minimum daily mean areal temperature statistical patterns and their confidence bounds, as well as between the maximum and minimum temperature patterns in evolution under the 2 × CO2 scenario. Significant variability and non-stationary behaviour characterize all the climate series analyzed. Fundamental differences are found between the historical and maximum 1 × CO2 scenarios, the maximum 1 × CO2 and minimum 1 × CO2 scenarios, as well as the confidence bounds for the two CO2 scenarios. The 2 × CO2 scenario reflects the strongest shifts in intensity, duration and frequency in temperature patterns. Such transitions can help scientists and policy makers to understand the effects of extreme temperature changes on water resources, economic development, and health of ecosystems and hence to proceed to
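    To make the RQA ingredients concrete, the toy sketch below embeds a series in phase space, builds a recurrence matrix and returns the recurrence rate; the embedding dimension, delay and threshold are illustrative, and it omits the bootstrap confidence bounds used in the study.

```python
import numpy as np

def recurrence_rate(series, dim=3, delay=1, eps=0.5):
    """Embed a series, build the recurrence matrix and return the recurrence rate (RR)."""
    n = len(series) - (dim - 1) * delay
    emb = np.column_stack([series[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    rec = dists < eps                       # recurrence matrix
    return rec.mean()                       # fraction of recurrent points

t = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.default_rng(2).standard_normal(1000)
print(recurrence_rate(t))
```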

  3. A rapid method for measuring maximum density temperatures in water and aqueous solutions for the study of quantum zero point energy effects in these liquids

    International Nuclear Information System (INIS)

    Deeney, F A; O'Leary, J P

    2008-01-01

    The connection between quantum zero point fluctuations and a density maximum in water and in liquid ⁴He has recently been established. Here we present a description of a simple and rapid method for determining the temperatures at which maximum densities in water and aqueous solutions occur. The technique allows experiments to be carried out in a single session of an undergraduate laboratory, thereby introducing students to the concept of quantum zero point energy.

  4. Temperature fluctuations in little bang : hydrodynamical approach

    International Nuclear Information System (INIS)

    Basu, Sumit; Chatterjee, Rupa; Nayak, Tapan K.

    2015-01-01

    The physics of heavy-ion collisions at ultra-relativistic energies, popularly known as little bangs, has often been compared to the Big Bang phenomenon of the early universe. The matter produced at extreme conditions of energy density (ε) and temperature (T) in heavy-ion collisions is a Big Bang replica on a tiny scale. In little bangs, the produced fireball goes through a rapid evolution from an early state of partonic quark-gluon plasma (QGP) to a hadronic phase, and finally freezes out within a few tens of fm

  5. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of the binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to be non-convergent, so that they cannot be used in modeling. One effort to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurrence in the binary probit regression model for the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are performed using a simulation method under different sample sizes. The results showed that the chance of separation occurrence with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreased and was relatively similar between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
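    The separation problem itself is easy to reproduce; the sketch below fits a probit model to data with complete separation (synthetic data, not the study's simulation design) and shows the MLE failing, which is the situation Firth-type penalization is meant to remedy.

```python
import numpy as np
import statsmodels.api as sm

# The single predictor perfectly splits the binary response, so the probit MLE does not converge.
x = np.concatenate([np.linspace(-3, -0.5, 20), np.linspace(0.5, 3, 20)])
y = (x > 0).astype(int)                      # complete separation at x = 0
X = sm.add_constant(x)

try:
    fit = sm.Probit(y, X).fit(disp=False, maxiter=200)
    print(fit.params)                        # coefficients blow up toward +/- infinity
except Exception as err:                     # some statsmodels versions raise on perfect separation
    print("MLE failed:", err)
```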

  6. Ecosystem approach to fisheries: Exploring environmental and trophic effects on Maximum Sustainable Yield (MSY) reference point estimates.

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar

    Full Text Available We present a comprehensive analysis of the estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling for species abundances, annual catch surveys, stomach-content analysis for predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at the single-species as well as ecosystem levels, with and without considering the influence of temperature change; our analysis therefore investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed.

  7. Comparison of the Spatiotemporal Variability of Temperature, Precipitation, and Maximum Daily Spring Flows in Two Watersheds in Quebec Characterized by Different Land Use

    Directory of Open Access Journals (Sweden)

    Ali A. Assani

    2016-01-01

    Full Text Available We compared the spatiotemporal variability of temperatures and precipitation with that of the magnitude and timing of maximum daily spring flows in the geographically adjacent L'Assomption River (agricultural) and Matawin River (forested) watersheds during the period from 1932 to 2013. With regard to spatial variability, fall, winter, and spring temperatures as well as total precipitation are higher in the agricultural watershed than in the forested one. The magnitude of maximum daily spring flows is also higher in the first watershed as compared with the second, owing to substantial runoff, given that the amount of snow that gives rise to these flows is not significantly different in the two watersheds. These flows occur earlier in the season in the agricultural watershed because of the relatively high temperatures. With regard to temporal variability, minimum temperatures increased over time in both watersheds. Maximum temperatures in the fall only increased in the agricultural watershed. The amount of spring rain increased over time in both watersheds, whereas total precipitation increased significantly in the agricultural watershed only. However, the amount of snow decreased in the forested watershed. The magnitude of maximum daily spring flows increased over time in the forested watershed.

  8. The Effects of Data Gaps on the Calculated Monthly Mean Maximum and Minimum Temperatures in the Continental United States: A Spatial and Temporal Study.

    Science.gov (United States)

    Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.

    1999-05-01

    Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station for each month. For each month 1-10 consecutive days of data from each station were randomly removed. This was performed 30 times for each simulated gap period. The spatial and temporal impacts of the 1-10-day data gaps were compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation value greater than 2°C about the 'true' mean. In the southeast, 10-day data gaps in July produce a standard deviation value less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research as well as climate assessment and impact studies.
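    The experiment is straightforward to mimic on synthetic data; the sketch below removes runs of consecutive days from one synthetic month and reports the spread of the resulting monthly means (the temperature distribution and station setup are assumptions, not the USHCN data).

```python
import numpy as np

rng = np.random.default_rng(3)
daily_tmax = rng.normal(loc=0.0, scale=5.0, size=31)      # one synthetic January, degrees C
true_mean = daily_tmax.mean()

def gap_spread(gap_len, trials=30):
    """Standard deviation of the monthly mean when a random run of gap_len days is removed."""
    means = []
    for _ in range(trials):
        start = rng.integers(0, 31 - gap_len + 1)
        kept = np.delete(daily_tmax, np.arange(start, start + gap_len))
        means.append(kept.mean())
    return np.std(means)

for gap in (1, 5, 10):
    print(gap, round(gap_spread(gap), 2))   # spread about the 'true' mean grows with gap length
```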

  9. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Hussain Shareef

    2017-01-01

    Full Text Available Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust because of fast-changing environmental conditions, efficiency, accuracy at steady-state value, and dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine an accurate maximum power point. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with the capacity of 3 kW peak, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives significant improvement compared with that of other techniques. In addition, the RF model passes the Bland-Altman test, with more than 95 percent acceptability.
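    As a minimal stand-in for the tracker (synthetic samples and a toy maximum-power-point law, not the 3 kW array, the 300,000-sample dataset or the MATLAB/SIMULINK model of the record), a random forest can be fitted to irradiance and temperature as follows:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
irradiance = rng.uniform(100, 1000, 5000)          # W/m^2
temperature = rng.uniform(15, 65, 5000)            # module temperature, degrees C
# toy maximum-power-point voltage law, used only to generate training targets
v_mpp = 30.0 + 2.0 * np.log(irradiance / 1000.0) - 0.12 * (temperature - 25.0)

X = np.column_stack([irradiance, temperature])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, v_mpp)
print(model.predict([[800.0, 40.0]]))              # voltage reference handed to the converter
```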

  10. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions.

    Science.gov (United States)

    Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah

    2017-01-01

    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust because of fast-changing environmental conditions, efficiency, accuracy at steady-state value, and dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine an accurate maximum power point. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with the capacity of 3 kW peak, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives significant improvement compared with that of other techniques. In addition, the RF model passes the Bland-Altman test, with more than 95 percent acceptability.

  11. Effects of the midnight temperature maximum observed in the thermosphere–ionosphere over the northeast of Brazil

    Directory of Open Access Journals (Sweden)

    C. A. O. B. Figueiredo

    2017-08-01

    Full Text Available The midnight temperature maximum (MTM) has been observed in the lower thermosphere by two Fabry–Pérot interferometers (FPIs) at São João do Cariri (7.4° S, 36.5° W) and Cajazeiras (6.9° S, 38.6° W) during 2011, when the solar activity was moderate and the solar flux was between 90 and 155 SFU (1 SFU = 10⁻²² W m⁻² Hz⁻¹). The MTM is studied in detail using measurements of neutral temperature, wind and airglow relative intensity of OI630.0 nm (referred to as OI6300), and ionospheric parameters, such as virtual height (h′F), the peak height of the F2 region (hmF2), and critical frequency of the F region (foF2), which were measured by a Digisonde instrument (DPS) at Eusébio (3.9° S, 38.4° W; geomagnetic coordinates 7.31° S, 32.40° E) for 2011. The MTM peak was observed mostly along the year, except in May, June, and August. The amplitudes of the MTM varied from 64 ± 46 K in April up to 144 ± 48 K in October. The monthly temperature average showed a phase shift in the MTM peak from around 0.25 h in September to 2.5 h in December before midnight. On the other hand, in February, March, and April the MTM peak occurred around midnight. The International Reference Ionosphere 2012 (IRI-2012) model was compared to the neutral temperature observations and failed to reproduce the MTM peaks. The zonal component of neutral wind flowed eastward the whole night; regardless of the month and the magnitude of the zonal wind, it was typically within the range of 50 to 150 m s⁻¹ during the early evening. The meridional component of the neutral wind changed its direction over the months: from November to February, the meridional wind in the early evening flowed equatorward with a magnitude between 25 and 100 m s⁻¹; in contrast, during the winter months, the meridional wind flowed to the pole within the range of 0 to −50 m s⁻¹. Our results indicate that the reversal (changes

  12. An effective temperature compensation approach for ultrasonic hydrogen sensors

    Science.gov (United States)

    Tan, Xiaolong; Li, Min; Arsad, Norhana; Wen, Xiaoyan; Lu, Haifei

    2018-03-01

    Hydrogen is a promising clean energy resource with broad application prospects, but leakage of hydrogen gas poses a serious safety issue. Measurement of its concentration is therefore of great significance. In a traditional approach to ultrasonic hydrogen sensing, a temperature drift of 0.1 °C results in a concentration error of about 250 ppm, which is intolerable for trace-level gas sensing. In order to eliminate the influence of temperature drift, we propose a feasible approach, termed the linear compensation algorithm, which utilizes the linear relationship between the pulse count and temperature to compensate for the pulse count error (ΔN) caused by temperature drift. Experimental results demonstrate that the proposed approach is capable of improving the measurement accuracy and can easily detect sub-100 ppm hydrogen concentrations under variable temperature conditions.
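    A sketch of the linear compensation idea, with made-up drift and sensitivity coefficients (the record does not give numerical values):

```python
# The pulse count drifts approximately linearly with temperature, so subtract that trend
# before converting the residual count change to hydrogen concentration.

K_COUNTS_PER_DEGC = 12.0      # assumed drift slope, counts per deg C
PPM_PER_COUNT = 8.0           # assumed sensitivity, ppm of H2 per residual count
T_REF = 25.0                  # reference temperature, deg C

def hydrogen_ppm(pulse_count, baseline_count, temperature_c):
    """Remove the temperature-induced count error, then scale the residual to concentration."""
    delta_n = pulse_count - baseline_count
    delta_n_corrected = delta_n - K_COUNTS_PER_DEGC * (temperature_c - T_REF)
    return max(delta_n_corrected * PPM_PER_COUNT, 0.0)

print(hydrogen_ppm(pulse_count=1030, baseline_count=1000, temperature_c=27.0))  # ~48 ppm
```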

  13. Experimental application of the "total maximum daily load" approach as a tool for WFD implementation in temporary rivers

    Science.gov (United States)

    Lo Porto, A.; De Girolamo, A. M.; Santese, G.

    2012-04-01

    In this presentation, the experience gained in what is, to our knowledge, the first experimental use in the EU of the concept and methodology of the Total Maximum Daily Load (TMDL) is reported. The TMDL is an instrument required by the Clean Water Act in the U.S.A. for the management of water bodies classified as impaired. A TMDL calculates the maximum amount of a pollutant that a waterbody can receive and still safely meet water quality standards. It makes it possible to establish a scientifically based strategy for regulating the control of emission loads according to the characteristics of the watershed/basin. The implementation of the TMDL is a process analogous to the Programmes of Measures required by the WFD, the main difference being the analysis of the linkage between the loads of different sources and the water quality of water bodies. The TMDL calculation was used in this study for the Candelaro River, a temporary Italian river classified as impaired in the first steps of the implementation of the WFD. A specific approach based on Load Duration Curves was adopted for the calculation of nutrient TMDLs, as it is more robust for rivers featuring large changes in flow than the classic approach based on average long-term flow conditions. This methodology makes it possible to establish the maximum allowable loads across the different flow conditions of a river. It enabled us to evaluate the allowable loading of a water body; to identify the sources and estimate their loads; to estimate the total loading that the water body can receive while meeting the established water quality standards; to link the effects of point and diffuse sources on the water quality status; and finally to identify the reduction necessary for each type of source. The load reductions were calculated for nitrate, total phosphorus and ammonia. The simulated measures showed a remarkable ability to reduce the pollutants for the Candelaro River. The use of the Soil and
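    The load duration curve calculation can be sketched as below, with synthetic flows and an arbitrary water-quality criterion standing in for the Candelaro data:

```python
import numpy as np

def allowable_load_curve(daily_flows_m3s, criterion_mg_l):
    """Allowable daily load = flow * water-quality criterion, plotted against flow exceedance."""
    flows = np.sort(np.asarray(daily_flows_m3s))[::-1]          # high to low
    exceedance = np.arange(1, len(flows) + 1) / (len(flows) + 1) * 100.0
    # kg/day = m3/s * mg/L * 86400 s/day * 1000 L/m3 * 1e-6 kg/mg
    allowable_kg_day = flows * criterion_mg_l * 86.4
    return exceedance, allowable_kg_day

flows = np.random.default_rng(5).lognormal(mean=0.0, sigma=1.0, size=365)   # synthetic daily flows
exceed, load = allowable_load_curve(flows, criterion_mg_l=2.0)              # e.g. a nitrate limit
print(load[exceed < 10].mean(), load[exceed > 90].mean())   # wet vs. dry condition capacities
```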

  14. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal) based on a maximum entropy modelling approach

    Directory of Open Access Journals (Sweden)

    Jorge Pereira

    2015-12-01

    Full Text Available Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts on several domains that result from such processes. A better understanding of these processes, the identification of the most susceptible areas, and the definition of preventive or mitigation measures are identified as critical for reducing the associated impacts. The use of species distribution modelling might help in identifying areas that are more susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modelling approach, considered one of the correlative modelling techniques with the best predictive performance. Models whose validation is based on independent data sets show better performance, here evaluated with the AUC of the ROC accuracy measure.

  15. Determination of hot spot factors for calculation of the maximum fuel temperatures in the core thermal and hydraulic design of HTTR

    International Nuclear Information System (INIS)

    Maruyama, Soh; Yamashita, Kiyonobu; Fujimoto, Nozomu; Murata, Isao; Shindo, Ryuichi; Sudo, Yukio

    1988-12-01

    The Japan Atomic Energy Research Institute (JAERI) has been designing the High Temperature Engineering Test Reactor (HTTR), which has a thermal power of 30 MW, a reactor outlet coolant temperature of 950 °C and a primary coolant pressure of 40 kg/cm²G. This report summarizes the hot spot factors and their estimated values used in the evaluation of the maximum fuel temperature, which is one of the major items in the core thermal and hydraulic design of the HTTR. The hot spot factors consist of systematic factors and random factors. They were identified, and the values adopted in the thermal and hydraulic design were determined considering the features of the HTTR. (author)

  16. Task 08/41, Low temperature loop at the RA reactor, Review IV - Maximum temperature values in the samples without forced cooling; Zadatak 08/41, Niskotemperaturna petlja u reaktoru 'RA', Pregled IV - Maksimalne temperature u uzorcima bez prinudnog hladjenja

    Energy Technology Data Exchange (ETDEWEB)

    Zaric, Z [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1961-12-15

    The quantity of heat generated in the sample was calculated in Review III. In the stationary regime this heat is transferred through the air layer between the sample and the channel wall to the heavy water or graphite, and a certain maximum temperature t₀ is reached in the sample. The objective of this review is to determine this temperature.

  17. Maximum flow approach to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv from protein-protein interaction network.

    Science.gov (United States)

    Melak, Tilahun; Gakkhar, Sunita

    2015-12-01

    In spite of the implementation of several strategies, tuberculosis (TB) remains an overwhelmingly serious global public health problem, causing millions of infections and deaths every year. This is mainly due to the emergence of drug-resistant varieties of TB. The current treatment strategies for drug-resistant TB are of longer duration, more expensive and have side effects. This highlights the importance of identification and prioritization of targets for new drugs. This study has been carried out to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv based on their flow to resistance genes. The weighted proteome interaction network of the pathogen was constructed using a dataset from the STRING database. Only a subset of the dataset with interactions that have a combined score value ≥770 was considered. A maximum flow approach has been used to prioritize potential drug targets. The potential drug targets were obtained through comparative genome and network centrality analysis. The curated set of resistance genes was retrieved from the literature. A detailed literature review and additional assessment of the method were also carried out for validation. A list of 537 proteins which are essential to the pathogen and non-homologous with human was obtained from the comparative genome analysis. Through network centrality measures, 131 of them were found within the close neighborhood of the centre of gravity of the proteome network. These proteins were further prioritized based on their maximum flow value to resistance genes and they are proposed as reliable drug targets of the pathogen. Proteins which interact with the host were also identified in order to understand the infection mechanism. Potential drug targets of Mycobacterium tuberculosis H37Rv were successfully prioritized based on their flow to resistance genes of existing drugs, which is believed to increase the druggability of the targets since inhibition of a protein that has a maximum flow to
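    The prioritization idea can be illustrated with a toy weighted network (hypothetical proteins and confidence scores, not the STRING-derived H37Rv network), using the maximum flow routine of networkx:

```python
import networkx as nx

# Score a candidate target by the maximum flow it can push to a known resistance gene,
# using interaction confidence scores as edge capacities.
G = nx.DiGraph()
edges = [("targetA", "p1", 0.9), ("p1", "resistance_gene", 0.8),
         ("targetA", "p2", 0.7), ("p2", "resistance_gene", 0.9),
         ("targetB", "p1", 0.4)]
for u, v, conf in edges:
    G.add_edge(u, v, capacity=conf)

for target in ("targetA", "targetB"):
    flow_value, _ = nx.maximum_flow(G, target, "resistance_gene")
    print(target, flow_value)      # targetA: 1.5, targetB: 0.4 -> targetA ranks higher
```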

  18. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R., E-mail: asolekar@iitb.ac.in

    2012-11-01

    yards in India. -- Highlights: ► Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. ► Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). ► Mathematical model using vector addition approach and based on Gaussian dispersion. ► Model predicted maximum emissions of heavy metals at different wind speeds. ► Exposure impacts on a worker's health and the intertidal sediments can be assessed.

  19. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    International Nuclear Information System (INIS)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R.

    2012-01-01

    : ► Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. ► Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). ► Mathematical model using vector addition approach and based on Gaussian dispersion. ► Model predicted maximum emissions of heavy metals at different wind speeds. ► Exposure impacts on a worker's health and the intertidal sediments can be assessed.

  20. Temperature reconstruction and volcanic eruption signal from tree-ring width and maximum latewood density over the past 304 years in the southeastern Tibetan Plateau.

    Science.gov (United States)

    Li, Mingqi; Huang, Lei; Yin, Zhi-Yong; Shao, Xuemei

    2017-11-01

    This study presents a 304-year mean July-October maximum temperature reconstruction for the southeastern Tibetan Plateau based on both tree-ring width and maximum latewood density data. The reconstruction explained 58% of the variance in July-October maximum temperature during the calibration period (1958-2005). On the decadal scale, we identified two prominent cold periods during AD 1801-1833 and 1961-2003 and two prominent warm periods during AD 1730-1800 and 1928-1960, which are consistent with other reconstructions from the nearby region. Based on the reconstructed temperature series and volcanic eruption chronology, we found that most extreme cold years were in good agreement with major volcanic eruptions, such as 1816 after the Tambora eruption in 1815. Also, clusters of volcanic eruptions probably made the 1810s the coldest decade in the past 300 years. Our results indicated that fingerprints of major volcanic eruptions can be found in the reconstructed temperature records, while the responses of regional climate to these eruption events varied in space and time in the southeastern Tibetan Plateau.

  1. Effect of Temperature on Wettability and Optimum Wetting Conditions for Maximum Oil Recovery in Carbonate Reservoir System

    DEFF Research Database (Denmark)

    Sohal, Muhammad Adeel Nassar; Thyne, Geoffrey; Søgaard, Erik Gydesen

    2017-01-01

    The additional oil recovery from fractured & oil-wet carbonates by ionically modified water is principally based on changing wettability and is often attributed to an improvement in water wetness. The influence of different parameters like dilution of salinity, potential anions, temperature, pressure......, lithology, pH, oil acid and base numbers to improve water wetting has been tested in recovery experiments. In these studies temperature is mainly investigated to observe the reactivity of potential anions (SO₄²⁻, PO₃³⁻, and BO₃³⁻) at different concentrations. But the influence of systematically increasing...... and 100 times. It was observed that as temperature increased the water-wetness decreased for seawater and seawater dilutions; however, the presence of elevated sulfate can somewhat counter this trend, as sulfate increased oil wetting....

  2. Test Plan to Determine the Maximum Surface Temperatures for a Plutonium Storage Cubicle with Horizontal 3013 Canisters

    International Nuclear Information System (INIS)

    HEARD, F.J.

    2000-01-01

    A simulated full-scale plutonium storage cubicle with 22 horizontally positioned and heated 3013 canisters is proposed to confirm the effectiveness of natural circulation. Temperature and airflow measurements will be made for different heat generation and cubicle door configurations. Comparisons will be made to computer-based thermal hydraulic models.

  3. A New Approach to Identify Optimal Properties of Shunting Elements for Maximum Damping of Structural Vibration Using Piezoelectric Patches

    Science.gov (United States)

    Park, Junhong; Palumbo, Daniel L.

    2004-01-01

    The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics that are designed to dissipate vibration energy through a resistive element. In past efforts most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to invariant points when based on den Hartog's invariant points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties that include effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.

  4. Simple landmark for preservation of the cochlea during maximum drilling of the petrous apex through the anterior transpetrosal approach

    International Nuclear Information System (INIS)

    Seo, Yoshinobu; Sasaki, Takehiko; Nakamura, Hirohiko

    2010-01-01

    The cochlea is one of the most important organs to preserve during skull base surgery. However, no definite landmark for the cochlea has been identified during maximum drilling of the petrous apex such as anterior transpetrosal approach. The relationship between the cochlea and the petrous portion of the internal carotid artery (ICA) was assessed with computed tomography (CT) in 70 petrous bones of 35 patients, 16 males and 19 females aged 12-85 years (mean 48.6 years). After accumulation of volume data with multidetector CT, axial bone window images of 1-mm thickness were obtained to identify the cochlea and the horizontal petrous portion of the ICA. The distance was measured between the extended line of the posteromedial side of the horizontal petrous portion of the ICA and the basal turn of the cochlea. If the cochlea was located posteromedial to the ICA, the distance was expressed as a positive number, but if anterolateral, as a negative number. The mean distance was 0.6 mm (range -4.9 to 3.9 mm) and had no significant correlation with sex or age. The cochlea varies in location compared with the horizontal petrous portion of the ICA. Measurement of the depth and distance between the extended line of the posteromedial side of the horizontal intrapetrous ICA and the cochlea before surgery will save time, increase safety, and maximize bone evacuation during drilling of the petrous apex. (author)

  5. Impacts of projected maximum temperature extremes for C21 by an ensemble of regional climate models on cereal cropping systems in the Iberian Peninsula

    Directory of Open Access Journals (Sweden)

    M. Ruiz-Ramos

    2011-12-01

    Full Text Available Crops growing in the Iberian Peninsula may be subjected to damagingly high temperatures during the sensitive development periods of flowering and grain filling. Such episodes are considered important hazards and farmers may take insurance to offset their impact. Increases in value and frequency of maximum temperature have been observed in the Iberian Peninsula during the 20th century, and studies on climate change indicate the possibility of further increase by the end of the 21st century. Here, impacts of current and future high temperatures on cereal cropping systems of the Iberian Peninsula are evaluated, focusing on vulnerable development periods of winter and summer crops. Climate change scenarios obtained from an ensemble of ten Regional Climate Models (multimodel ensemble) combined with crop simulation models were used for this purpose and related uncertainty was estimated. Results reveal that higher extremes of maximum temperature represent a threat to summer-grown but not to winter-grown crops in the Iberian Peninsula. The study highlights the different vulnerability of crops in the two growing seasons and the need to account for changes in extreme temperatures in developing adaptations in cereal cropping systems. Finally, this work contributes to clarifying the causes of high-uncertainty impact projections from previous studies.

  6. A first-principles approach to finite temperature elastic constants

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y; Wang, J J; Zhang, H; Manga, V R; Shang, S L; Chen, L-Q; Liu, Z-K [Department of Materials Science and Engineering, Pennsylvania State University, University Park, PA 16802 (United States)

    2010-06-09

    A first-principles approach to calculating the elastic stiffness coefficients at finite temperatures was proposed. It is based on the assumption that the temperature dependence of elastic stiffness coefficients mainly results from volume change as a function of temperature; it combines the first-principles calculations of elastic constants at 0 K and the first-principles phonon theory of thermal expansion. Its applications to elastic constants of Al, Cu, Ni, Mo, Ta, NiAl, and Ni3Al from 0 K up to their respective melting points show excellent agreement between the predicted values and existing experimental measurements.
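
    The quasi-static scheme described above is simple to prototype: elastic constants computed at 0 K on a grid of fixed volumes are interpolated and then evaluated at the equilibrium volume V(T) given by phonon-based thermal expansion. The sketch below illustrates only the bookkeeping; the volume grid, the C11 values and the V(T) curve are placeholder numbers, not data from the paper.

    ```python
    import numpy as np

    # 0 K first-principles results: C11 (GPa) tabulated on a grid of cell volumes
    # (A^3/atom). These numbers are illustrative placeholders, not values from the study.
    volumes_0K = np.array([15.0, 15.5, 16.0, 16.5, 17.0])
    c11_0K = np.array([120.0, 112.0, 104.0, 96.0, 88.0])

    # Smooth volume dependence C11(V) fitted to the 0 K data.
    c11_of_V = np.poly1d(np.polyfit(volumes_0K, c11_0K, 2))

    # Placeholder thermal-expansion curve V(T); in the approach above this comes
    # from first-principles (quasi-harmonic) phonon calculations.
    def v_of_T(T):
        return 15.8 + 4.0e-4 * T

    def c11_at_temperature(T):
        """Quasi-static estimate: evaluate the 0 K C11(V) at the volume V(T)."""
        return c11_of_V(v_of_T(T))

    for T in (0, 300, 600, 900):
        print(f"T = {T:4d} K  ->  C11 ~ {c11_at_temperature(T):.1f} GPa")
    ```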

  7. A first-principles approach to finite temperature elastic constants

    International Nuclear Information System (INIS)

    Wang, Y; Wang, J J; Zhang, H; Manga, V R; Shang, S L; Chen, L-Q; Liu, Z-K

    2010-01-01

    A first-principles approach to calculating the elastic stiffness coefficients at finite temperatures was proposed. It is based on the assumption that the temperature dependence of elastic stiffness coefficients mainly results from volume change as a function of temperature; it combines the first-principles calculations of elastic constants at 0 K and the first-principles phonon theory of thermal expansion. Its applications to elastic constants of Al, Cu, Ni, Mo, Ta, NiAl, and Ni3Al from 0 K up to their respective melting points show excellent agreement between the predicted values and existing experimental measurements.

  8. A comparison of PMIP2 model simulations and the MARGO proxy reconstruction for tropical sea surface temperatures at last glacial maximum

    Energy Technology Data Exchange (ETDEWEB)

    Otto-Bliesner, Bette L.; Brady, E.C. [National Center for Atmospheric Research, Climate and Global Dynamics Division, Boulder, CO (United States); Schneider, Ralph; Weinelt, M. [Christian-Albrechts Universitaet, Institut fuer Geowissenschaften, Kiel (Germany); Kucera, M. [Eberhard-Karls Universitaet Tuebingen, Institut fuer Geowissenschaften, Tuebingen (Germany); Abe-Ouchi, A. [The University of Tokyo, Center for Climate System Research, Kashiwa (Japan); Bard, E. [CEREGE, College de France, CNRS, Universite Aix-Marseille, Aix-en-Provence (France); Braconnot, P.; Kageyama, M.; Marti, O.; Waelbroeck, C. [Unite mixte CEA-CNRS-UVSQ, Laboratoire des Sciences du Climat et de l' Environnement, Gif-sur-Yvette Cedex (France); Crucifix, M. [Universite Catholique de Louvain, Institut d' Astronomie et de Geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hewitt, C.D. [Met Office Hadley Centre, Exeter (United Kingdom); Paul, A. [Bremen University, Department of Geosciences, Bremen (Germany); Rosell-Mele, A. [Universitat Autonoma de Barcelona, ICREA and Institut de Ciencia i Tecnologia Ambientals, Barcelona (Spain); Weber, S.L. [Royal Netherlands Meteorological Institute (KNMI), De Bilt (Netherlands); Yu, Y. [Chinese Academy of Sciences, LASG, Institute of Atmospheric Physics, Beijing (China)

    2009-05-15

    Results from multiple model simulations are used to understand the tropical sea surface temperature (SST) response to the reduced greenhouse gas concentrations and large continental ice sheets of the last glacial maximum (LGM). We present LGM simulations from the Paleoclimate Modelling Intercomparison Project, Phase 2 (PMIP2) and compare these simulations to proxy data collated and harmonized within the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface Project (MARGO). Five atmosphere-ocean coupled climate models (AOGCMs) and one coupled model of intermediate complexity have PMIP2 ocean results available for the LGM. The models give a range of tropical (defined for this paper as 15°S-15°N) SST cooling of 1.0-2.4 °C, comparable to the MARGO estimate of annual cooling of 1.7 ± 1 °C. The models simulate greater SST cooling in the tropical Atlantic than tropical Pacific, but interbasin and intrabasin variations of cooling are much smaller than those found in the MARGO reconstruction. The simulated tropical coolings are relatively insensitive to season, a feature also present in the MARGO transfer-function-based estimates calculated from planktonic foraminiferal assemblages for the Indian and Pacific Oceans. These assemblages indicate seasonality in cooling in the Atlantic basin, with greater cooling in northern summer than northern winter, not captured by the model simulations. Biases in the simulations of the tropical upwelling and thermocline found in the preindustrial control simulations remain for the LGM simulations and are partly responsible for the more homogeneous spatial and temporal LGM tropical cooling simulated by the models. The PMIP2 LGM simulations give estimates for the climate sensitivity parameter of 0.67-0.83 °C per W m-2, which translates to an equilibrium climate sensitivity for doubling of atmospheric CO2 of 2.6-3.1 °C. (orig.)
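
    The last sentence above is a direct unit conversion: a climate sensitivity parameter in °C per W m-2 multiplied by the radiative forcing assumed for a CO2 doubling gives the equilibrium climate sensitivity. With the commonly used forcing of about 3.7 W m-2 (an assumption here; the paper's exact value may differ slightly), the quoted range is recovered to within rounding:

    ```python
    # Convert a climate sensitivity parameter (degC per W m-2) into an equilibrium
    # climate sensitivity for doubled CO2, assuming a 2xCO2 forcing of ~3.7 W m-2.
    F_2XCO2 = 3.7  # W m-2, commonly used approximation

    for lam in (0.67, 0.83):  # range quoted for the PMIP2 LGM simulations
        print(f"lambda = {lam:.2f} degC/(W m-2)  ->  ECS ~ {lam * F_2XCO2:.1f} degC")
    # prints roughly 2.5 and 3.1 degC, close to the 2.6-3.1 degC quoted above
    ```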

  9. Technology and education: First approach for measuring temperature with Arduino

    Science.gov (United States)

    Carrillo, Alejandro

    2017-04-01

    This poster session presents some ideas and approaches for understanding the concepts of thermal equilibrium, temperature and heat in order to build a harmonious and responsible relationship between people and nature, emphasizing the interaction between science and technology without neglecting the relationship between the environment and society, as an approach to sustainability. The development of practices is proposed that involve the use of modern, easily accessible and low-cost technology to measure temperature. We believe that the Arduino microcontroller and some temperature sensors can open the door to innovation in carrying out such practices. In this work we present some results of simple practices given to a population of students between 16 and 17 years old. The practices in this proposal are: the zeroth law of thermodynamics and the concept of temperature, calibration of thermometers, and measurement of temperature during heating and cooling of three different substances under the same physical conditions. Finally, each student is asked to develop an application that involves measuring temperature and other physical parameters. Some suggestions are: determine the temperature at which we eat certain foods, measure the temperature difference between different rooms of a house, identify housing constructions that favour optimal conditions, measure the temperature of different regions, measure temperature through different colour filters, relate solar activity and UV, and propose applications to understand current problems such as global warming. It is concluded that the Arduino practices and electrical sensors broaden the cultural horizon of the students while awakening their interest in understanding how the sensors work, the underlying basic physics and its application from a modern perspective.
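
    The classroom practices above can be paired with an equally simple host-computer script. The record does not specify a sensor or host-side software, so everything below is a hypothetical setup for illustration: an Arduino is assumed to print raw 10-bit ADC counts from an analog sensor such as a TMP36 over the serial port, and a short pyserial script converts them to °C; the port name and calibration constants would need adjusting to the actual hardware.

    ```python
    import serial  # pyserial

    # Hypothetical setup: an Arduino streams raw 10-bit ADC counts (one per line)
    # read from an analog temperature sensor (e.g. a TMP36) over USB serial.
    PORT = "/dev/ttyACM0"   # adjust to your system (e.g. "COM3" on Windows)
    VREF = 5.0              # ADC reference voltage, volts

    def counts_to_celsius(counts: int) -> float:
        """Convert a 10-bit ADC reading to temperature for a TMP36-type sensor:
        500 mV offset at 0 degC, 10 mV per degC."""
        volts = counts * VREF / 1023.0
        return (volts - 0.5) * 100.0

    with serial.Serial(PORT, 9600, timeout=2) as link:
        for _ in range(10):                          # read ten samples
            line = link.readline().decode().strip()
            if line.isdigit():
                print(f"{counts_to_celsius(int(line)):.1f} degC")
    ```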

  10. Mapping distribution of Rastrelliger kanagurta in the exclusive economic zone (EEZ) of Malaysia using maximum entropy modeling approach

    Science.gov (United States)

    Yusop, Syazwani Mohd; Mustapha, Muzzneena Ahmad

    2018-04-01

    Fishing locations for R. kanagurta obtained from SEAFDEC were coupled with multi-sensor satellite imagery of the oceanographic variables sea surface temperature (SST), sea surface height (SSH) and chlorophyll-a concentration (chl-a) to evaluate the performance of maximum entropy (MaxEnt) models in predicting R. kanagurta fishing grounds. In addition, the study identified the relative percentage contribution of each environmental variable considered in order to describe the effects of the oceanographic factors on the species distribution in the study area. The potential fishing grounds during the intermonsoon periods (April and October 2008-2009) were simulated separately and covered the near-coast areas of Kelantan, Terengganu, Pahang and Johor. The oceanographic conditions differed between regions because of inherent seasonal variability. The seasonal and spatial extents of potential fishing grounds were largely explained by chl-a concentration (0.21-0.99 mg/m3 in April and 0.28-1.00 mg/m3 in October), SSH (77.37-85.90 cm in April and 107.60-108.97 cm in October) and SST (30.43-33.70 °C in April and 30.48-30.97 °C in October). The constructed models were therefore suitable for predicting the potential fishing zones of R. kanagurta in the EEZ. The results from this study reveal MaxEnt's potential for predicting the spatial distribution of R. kanagurta and highlight the use of multispectral satellite images for describing the seasonal potential fishing grounds.

  11. Temperature effect on the inter-micellar collision and maximum packaging volume fraction in water/AOT/isooctane micro-emulsions

    International Nuclear Information System (INIS)

    Guettari, Moez; Ben Naceur, Imen; Kassab, Ghazi; Tajouri, Tahar

    2016-01-01

    We have studied the viscosity behaviour of water/AOT/isooctane micro-emulsions as a function of the volume fraction of the dispersed phase over the temperature range (298.15 to 328.15) K. Over the whole temperature range studied, a sharp increase in viscosity is observed as the droplet concentration is varied. Several equations based on the hard-sphere model were examined to explain the behaviour of the micro-emulsions under temperature and concentration effects. According to these equations, the shape factor and the inter-particle interaction parameters were found to depend on temperature, which contradicts experimental results reported in the literature. A modified Vand equation, taking into account the inter-particle collision time, is used to interpret the results; the deviation from hard-sphere behaviour is attributed to aggregation of the droplets, which becomes important with increasing temperature. The maximum packing volume fraction of particles Φ_dm and the intrinsic viscosity [η] were determined from the Krieger and Dougherty equation over the temperature range studied. These two parameters were shown to depend on temperature, but their product was found to be constant and close to 2, as expected from theory.
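
    The Krieger and Dougherty relation used above has the closed form η_r = (1 − Φ/Φ_dm)^(−[η]·Φ_dm), so extracting Φ_dm and [η] at each temperature is a two-parameter fit of relative viscosity against droplet volume fraction. The sketch below shows that fitting step with scipy on invented data points; it illustrates the method only, not the paper's measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def krieger_dougherty(phi, phi_m, eta_int):
        """Relative viscosity of a dispersion: eta_r = (1 - phi/phi_m)**(-[eta]*phi_m)."""
        return (1.0 - phi / phi_m) ** (-eta_int * phi_m)

    # Invented data: dispersed-phase volume fraction vs. relative viscosity.
    phi = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
    eta_r = np.array([1.17, 1.39, 1.67, 2.05, 2.58, 3.34])

    (phi_m, eta_int), _ = curve_fit(krieger_dougherty, phi, eta_r, p0=(0.6, 2.5))
    print(f"phi_m ~ {phi_m:.2f}, [eta] ~ {eta_int:.2f}, product ~ {phi_m * eta_int:.2f}")
    # For these invented points the product [eta]*phi_m comes out close to 2,
    # the behaviour reported above for the real micro-emulsions.
    ```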

  12. Effect of in-pile degradation of the meat thermal conductivity on the maximum temperature of the plate-type U-Mo dispersion fuels

    International Nuclear Information System (INIS)

    Medvedev, Pavel G.

    2009-01-01

    The effect of in-pile degradation of thermal conductivity on the maximum temperature of plate-type research reactor fuels has been assessed using the steady-state heat conduction equation and assuming convection cooling. It was found that, owing to the very small meat thickness characteristic of this type of fuel, the effect of thermal conductivity degradation on the maximum fuel temperature is minor. For example, a fuel plate with 0.635 mm thick meat operating at a heat flux of 600 W/cm2 would experience only a 20 °C additional temperature rise if the meat thermal conductivity degrades from 0.8 W/(cm·K) to 0.3 W/(cm·K). While degradation of meat thermal conductivity in dispersion-type U-Mo fuel can be very substantial, due to formation of an interaction layer between the particles and the matrix and the development of fission-gas-filled porosity, this simple analysis demonstrates that the phenomenon is unlikely to significantly affect the temperature-based safety margin of the fuel during normal operation.
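
    The magnitude quoted above can be checked with the textbook result for a uniformly heated slab cooled symmetrically on both faces: the centreline-to-surface rise is ΔT = q''·a/(2k), with q'' the surface heat flux, a the meat half-thickness and k the meat conductivity. The few lines below rework the 0.635 mm / 600 W/cm2 example; treating the meat as a symmetric slab and ignoring cladding and film resistances (which cancel when comparing the two conductivities) is a simplification for illustration, not the author's exact model.

    ```python
    # Centreline-to-surface temperature rise in a uniformly heated slab cooled on
    # both faces: dT = q'' * a / (2 k).
    q_flux = 600.0          # surface heat flux, W/cm^2
    a = 0.0635 / 2.0        # meat half-thickness, cm

    rises = []
    for k in (0.8, 0.3):    # fresh vs. degraded meat conductivity, W/(cm K)
        dT = q_flux * a / (2.0 * k)
        rises.append(dT)
        print(f"k = {k:.1f} W/(cm K): centreline rise ~ {dT:.1f} degC")

    print(f"difference ~ {rises[1] - rises[0]:.0f} degC")  # ~20 degC, as quoted above
    ```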

  13. New climatic targets against global warming: will the maximum 2 °C temperature rise affect estuarine benthic communities?

    Science.gov (United States)

    Crespo, Daniel; Grilo, Tiago Fernandes; Baptista, Joana; Coelho, João Pedro; Lillebø, Ana Isabel; Cássio, Fernanda; Fernandes, Isabel; Pascoal, Cláudia; Pardal, Miguel Ângelo; Dolbeth, Marina

    2017-06-20

    The Paris Agreement signed by 195 countries in 2015 sets out a global action plan to avoid dangerous climate change by keeping global warming below 2 °C. Under that premise, in situ experiments were run to test the effects of a 2 °C temperature increase on the benthic communities in a seagrass bed and adjacent bare sediment in a temperate European estuary. Temperature was artificially increased in situ, and diversity and ecosystem functioning components were measured after 10 and 30 days. Despite some warming effects on the analysed components, no significant impacts were detected on macro- and microfauna structure, bioturbation or nutrient fluxes. The effect of site/habitat seemed more important than the effect of warming, with the seagrass habitat providing more homogeneous results and being less impacted by warming than the adjacent bare sediment. The results reinforce that most ecological responses to global changes are context dependent and that ecosystem stability depends not only on biological diversity but also on the availability of different habitats and niches, highlighting the role of coastal wetlands. In the context of the Paris Agreement it seems that estuarine benthic ecosystems will be able to cope if global warming remains below 2 °C.

  14. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the Regional Climate Model COSMO-CLM over Africa

    Directory of Open Access Journals (Sweden)

    Stefan Krähenmann

    2013-07-01

    Full Text Available The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008–2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly

  15. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the regional climate model COSMO-CLM over Africa

    Energy Technology Data Exchange (ETDEWEB)

    Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)

    2013-10-15

    The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2 °C across arid areas, yet overestimated by around 2 °C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across

  16. Bonding in Heavier Group 14 Zero-Valent Complexes-A Combined Maximum Probability Domain and Valence Bond Theory Approach.

    Science.gov (United States)

    Turek, Jan; Braïda, Benoît; De Proft, Frank

    2017-10-17

    The bonding in heavier Group 14 zero-valent complexes of a general formula L2E (E=Si-Pb; L=phosphine, N-heterocyclic and acyclic carbene, cyclic tetrylene and carbon monoxide) is probed by combining valence bond (VB) theory and maximum probability domain (MPD) approaches. All studied complexes are initially evaluated on the basis of the structural parameters and the shape of frontier orbitals revealing a bent structural motif and the presence of two lone pairs at the central E atom. For the VB calculations three resonance structures are suggested, representing the "ylidone", "ylidene" and "bent allene" structures, respectively. The influence of both ligands and central atoms on the bonding situation is clearly expressed in different weights of the resonance structures for the particular complexes. In general, the bonding in the studied E0 compounds, the tetrylones, is best described as a resonating combination of "ylidone" and "ylidene" structures with a minor contribution of the "bent allene" structure. Moreover, the VB calculations allow for a straightforward assessment of the π-backbonding (E→L) stabilization energy. The validity of the suggested resonance model is further confirmed by the complementary MPD calculations focusing on the E lone pair region as well as the E-L bonding region. Likewise, the MPD method reveals a strong influence of the σ-donating and π-accepting properties of the ligand. In particular, either one single domain or two symmetrical domains are found in the lone pair region of the central atom, supporting the predominance of either the "ylidene" or "ylidone" structures having one or two lone pairs at the central atom, respectively. Furthermore, the calculated average populations in the lone pair MPDs correlate very well with the natural bond orbital (NBO) populations, and can be related to the average number of electrons that is backdonated to the ligands. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Urban pavement surface temperature. Comparison of numerical and statistical approach

    Science.gov (United States)

    Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia

    2015-04-01

    The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, where it is used to plan snow plowing and salting of roads. Such forecasts mainly rely on numerical models based on a description of the energy balance between the atmosphere, the buildings and the pavement in a canyon configuration. Nevertheless, there is a specific need for a physical description and numerical implementation of traffic in the energy flux balance, which was originally treated as a constant. Many changes were made to a numerical model to describe the traffic effects on this urban energy balance as accurately as possible, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on the urban pavement surface. Based on meteorological data, the corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated good agreement between the forecasts from the numerical model based on this energy balance approach and the measurements. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, using data from thermal mapping by infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained for the specific case of the urban configuration, with traffic accounted for in the measurements used for the statistical analysis. A comparison between results from the numerical model based on the energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
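
    The statistical branch of the comparison described above maps directly onto scikit-learn: principal components summarize the thermal-mapping predictors and a partial least-squares regression relates them to measured surface temperature. The snippet below is a generic sketch on random placeholder data, not the authors' dataset or exact pre-processing.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)

    # Placeholder predictors, e.g. air temperature, traffic intensity, sky-view
    # factor, previous surface temperature ... (n_samples x n_features).
    X = rng.normal(size=(200, 6))
    # Placeholder target: measured pavement surface temperature (degC).
    y = 10.0 + X @ np.array([2.0, 0.5, -1.0, 0.3, 0.0, 0.8]) + rng.normal(scale=0.5, size=200)

    # Step 1: PCA to see how much variance a few components capture.
    pca = PCA(n_components=3).fit(X)
    print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))

    # Step 2: PLS regression between the predictors and surface temperature.
    pls = PLSRegression(n_components=3).fit(X, y)
    print("R^2 on training data:", round(pls.score(X, y), 3))
    ```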

  18. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  19. Surface temperature evolution and the location of maximum and average surface temperature of a lithium-ion pouch cell under variable load profiles

    DEFF Research Database (Denmark)

    Goutam, Shovon; Timmermans, Jean-Marc; Omar, Noshin

    2014-01-01

    This experimental work attempts to determine the surface temperature evolution of large (20 Ah-rated capacity) commercial Lithium-Ion pouch cells for the application of rechargeable energy storage of plug in hybrid electric vehicles and electric vehicles. The cathode of the cells is nickel...

  20. Further studies of the stability of LiF:Mg,Cu,P (GR-200) at maximum readout temperatures between 240 °C and 280 °C

    International Nuclear Information System (INIS)

    Oster, L.; Horowitz, Y.S.; Horowitz, A.

    1996-01-01

    It has recently been shown that LiF:Mg,Cu,P (GR-200) can be read out to temperatures as high as 270 °C for 12 s with negligible loss in sensitivity. In the present work the long-term sensitivity of GR-200 was studied at readout temperatures between 240 °C and 280 °C. The idea was that the readout temperatures above 240 °C might initiate reaction processes which influence the sensitivity only after long-term storage. No difference was found in the behaviour of GR-200 chips with 80 accumulated readouts to 240 °C or 270 °C and after storage of up to four months. Slight losses in sensitivity of 4% for 240 °C and 10% for 270 °C are observed after 80 readouts during four months storage. However, at a maximum readout temperature of 280 °C, a 33% loss in sensitivity after 80 cycles is observed. In conclusion it is found that GR-200 can be read out at temperatures as high as 270 °C with negligible loss in sensitivity (less than 0.1% per readout following an initialisation procedure of 1 readout) and acceptable residual signal (0.6%). (author)

  1. Simulation of the maximum yield of sugar cane at different altitudes: effect of temperature on the conversion of radiation into biomass

    International Nuclear Information System (INIS)

    Martine, J.F.; Siband, P.; Bonhomme, R.

    1999-01-01

    To minimize the production costs of sugar cane for the diverse production sites found in La Réunion, an improved understanding of the influence of temperature on the dry matter radiation quotient is required. Existing models simulate the temperature-radiation interaction poorly. A model of sugar cane growth was fitted to results from two contrasting sites (mean temperatures: 14-30 °C; total radiation: 10-25 MJ·m⁻²·d⁻¹), on a ratoon crop of cv. R570, under conditions of non-limiting resources. Radiation interception, aerial biomass, the fraction of millable stems, and their moisture content were measured. The time-courses of the efficiency of radiation interception differed between sites; as a function of the sum of day-degrees, they were similar. The dry matter radiation quotient was related to temperature. The moisture content of millable stems depended on the day-degree sum. On the other hand, the leaf/stem ratio was independent of temperature. The relationships established enabled the construction of a simple model of yield potential. Applied to a set of sites representing the sugar cane growing area of La Réunion, it gave a good prediction of maximum yields. (author)
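
    The yield-potential model outlined above rests on two ingredients: a running day-degree sum that drives radiation interception, and a dry matter radiation quotient (radiation use efficiency) that varies with temperature. The fragment below mimics that structure only; the base temperature, the RUE-temperature curve and the interception function are illustrative assumptions, not the calibrated relationships of the paper.

    ```python
    import numpy as np

    T_BASE = 12.0  # assumed base temperature for day-degree accumulation, degC

    def rue(t_mean):
        """Illustrative dry matter radiation quotient (g/MJ) increasing with
        temperature and saturating; NOT the calibrated relation from the study."""
        return 1.8 * (1.0 - np.exp(-0.12 * max(t_mean - T_BASE, 0.0)))

    def interception(dd_sum):
        """Illustrative fraction of incident radiation intercepted vs. day-degree sum."""
        return 1.0 - np.exp(-0.002 * dd_sum)

    def simulate(t_mean_daily, radiation_daily):
        """Accumulate aerial biomass (g/m^2) from daily mean temperature (degC)
        and daily global radiation (MJ/m^2)."""
        dd_sum, biomass = 0.0, 0.0
        for t, rad in zip(t_mean_daily, radiation_daily):
            dd_sum += max(t - T_BASE, 0.0)
            biomass += rue(t) * interception(dd_sum) * rad
        return biomass

    # Two contrasting illustrative climates, 300 days each.
    cool = simulate(np.full(300, 18.0), np.full(300, 14.0))
    warm = simulate(np.full(300, 26.0), np.full(300, 20.0))
    print(f"cool site ~ {cool:.0f} g/m2, warm site ~ {warm:.0f} g/m2")
    ```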

  2. Cumulant approach to dynamical correlation functions at finite temperatures

    International Nuclear Information System (INIS)

    Tran Minhtien.

    1993-11-01

    A new theoretical approach, based on the introduction of cumulants, to calculate thermodynamic averages and dynamical correlation functions at finite temperatures is developed. The method is formulated in Liouville instead of Hilbert space and can be applied to operators which do not require to satisfy fermion or boson commutation relations. The application of the partitioning and projection methods for the dynamical correlation functions is discussed. The present method can be applied to weakly as well as to strongly correlated systems. (author). 9 refs

  3. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  4. Study on wavelength of maximum absorbance for phenyl- thiourea derivatives: A topological and non-conventional physicochemical approach

    International Nuclear Information System (INIS)

    Thakur, Suprajnya; Mishra, Ashutosh; Thakur, Mamta; Thakur, Abhilash

    2014-01-01

    In the present study, efforts have been made to analyse the role of different structural/topological and non-conventional physicochemical features in the X-ray absorption property wavelength of maximum absorption, λm. Efforts are also made to compare the magnitudes of the various parameters in order to optimize the features mainly responsible for characterizing the wavelength of maximum absorbance λm in X-ray absorption. For this purpose the multiple linear regression method is used, and on the basis of the regression and correlation values suitable models have been developed.

  5. Controls on seasonal patterns of maximum ecosystem carbon uptake and canopy-scale photosynthetic light response: contributions from both temperature and photoperiod.

    Science.gov (United States)

    Stoy, Paul C; Trowbridge, Amy M; Bauerle, William L

    2014-02-01

    Most models of photosynthetic activity assume that temperature is the dominant control over physiological processes. Recent studies have found, however, that photoperiod is a better descriptor than temperature of the seasonal variability of photosynthetic physiology at the leaf scale. Incorporating photoperiodic control into global models consequently improves their representation of the seasonality and magnitude of atmospheric CO2 concentration. The role of photoperiod versus that of temperature in controlling the seasonal variability of photosynthetic function at the canopy scale remains unexplored. We quantified the seasonal variability of ecosystem-level light response curves using nearly 400 site years of eddy covariance data from over eighty Free Fair-Use sites in the FLUXNET database. Model parameters describing maximum canopy CO2 uptake and the initial slope of the light response curve peaked after peak temperature in about 2/3 of site years examined, emphasizing the important role of temperature in controlling seasonal photosynthetic function. Akaike's Information Criterion analyses indicated that photoperiod should be included in models of seasonal parameter variability in over 90% of the site years investigated here, demonstrating that photoperiod also plays an important role in controlling seasonal photosynthetic function. We also performed a Granger causality analysis on both gross ecosystem productivity (GEP) and GEP normalized by photosynthetic photon flux density (GEPn). While photoperiod Granger-caused GEP and GEPn in 99 and 92% of all site years, respectively, air temperature Granger-caused GEP in a mere 32% of site years but Granger-caused GEPn in 81% of all site years. Results demonstrate that incorporating photoperiod may be a logical step toward improving models of ecosystem carbon uptake, but not at the expense of including enzyme kinetic-based temperature constraints on canopy-scale photosynthesis.
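
    The Granger causality step reported above is easy to reproduce with statsmodels: the test asks whether lagged photoperiod (or temperature) improves the prediction of GEP beyond GEP's own lags. The sketch below uses synthetic series purely to show the call pattern; the statsmodels convention is that the first column is the predicted variable and the second the candidate cause.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)

    # Synthetic daily series standing in for photoperiod (hours) and GEP.
    n = 365
    photoperiod = 12.0 + 3.0 * np.sin(2 * np.pi * np.arange(n) / 365.0)
    gep = 2.0 + 0.5 * np.roll(photoperiod, 5) + rng.normal(scale=0.2, size=n)

    # Column 0: predicted series (GEP); column 1: candidate cause (photoperiod).
    data = np.column_stack([gep, photoperiod])
    results = grangercausalitytests(data, maxlag=7)

    for lag, res in results.items():
        p_value = res[0]["ssr_ftest"][1]
        print(f"lag {lag}: p = {p_value:.3g}")
    ```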

  6. Intelligent approach to maximum power point tracking control strategy for variable-speed wind turbine generation system

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Whei-Min; Hong, Chih-Ming [Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424 (China)

    2010-06-15

    To achieve maximum power point tracking (MPPT) for wind power generation systems, the rotational speed of wind turbines should be adjusted in real time according to wind speed. In this paper, a Wilcoxon radial basis function network (WRBFN) with a hill-climb searching (HCS) MPPT strategy is proposed for a permanent magnet synchronous generator (PMSG) with a variable-speed wind turbine. A high-performance online-trained WRBFN using a back-propagation learning algorithm with a modified particle swarm optimization (MPSO) regulating controller is designed for the PMSG. The MPSO is adopted in this study to adapt the learning rates in the back-propagation process of the WRBFN to improve the learning capability. The MPPT strategy locates the system operating points along the maximum power curves based on the dc-link voltage of the inverter, thus avoiding the need for generator speed detection. (author)
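
    The hill-climb searching part of the strategy above can be illustrated on its own, without the neural-network controller: perturb the operating point, observe the change in output power, and keep moving in the direction that increases it, shrinking the step when the power drops. The loop below is a generic perturb-and-observe sketch against a toy power curve, not the WRBFN/MPSO controller of the paper.

    ```python
    def power_curve(speed):
        """Toy turbine power curve with a single maximum (arbitrary units)."""
        return -(speed - 7.0) ** 2 + 49.0

    def hill_climb_mppt(speed0=3.0, step=0.5, iters=30):
        """Generic perturb-and-observe hill climbing toward maximum power."""
        speed = speed0
        p_prev = power_curve(speed)
        direction = +1
        for _ in range(iters):
            speed += direction * step
            p_new = power_curve(speed)
            if p_new < p_prev:       # power dropped: reverse direction, shrink step
                direction = -direction
                step *= 0.7
            p_prev = p_new
        return speed, p_prev

    speed, power = hill_climb_mppt()
    print(f"converged near speed ~ {speed:.2f}, power ~ {power:.2f}")
    ```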

  7. One repetition maximum bench press performance: a new approach for its evaluation in inexperienced males and females: a pilot study.

    Science.gov (United States)

    Bianco, Antonino; Filingeri, Davide; Paoli, Antonio; Palma, Antonio

    2015-04-01

    The aim of this study was to evaluate a new method of performing the one repetition maximum (1RM) bench press test by combining previously validated predictive and practical procedures. Eight young male and 7 female participants, with no previous experience of resistance training, performed a first set of repetitions to fatigue (RTF) with a workload corresponding to ⅓ of their body mass (BM) for a maximum of 25 repetitions. Following a 5-min recovery period, a second set of RTF was performed with a workload corresponding to ½ of the participants' BM. The number of repetitions performed in this set was then used to predict the workload to be used for the 1RM bench press test using Mayhew's equation. Oxygen consumption, heart rate and blood lactate were monitored before, during and after each 1RM attempt. A significant effect of gender was found on the maximum number of repetitions achieved during the RTF set performed with ½ of the participants' BM (males: 25.0 ± 6.3; females: 11.0 ± 10.6; t = 6.2) and, consequently, on the workload predicted for the 1RM bench press test. We conclude that, by combining previously validated predictive equations with practical procedures (i.e. using a fraction of participants' BM to determine the workload for an RTF set), the new method we tested appeared safe, accurate (particularly in females) and time-effective in the practical evaluation of 1RM performance in inexperienced individuals. Copyright © 2014 Elsevier Ltd. All rights reserved.
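
    The prediction step in the protocol above relies on Mayhew's repetitions-to-fatigue equation, commonly cited as 1RM = 100·w / (52.2 + 41.9·e^(−0.055·reps)), where w is the submaximal load. The helper below combines that form with the ½-body-mass loading rule described in the abstract; the record does not state which variant of the equation the authors used, so the constants here are the standard published ones, not necessarily theirs.

    ```python
    import math

    def mayhew_1rm(load_kg: float, reps: int) -> float:
        """Predicted one-repetition maximum from a repetitions-to-fatigue set,
        using the commonly cited form of Mayhew's equation."""
        return 100.0 * load_kg / (52.2 + 41.9 * math.exp(-0.055 * reps))

    def predicted_1rm_from_body_mass(body_mass_kg: float, reps_at_half_bm: int) -> float:
        """Second RTF set of the protocol: the load is 1/2 of body mass."""
        return mayhew_1rm(body_mass_kg / 2.0, reps_at_half_bm)

    # Example: a 70 kg participant completing 12 repetitions with 35 kg.
    print(f"predicted 1RM ~ {predicted_1rm_from_body_mass(70.0, 12):.1f} kg")
    ```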

  8. Entropy generation minimization: A practical approach for performance evaluation of temperature cascaded co-generation plants

    KAUST Repository

    Myat, Aung; Thu, Kyaw; Kim, Youngdeuk; Saha, Bidyut Baran; Ng, K. C.

    2012-01-01

    We present a practical tool that employs the entropy generation minimization (EGM) approach for an in-depth performance evaluation of a co-generation plant with a temperature-cascaded concept. The co-generation plant produces useful effects sequentially, i.e., (i) electricity from the micro-turbines, (ii) low-pressure steam at 250 °C or about 8-10 bar, (iii) a cooling capacity of 4 refrigeration tons (Rtons) and (iv) dehumidification of outdoor air for air-conditioned space. The main objective is to determine the most efficient configuration for producing power and heat. We employed entropy generation minimization (EGM), which seeks to minimize the dissipative losses and maximize the cycle efficiency of the individual thermally activated systems. The minimization of dissipative losses, or EGM, is performed in two steps, namely (i) adjusting the heat source temperatures for the heat-fired cycles and (ii) using a Genetic Algorithm (GA) to seek out the sensitivity of heat transfer areas, flow rates of working fluids, inlet temperatures of heat sources and coolant, etc., over the anticipated range of operation to achieve maximum efficiency. With EGM equipped with GA, we verified that the local minimization of entropy generation individually at each of the heat-activated processes would lead to the maximum efficiency of the system. © 2012.

  9. Entropy generation minimization: A practical approach for performance evaluation of temperature cascaded co-generation plants

    KAUST Repository

    Myat, Aung

    2012-10-01

    We present a practical tool that employs the entropy generation minimization (EGM) approach for an in-depth performance evaluation of a co-generation plant with a temperature-cascaded concept. The co-generation plant produces useful effects sequentially, i.e., (i) electricity from the micro-turbines, (ii) low-pressure steam at 250 °C or about 8-10 bar, (iii) a cooling capacity of 4 refrigeration tons (Rtons) and (iv) dehumidification of outdoor air for air-conditioned space. The main objective is to determine the most efficient configuration for producing power and heat. We employed entropy generation minimization (EGM), which seeks to minimize the dissipative losses and maximize the cycle efficiency of the individual thermally activated systems. The minimization of dissipative losses, or EGM, is performed in two steps, namely (i) adjusting the heat source temperatures for the heat-fired cycles and (ii) using a Genetic Algorithm (GA) to seek out the sensitivity of heat transfer areas, flow rates of working fluids, inlet temperatures of heat sources and coolant, etc., over the anticipated range of operation to achieve maximum efficiency. With EGM equipped with GA, we verified that the local minimization of entropy generation individually at each of the heat-activated processes would lead to the maximum efficiency of the system. © 2012.

  10. An additive approach to low temperature zero pressure sintering of bismuth antimony telluride thermoelectric materials

    Science.gov (United States)

    Catlin, Glenn C.; Tripathi, Rajesh; Nunes, Geoffrey; Lynch, Philip B.; Jones, Howard D.; Schmitt, Devin C.

    2017-03-01

    This paper presents an additive-based approach to the formulation of thermoelectric materials suitable for screen printing. Such printing processes are a likely route to such thermoelectric applications as micro-generators for wireless sensor networks and medical devices, but require the development of materials that can be sintered at ambient pressure and low temperatures. Using a rapid screening process, we identify the eutectic combination of antimony and tellurium as an additive for bismuth-antimony-telluride that enables good thermoelectric performance without a high pressure step. An optimized composite of 15 weight percent Sb7.5Te92.5 in Bi0.5Sb1.5Te3 is scaled up and formulated into a screen-printable paste. Samples fabricated from this paste achieve a thermoelectric figure of merit (ZT) of 0.74 using a maximum processing temperature of 748 K and a total thermal processing budget of 12 K-hours.

  11. Analytical approach for evaluating temperature field of thermal modified asphalt pavement and urban heat island effect

    International Nuclear Information System (INIS)

    Chen, Jiaqi; Wang, Hao; Zhu, Hongzhou

    2017-01-01

    Highlights: • Derive an analytical approach to predict temperature fields of multi-layered asphalt pavement based on Green’s function. • Analyze the effects of thermal modifications on heat output from pavement to near-surface environment. • Evaluate pavement solutions for reducing urban heat island (UHI) effect. - Abstract: This paper aims to present an analytical approach to predict temperature fields in asphalt pavement and evaluate the effects of thermal modification on near-surface environment for urban heat island (UHI) effect. The analytical solution of temperature fields in the multi-layered pavement structure was derived with the Green’s function method, using climatic factors including solar radiation, wind velocity, and air temperature as input parameters. The temperature solutions were validated with an outdoor field experiment. By using the proposed analytical solution, temperature fields in the pavement with different pavement surface albedo, thermal conductivity, and layer combinations were analyzed. Heat output from pavement surface to the near-surface environment was studied as an indicator of pavement contribution to UHI effect. The analysis results show that increasing pavement surface albedo could decrease pavement temperature at various depths, and increase heat output intensity in the daytime but decrease heat output intensity in the nighttime. Using reflective pavement to mitigate UHI may be effective for an open street but become ineffective for the street surrounded by high buildings. On the other hand, high-conductivity pavement could alleviate the UHI effect in the daytime for both the open street and the street surrounded by high buildings. Among different combinations of thermal-modified asphalt mixtures, the layer combination of high-conductivity surface course and base course could reduce the maximum heat output intensity and alleviate the UHI effect most.

  12. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
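
    The first-stage model referred to above is the Ricker stock-recruitment curve, R = S·exp(a − b·S), which linearises to ln(R/S) = a − b·S and can therefore be fitted by ordinary least squares; the Ricker a is the recruits-per-spawner at low abundance and the curve peaks at S = 1/b. The snippet below fits that form to invented spawner-recruit pairs as a generic illustration, not a re-analysis of the chinook index stocks.

    ```python
    import numpy as np

    def fit_ricker(spawners, recruits):
        """Fit R = S * exp(a - b*S) via the linearisation ln(R/S) = a - b*S."""
        y = np.log(recruits / spawners)
        slope, intercept = np.polyfit(spawners, y, 1)
        return intercept, -slope          # a, b (b > 0 indicates density dependence)

    # Invented spawner/recruit data for illustration only.
    S = np.array([200.0, 400.0, 600.0, 800.0, 1000.0, 1200.0])
    R = np.array([520.0, 850.0, 1020.0, 1060.0, 1000.0, 900.0])

    a, b = fit_ricker(S, R)
    S_peak = 1.0 / b                      # spawner level giving maximum recruitment
    R_max = S_peak * np.exp(a - 1.0)      # maximum recruitment of the fitted curve
    print(f"a ~ {a:.2f}, b ~ {b:.5f}, max recruitment ~ {R_max:.0f} at S ~ {S_peak:.0f}")
    ```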

  13. Estimating future temperature maxima in lakes across the United States using a surrogate modeling approach.

    Directory of Open Access Journals (Sweden)

    Jonathan B Butcher

    Full Text Available A warming climate increases thermal inputs to lakes with potential implications for water quality and aquatic ecosystems. In a previous study, we used a dynamic water column temperature and mixing simulation model to simulate chronic (7-day average) maximum temperatures under a range of potential future climate projections at selected sites representative of different U.S. regions. Here, to extend results to lakes where dynamic models have not been developed, we apply a novel machine learning approach that uses Gaussian Process regression to describe the model response surface as a function of simplified lake characteristics (depth, surface area, water clarity) and climate forcing (winter and summer air temperatures and potential evapotranspiration). We use this approach to extrapolate predictions from the simulation model to the statistical sample of U.S. lakes in the National Lakes Assessment (NLA) database. Results provide a national-scale scoping assessment of the potential thermal risk to lake water quality and ecosystems across the U.S. We suggest a small fraction of lakes will experience less risk of summer thermal stress events due to changes in stratification and mixing dynamics, but most will experience increases. The percentage of lakes in the NLA with simulated 7-day average maximum water temperatures in excess of 30°C is projected to increase from less than 2% to approximately 22% by the end of the 21st century, which could significantly reduce the number of lakes that can support cold water fisheries. Site-specific analysis of the full range of factors that influence thermal profiles in individual lakes is needed to develop appropriate adaptation strategies.
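
    The surrogate-modeling idea above, emulating a process-based lake temperature model with Gaussian Process regression on a few lake and climate descriptors, maps directly onto scikit-learn. The sketch below trains a GP on made-up (descriptor, simulated 7-day maximum temperature) pairs and predicts, with uncertainty, for a new lake; the feature set and kernel are assumptions chosen to mirror the description, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(42)

    # Illustrative training set: rows are (mean depth m, surface area km^2,
    # Secchi depth m, winter air T degC, summer air T degC, summer PET mm/d);
    # y is the simulated 7-day average maximum water temperature (degC).
    X = rng.uniform([2, 0.1, 0.5, -10, 18, 2], [30, 50, 8, 5, 32, 8], size=(80, 6))
    y = 0.8 * X[:, 4] + 0.3 * X[:, 3] - 0.1 * X[:, 0] + 5.0 + rng.normal(scale=0.5, size=80)

    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(6))
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, alpha=1e-2).fit(X, y)

    # Predict for a new lake (placeholder descriptor values).
    new_lake = np.array([[10.0, 2.0, 3.0, -2.0, 27.0, 5.0]])
    mean, std = gp.predict(new_lake, return_std=True)
    print(f"predicted 7-day max ~ {mean[0]:.1f} +/- {std[0]:.1f} degC")
    ```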

  14. An integrate-over-temperature approach for enhanced sampling.

    Science.gov (United States)

    Gao, Yi Qin

    2008-02-14

    A simple method is introduced to achieve efficient random walking in the energy space in molecular dynamics simulations which thus enhances the sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled in a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.

  15. Probabilistic properties of the date of maximum river flow, an approach based on circular statistics in lowland, highland and mountainous catchment

    Science.gov (United States)

    Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz

    2018-04-01

    Probabilistic properties of the dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. Circular measures of location and dispersion were applied to the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of the Kuiper's and Watson's tests. Results show that the number of components varied between catchments and differed for seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include grouping catchments based on similarity between circular distribution functions and the linkage between dates of maximum precipitation and maximum flow.
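
    The circular treatment described above starts by mapping calendar dates of maximum flow onto angles on the unit circle, after which location and concentration can be estimated; a single von Mises component is the simplest special case of the mixtures fitted in the paper. The sketch below shows that conversion and a one-component fit with scipy on invented dates.

    ```python
    import numpy as np
    from scipy.stats import circmean, vonmises

    # Invented days of the year on which the annual maximum flow occurred.
    days = np.array([75, 80, 92, 101, 88, 70, 110, 95, 84, 78, 99, 86])

    # Map day-of-year onto an angle (radians) on the unit circle.
    angles = 2.0 * np.pi * days / 365.25

    # Circular mean date, then a one-component von Mises fit (the study fits
    # mixtures of von Mises components; one component is the simplest case).
    mean_angle = circmean(angles)
    kappa, loc, _ = vonmises.fit(angles, fscale=1)

    to_day = 365.25 / (2.0 * np.pi)
    print(f"circular mean day ~ {mean_angle * to_day:.0f}, "
          f"kappa ~ {kappa:.2f}, fitted location day ~ {loc * to_day:.0f}")
    ```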

  16. Glass precursor approach to high-temperature superconductors

    Science.gov (United States)

    Bansal, Narottam P.

    1992-01-01

    The available studies on the synthesis of high-Tc superconductors (HTS) via the glass precursor approach were reviewed. Melts of the Bi-Sr-Ca-Cu-O system as well as those doped with oxides of some other elements (Pb, Al, V, Te, Nb, etc.) could be quenched into glasses which, on further heat treatments under appropriate conditions, crystallized into the superconducting phase(s). The nature of the HTS phase(s) formed depends on the annealing temperature, time, atmosphere, and the cooling rate and also on the glass composition. Long term annealing was needed to obtain a large fraction of the 110 K phase. The high-Tc phase did not crystallize out directly from the glass matrix, but was preceded by the precipitation of other phases. The 110 K HTS was produced at high temperatures by reaction between the phases formed at lower temperatures resulting in multiphase material. The presence of a glass former such as B2O3 was necessary for the Y-Ba-Cu-O melt to form a glass on fast cooling. A discontinuous YBa2Cu3O(7-delta) HTS phase crystallized out on heat treatment of this glass. Attempts to prepare Tl-Ba-Ca-Cu-O system in the glassy state were not successful.

  17. Waste Load Allocation Based on Total Maximum Daily Load Approach Using the Charged System Search (CSS Algorithm

    Directory of Open Access Journals (Sweden)

    Elham Faraji

    2016-03-01

    Full Text Available In this research, the capability of a charged system search algorithm (CSS in handling water management optimization problems is investigated. First, two complex mathematical problems are solved by CSS and the results are compared with those obtained from other metaheuristic algorithms. In the last step, the optimization model developed by the CSS algorithm is applied to the waste load allocation in rivers based on the total maximum daily load (TMDL concept. The results are presented in Tables and Figures for easy comparison. The study indicates the superiority of the CSS algorithm in terms of its speed and performance over the other metaheuristic algorithms while its precision in water management optimization problems is verified.

  18. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    International Nuclear Information System (INIS)

    He, Yi; Scheraga, Harold A.; Liwo, Adam

    2015-01-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original all-atom representation of the biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field

  19. Human body temperature and new approaches to constructing temperature-sensitive bacterial vaccines.

    Science.gov (United States)

    White, Matthew D; Bosio, Catharine M; Duplantis, Barry N; Nano, Francis E

    2011-09-01

    Many of the live human and animal vaccines that are currently in use are attenuated by virtue of their temperature-sensitive (TS) replication. These vaccines are able to function because they can take advantage of sites in mammalian bodies that are cooler than the core temperature, where TS vaccines fail to replicate. In this article, we discuss the distribution of temperature in the human body, and relate how the temperature differential can be exploited for designing and using TS vaccines. We also examine how one of the coolest organs of the body, the skin, contains antigen-processing cells that can be targeted to provoke the desired immune response from a TS vaccine. We describe traditional approaches to making TS vaccines, and highlight new information and technologies that are being used to create a new generation of engineered TS vaccines. We pay particular attention to the recently described technology of substituting essential genes from Arctic bacteria for their homologues in mammalian pathogens as a way of creating TS vaccines.

  20. A stochastic-deterministic approach for evaluation of uncertainty in the predicted maximum fuel bundle enthalpy in a CANDU postulated LBLOCA event

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2014-07-01

    A stochastic-deterministic approach based on representation of uncertainties by subjective probabilities is proposed for evaluation of bounding values of functional failure probability and assessment of probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA and evaluation of the possibility distribution function of maximum bundle enthalpy considering the reactor physics part of LBLOCA power pulse simulation only. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure driven by the computer code DAKOTA to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions with deterministic constraints and fixed transient bundle-wise thermal hydraulic conditions. With this information, a bounding estimate of functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in evaluation of core damage frequency. (author)

  1. Thermogravimetric analysis and kinetic modeling of low-transition-temperature mixtures pretreated oil palm empty fruit bunch for possible maximum yield of pyrolysis oil.

    Science.gov (United States)

    Yiin, Chung Loong; Yusup, Suzana; Quitain, Armando T; Uemura, Yoshimitsu; Sasaki, Mitsuru; Kida, Tetsuya

    2018-05-01

    The impacts of low-transition-temperature mixtures (LTTMs) pretreatment on thermal decomposition and kinetics of empty fruit bunch (EFB) were investigated by thermogravimetric analysis. EFB was pretreated with the LTTMs under different duration of pretreatment which enabled various degrees of alteration to their structure. The TG-DTG curves showed that LTTMs pretreatment on EFB shifted the temperature and rate of decomposition to higher values. The EFB pretreated with sucrose and choline chloride-based LTTMs had attained the highest mass loss of volatile matter (78.69% and 75.71%) after 18 h of pretreatment. For monosodium glutamate-based LTTMs, the 24 h pretreated EFB had achieved the maximum mass loss (76.1%). Based on the Coats-Redfern integral method, the LTTMs pretreatment led to an increase in activation energy of the thermal decomposition of EFB from 80.00 to 82.82-94.80 kJ/mol. The activation energy was mainly affected by the demineralization and alteration in cellulose crystallinity after LTTMs pretreatment. Copyright © 2018 Elsevier Ltd. All rights reserved.
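
    The Coats-Redfern method named above linearises the integral rate law as ln[g(α)/T²] = ln[AR/(βE)] − E/(RT), so the activation energy follows from the slope of ln[g(α)/T²] against 1/T for an assumed reaction model g(α). The snippet below performs that regression for first-order kinetics, g(α) = −ln(1−α), on synthetic TGA points; it reproduces the method only, not the paper's data, though the result lands in the same tens-of-kJ/mol range discussed above.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def coats_redfern_activation_energy(T_kelvin, alpha):
        """Activation energy (kJ/mol) from the Coats-Redfern linearisation
        ln[g(alpha)/T^2] = const - E/(R*T), assuming first-order kinetics
        g(alpha) = -ln(1 - alpha)."""
        g = -np.log(1.0 - alpha)
        y = np.log(g / T_kelvin**2)
        slope, _ = np.polyfit(1.0 / T_kelvin, y, 1)
        return -slope * R / 1000.0

    # Synthetic conversion-temperature points for illustration only.
    T = np.array([520.0, 540.0, 560.0, 580.0, 600.0, 620.0])    # K
    alpha = np.array([0.05, 0.12, 0.25, 0.45, 0.66, 0.84])      # conversion

    print(f"E ~ {coats_redfern_activation_energy(T, alpha):.0f} kJ/mol")
    ```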

  2. DEVELOPMENT OF GREEN’S FUNCTION APPROACH CONSIDERING TEMPERATURE-DEPENDENT MATERIAL PROPERTIES AND ITS APPLICATION

    Directory of Open Access Journals (Sweden)

    HAN-OK KO

    2014-02-01

    Full Text Available About 40% of reactors in the world are being operated beyond their design life or are approaching the end of their life cycle. During long-term operation, various degradation mechanisms occur. Fatigue caused by alternating operational stresses, in terms of temperature or pressure change, is an important damage mechanism in the continued operation of nuclear power plants. To monitor the fatigue damage of components, Fatigue Monitoring Systems (FMS) have been installed. Most FMSs have used the Green's Function Approach (GFA) to calculate the thermal stresses rapidly. However, if temperature-dependent material properties are used in a detailed FEM, there is a maximum peak stress discrepancy between a conventional GFA and a detailed FEM because constant material properties are used in the conventional method. Therefore, if a conventional method is used in the fatigue evaluation, thermal stresses for various operating cycles may be calculated incorrectly and this may lead to an unreliable estimation. So, in this paper, a modified GFA which can consider temperature-dependent material properties is proposed, using an artificial neural network and a weight factor. To verify the proposed method, thermal stresses obtained by the new method are compared with those from FEM. Finally, the pros and cons of the new method as well as technical findings from the assessment are discussed.

  3. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    International Nuclear Information System (INIS)

    Scogin, J. H.

    2016-01-01

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of the TGA-MS analysis which reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of total moisture discussed in this report can be made.

  4. Treponema pallidum 3-Phosphoglycerate Mutase Is a Heat-Labile Enzyme That May Limit the Maximum Growth Temperature for the Spirochete

    Science.gov (United States)

    Benoit, Stéphane; Posey, James E.; Chenoweth, Matthew R.; Gherardini, Frank C.

    2001-01-01

    In the causative agent of syphilis, Treponema pallidum, the gene encoding 3-phosphoglycerate mutase, gpm, is part of a six-gene operon (tro operon) that is regulated by the Mn-dependent repressor TroR. Since substrate-level phosphorylation via the Embden-Meyerhof pathway is the principal way to generate ATP in T. pallidum and Gpm is a key enzyme in this pathway, Mn could exert a regulatory effect on central metabolism in this bacterium. To study this, T. pallidum gpm was cloned, Gpm was purified from Escherichia coli, and antiserum against the recombinant protein was raised. Immunoblots indicated that Gpm was expressed in freshly extracted infective T. pallidum. Enzyme assays indicated that Gpm did not require Mn2+ while 2,3-diphosphoglycerate (DPG) was required for maximum activity. Consistent with these observations, Mn did not copurify with Gpm. The purified Gpm was stable for more than 4 h at 25°C, retained only 50% activity after incubation for 20 min at 34°C or 10 min at 37°C, and was completely inactive after 10 min at 42°C. The temperature effect was attenuated when 1 mM DPG was added to the assay mixture. The recombinant Gpm from pSLB2 complemented E. coli strain PL225 (gpm) and restored growth on minimal glucose medium in a temperature-dependent manner. Increasing the temperature of cultures of E. coli PL225 harboring pSLB2 from 34 to 42°C resulted in a 7- to 11-h period in which no growth occurred (compared to wild-type E. coli). These data suggest that biochemical properties of Gpm could be one contributing factor to the heat sensitivity of T. pallidum. PMID:11466272

  5. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    Energy Technology Data Exchange (ETDEWEB)

    Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-24

    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated. A temperature of the TGA-MS analysis which reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of total moisture discussed in this report can be made.

  6. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    Science.gov (United States)

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.

    2016-09-01

    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.

  7. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  8. Differences between true mean temperatures and means calculated with four different approaches: a case study from three Croatian stations

    Science.gov (United States)

    Bonacci, Ognjen; Željković, Ivana

    2018-01-01

    Different countries use varied methods for daily mean temperature calculation. None of them assesses precisely the true daily mean temperature, which is defined as the integral of continuous temperature measurements in a day. It is of special scientific as well as practical importance to find out how temperatures calculated by different methods and approaches deviate from the true daily mean temperature. Five mean daily temperatures were calculated (T0, T1, T2, T3, T4) using five different equations. The mean of the 24-h temperature observations during the calendar day is accepted to represent the true daily mean T0. The differences Δi between T0 and the four other mean daily temperatures T1, T2, T3, and T4 were calculated and analysed. In the paper, analyses were done with hourly data measured in the period from 1 January 1999 to 31 December 2014 (149,016 h, 192 months and 16 years) at three Croatian meteorological stations. The stations are situated in distinct climatological areas: Zagreb Grič in a mild climate, Zavižan in the cold mountain region and Dubrovnik in the hot Mediterranean. The influence of fog on the temperature is analysed. Special attention is given to analyses of the extreme (maximum and minimum) daily differences that occurred at the three analysed stations. Selection of the fixed local hours used for calculation of the mean daily temperature plays a crucial role in reducing the bias from the true daily temperature.
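
    To make the size of such deviations concrete, the sketch below compares the true daily mean (average of 24 hourly values) with two common approximations for one synthetic diurnal cycle. The fixed-hour formula shown is a widely used climatological convention and is not necessarily one of the paper's T1-T4 definitions.

```python
# Hedged sketch: true daily mean vs two common approximations, synthetic data.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)
# synthetic diurnal cycle: mean 15 degC, 8 degC amplitude, coolest near 05:00
hourly_T = 15.0 + 8.0 * np.sin(2 * np.pi * (hours - 11) / 24) + rng.normal(0, 0.3, 24)

T0 = hourly_T.mean()                                   # true daily mean
T_minmax = 0.5 * (hourly_T.max() + hourly_T.min())     # (Tmax + Tmin) / 2
T_fixed = (hourly_T[7] + hourly_T[14] + 2 * hourly_T[21]) / 4.0  # fixed-hour convention

for name, value in [("true mean", T0), ("(Tmax+Tmin)/2", T_minmax), ("fixed hours", T_fixed)]:
    print(f"{name:>14}: {value:6.2f} degC (bias {value - T0:+.2f})")
```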

  9. 230Th and 234Th as coupled tracers of particle cycling in the ocean: A maximum likelihood approach

    Science.gov (United States)

    Wang, Wei-Lei; Armstrong, Robert A.; Cochran, J. Kirk; Heilbrun, Christina

    2016-05-01

    We applied maximum likelihood estimation to measurements of Th isotopes (234,230Th) in Mediterranean Sea sediment traps that separated particles according to settling velocity. This study contains two unique aspects. First, it relies on settling velocities that were measured using sediment traps, rather than on measured particle sizes and an assumed relationship between particle size and sinking velocity. Second, because of the labor and expense involved in obtaining these data, they were obtained at only a few depths, and their analysis required constructing a new type of box-like model, which we refer to as a "two-layer" model, that we then analyzed using likelihood techniques. Likelihood techniques were developed in the 1930s by statisticians, and form the computational core of both Bayesian and non-Bayesian statistics. Their use has recently become very popular in ecology, but they are relatively unknown in geochemistry. Our model was formulated by assuming steady state and first-order reaction kinetics for thorium adsorption and desorption, and for particle aggregation, disaggregation, and remineralization. We adopted a cutoff settling velocity (49 m/d) from Armstrong et al. (2009) to separate particles into fast- and slow-sinking classes. A unique set of parameters with no dependence on prior values was obtained. Adsorption rate constants for both slow- and fast-sinking particles are slightly higher in the upper layer than in the lower layer. Slow-sinking particles have higher adsorption rate constants than fast-sinking particles. Desorption rate constants are higher in the lower layer (slow-sinking particles: 13.17 ± 1.61 y-1, fast-sinking particles: 13.96 ± 0.48 y-1) than in the upper layer (slow-sinking particles: 7.87 ± 0.60 y-1, fast-sinking particles: 1.81 ± 0.44 y-1). Aggregation rate constants were higher, 1.88 ± 0.04 y-1, in the upper layer and just 0.07 ± 0.01 y-1 in the lower layer. Disaggregation rate constants were just 0.30 ± 0.10 y-1 in the upper
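
    The likelihood machinery mentioned in this record can be illustrated with a much smaller example than the paper's two-layer, two-particle-class model: the sketch below fits a single adsorption rate constant for a one-box, steady-state thorium balance by minimizing a Gaussian negative log-likelihood. The model, the fixed loss rate and the synthetic observations are all placeholder assumptions.

```python
# Hedged maximum-likelihood sketch (requires numpy and scipy); not the paper's model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def model_particulate_th(k_ad, dissolved_th):
    """Steady-state particulate Th for a one-box model with a fixed loss rate."""
    k_loss = 2.0  # 1/y, assumed known here for simplicity
    return k_ad * dissolved_th / k_loss

dissolved = np.linspace(0.5, 3.0, 12)                     # synthetic dissolved Th
observed = model_particulate_th(1.3, dissolved) + rng.normal(0, 0.05, dissolved.size)

def negative_log_likelihood(params):
    k_ad, sigma = params
    if k_ad <= 0 or sigma <= 0:
        return np.inf
    resid = observed - model_particulate_th(k_ad, dissolved)
    return 0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

fit = minimize(negative_log_likelihood, x0=[1.0, 0.1], method="Nelder-Mead")
print("ML estimate of adsorption rate constant:", round(fit.x[0], 3), "1/y")
```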

  10. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    Science.gov (United States)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  11. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    Science.gov (United States)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), by using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (National Spanish Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station, and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r2 and distance was modelled according to equation (1): log(r2ij) = b·dij (1), where log(r2ij) is the common variance between the target (i) and neighbouring (j) series, dij is the distance between them and b is the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using the Ordinary Kriging with a
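
    Equation (1) can be turned into a Correlation Decay Distance estimate by fitting the no-intercept regression of log(r2) on distance and inverting it at a chosen common-variance threshold, as in the hedged sketch below; the pairwise distances, the decay slope and the 0.5 threshold are illustrative assumptions, not values from MOTEDAS.

```python
# Hedged CDD sketch with synthetic station pairs; not the MOTEDAS processing chain.
import numpy as np

rng = np.random.default_rng(2)
distance_km = rng.uniform(5, 200, 40)                    # pairwise station distances
true_b = -0.008                                          # per-km decay, illustrative
r2 = np.exp(true_b * distance_km) * np.exp(rng.normal(0, 0.05, 40))

# equation (1): log(r2_ij) = b * d_ij  (no intercept, since r2 -> 1 at d = 0)
b_hat = np.sum(distance_km * np.log(r2)) / np.sum(distance_km**2)

threshold = 0.5                                          # common-variance cutoff
cdd_km = np.log(threshold) / b_hat
print(f"fitted slope b = {b_hat:.4f} per km, CDD(r2 >= {threshold}) ~ {cdd_km:.0f} km")
```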

  12. Compton scattering at finite temperature: thermal field dynamics approach

    International Nuclear Information System (INIS)

    Juraev, F.I.

    2006-01-01

    Full text: Compton scattering is a classical problem of quantum electrodynamics and has been studied since its early beginnings. Perturbation theory and the Feynman diagram technique enable a comprehensive analysis of this problem, on the basis of which the famous Klein-Nishina formula is obtained [1, 2]. In this work the problem is extended to the case of finite temperature. Finite-temperature effects in Compton scattering are of practical importance for various processes in relativistic thermal plasmas in astrophysics. Recently the Compton effect has been explored using the closed-time path formalism, with temperature corrections estimated [3]. It was found that the thermal cross section can be larger than that for zero temperature by several orders of magnitude for the high temperatures realistic in astrophysics [3]. In our work the main tool used to account for finite-temperature effects is a real-time finite-temperature quantum field theory, the so-called thermofield dynamics [4, 5]. Thermofield dynamics is a canonical formalism to explore field-theoretical processes at finite temperature. It consists of two steps, doubling of the Fock space and Bogolyubov transformations. Doubling leads to the appearance of additional degrees of freedom, called tilded operators, which together with the usual field operators create a so-called thermal doublet. Bogolyubov transformations make the field operators temperature-dependent. Using this formalism we treat Compton scattering at finite temperature by replacing the zero-temperature propagators in the transition amplitude with finite-temperature ones. As a result a finite-temperature extension of the Klein-Nishina formula is obtained, in which the differential cross section is represented as a sum of the zero-temperature cross section and a finite-temperature correction. The obtained result could be useful in the quantum electrodynamics of lasers and for relativistic thermal plasma processes in astrophysics where a correct account of finite-temperature effects is important. (author)

  13. Temperature renormalization group approach to spontaneous symmetry breaking

    International Nuclear Information System (INIS)

    Manesis, E.; Sakakibara, S.

    1985-01-01

    We apply renormalization group equations that describe the finite-temperature behavior of Green's functions to investigate thermal properties of spontaneous symmetry breaking. Specifically, in the O(N)×O(N) symmetric model we study the change of symmetry breaking patterns with temperature, and show that there always exists an unbroken symmetry phase at high temperature, modifying the naive result of leading order in finite-temperature perturbation theory. (orig.)

  14. Fragile-to-fragile liquid transition at Tg and stable-glass phase nucleation rate maximum at the Kauzmann temperature TK

    International Nuclear Information System (INIS)

    Tournier, Robert F.

    2014-01-01

    An undercooled liquid is unstable. The driving force of the glass transition at Tg is a change of the undercooled-liquid Gibbs free energy. The classical Gibbs free energy change for a crystal formation is completed by including an enthalpy saving. The crystal growth critical nucleus is used as a probe to observe the Laplace pressure change Δp accompanying the enthalpy change −Vm×Δp at Tg, where Vm is the molar volume. A stable glass–liquid transition model predicts the specific heat jump of fragile liquids at T ≤ Tg, the Kauzmann temperature TK where the liquid entropy excess with regard to the crystal goes to zero, the equilibrium enthalpy between TK and Tg, the maximum nucleation rate at TK of superclusters containing magic atom numbers, and the equilibrium latent heats at Tg and TK. Strong-to-fragile and strong-to-strong liquid transitions at Tg are also described and all their thermodynamic parameters are determined from their specific heat jumps. The existence of fragile liquids quenched in the amorphous state, which do not undergo a liquid–liquid transition during the heating preceding their crystallization, is predicted. Long ageing times leading to the formation at TK of a stable glass composed of superclusters containing up to 147 atoms, touching and interpenetrating, are evaluated from nucleation rates. A fragile-to-fragile liquid transition occurs at Tg without stable-glass formation, while a strong glass is stable after the transition.

  15. Heat Convection at the Density Maximum Point of Water

    Science.gov (United States)

    Balta, Nuri; Korganci, Nuri

    2018-01-01

    Water exhibits a maximum in density at normal pressure at a temperature of around 4 °C. This paper demonstrates that during cooling, at around 4 °C, the temperature remains constant for a while because of the heat exchange associated with convective currents inside the water. A superficial approach presents this as a new anomaly of water, but actually it…

  16. Finite spatial volume approach to finite temperature field theory

    International Nuclear Information System (INIS)

    Weiss, Nathan

    1981-01-01

    A relativistic quantum field theory at finite temperature T = β⁻¹ is equivalent to the same field theory at zero temperature but with one spatial dimension of finite length β. This equivalence is discussed for scalars, for fermions, and for gauge theories. The relationship is checked for free field theory. The translation of correlation functions between the two formulations is described with special emphasis on the nonlocal order parameters of gauge theories. Possible applications are mentioned. (auth)

  17. A simulation study of Linsley's approach to infer elongation rate and fluctuations of the EAS maximum depth from muon arrival time distributions

    International Nuclear Information System (INIS)

    Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.

    1999-01-01

    The average depth of the maximum Xm of the EAS (Extensive Air Shower) development depends on the energy E0 and the mass of the primary particle, and its dependence on the energy is traditionally expressed by the so-called elongation rate De, defined as the change in the average depth of the maximum per decade of E0, i.e. De = dXm/dlog10 E0. Invoking the superposition model approximation, i.e. assuming that a heavy primary (A) has the same shower elongation rate as a proton, but scaled with energies E0/A, one can write Xm = Xinit + De·log10(E0/A). In 1977 an indirect approach to studying De was suggested by Linsley. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth Xm of the shower maximum. The distribution of the EAS muon arrival times, measured at a certain observation level relative to the arrival time of the shower core, reflects the pathlength distribution of the muon travel from the locus of production (near the axis) to the observation locus. The basic a priori assumption is that we can associate the mean value or median T of the time distribution with the height of the EAS maximum Xm, and that we can express T = f(X, Xm). In order to derive information about the elongation rate from the energy variation of the arrival time quantities, some knowledge is required about F, i.e. F = −(∂T/∂Xm)|X / (∂T/∂X)|Xm, in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log10 E0|X = −F·De·(1/Xv)·∂T/∂secθ|E0. In a similar way the fluctuations σ(Xm) of Xm may be related to the fluctuations σ(T) of T, i.e. σ(T) = −σ(Xm)·Fσ·(1/Xv)·∂T/∂secθ|E0, with Fσ being the corresponding scaling factor for the fluctuations. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle
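
    The relation quoted above can be inverted directly for the elongation rate once the two arrival-time derivatives are known, as in the hedged sketch below; all numerical values are illustrative placeholders with roughly plausible magnitudes, not simulation results from this study.

```python
# Hedged sketch of Linsley's indirect relation:
#   dT/dlog10(E0)|_X = -F * De * (1/Xv) * dT/dsec(theta)|_E0
# solved for the elongation rate De. Placeholder numbers only.
def elongation_rate(dT_dlogE, dT_dsec_theta, F, X_v):
    """Solve the Linsley relation for De (g/cm^2 per decade of energy)."""
    return -dT_dlogE * X_v / (F * dT_dsec_theta)

dT_dlogE = -4.0       # ns per decade of primary energy, assumed
dT_dsec = 70.0        # ns per unit of sec(theta), assumed
F = 0.9               # scaling factor relating T to Xm, assumed
X_v = 1000.0          # vertical atmospheric depth of observation, g/cm^2

print(f"De ~ {elongation_rate(dT_dlogE, dT_dsec, F, X_v):.0f} g/cm^2 per decade")
```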

  18. Incorporation of the equilibrium temperature approach in a Soil and Water Assessment Tool hydroclimatological stream temperature model

    Science.gov (United States)

    Du, Xinzhong; Shrestha, Narayan Kumar; Ficklin, Darren L.; Wang, Junye

    2018-04-01

    Stream temperature is an important indicator for biodiversity and sustainability in aquatic ecosystems. The stream temperature model currently in the Soil and Water Assessment Tool (SWAT) only considers the impact of air temperature on stream temperature, while the hydroclimatological stream temperature model developed within the SWAT model considers hydrology and the impact of air temperature in simulating the water-air heat transfer process. In this study, we modified the hydroclimatological model by including the equilibrium temperature approach to model heat transfer processes at the water-air interface, which reflects the influences of air temperature, solar radiation, wind speed and streamflow conditions on the heat transfer process. The thermal capacity of the streamflow is modeled by the variation of the stream water depth. An advantage of this equilibrium temperature model is the simple parameterization, with only two parameters added to model the heat transfer processes. The equilibrium temperature model proposed in this study is applied and tested in the Athabasca River basin (ARB) in Alberta, Canada. The model is calibrated and validated at five stations throughout different parts of the ARB, where close to monthly samplings of stream temperatures are available. The results indicate that the equilibrium temperature model proposed in this study provided better and more consistent performances for the different regions of the ARB with the values of the Nash-Sutcliffe Efficiency coefficient (NSE) greater than those of the original SWAT model and the hydroclimatological model. To test the model performance for different hydrological and environmental conditions, the equilibrium temperature model was also applied to the North Fork Tolt River Watershed in Washington, United States. The results indicate a reasonable simulation of stream temperature using the model proposed in this study, with minimum relative error values compared to the other two models
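
    The equilibrium temperature idea in this record can be reduced to a single relaxation equation in which stream temperature moves toward an equilibrium value at a rate controlled by a bulk exchange coefficient and the water depth. The sketch below shows only that generic relaxation step; the exchange coefficient, depth and the crude equilibrium-temperature proxy are assumptions, not the calibrated SWAT formulation.

```python
# Hedged sketch of an equilibrium-temperature stream heat budget.
import numpy as np

RHO_W = 1000.0      # kg/m^3
CP_W = 4186.0       # J/(kg K)

def step_stream_temperature(T_w, T_e, depth_m, K_e, dt_s):
    """One explicit step of d(Tw)/dt = Ke * (Te - Tw) / (rho * cp * depth)."""
    return T_w + dt_s * K_e * (T_e - T_w) / (RHO_W * CP_W * depth_m)

T_air = np.array([4.0, 8.0, 14.0, 18.0, 16.0, 10.0])           # daily air temperature, degC
solar = np.array([80.0, 150.0, 220.0, 260.0, 200.0, 120.0])    # W/m^2
T_equilibrium = T_air + solar / 40.0   # crude proxy combining air temperature and radiation

T_w = 5.0
for T_e in T_equilibrium:
    T_w = step_stream_temperature(T_w, T_e, depth_m=1.5, K_e=30.0, dt_s=86400.0)
    print(f"Te = {T_e:5.1f} degC -> stream T = {T_w:5.1f} degC")
```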

  19. Temperature of thermal plasma jets: A time resolved approach

    Energy Technology Data Exchange (ETDEWEB)

    Sahasrabudhe, S N; Joshi, N K; Barve, D N; Ghorui, S; Tiwari, N; Das, A K, E-mail: sns@barc.gov.i [Laser and Plasma Technology Division, Bhabha Atomic Research Centre, Mumbai - 400 094 (India)

    2010-02-01

    The Boltzmann plot method is routinely used for temperature measurement of thermal plasma jets emanating from plasma torches. Here, it is implicitly assumed that the plasma jet is 'steady' in time. However, most experimenters do not take into account the variations due to ripple in the high-current DC power supplies used to run plasma torches. If a 3-phase transductor type of power supply is used, the ripple frequency is 150 Hz, and if a 3-phase SCR-based power supply is used, the ripple frequency is 300 Hz. The electrical power fed to the plasma torch varies at the ripple frequency. In the time domain, one ripple cycle lasts about 3.3 to 6.7 ms, which is much longer than the arc-root movement times, which are within 0.2 ms. Fast photography of plasma jets shows that the luminosity of the plasma jet also varies exactly like the ripple in the power supply voltage and thus with the power. The intensity of line radiation varies nonlinearly with the instantaneous power fed to the torch, and the simple time average of line intensities taken for the calculation of temperature is not appropriate. In this paper, these variations and their effect on temperature determination are discussed and a method to obtain appropriate data is suggested. With the small adaptation discussed here, the method can be used to obtain the temperature profile of a plasma jet within a short time.
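
    For reference, the Boltzmann plot method itself extracts an excitation temperature from the slope of ln(Iλ/gA) versus the upper-level energy of several emission lines, as in the hedged sketch below; the line list is synthetic and the 10 000 K target temperature is an arbitrary illustration, not data from these measurements.

```python
# Hedged Boltzmann plot sketch with an invented line list; not a real atomic data set.
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_upper, A_ul, E_upper_eV):
    """Return excitation temperature (K) from a least-squares Boltzmann plot."""
    y = np.log(intensity * wavelength_nm / (g_upper * A_ul))
    slope, _ = np.polyfit(E_upper_eV, y, 1)       # slope = -1 / (kB * T)
    return -1.0 / (K_B_EV * slope)

# synthetic lines generated for T = 10000 K, with a little noise
E_u = np.array([13.0, 13.7, 14.5, 15.3, 15.9])       # eV
g = np.array([5, 3, 7, 5, 3])
A = np.array([2.0e7, 4.5e7, 1.2e7, 3.3e7, 5.1e7])    # 1/s
lam = np.array([696.5, 706.7, 714.7, 727.3, 738.4])  # nm
I = g * A / lam * np.exp(-E_u / (K_B_EV * 10000.0)) * (1 + np.random.default_rng(3).normal(0, 0.02, 5))

print(f"excitation temperature ~ {boltzmann_plot_temperature(I, lam, g, A, E_u):.0f} K")
```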

  20. Three-particle recombination at low temperature: QED approach

    International Nuclear Information System (INIS)

    Bhattacharyya, S.; Roy, A.

    2001-01-01

    A theoretical study of the three-body recombination of a proton, in the presence of a spectator electron, with an electron beam at near-zero temperature is presented using field theory and the invariant Lorentz gauge. Contributions from the Feynman diagrams of different orders give an insight into the physics of the phenomenon. The recombination rate coefficient is obtained for low-lying principal quantum numbers n = 1 to 10. At a fixed ion beam temperature (300 K) the recombination rate coefficient is found to increase in general with n, having a flat and a sharp peak at quantum states 3 to 5, respectively. In the absence of any theoretical and experimental results for the low-temperature formation of the H atom by three-body recombination at low-lying quantum states, we present the theoretical results of Stevefelt and group for the three-body recombination of the deuteron with an electron along with the present results. Three-body recombination of antihydrogen in an antiproton-positron plasma is expected to yield a similar result to that for three-body recombination in hydrogen formation in a proton-electron plasma. The necessity for experimental investigation of low-temperature three-body recombination at low quantum states is stressed. (author)

  1. Maximum volume cuboids for arbitrarily shaped in-situ rock blocks as determined by discontinuity analysis—A genetic algorithm approach

    Science.gov (United States)

    Ülker, Erkan; Turanboy, Alparslan

    2009-07-01

    The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach that takes into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).

  2. Long-term trends of daily maximum and minimum temperatures for the major cities of South Korea and their implications on human health

    Czech Academy of Sciences Publication Activity Database

    Choi, B. C.; Kim, J.; Lee, D. G.; Kyselý, Jan

    2007-01-01

    Vol. 17, No. 2 (2007), pp. 171-183 ISSN N R&D Projects: GA ČR GC205/07/J044 Institutional research plan: CEZ:AV0Z30420517 Keywords: Temperature trends * Biometeorology * Climate change * Global warming * Human health * Temperature extremes * Urbanization Subject RIV: DG - Atmosphere Sciences, Meteorology

  3. A Dyson-Schwinger approach to finite temperature QCD

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Jens Andreas

    2011-10-26

    The different phases of quantum chromodynamics at finite temperature are studied. To this end the nonperturbative quark propagator in Matsubara formalism is determined from its equation of motion, the Dyson-Schwinger equation. A novel truncation scheme is introduced including the nonperturbative, temperature dependent gluon propagator as extracted from lattice gauge theory. In the first part of the thesis a deconfinement order parameter, the dual condensate, and the critical temperature are determined from the dependence of the quark propagator on the temporal boundary conditions. The chiral transition is investigated by means of the quark condensate as order parameter. In addition differences in the chiral and deconfinement transition between gauge groups SU(2) and SU(3) are explored. In the following the quenched quark propagator is studied with respect to a possible spectral representation at finite temperature. In doing so, the quark propagator turns out to possess different analytic properties below and above the deconfinement transition. This result motivates the consideration of an alternative deconfinement order parameter signaling positivity violations of the spectral function. A criterion for positivity violations of the spectral function based on the curvature of the Schwinger function is derived. Using a variety of ansaetze for the spectral function, the possible quasi-particle spectrum is analyzed, in particular its quark mass and momentum dependence. The results motivate a more direct determination of the spectral function in the framework of Dyson-Schwinger equations. In the two subsequent chapters extensions of the truncation scheme are considered. The influence of dynamical quark degrees of freedom on the chiral and deconfinement transition is investigated. This serves as a first step towards a complete self-consistent consideration of dynamical quarks and the extension to finite chemical potential. The goodness of the truncation is verified first

  4. A Dyson-Schwinger approach to finite temperature QCD

    International Nuclear Information System (INIS)

    Mueller, Jens Andreas

    2011-01-01

    The different phases of quantum chromodynamics at finite temperature are studied. To this end the nonperturbative quark propagator in Matsubara formalism is determined from its equation of motion, the Dyson-Schwinger equation. A novel truncation scheme is introduced including the nonperturbative, temperature dependent gluon propagator as extracted from lattice gauge theory. In the first part of the thesis a deconfinement order parameter, the dual condensate, and the critical temperature are determined from the dependence of the quark propagator on the temporal boundary conditions. The chiral transition is investigated by means of the quark condensate as order parameter. In addition differences in the chiral and deconfinement transition between gauge groups SU(2) and SU(3) are explored. In the following the quenched quark propagator is studied with respect to a possible spectral representation at finite temperature. In doing so, the quark propagator turns out to possess different analytic properties below and above the deconfinement transition. This result motivates the consideration of an alternative deconfinement order parameter signaling positivity violations of the spectral function. A criterion for positivity violations of the spectral function based on the curvature of the Schwinger function is derived. Using a variety of ansaetze for the spectral function, the possible quasi-particle spectrum is analyzed, in particular its quark mass and momentum dependence. The results motivate a more direct determination of the spectral function in the framework of Dyson-Schwinger equations. In the two subsequent chapters extensions of the truncation scheme are considered. The influence of dynamical quark degrees of freedom on the chiral and deconfinement transition is investigated. This serves as a first step towards a complete self-consistent consideration of dynamical quarks and the extension to finite chemical potential. The goodness of the truncation is verified first

  5. A data-driven approach for retrieving temperatures and abundances in brown dwarf atmospheres

    OpenAIRE

    Line, MR; Fortney, JJ; Marley, MS; Sorahana, S

    2014-01-01

    © 2014. The American Astronomical Society. All rights reserved. Brown dwarf spectra contain a wealth of information about their molecular abundances, temperature structure, and gravity. We present a new data-driven retrieval approach, previously used in planetary atmosphere studies, to extract the molecular abundances and temperature structure from brown dwarf spectra. The approach makes few a priori physical assumptions about the state of the atmosphere. The feasibility of the approach is fi...

  6. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized, inexpensive, microprocessor-based hill-climbing algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rated Remote Area Power Supply systems. The advantages at larger temperature variations and larger power-rated systems are much higher. Other advantages include optimal sizing and system monitor and control
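
    The hill-climbing (perturb-and-observe) control loop mentioned above can be stated in a few lines: perturb the operating point, keep the perturbation direction while the measured power keeps rising, and reverse it when the power drops. The sketch below runs that loop against a toy PV curve; the curve, step size and starting point are invented for illustration and do not describe the converter in the paper.

```python
# Hedged hill-climbing (perturb-and-observe) MPPT sketch on a toy PV curve.
def pv_power(voltage):
    """Toy PV curve with a single maximum power point near 16 V."""
    current = max(0.0, 5.0 * (1.0 - (voltage / 21.0) ** 8))
    return voltage * current

def mppt_hill_climb(v_start=12.0, step=0.2, iterations=60):
    v, direction = v_start, +1.0
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step                 # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                        # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = mppt_hill_climb()
print(f"converged near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")
```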

  7. Functional approach without path integrals to finite temperature free fermions

    International Nuclear Information System (INIS)

    Souza, S.M. de; Santos, O. Rojas; Thomaz, M.T.

    1999-01-01

    Charret et al. applied the properties of Grassmann generators to develop a new method to calculate the coefficients of the high-temperature expansion of the grand canonical partition function of self-interacting fermionic models in d dimensions (d ≥ 1). The methodology exploits the anti-commuting nature of fermionic fields and avoids the calculation of the fermionic path integral. We apply this new method to relativistic free Dirac fermions and recover the known results in the literature without the β-independent and μ-independent infinities that plague the continuum path integral formulation. (author)

  8. Little Cross-Feeding of the Mycorrhizal Networks Shared Between C3-Panicum bisulcatum and C4-Panicum maximum Under Different Temperature Regimes

    Directory of Open Access Journals (Sweden)

    Veronika Řezáčová

    2018-04-01

    Full Text Available Common mycorrhizal networks (CMNs) formed by arbuscular mycorrhizal fungi (AMF) interconnect plants of the same and/or different species, redistributing nutrients and draining carbon (C) from the different plant partners at different rates. Here, we conducted a plant co-existence (intercropping) experiment testing the role of AMF in resource sharing and exploitation by simplified plant communities composed of two congeneric grass species (Panicum spp.) with different photosynthetic metabolism types (C3 or C4). The grasses had spatially separated rooting zones, conjoined through a root-free (but AMF-accessible) zone added with 15N-labeled plant (clover) residues. The plants were grown under two different temperature regimes: high temperature (36/32°C day/night) or ambient temperature (25/21°C day/night) applied over 49 days after an initial period of 26 days at ambient temperature. We made use of the distinct C-isotopic composition of the two plant species sharing the same CMN (composed of a synthetic AMF community of five fungal genera) to estimate whether the CMN was or was not fed preferentially, under the specific environmental conditions, by one or the other plant species. Using the C-isotopic composition of the AMF-specific fatty acid (C16:1ω5) in roots and in the potting substrate harboring the extraradical AMF hyphae, we found that the C3-Panicum continued feeding the CMN at both temperatures with a significant and invariable share of C resources. This was surprising because the growth of the C3 plants was more susceptible to high temperature than that of the C4 plants and the C3-Panicum alone suppressed the abundance of the AMF (particularly Funneliformis sp.) in its roots due to the elevated temperature. Moreover, elevated temperature induced a shift in competition for nitrogen between the two plant species in favor of the C4-Panicum, as demonstrated by significantly lower 15N yields of the C3-Panicum but higher 15N yields of the C4-Panicum at elevated as

  9. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China); Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)

    2015-05-26

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent on the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  10. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    Science.gov (United States)

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  11. A maximum likelihood approach to generate hypotheses on the evolution and historical biogeography in the Lower Volga Valley regions (southwest Russia)

    Science.gov (United States)

    Mavrodiev, Evgeny V; Laktionov, Alexy P; Cellinese, Nico

    2012-01-01

    The evolution of the diverse flora in the Lower Volga Valley (LVV) (southwest Russia) is complex due to the composite geomorphology and tectonic history of the Caspian Sea and adjacent areas. In the absence of phylogenetic studies and temporal information, we implemented a maximum likelihood (ML) approach and stochastic character mapping reconstruction aiming at recovering historical signals from species occurrence data. A taxon-area matrix of 13 floristic areas and 1018 extant species was constructed and analyzed with RAxML and Mesquite. Additionally, we simulated scenarios with numbers of hypothetical extinct taxa from an unknown palaeoflora that occupied the areas before the dramatic transgression and regression events that have occurred from the Pleistocene to the present day. The flora occurring strictly along the river valley and delta appear to be younger than that of adjacent steppes and desert-like regions, regardless of the chronology of transgression and regression events that led to the geomorphological formation of the LVV. This result is also supported when hypothetical extinct taxa are included in the analyses. The history of each species was inferred by using a stochastic character mapping reconstruction method as implemented in Mesquite. Individual histories appear to be independent from one another and have been shaped by repeated dispersal and extinction events. These reconstructions provide testable hypotheses for more in-depth investigations of their population structure and dynamics. PMID:22957179

  12. Thermal dimensioning of the deep repository. Influence of canister spacing, canister power, rock thermal properties and nearfield design on the maximum canister surface temperature

    International Nuclear Information System (INIS)

    Hoekmark, Harald; Faelth, Billy

    2003-12-01

    The report addresses the problem of the minimum spacing required between neighbouring canisters in the deep repository. That spacing is calculated for a number of assumptions regarding the conditions that govern the temperature in the nearfield and at the surfaces of the canisters. The spacing criterion is that the temperature at the canister surfaces must not exceed 100 °C. The results are given in the form of nomographic charts, such that it is in principle possible to determine the spacing as soon as site data, i.e. the initial undisturbed rock temperature and the host rock heat transport properties, are available. Results of canister spacing calculations are given for the KBS-3V concept as well as for the KBS-3H concept. A combination of numerical and analytical methods is used for the KBS-3H calculations, while the KBS-3V calculations are purely analytical. Both methods are described in detail. Open gaps are assigned equivalent heat conductivities, calculated such that the conduction across the gaps also includes the heat transferred by radiation. The equivalent heat conductivities are based on the emissivities of the different gap surfaces. For the canister copper surface, the emissivity is determined by back-calculation of temperatures measured in the Prototype experiment at Aespoe HRL. The size of the different gaps and the emissivity values are of great importance for the results and will be investigated further in the future.
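
    One common way to fold gap radiation into an equivalent conductivity, as the record describes, is to linearize the grey-body exchange between the two gap surfaces around a mean temperature and add the resulting radiative conductance (times the gap width) to the gas conduction. The sketch below shows that standard linearization; the gas conductivity, gap width, temperature and emissivities are illustrative assumptions, not the report's back-calculated values.

```python
# Hedged sketch of an equivalent gap conductivity including linearized radiation.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equivalent_gap_conductivity(k_gas, gap_width_m, T_mean_K, eps1, eps2):
    """k_eq = k_gas + h_rad * gap width, with h_rad for two parallel grey surfaces."""
    h_rad = 4.0 * SIGMA * T_mean_K**3 / (1.0 / eps1 + 1.0 / eps2 - 1.0)
    return k_gas + h_rad * gap_width_m

# illustrative canister/buffer gap filled with air, copper vs bentonite surface
k_eq = equivalent_gap_conductivity(k_gas=0.03, gap_width_m=0.01,
                                   T_mean_K=360.0, eps1=0.3, eps2=0.8)
print(f"equivalent gap conductivity ~ {k_eq:.3f} W/(m K)")
```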

  13. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    Directory of Open Access Journals (Sweden)

    Matthew W Breece

    Full Text Available Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.

  14. Development of Green's Function Approach Considering Temperature-Dependent Material Properties and its Application

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Hanok; Jhung, Myung Jo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Choi, Jaeboong [Sungkyunkwan Univ., Suwon (Korea, Republic of)

    2014-02-15

    About 40% of reactors in the world are being operated beyond design life or are approaching the end of their life cycle. During long-term operation, various degradation mechanisms occur. Fatigue caused by alternating operational stresses in terms of temperature or pressure change is an important damage mechanism in continued operation of nuclear power plants. To monitor the fatigue damage of components, Fatigue Monitoring System (FMS) has been installed. Most FMSs have used Green's Function Approach (GFA) to calculate the thermal stresses rapidly. However, if temperature-dependent material properties are used in a detailed FEM, there is a maximum peak stress discrepancy between a conventional GFA and a detailed FEM because constant material properties are used in a conventional method. Therefore, if a conventional method is used in the fatigue evaluation, thermal stresses for various operating cycles may be calculated incorrectly and it may lead to an unreliable estimation. So, in this paper, the modified GFA which can consider temperature-dependent material properties is proposed by using an artificial neural network and weight factor. To verify the proposed method, thermal stresses by the new method are compared with those by FEM. Finally, pros and cons of the new method as well as technical findings from the assessment are discussed.

  15. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  16. Density and viscosity study of nicotinic acid and nicotinamide in dilute aqueous solutions at and around the temperature of the maximum density of water

    International Nuclear Information System (INIS)

    Dhondge, Sudhakar S.; Dahasahasra, Prachi N.; Paliwal, Lalitmohan J.; Deshmukh, Dinesh W.

    2014-01-01

    Highlights: • Volumetric and transport behaviour of aqueous solutions of important vitamins are reported. • Various interactions of nicotinic acid and nicotinamide with water have been reported. • The temperature dependence of interactions between solute and solvent is discussed. • The study indicates that nicotinamide is more hydrated as compared to nicotinic acid. - Abstract: In the present study, we report experimental densities (ρ) and viscosities (η) of aqueous solutions of nicotinic acid and nicotinamide within the concentration range (0 to 0.1) mol·kg⁻¹ at T = (275.15, 277.15 and 279.15) K. These parameters are then used to obtain thermodynamic and transport functions such as apparent molar volume of solute (Vϕ), limiting apparent molar volume of solute (Vϕ0), limiting apparent molar expansivity of solute (Eϕ0), coefficient of thermal expansion (α*), Jones–Dole equation viscosity A, B and D coefficients, temperature derivative of the B coefficient, i.e. (dB/dT), and hydration number (nH), etc. The activation parameters of viscous flow for the binary mixtures have been determined and discussed in terms of Eyring's transition state theory. These significant parameters are helpful to study the structure promoting or destroying tendency of solute and various interactions present in (nicotinic acid + water) and (nicotinamide + water) binary mixtures

  17. A two dimensional approach for temperature distribution in reactor lower head during severe accident

    International Nuclear Information System (INIS)

    Cao, Zhen; Liu, Xiaojing; Cheng, Xu

    2015-01-01

    Highlights: • Two dimensional module is developed to analyze integrity of lower head. • Verification step has been done to evaluate feasibility of new module. • The new module is applied to simulate large-scale advanced PWR. • Importance of 2-D approach is clearly quantified. • Major parameters affecting vessel temperature distribution are identified. - Abstract: In order to evaluate the safety margin during a postulated severe accident, a module named ASAP-2D (Accident Simulation on Pressure vessel-2 Dimensional), which can be implemented into the severe accident simulation codes (such as ATHLET-CD), is developed in Shanghai Jiao Tong University. Based on two-dimensional spherical coordinates, heat conduction equation for transient state is solved implicitly. Together with solid vessel thickness, heat flux distribution and heat transfer coefficient at outer vessel surface are obtained. Heat transfer regime when critical heat flux has been exceeded (POST-CHF regime) could be simulated in the code, and the transition behavior of boiling crisis (from spatial and temporal points of view) can be predicted. The module is verified against a one-dimensional analytical solution with uniform heat flux distribution, and afterwards this module is applied to the benchmark illustrated in NUREG/CR-6849. Benchmark calculation indicates that maximum heat flux at outer surface of RPV could be around 20% lower than that of at inner surface due to two-dimensional heat conduction. Then a preliminary analysis is performed on the integrity of the reactor vessel for which the geometric parameters and boundary conditions are derived from a large scale advanced pressurized water reactor. Results indicate that heat flux remains lower than critical heat flux. Sensitivity analysis indicates that outer heat flux distribution is more sensitive to input heat flux distribution and the transition boiling correlation than mass flow rate in external reactor vessel cooling (ERVC) channel
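
    The implicit conduction scheme at the core of such a module can be illustrated in one dimension through the vessel wall, as in the sketch below; the ASAP-2D module itself solves the transient equation in two-dimensional spherical coordinates, and the material data, wall thickness and boundary temperatures used here are illustrative assumptions only.

```python
# Hedged sketch: backward-Euler (implicit) 1-D conduction through a vessel wall
# with fixed surface temperatures. Not the ASAP-2D formulation.
import numpy as np

def implicit_wall_conduction_step(T, dx, dt, alpha, T_inner, T_outer):
    """One backward-Euler step of 1-D heat conduction with Dirichlet ends."""
    n = T.size
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    b = T.copy()
    A[0, 0] = A[-1, -1] = 1.0          # boundary rows pin the surface temperatures
    b[0], b[-1] = T_inner, T_outer
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
    return np.linalg.solve(A, b)

alpha_steel = 6.5e-6            # m^2/s, assumed thermal diffusivity of vessel steel
T = np.full(21, 400.0)          # initial wall temperature, degC
dx = 0.15 / 20                  # 15 cm wall discretized into 20 intervals
for _ in range(200):            # march 200 implicit steps of 5 s each
    T = implicit_wall_conduction_step(T, dx, dt=5.0, alpha=alpha_steel,
                                      T_inner=1200.0, T_outer=150.0)
print("through-wall temperature profile (degC):", np.round(T[::5], 1))
```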

  18. Identifying the optimal supply temperature in district heating networks - A modelling approach

    DEFF Research Database (Denmark)

    Mohammadi, Soma; Bojesen, Carsten

    2014-01-01

    The aim of this study is to develop a model for thermo-hydraulic calculation of a low temperature DH system. The modelling is performed with emphasis on transient heat transfer in pipe networks. A pseudo-dynamic approach is adopted to model the District Heating Network [DHN] behaviour, which estimates the temperature dynamically while the flow and pressure are calculated on the basis of steady state conditions. The implicit finite element method is applied to simulate the transient temperature behaviour in the network. Pipe network heat losses, pressure drop in the network and return temperature to the plant are calculated in the developed model. The model will eventually serve as a basis for finding the optimal supply temperature in an existing DHN in later work. The modelling results are used as decision support for existing DHNs, proposing possible modifications to operate at the optimal supply temperature.

  19. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system [fr]

  20. Sharp Reduction in Maximum LEU Fuel Temperatures during Loss of Coolant Accidents in a PBMR DPP-400 core by means of Optimised Placement of Neutron Poisons: Implications for Pu fuel-cycles

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.

    2013-01-01

    The optimisation of the power profiles, achieved by placing an optimised distribution of neutron poison concentrations in the central reflector, resulted in a large reduction in the maximum DLOFC temperature, which may produce far-reaching safety and licensing benefits. Unfortunately, this came at the expense of losing the ability to execute effective load following. The neutron poisons also caused a large reduction of 22% in the average burn-up of the fuel. Further optimisation is required to counter this reduction in burn-up.

  1. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  2. Topological transitions at finite temperatures: A real-time numerical approach

    International Nuclear Information System (INIS)

    Grigoriev, D.Yu.; Rubakov, V.A.; Shaposhnikov, M.E.

    1989-01-01

    We study topological transitions at finite temperatures within the (1+1)-dimensional abelian Higgs model by a numerical simulation in real time. Basic ideas of the real-time approach are presented and some peculiarities of the Metropolis technique are discussed. It is argued that the processes leading to topological transitions are of classical origin; the transitions can be observed by solving the classical field equations in real time. We show that the topological transitions actually pass via the sphaleron configuration. The transition rate as a function of temperature is found to be in good agreement with the analytical predictions. No extra suppression of the rate is observed. The conditions of applicability of our approach are discussed. The temperature interval where the low-temperature broken phase persists is estimated. (orig.)

  3. Unified approach for determining the enthalpic fictive temperature of glasses with arbitrary thermal history

    DEFF Research Database (Denmark)

    Guo, Xiaoju; Potuzak, M.; Mauro, J. C.

    2011-01-01

    We propose a unified routine to determine the enthalpic fictive temperature of a glass with arbitrary thermal history under isobaric conditions. The technique is validated both experimentally and numerically using a novel approach for modeling of glass relaxation behavior. The technique is applicable to glasses of any thermal history, as proved through a series of numerical simulations where the enthalpic fictive temperature is precisely known within the model. Also, we demonstrate that the enthalpic fictive temperature of a glass can be determined at any calorimetric scan rate in excellent...

  4. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
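
    A minimal numerical illustration of the Mean Energy Model mentioned above: among all distributions with a prescribed mean "energy", the maximum entropy solution is the Gibbs form p_i proportional to exp(-beta*E_i), with beta fixed by the constraint. The energy levels and target mean below are arbitrary choices for the sketch.

        import numpy as np
        from scipy.optimize import brentq

        # Maximum entropy under a mean-energy constraint: the solution is the
        # Gibbs distribution p_i ~ exp(-beta*E_i); beta is fixed by the constraint.
        E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # arbitrary "energy" levels
        E_target = 1.2                               # prescribed mean energy

        def mean_energy(beta):
            w = np.exp(-beta * E)
            p = w / w.sum()
            return p @ E

        beta = brentq(lambda b: mean_energy(b) - E_target, -50.0, 50.0)
        p = np.exp(-beta * E)
        p /= p.sum()
        entropy = -(p * np.log(p)).sum()
        print(round(beta, 4), p.round(4), round(entropy, 4))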

  5. Hamiltonian approach to QCD in Coulomb gauge at zero and finite temperature

    Directory of Open Access Journals (Sweden)

    Reinhardt H.

    2017-01-01

    Full Text Available I report on recent results obtained within the Hamiltonian approach to QCD in Coulomb gauge. By relating the Gribov confinement scenario to the center vortex picture of confinement it is shown that the Coulomb string tension is tied to the spatial string tension. For the quark sector a vacuum wave functional is used which results in variational equations which are free of ultraviolet divergences. The variational approach is extended to finite temperatures by compactifying a spatial dimension. For the chiral and deconfinement phase transition pseudo-critical temperatures of 170 MeV and 198 MeV, respectively, are obtained.

  6. Hamiltonian approach to QCD in Coulomb gauge: From the vacuum to finite temperatures

    Directory of Open Access Journals (Sweden)

    Reinhardt H.

    2016-01-01

    Full Text Available The variational Hamiltonian approach to QCD in Coulomb gauge is reviewed and the essential results obtained in recent years are summarized. First the results for the vacuum sector are discussed, with a special emphasis on the mechanism of confinement and chiral symmetry breaking. Then the deconfinement phase transition is described by introducing temperature in the Hamiltonian approach via compactification of one spatial dimension. The effective action for the Polyakov loop is calculated and the order of the phase transition as well as the critical temperatures are obtained for the color groups SU(2) and SU(3). In both cases, our predictions are in good agreement with lattice calculations.

  7. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  8. Temperature issues with white laser diodes, calculation and approach for new packages

    Science.gov (United States)

    Lachmayer, Roland; Kloppenburg, Gerolf; Stephan, Serge

    2015-01-01

    Bright white light sources are of significant importance for automotive front lighting systems. Today's upper-class systems mainly use HID or LED light sources. As a further step, laser diode based systems offer high luminance and efficiency and allow the realization of new dynamic and adaptive light functions and styling concepts. The use of white laser diode systems in automotive applications is still limited to laboratories and prototypes, even though announcements of laser based front lighting systems have been made. However, the environmental conditions for vehicles and other industry sectors differ from laboratory conditions. Therefore a model of the system's thermal behavior is set up. The power loss of a laser diode is transported as a thermal flux from the junction layer to the diode's case and on to the environment. Its optical power is therefore limited by the maximum junction temperature (for blue diodes typically 125-150 °C), the environment temperature and the diode's packaging with its thermal resistances. In a car's headlamp the environment temperature can reach up to 80 °C. As the difference between the allowed case temperature and the environment temperature becomes small or negative, the usable heat flux also becomes small or negative. In the early stages of LED development similar challenges had to be solved; adapting LED packages to the conditions of the vehicle environment led to today's efficient and bright headlights. In this paper the need to transfer these results to laser diodes is shown by calculating the diodes' lifetimes based on the presented model.
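
    A back-of-envelope version of the thermal budget described above: with the junction temperature capped and the ambient fixed, the allowable dissipated power follows directly from the package thermal resistance. The resistance and wall-plug efficiency used here are assumed round numbers, not values from the paper.

        # Junction-temperature budget for a blue laser diode package (all values
        # below are illustrative assumptions, not data from the paper).
        T_j_max = 150.0     # maximum junction temperature, deg C
        T_amb   = 80.0      # headlamp environment temperature, deg C
        R_th    = 15.0      # junction-to-ambient thermal resistance, K/W (assumed)
        eff     = 0.30      # wall-plug efficiency (assumed)

        P_diss_max = (T_j_max - T_amb) / R_th       # allowable dissipated power, W
        P_el_max   = P_diss_max / (1.0 - eff)       # electrical input at that dissipation
        P_opt_max  = P_el_max * eff                 # corresponding optical output, W
        print(round(P_diss_max, 2), round(P_el_max, 2), round(P_opt_max, 2))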

  9. Finite temperature dynamics of a Holstein polaron: The thermo-field dynamics approach

    Science.gov (United States)

    Chen, Lipeng; Zhao, Yang

    2017-12-01

    Combining the multiple Davydov D2 Ansatz with the method of thermo-field dynamics, we study finite temperature dynamics of a Holstein polaron on a lattice. It has been demonstrated, using the hierarchy equations of motion method as a benchmark, that our approach provides an efficient, robust description of finite temperature dynamics of the Holstein polaron in the simultaneous presence of diagonal and off-diagonal exciton-phonon coupling. The method of thermo-field dynamics handles temperature effects in the Hilbert space with key numerical advantages over other treatments of finite-temperature dynamics based on quantum master equations in the Liouville space or wave function propagation with Monte Carlo importance sampling. While for weak to moderate diagonal coupling temperature increases inhibit polaron mobility, it is found that off-diagonal coupling induces phonon-assisted transport that dominates at high temperatures. Results on the mean square displacements show that band-like transport features dominate the diagonal coupling cases, and there exists a crossover from band-like to hopping transport with increasing temperature when including off-diagonal coupling. As a proof of concept, our theory provides a unified treatment of coherent and incoherent transport in molecular crystals and is applicable to any temperature.

  10. Temperature-dependent striped antiferromagnetism of LaFeAsO in a Green's function approach

    International Nuclear Information System (INIS)

    Liu Guibin; Liu Banggui

    2009-01-01

    We use a Green's function method to study the temperature-dependent average moment and magnetic phase-transition temperature of the striped antiferromagnetism of LaFeAsO, and other similar compounds, as the parents of FeAs-based superconductors. We consider the nearest and the next-nearest couplings in the FeAs layer, and the nearest coupling for inter-layer spin interaction. The dependence of the transition temperature T_N and the zero-temperature average spin on the interaction constants is investigated. We obtain an analytical expression for T_N and determine our temperature-dependent average spin from zero temperature to T_N in terms of unified self-consistent equations. For LaFeAsO, we obtain a reasonable estimation of the coupling interactions with the experimental transition temperature T_N = 138 K. Our results also show that a non-zero antiferromagnetic (AFM) inter-layer coupling is essential for the existence of a non-zero T_N, and the many-body AFM fluctuations reduce substantially the low-temperature magnetic moment per Fe towards the experimental value. Our Green's function approach can be used for other FeAs-based parent compounds and these results should be useful to understand the physical properties of FeAs-based superconductors.

  11. A Real-Time Temperature Data Transmission Approach for Intelligent Cooling Control of Mass Concrete

    Directory of Open Access Journals (Sweden)

    Peng Lin

    2014-01-01

    Full Text Available The primary aim of the study presented in this paper is to propose a real-time temperature data transmission approach for intelligent cooling control of mass concrete. A mathematical description of a digital temperature control model is introduced in detail. Based on pipe-mounted, electrically linked temperature sensors, together with data post-processing hardware and software, a stable, real-time, highly effective temperature data transmission technique is developed and utilized within the intelligent mass concrete cooling control system. Once the user has issued the relevant command, the proposed programmable logic controller (PLC) code performs all necessary steps without further interaction: it controls the hardware, acquires and processes the readings, and displays the data accurately. Hardening concrete is an aggregate of complex physicochemical processes including the liberation of heat. In an application case study, the proposed control system prevented unwanted structural change within the massive concrete blocks caused by these exothermic processes. In conclusion, the proposed temperature data transmission approach has proved very useful for the temperature monitoring of a high arch dam and is able to control thermal stresses in mass concrete for similar projects involving mass concrete.

  12. An informatics approach to transformation temperatures of NiTi-based shape memory alloys

    International Nuclear Information System (INIS)

    Xue, Dezhen; Xue, Deqing; Yuan, Ruihao; Zhou, Yumei; Balachandran, Prasanna V.; Ding, Xiangdong; Sun, Jun; Lookman, Turab

    2017-01-01

    The martensitic transformation serves as the basis for applications of shape memory alloys (SMAs). The ability to make rapid and accurate predictions of the transformation temperature of SMAs is therefore of much practical importance. In this study, we demonstrate that a statistical learning approach using three features or material descriptors related to the chemical bonding and atomic radii of the elements in the alloys provides a means to predict transformation temperatures. Together with an adaptive design framework, we show that iteratively learning and improving the statistical model can accelerate the search for SMAs with targeted transformation temperatures. The possible mechanisms underlying the dependence of the transformation temperature on these features are discussed based on a Landau-type phenomenological model.
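
    As a sketch of the statistical-learning step (not the authors' actual model or data), the snippet below fits a regressor to three numerical descriptors per alloy, cross-validates it, and then ranks a pool of unmeasured candidate compositions by predicted closeness to a target transformation temperature, mimicking one iteration of an adaptive-design loop. All data are synthetic.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for three per-alloy descriptors (e.g. valence electron
        # count, electronegativity difference, radius mismatch) and a transformation T.
        X = rng.uniform(size=(60, 3))
        T_trans = 300.0 + 80.0*X[:, 0] - 120.0*X[:, 1] + 40.0*X[:, 2]**2 + rng.normal(0, 5, 60)

        model = GradientBoostingRegressor(random_state=0)
        cv_r2 = cross_val_score(model, X, T_trans, cv=5, scoring="r2")
        model.fit(X, T_trans)

        # One step of an adaptive-design loop: pick the unmeasured candidate whose
        # predicted transformation temperature is closest to a target value.
        candidates = rng.uniform(size=(200, 3))
        target = 320.0
        best = candidates[np.argmin(np.abs(model.predict(candidates) - target))]
        print(round(cv_r2.mean(), 3), best.round(3))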

  13. Neoendemic ground beetles and private tree haplotypes: two independent proxies attest a moderate last glacial maximum summer temperature depression of 3-4 °C for the southern Tibetan Plateau

    Science.gov (United States)

    Schmidt, Joachim; Opgenoorth, Lars; Martens, Jochen; Miehe, Georg

    2011-07-01

    Previous findings regarding the Last Glacial Maximum (LGM) summer temperature depression (maxΔT in July) on the Tibetan Plateau varied over a large range (between 0 and 9 °C). Geologic proxies usually provided higher values than palynological data. Because of this wide temperature range, it was hitherto impossible to reconstruct the glacial environment of the Tibetan Plateau. Here, we present for the first time data indicating that local neoendemics of modern species groups are promising proxies for assessing the LGM temperature depression in Tibet. We used biogeographical and phylogenetic data from small, wingless edaphic ground beetles of the genus Trechus, and from private juniper tree haplotypes. The derived values of the maxΔT in July ranged between 3 and 4 °C. Our data support previous findings that were based on palynological data. At the same time, our data are spatially more specific as they are not bound to specific archives. Our study shows that the use of modern endemics enables a detailed mapping of local LGM conditions in High Asia. A prerequisite for this is an extensive biogeographical and phylogenetic exploration of the area and the inclusion of additional endemic taxa and evolutionary lines.

  14. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursors and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. In this way, one varies the time step size of the Polynomial Approach Method and analyses the precision and computational time. Moreover, we compare the method with different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
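
    For orientation, the sketch below integrates one-group point kinetics with a simple linear temperature feedback using a stiff ODE solver as a reference baseline; it is not the polynomial approach of the paper, and all parameter values are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        # One-group point kinetics with a linear temperature feedback, integrated
        # with a stiff solver as a reference baseline (not the polynomial approach
        # of the paper); all parameter values are illustrative.
        beta, lam, Lam = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const (1/s), gen. time (s)
        rho0, alpha_T = 0.003, -5.0e-5          # inserted reactivity, feedback coefficient (1/K)
        K_T, T0 = 0.05, 300.0                   # adiabatic heating constant (K/s per unit n), initial T

        def rhs(t, y):
            n, C, T = y
            rho = rho0 + alpha_T * (T - T0)     # temperature feedback on reactivity
            dn = (rho - beta) / Lam * n + lam * C
            dC = beta / Lam * n - lam * C
            dT = K_T * n
            return [dn, dC, dT]

        y0 = [1.0, beta / (lam * Lam), T0]      # start from precursor equilibrium
        sol = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF", rtol=1e-8, atol=1e-10)
        print(round(sol.y[0, -1], 3), round(sol.y[2, -1], 2))   # n and T at t = 10 s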

  15. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    International Nuclear Information System (INIS)

    Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana

    2015-01-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursors and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. In this way, one varies the time step size of the Polynomial Approach Method and analyses the precision and computational time. Moreover, we compare the method with different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)

  16. An integrated approach to selecting materials for fuel cladding in advanced high-temperature reactors

    Energy Technology Data Exchange (ETDEWEB)

    Rangacharyulu, C., E-mail: chary.r@usask.ca [Univ. of Saskatchewan, Saskatoon, SK (Canada); Guzonas, D.A.; Pencer, J.; Nava-Dominguez, A.; Leung, L.K.H. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    An integrated approach has been developed for the selection of fuel cladding materials for advanced high-temperature reactors. Reactor physics, thermal-hydraulic and material analyses are being integrated in a systematic study comparing various candidate fuel-cladding alloys. The analyses established the axial and radial neutron fluxes, power distributions, axial and radial temperature distributions, and rates of defect formation and helium production using AECL analytical toolsets, together with experimentally measured corrosion rates, to optimize the material composition for fuel cladding. The project has just been initiated at the University of Saskatchewan. Some preliminary results of the analyses are presented together with the path forward for the project. (author)

  17. Assessing the Temperature Dependence of Narrow-Band Raman Water Vapor Lidar Measurements: A Practical Approach

    Science.gov (United States)

    Whiteman, David N.; Venable, Demetrius D.; Walker, Monique; Cardirola, Martin; Sakai, Tetsu; Veselovskii, Igor

    2013-01-01

    Narrow-band detection of the Raman water vapor spectrum using the lidar technique introduces a concern over the temperature dependence of the Raman spectrum. Various groups have addressed this issue either by trying to minimize the temperature dependence to the point where it can be ignored or by correcting for whatever degree of temperature dependence exists. The traditional technique for performing either of these entails accurately measuring both the laser output wavelength and the water vapor spectral passband with combined uncertainty of approximately 0.01 nm. However, uncertainty in interference filter center wavelengths and laser output wavelengths can be this large or larger. These combined uncertainties translate into uncertainties in the magnitude of the temperature dependence of the Raman lidar water vapor measurement of 3% or more. We present here an alternate approach for accurately determining the temperature dependence of the Raman lidar water vapor measurement. This alternate approach entails acquiring sequential atmospheric profiles using the lidar while scanning the channel passband across portions of the Raman water vapor Q-branch. This scanning is accomplished either by tilt-tuning an interference filter or by scanning the output of a spectrometer. Through this process a peak in the transmitted intensity can be discerned in a manner that defines the spectral location of the channel passband with respect to the laser output wavelength to much higher accuracy than that achieved with standard laboratory techniques. Given the peak of the water vapor signal intensity curve, determined using the techniques described here, and an approximate knowledge of atmospheric temperature, the temperature dependence of a given Raman lidar profile can be determined with accuracy of 0.5% or better. A Mathematica notebook that demonstrates the calculations used here is available from the lead author.
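
    The peak-finding step of the scan can be illustrated with a few lines of Python: fit a parabola to signal intensity versus filter tilt angle and take its vertex as the passband position that maximises the water vapor return. The angles and counts below are made-up stand-ins for real scan data.

        import numpy as np

        # Locate the passband position that maximises the Raman water-vapour return
        # by fitting a parabola to signal intensity versus filter tilt angle.
        # The angles and counts are made-up stand-ins for real scan data.
        tilt_deg = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
        counts = np.array([812.0, 905.0, 968.0, 1001.0, 990.0, 942.0, 860.0])

        a, b, c = np.polyfit(tilt_deg, counts, 2)
        tilt_peak = -b / (2.0 * a)              # vertex of the fitted parabola
        print(round(tilt_peak, 3))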

  18. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, and costs less than systems that record data in magnetic or electronic memory devices for later analysis. The circuit can be used, for example, to record the accelerations to which commodities are subjected during transportation on trucks.

  19. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that are spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool to be used for statistical downscaling to obtain local scale estimates of precipitation and temperature from General Circulation Models.

  20. A Hybrid Maximum Power Point Tracking Approach for Photovoltaic Systems under Partial Shading Conditions Using a Modified Genetic Algorithm and the Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-Pei Huang

    2018-01-01

    Full Text Available This paper proposes a modified maximum power point tracking (MPPT algorithm for photovoltaic systems under rapidly changing partial shading conditions (PSCs. The proposed algorithm integrates a genetic algorithm (GA and the firefly algorithm (FA and further improves its calculation process via a differential evolution (DE algorithm. The conventional GA is not advisable for MPPT because of its complicated calculations and low accuracy under PSCs. In this study, we simplified the GA calculations with the integration of the DE mutation process and FA attractive process. Results from both the simulation and evaluation verify that the proposed algorithm provides rapid response time and high accuracy due to the simplified processing. For instance, evaluation results demonstrate that when compared to the conventional GA, the execution time and tracking accuracy of the proposed algorithm can be, respectively, improved around 69.4% and 4.16%. In addition, in comparison to FA, the tracking speed and tracking accuracy of the proposed algorithm can be improved around 42.9% and 1.85%, respectively. Consequently, the major improvement of the proposed method when evaluated against the conventional GA and FA is tracking speed. Moreover, this research provides a framework to integrate multiple nature-inspired algorithms for MPPT. Furthermore, the proposed method is adaptable to different types of solar panels and different system formats with specifically designed equations, the advantages of which are rapid tracking speed with high accuracy under PSCs.
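
    To make the metaheuristic search concrete, the sketch below runs a plain firefly algorithm over the voltage axis of a toy two-peak P-V curve standing in for partial shading; it is a simplified stand-in, not the paper's GA/DE/FA hybrid, and both the curve and the algorithm constants are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def pv_power(v):
            # Toy two-peak P-V curve mimicking partial shading (not a diode model).
            return 60.0*np.exp(-((v - 13.0)/6.0)**2) + 95.0*np.exp(-((v - 31.0)/5.0)**2)

        # Plain firefly algorithm over the operating voltage (simplified stand-in
        # for the paper's hybrid method); all constants are arbitrary choices.
        n_ff, iters = 8, 40
        beta0, gamma, alpha = 1.0, 0.05, 1.0
        v = rng.uniform(5.0, 40.0, n_ff)            # candidate operating voltages

        for _ in range(iters):
            p = pv_power(v)
            for i in range(n_ff):
                for j in range(n_ff):
                    if p[j] > p[i]:                 # move dimmer firefly toward brighter one
                        r2 = (v[i] - v[j])**2
                        v[i] += beta0*np.exp(-gamma*r2)*(v[j] - v[i]) \
                                + alpha*(rng.random() - 0.5)
                v[i] = np.clip(v[i], 5.0, 40.0)
            alpha *= 0.95                           # shrink the random step over time

        best_v = v[np.argmax(pv_power(v))]
        print(round(best_v, 2), round(pv_power(best_v), 1))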

  1. Quantifying the Strength of General Factors in Psychopathology: A Comparison of CFA with Maximum Likelihood Estimation, BSEM, and ESEM/EFA Bifactor Approaches.

    Science.gov (United States)

    Murray, Aja Louise; Booth, Tom; Eisner, Manuel; Obsuth, Ingrid; Ribeaud, Denis

    2018-05-22

    Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
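
    The two strength indices compared above can be computed directly from a bifactor loading pattern; the sketch below does so for an invented 8-item, two-specific-factor solution, using the usual formulas for omega hierarchical and explained common variance (ECV).

        import numpy as np

        # Omega-hierarchical and explained common variance (ECV) from a bifactor
        # loading pattern; the loadings below are invented for illustration.
        g  = np.array([.60, .55, .50, .65, .58, .52, .48, .62])   # general-factor loadings
        s1 = np.array([.35, .30, .40, .00, .00, .00, .45, .38])   # specific factor 1
        s2 = np.array([.00, .00, .00, .42, .36, .33, .00, .00])   # specific factor 2
        theta = 1.0 - (g**2 + s1**2 + s2**2)                      # residual variances

        total = g.sum()**2 + s1.sum()**2 + s2.sum()**2 + theta.sum()
        omega_h = g.sum()**2 / total
        ecv = (g**2).sum() / ((g**2).sum() + (s1**2).sum() + (s2**2).sum())
        print(round(omega_h, 3), round(ecv, 3))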

  2. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    Energy Technology Data Exchange (ETDEWEB)

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P. [ITB, Faculty of Earth Sciences and Tecnology (Indonesia); BMKG (Indonesia)

    2012-06-20

    A new approach to determining the moment magnitude from the displacement amplitude (A), the epicentral distance (Δ) and the duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale is commonly based on teleseismic surface waves with periods greater than 200 seconds, or on a P-wave moment magnitude derived from teleseismic seismograms in the 10-60 second period range. In this research a new technique has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near-field earthquake records. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process of near earthquakes: the P-wave signal mixes with other waves (the S wave) before the duration runs out, so it is difficult to separate out or determine the end of the P wave. Application to 68 earthquakes recorded by the CISI station, Garut, West Java, yields the following relationship: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitude from this new approach is quite reliable and faster to process, and is therefore useful for early warning.
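
    The reported regression is straightforward to apply once A, Δ and t have been measured; a direct transcription in Python, with an invented example measurement, is given below.

        import math

        def moment_magnitude(amp_m, delta_km, dur_s):
            # Mw = 0.78 log(A) + 0.83 log(Delta) + 0.69 log(t) + 6.46,
            # with A in metres, Delta in km and t in seconds (as reported above).
            return (0.78*math.log10(amp_m) + 0.83*math.log10(delta_km)
                    + 0.69*math.log10(dur_s) + 6.46)

        # Hypothetical measurement: 5e-5 m amplitude, 120 km distance, 4 s duration
        print(round(moment_magnitude(5e-5, 120.0, 4.0), 2))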

  3. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    Science.gov (United States)

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-06-01

    A new approach to determining the moment magnitude from the displacement amplitude (A), the epicentral distance (Δ) and the duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale is commonly based on teleseismic surface waves with periods greater than 200 seconds, or on a P-wave moment magnitude derived from teleseismic seismograms in the 10-60 second period range. In this research a new technique has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near-field earthquake records. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process of near earthquakes: the P-wave signal mixes with other waves (the S wave) before the duration runs out, so it is difficult to separate out or determine the end of the P wave. Application to 68 earthquakes recorded by the CISI station, Garut, West Java, yields the following relationship: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitude from this new approach is quite reliable and faster to process, and is therefore useful for early warning.

  4. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    International Nuclear Information System (INIS)

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P.

    2012-01-01

    A new approach to determining the moment magnitude from the displacement amplitude (A), the epicentral distance (Δ) and the duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale is commonly based on teleseismic surface waves with periods greater than 200 seconds, or on a P-wave moment magnitude derived from teleseismic seismograms in the 10-60 second period range. In this research a new technique has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near-field earthquake records. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process of near earthquakes: the P-wave signal mixes with other waves (the S wave) before the duration runs out, so it is difficult to separate out or determine the end of the P wave. Application to 68 earthquakes recorded by the CISI station, Garut, West Java, yields the following relationship: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km and t in seconds. The moment magnitude from this new approach is quite reliable and faster to process, and is therefore useful for early warning.

  5. Using temperature-switching approach to evaluate the ELDRS of bipolar devices

    Science.gov (United States)

    Li, Xiaolong; Lu, Wu; Wang, Xin; Guo, Qi; Yu, Xin; He, Chengfa; Sun, Jing; Liu, Mohan; Yao, Shuai; Wei, Xinyu

    2017-12-01

    Enhanced low-dose-rate sensitivity (ELDRS), exhibited at low dose rates (LDRs) by most bipolar devices, is considered one of the main concerns for spacecraft reliability. In this work, a time-saving and conservative approach, the temperature-switching approach (TSA), for simulating the ELDRS of bipolar devices is presented. Good agreement is observed between the predictive curve obtained with the TSA and the LDR data, and the TSA provides new insight into test techniques for ELDRS. Additionally, the mechanisms of the TSA are analyzed in this paper.

  6. Bethe ansatz approach to quantum sine Gordon thermodynamics and finite temperature excitations

    International Nuclear Information System (INIS)

    Zotos, X.

    1982-01-01

    Takahashi and Suzuki (TS), using the Bethe ansatz method, developed a formalism for the thermodynamics of the XYZ spin chain. Translating their formalism to the quantum sine-Gordon system, the thermodynamics and finite temperature elementary excitations are analyzed. Criteria imposed by TS on the allowed states simply correspond to the condition of normalizability of the wave functions. A set of coupled nonlinear integral equations for the thermodynamic equilibrium densities for particular values of the coupling constant in the attractive regime is derived. Solving these Bethe ansatz equations numerically, curves of the specific heat as a function of temperature are obtained. The soliton contribution peaks at a temperature of about 0.4 soliton masses, shifting downward as the classical limit is approached. The weak coupling regime is analyzed by deriving the Bethe ansatz equations including the charged vacuum excitations. It is shown that they are necessary for a consistent presentation of the thermodynamics

  7. Cointegration approach for temperature effect compensation in Lamb-wave-based damage detection

    International Nuclear Information System (INIS)

    Dao, Phong B; Staszewski, Wieslaw J

    2013-01-01

    Lamb waves are often used in smart structures with integrated, low-profile piezoceramic transducers for damage detection. However, it is well known that the method is prone to contamination from a variety of interference sources including environmental and operational conditions. The paper demonstrates how to remove the undesired temperature effect from Lamb wave data. The method is based on the concept of cointegration that is partially built on the analysis of the non-stationary behaviour of time series. Instead of directly using Lamb wave responses for damage detection, two approaches are proposed: (i) analysis of cointegrating residuals obtained from the cointegration process of Lamb wave responses, (ii) analysis of stationary characteristics of Lamb wave responses before and after cointegration. The method is tested on undamaged and damaged aluminium plates exposed to temperature variations. The experimental results show that the method can: isolate damage-sensitive features from temperature variations, detect the existence of damage and classify its severity. (paper)
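
    A minimal Engle-Granger-style sketch of the idea, on synthetic data: two Lamb-wave features sharing a common (temperature-driven) nonstationary trend are regressed on each other, and the stationarity of the cointegrating residual is checked. The ADF p-value is used loosely here; formal cointegration testing uses Engle-Granger critical values.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.tsa.stattools import adfuller

        # Two synthetic Lamb-wave features sharing a common temperature-driven
        # nonstationary trend; regress one on the other and test the residual.
        rng = np.random.default_rng(0)
        trend = np.cumsum(rng.normal(0.0, 1.0, 500))        # shared random-walk "temperature"
        x = 1.0*trend + rng.normal(0.0, 0.5, 500)           # feature from sensor path 1
        y = 0.8*trend + rng.normal(0.0, 0.5, 500)           # feature from sensor path 2

        ols = sm.OLS(y, sm.add_constant(x)).fit()
        resid = ols.resid                                    # cointegrating residual
        p_value = adfuller(resid)[1]                         # ADF p-value (used loosely here)
        print(ols.params.round(3), round(p_value, 4))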

  8. Zeta-function regularization approach to finite temperature effects in Kaluza-Klein space-times

    International Nuclear Information System (INIS)

    Bytsenko, A.A.; Vanzo, L.; Zerbini, S.

    1992-01-01

    In the framework of the heat-kernel approach to zeta-function regularization, the one-loop effective potential at finite temperature for scalar and spinor fields on a Kaluza-Klein space-time of the form M^p x M_c^n, where M^p is p-dimensional Minkowski space-time, is evaluated in this paper. In particular, when the compact manifold is M_c^n = H^n/Γ, the Selberg trace formula associated with the discrete torsion-free group Γ of the n-dimensional Lobachevsky space H^n is used. An explicit representation for the thermodynamic potential valid for arbitrary temperature is found. As a result, a complete high temperature expansion is presented and the roles of zero modes and topological contributions are discussed

  9. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  10. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some. [it]

  11. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy-LUR approaches.

    Science.gov (United States)

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey

    2014-09-01

    Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747) and the BME kriging model (R2 = 0.414, RMSE = 9.164). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
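
    The leave-one-station-out validation and the reported error metrics can be reproduced in outline as follows; the covariates and O3 values below are synthetic, and a plain linear regression stands in for the LUR/BME machinery.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut
        from sklearn.metrics import mean_squared_error, r2_score

        # Leave-one-station-out check of a simple regression; covariates and O3
        # values are synthetic, and plain linear regression stands in for LUR/BME.
        rng = np.random.default_rng(3)
        X = rng.uniform(size=(25, 3))                       # e.g. road density, latitude, temp
        o3 = 30.0 + 10.0*X[:, 0] - 6.0*X[:, 1] + rng.normal(0.0, 2.0, 25)   # ppb

        pred = np.empty_like(o3)
        for train, test in LeaveOneOut().split(X):
            model = LinearRegression().fit(X[train], o3[train])
            pred[test] = model.predict(X[test])

        rmse = mean_squared_error(o3, pred) ** 0.5
        print(round(r2_score(o3, pred), 3), round(rmse, 2))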

  12. A field studies and modeling approach to develop organochlorine pesticide and PCB total maximum daily load calculations: Case study for Echo Park Lake, Los Angeles, CA

    Energy Technology Data Exchange (ETDEWEB)

    Vasquez, V.R., E-mail: vrvasquez@ucla.edu [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Curren, J., E-mail: janecurren@yahoo.com [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Lau, S.-L., E-mail: simlin@ucla.edu [Department of Civil and Environmental Engineering, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Stenstrom, M.K., E-mail: stenstro@seas.ucla.edu [Department of Civil and Environmental Engineering, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Suffet, I.H., E-mail: msuffet@ucla.edu [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States)

    2011-09-01

    Echo Park Lake is a small lake in Los Angeles, CA listed on the USA Clean Water Act Section 303(d) list of impaired water bodies for elevated levels of organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs) in fish tissue. A lake water and sediment sampling program was completed to support the development of total maximum daily loads (TMDL) to address the lake impairment. The field data indicated quantifiable levels of OCPs and PCBs in the sediments, but lake water data were all below detection levels. The field sediment data obtained may explain the contaminant levels in fish tissue using appropriate sediment-water partitioning coefficients and bioaccumulation factors. A partition-equilibrium fugacity model of the whole lake system was used to interpret the field data and indicated that half of the total mass of the pollutants in the system are in the sediments and the other half is in soil; therefore, soil erosion could be a significant pollutant transport mode into the lake. Modeling also indicated that developing and quantifying the TMDL depends significantly on the analytical detection level for the pollutants in field samples and on the choice of octanol-water partitioning coefficient and bioaccumulation factors for the model. - Research highlights: • Fugacity model using new OCP and PCB field data supports lake TMDL calculations. • OCP and PCB levels in lake sediment were found above levels for impairment. • Relationship between sediment data and available fish tissue data evaluated. • Model provides approximation of contaminant sources and sinks for a lake system. • Model results were sensitive to analytical detection and quantification levels.

  13. Modeling the evolution of the Laurentide Ice Sheet from MIS 3 to the Last Glacial Maximum: an approach using sea level modeling and ice flow dynamics

    Science.gov (United States)

    Weisenberg, J.; Pico, T.; Birch, L.; Mitrovica, J. X.

    2017-12-01

    The history of the Laurentide Ice Sheet since the Last Glacial Maximum (~26 ka; LGM) is constrained by geological evidence of ice margin retreat in addition to relative sea-level (RSL) records in both the near and far field. Nonetheless, few observations exist constraining the ice sheet's extent across the glacial build-up phase preceding the LGM. Recent work correcting RSL records along the U.S. mid-Atlantic dated to mid-MIS 3 (50-35 ka) for glacial-isostatic adjustment (GIA) infers that the Laurentide Ice Sheet grew by more than three-fold in the 15 ky leading into the LGM. Here we test the plausibility of a late and extremely rapid glaciation by driving a high-resolution ice sheet model, based on a nonlinear diffusion equation for the ice thickness. We initialize this model at 44 ka with the mid-MIS 3 ice sheet configuration proposed by Pico et al. (2017), GIA-corrected basal topography, and mass balance representative of mid-MIS 3 conditions. These simulations predict rapid growth of the eastern Laurentide Ice Sheet, with rates consistent with achieving LGM ice volumes within 15 ky. We use these simulations to refine the initial ice configuration and present an improved and higher resolution model for North American ice cover during mid-MIS 3. In addition we show that assumptions of ice loads during the glacial phase, and the associated reconstructions of GIA-corrected basal topography, produce a bias that can underpredict ice growth rates in the late stages of the glaciation, which has important consequences for our understanding of the speed limit for ice growth on glacial timescales.

  14. Changes in Extreme Maximum Temperature Events and Population Exposure in China under Global Warming Scenarios of 1.5 and 2.0°C: Analysis Using the Regional Climate Model COSMO-CLM

    Science.gov (United States)

    Zhan, Mingjin; Li, Xiucang; Sun, Hemin; Zhai, Jianqing; Jiang, Tong; Wang, Yanjun

    2018-02-01

    We used daily maximum temperature data (1986-2100) from the COSMO-CLM (COnsortium for Small-scale MOdeling in CLimate Mode) regional climate model and the population statistics for China in 2010 to determine the frequency, intensity, coverage, and population exposure of extreme maximum temperature events (EMTEs) with the intensity-area-duration method. Between 1986 and 2005 (reference period), the frequency, intensity, and coverage of EMTEs are 1330-1680 times yr⁻¹, 31.4-33.3°C, and 1.76-3.88 million km², respectively. The center of the most severe EMTEs is located in central China and 179.5-392.8 million people are exposed to EMTEs annually. Relative to 1986-2005, the frequency, intensity, and coverage of EMTEs increase by 1.13-6.84, 0.32-1.50, and 15.98%-30.68%, respectively, under 1.5°C warming; under 2.0°C warming, the increases are 1.73-12.48, 0.64-2.76, and 31.96%-50.00%, respectively. It is possible that both the intensity and coverage of future EMTEs could exceed the most severe EMTEs currently observed. Two new centers of EMTEs are projected to develop under 1.5°C warming, one in North China and the other in Southwest China. Under 2.0°C warming, a fourth EMTE center is projected to develop in Northwest China. Under 1.5 and 2.0°C warming, population exposure is projected to increase by 23.2%-39.2% and 26.6%-48%, respectively. From a regional perspective, population exposure is expected to increase most rapidly in Southwest China. A greater proportion of the population in North, Northeast, and Northwest China will be exposed to EMTEs under 2.0°C warming. The results show that a warming world will lead to increases in the intensity, frequency, and coverage of EMTEs. Warming of 2.0°C will lead to both more severe EMTEs and the exposure of more people to EMTEs. Given the probability of the increased occurrence of more severe EMTEs than in the past, it is vitally important to China that the global temperature increase is limited to within 1.5°C.

  15. Multi-model attribution of upper-ocean temperature changes using an isothermal approach

    Science.gov (United States)

    Weller, Evan; Min, Seung-Ki; Palmer, Matthew D.; Lee, Donghyun; Yim, Bo Young; Yeh, Sang-Wook

    2016-06-01

    Both air-sea heat exchanges and changes in ocean advection have contributed to observed upper-ocean warming most evident in the late-twentieth century. However, it is predominantly via changes in air-sea heat fluxes that human-induced climate forcings, such as increasing greenhouse gases, and other natural factors such as volcanic aerosols, have influenced global ocean heat content. The present study builds on previous work using two different indicators of upper-ocean temperature changes for the detection of both anthropogenic and natural external climate forcings. Using simulations from phase 5 of the Coupled Model Intercomparison Project, we compare mean temperatures above a fixed isotherm with the more widely adopted approach of using a fixed depth. We present the first multi-model ensemble detection and attribution analysis using the fixed isotherm approach to robustly detect both anthropogenic and natural external influences on upper-ocean temperatures. Although contributions from multidecadal natural variability cannot be fully removed, both the large multi-model ensemble size and properties of the isotherm analysis reduce internal variability of the ocean, resulting in better observation-model comparison of temperature changes since the 1950s. We further show that the high temporal resolution afforded by the isotherm analysis is required to detect natural external influences such as volcanic cooling events in the upper-ocean because the radiative effect of volcanic forcings is short-lived.

  16. Neurobehavioral approach for evaluation of office workers' productivity: The effects of room temperature

    Energy Technology Data Exchange (ETDEWEB)

    Lan, Li; Lian, Zhiwei; Pan, Li [School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Ye, Qian [Shanghai Research Institute of Building Science, Shanghai 200041 (China)

    2009-08-15

    Indoor environment quality has a great influence on workers' productivity, and how to assess the effect of the indoor environment on productivity remains a major challenge. A neurobehavioral approach for the evaluation of office workers' productivity is proposed in this paper. The distinguishing characteristic of the neurobehavioral approach is its emphasis on the identification and measurement of behavioral changes, since the influence of the environment on brain functions manifests behaviorally. Workers' productivity can therefore be comprehensively evaluated by testing neurobehavioral functions. Four neurobehavioral functions, including perception, learning and memory, thinking, and executive functions, were measured with nine representative psychometric tests. The effect of room temperature on the performance of the neurobehavioral tests was investigated in the laboratory. Four temperatures (19 °C, 24 °C, 27 °C, and 32 °C) were investigated, covering thermal sensations from cold to hot. Signal detection theory was utilized to analyze response bias. It was found that motivated people could maintain high performance for a short time under adverse (hot or cold) environmental conditions. Room temperature affected task performance differentially, depending on the type of task. The proposed neurobehavioral approach can be used to quantitatively and systematically evaluate office workers' productivity. (author)

  17. Modular High Temperature Gas-Cooled Reactor Safety Basis and Approach

    Energy Technology Data Exchange (ETDEWEB)

    David Petti; Jim Kinsey; Dave Alberstein

    2014-01-01

    Various international efforts are underway to assess the safety of advanced nuclear reactor designs. For example, the International Atomic Energy Agency has recently held its first Consultancy Meeting on a new cooperative research program on high temperature gas-cooled reactor (HTGR) safety. Furthermore, the Generation IV International Forum Reactor Safety Working Group has recently developed a methodology, called the Integrated Safety Assessment Methodology, for use in Generation IV advanced reactor technology development, design, and design review. A risk and safety assessment white paper is under development with respect to the Very High Temperature Reactor to pilot the Integrated Safety Assessment Methodology and to demonstrate its validity and feasibility. To support such efforts, this information paper on the modular HTGR safety basis and approach has been prepared. The paper provides a summary level introduction to HTGR history, public safety objectives, inherent and passive safety features, radionuclide release barriers, functional safety approach, and risk-informed safety approach. The information in this paper is intended to further the understanding of the modular HTGR safety approach. The paper gives those involved in the assessment of advanced reactor designs an opportunity to assess an advanced design that has already received extensive review by regulatory authorities and to judge the utility of recently proposed new methods for advanced reactor safety assessment such as the Integrated Safety Assessment Methodology.

  18. Strength of Geopolymer Cement Curing at Ambient Temperature by Non-Oven Curing Approaches: An Overview

    Science.gov (United States)

    Wattanachai, Pitiwat; Suwan, Teewara

    2017-06-01

    At the present day, the concept of environmentally friendly construction materials is being intensively studied in order to reduce the amount of greenhouse gases released. Geopolymer is a cementitious binder that can be produced by utilising pozzolanic wastes (e.g. fly ash or furnace slag) and is receiving increasing attention as a low-CO2-emission material. However, to achieve excellent mechanical properties, a heat curing process needs to be applied to geopolymer cement at temperatures of around 40 to 90°C. To consume less oven-curing energy and to be more convenient in practical work, curing of geopolymer at ambient temperature (around 20 to 25°C) is therefore being widely investigated. In this paper, a core review of factors and approaches for non-oven-cured geopolymer is summarised. The performance, in terms of strength, of each non-oven curing method is also presented and analysed. The main aim of this review paper is to gather the latest studies of ambient-temperature-cured geopolymer and to broaden the feasibility of non-oven-cured geopolymer development. To extend the directions of research work, some approaches or techniques can also be combined or applied to achieve specific properties for in-field applications and embankment stabilisation using soil-cement columns.

  19. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  20. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  1. Ocular surface temperature in patients with evaporative and aqueous-deficient dry eyes: a thermographic approach.

    Science.gov (United States)

    Matteoli, S; Favuzza, E; Mazzantini, L; Aragona, P; Cappelli, S; Corvi, A; Mencucci, R

    2017-07-26

    In recent decades infrared thermography (IRT) has facilitated accurate quantitative measurements of the ocular surface temperature (OST) using a non-invasive procedure. The objective of this work was to develop a procedure based on IRT which allows characterization of the cooling of the ocular surface of patients suffering from dry eye syndrome, and distinguishes between patients suffering from aqueous-deficient dry eye (ADDE) and those with evaporative dry eye (EDE). All patients examined (34 females and 4 males, 23-84 years) were divided into two groups according to their Schirmer I result (≤7 mm for ADDE and >7 mm for EDE), and the OST was recorded for 7 s at 30 Hz. For each acquisition, the temperatures of the central cornea (CC) as well as those of both the temporal and nasal canthi were investigated. Findings showed that the maximum temperature variation (up to 0.75 ± 0.29 °C) was at the CC for both groups. Furthermore, patients suffering from EDE tended to have a higher initial OST than those with ADDE, which can be explained by the greater quantity of tear film, evenly distributed over the entire ocular surface, keeping the OST higher initially. Results also showed that EDE patients had a higher average cooling rate than those suffering from ADDE, confirming the excessive evaporation of the tear film. Ocular thermography thus has the potential to become an effective tool for differentiating between the two etiologies of dry eye syndrome.
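
    The cooling described above can be summarised by a single rate if a straight line is fitted to the central-cornea temperature over the 7 s, 30 Hz recording. The sketch below is a generic illustration with synthetic data, not the authors' processing pipeline.

      import numpy as np

      fs, duration = 30.0, 7.0                  # sampling rate (Hz) and record length (s)
      t = np.arange(0.0, duration, 1.0 / fs)

      # Synthetic central-cornea temperature trace (hypothetical values)
      rng = np.random.default_rng(0)
      ost = 34.6 - 0.08 * t + 0.02 * rng.standard_normal(t.size)

      slope, intercept = np.polyfit(t, ost, 1)  # least-squares line
      print(f"initial OST ~ {intercept:.2f} C, cooling rate ~ {-slope:.3f} C/s")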

  2. Long-memory and the sea level-temperature relationship: a fractional cointegration approach.

    Science.gov (United States)

    Ventosa-Santaulària, Daniel; Heres, David R; Martínez-Hernández, L Catalina

    2014-01-01

    Through thermal expansion of oceans and melting of land-based ice, global warming is very likely contributing to the sea level rise observed during the 20th century. The amount by which further increases in global average temperature could affect sea level is only known with large uncertainties due to the limited capacity of physics-based models to predict sea levels from global surface temperatures. Semi-empirical approaches have been implemented to estimate the statistical relationship between these two variables, providing an alternative measure on which to base assessments of potentially disruptive impacts on coastal communities and ecosystems. However, only a few of these semi-empirical applications have addressed the spurious inference that is likely to be drawn when one nonstationary process is regressed on another. Furthermore, it has been shown that spurious effects are not eliminated by stationary processes when these possess strong long memory. Our results indicate that both global temperature and sea level indeed present the characteristics of long memory processes. Nevertheless, we find that these variables are fractionally cointegrated when sea-ice extent is incorporated as an instrumental variable for temperature, which in our estimations has a statistically significant positive impact on global sea level.
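
    As a hedged sketch of how long memory can be quantified before testing for fractional cointegration, the code below implements the classical Geweke-Porter-Hudak (GPH) log-periodogram regression for the fractional integration order d. It is a generic textbook estimator, not the authors' exact procedure, and the bandwidth choice m = n^0.5 is an assumption.

      import numpy as np

      def gph_estimate(x, power=0.5):
          """Geweke-Porter-Hudak estimate of the fractional integration order d."""
          x = np.asarray(x, dtype=float)
          n = x.size
          m = int(n ** power)                      # number of low frequencies used (assumption)
          freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
          # Periodogram at the first m Fourier frequencies
          fft = np.fft.fft(x - x.mean())
          periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2.0 * np.pi * n)
          regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
          slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
          return -slope                            # d-hat

      # Example with a pure random walk (d should come out near 1)
      rw = np.cumsum(np.random.default_rng(0).standard_normal(2000))
      print(gph_estimate(rw))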

  3. Controlled low-temperature fabrication of ZnO nanopillars with a wet-chemical approach

    Energy Technology Data Exchange (ETDEWEB)

    Postels, B [Institute of Semiconductor Technology, Technical University of Braunschweig, Hans-Sommer-Strasse 66, D-38106 Braunschweig (Germany); Wehmann, H-H [Institute of Semiconductor Technology, Technical University of Braunschweig, Hans-Sommer-Strasse 66, D-38106 Braunschweig (Germany); Bakin, A [Institute of Semiconductor Technology, Technical University of Braunschweig, Hans-Sommer-Strasse 66, D-38106 Braunschweig (Germany); Kreye, M [Institute of Semiconductor Technology, Technical University of Braunschweig, Hans-Sommer-Strasse 66, D-38106 Braunschweig (Germany); Fuhrmann, D [Institute of Applied Physics, Technical University of Braunschweig, Mendelssohnstrasse 2, D-38106 Braunschweig (Germany); Blaesing, J [Institute of Experimental Physics, Otto-von-Guericke-University Magdeburg, Universitaetsplatz 1, 39016 Magdeburg (Germany); Hangleiter, A [Institute of Applied Physics, Technical University of Braunschweig, Mendelssohnstrasse 2, D-38106 Braunschweig (Germany); Krost, A [Institute of Experimental Physics, Otto-von-Guericke-University Magdeburg, Universitaetsplatz 1, 39016 Magdeburg (Germany); Waag, A [Institute of Semiconductor Technology, Technical University of Braunschweig, Hans-Sommer-Strasse 66, D-38106 Braunschweig (Germany)

    2007-05-16

    Aqueous chemical growth (ACG) is an efficient way to generate wafer-scale and densely packed arrays of ZnO nanopillars on various substrate materials. ACG is a low-temperature growth approach that is only weakly influenced by the substrate and even allows growth on flexible polymer substrates or on conducting materials. The advanced fabrication of wafer-scale and highly vertically aligned arrays of ZnO nanopillars on various substrate materials is demonstrated. Moreover, it is possible to control the morphology in diameter and length by changing the growth conditions. Photoluminescence characterization clearly shows a comparatively strong band-edge luminescence, even at room temperature, that is accompanied by a rather weak visible luminescence in the yellow/orange spectral range.

  4. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.
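
    To make the maximum entropy principle concrete, the sketch below fits a maximum entropy distribution over a finite set of states subject to a mean constraint by minimizing the convex dual (the log partition function minus the constraint term). It is a generic illustration of the principle, not the spin-model machinery of the paper; the states and the target mean are arbitrary placeholders.

      import numpy as np
      from scipy.optimize import minimize

      states = np.arange(1, 7)          # e.g. faces of a die
      target_mean = 4.5                 # observed average (hypothetical)

      def dual(lam):
          # Convex dual of the maxent problem: log Z(lambda) - lambda * target_mean
          lam = lam[0]
          logz = np.log(np.sum(np.exp(lam * states)))
          return logz - lam * target_mean

      res = minimize(dual, x0=[0.0])
      lam = res.x[0]
      p = np.exp(lam * states)
      p /= p.sum()
      print(p, p @ states)              # maximum entropy distribution and its mean (~4.5)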

  5. IEFIT - An Interactive Approach to High Temperature Fusion Plasma Magnetic Equilibrium Fitting

    International Nuclear Information System (INIS)

    Peng, Q.; Schachter, J.; Schissel, D.P.; Lao, L.L.

    1999-01-01

    An interactive IDL-based wrapper, IEFIT, has been created for the magnetic equilibrium reconstruction code EFIT, written in FORTRAN. It allows high temperature fusion physicists to rapidly optimize a plasma equilibrium reconstruction by eliminating the unnecessarily repeated initialization of the conventional approach and by immediately displaying the fitting results of each input variation. It uses a new IDL-based graphics package, GaPlotObj, developed in cooperation with Fanning Software Consulting, that provides a unified interface with great flexibility in presenting and analyzing scientific data. The overall interactivity reduces the process from the usual hours to minutes

  6. An environment-friendly microemulsion approach to α-FeOOH nanorods at room temperature

    International Nuclear Information System (INIS)

    Geng Fengxia; Zhao Zhigang; Cong Hongtao; Geng Jianxin; Cheng Huiming

    2006-01-01

    α-FeOOH nanorods have been prepared at room temperature by an environment-friendly microemulsion approach. X-ray diffraction and transmission electron microscopy revealed that the single-crystalline orthorhombic α-FeOOH nanorods are 8.2 ± 1.5 nm in diameter and 106 ± 16 nm in length. Furthermore, a mechanism for the formation of the α-FeOOH nanorods is preliminarily presented. This method may serve as a reference for fabricating other one-dimensional inorganic nanostructured materials and can easily be realized in industrial-scale synthesis

  7. Towards a comprehensive theory for He II: I. A zero-temperature hybrid approach

    International Nuclear Information System (INIS)

    Ghassib, H.B.; Khudeir, A.M.

    1982-09-01

    A simple hybrid approach, based on a gauge theory as well as a Hartree formalism, is presented for He II at zero temperature. Although this is intended to be merely a first step towards an all-embracing theory, it already resolves quite neatly several old inconsistencies and corrects a few errors. As an illustration of its feasibility, a crude but instructive calculation is performed for the static structure factor of the system at low momentum transfers. A number of planned extensions and generalizations are outlined. (author)

  8. Temperature and precipitation records from stalagmites grown under disequilibrium conditions: A model approach.

    Science.gov (United States)

    Mühlinghaus, C.; Scholz, D.; Mangini, A.

    2009-04-01

    records of the stalagmites, which were obtained by three independent model runs, follow a general trend over the whole growth period and reveal good correlations within certain time-frames. This is a promising first result of the model. In addition, the calculated temperatures are robust to small variations of the input variables, giving confidence in the algorithm of the model and for further temperature reconstructions from kinetically grown stalagmites. Literature: Dreybrodt, W., 1999. Chemical kinetics, speleothem growth and climate. Boreas 28, 347-356. Kaufmann, G., Dreybrodt, W., 2004. Stalagmite growth and palaeo-climate: an inverse approach. Earth and Planetary Science Letters 224, 529-545. Kilian, R., Biester, H., Behrmann, J., Baeza, O., Fesq-Martin, M., Hohner, M., Schimpf, D., Friedmann, A., Mangini, A., 2006. Millennium-scale volcanic impact on a superhumid and pristine ecosystem. Geology 34, 609-612. Mühlinghaus, C., Scholz, D., Mangini, A., 2007. Modelling stalagmite growth and δ13C as a function of drip interval and temperature. Geochimica et Cosmochimica Acta 71, 2780-2790. Mühlinghaus, C., Scholz, D., Mangini, A., 2008a. Temperature and precipitation records from stalagmites grown under disequilibrium conditions: A first approach. Advances in speleothem research, PAGES News, 16. Mühlinghaus, C., Scholz, D., Mangini, A., 2008b. Fractionation of stable isotopes in stalagmites under disequilibrium conditions. Geochimica et Cosmochimica Acta, submitted. Schimpf, D., 2005. Datierung und Interpretation der Kohlenstoff- und Sauerstoffisotopie zweier holozäner Stalagmiten aus dem Süden Chiles (Patagonien). Master's thesis, Ruprecht-Karls-University, Heidelberg. Schimpf, D., Kilian, R., Mangini, A., Spötl, C., Kronz, A., in prep. Evaluation of chemical, isotopic and detrital proxies of a high-resolution stalagmite record from the southernmost Chilean Andes (53° S). Scholz, D., Mühlinghaus, C., Mangini, A., 2008. Modelling the evolution of δ13C and

  9. Low-temperature VRH conduction through complex materials in the presence of a temperature-dependent voltage threshold: A semi-classical percolative approach

    International Nuclear Information System (INIS)

    Sen, A.K.; Bhattacharya, S.

    2006-12-01

    In this paper, we study the variation of the low-temperature (T) dc conductance, G(T), of a semi-classical percolative Random Resistor cum Tunneling-bond Network (RRTN) in the presence of a linearly temperature-dependent microscopic voltage threshold, υ_g(T). This model (proposed by our group in the early 1990s) considers a phenomenological semi-classical tunneling (or hopping through a barrier) process. Just as in our previous constant-υ_g case, we find in the present study that the variable range hopping (VRH) exponent γ varies continuously with the ohmic concentration p in a non-monotonic fashion. In addition, we observe a new shoulder-like behaviour of G(T) in the intermediate temperature range, below the conductance maximum. (author)
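
    For readers who want to extract a VRH exponent from conductance data, a common way is to fit the Mott form G(T) = G0 exp[-(T0/T)^γ]. The sketch below does this with a nonlinear least-squares fit on ln G using synthetic data; it is a generic VRH illustration, not the RRTN model itself, and the parameter values and bounds are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def log_vrh(T, lnG0, T0, gamma):
          # ln G(T) = ln G0 - (T0 / T)**gamma
          return lnG0 - (T0 / T) ** gamma

      # Synthetic conductance data generated with gamma = 0.5 (hypothetical)
      rng = np.random.default_rng(0)
      T = np.linspace(5.0, 60.0, 40)
      lnG = log_vrh(T, 2.0, 200.0, 0.5) + 0.02 * rng.standard_normal(T.size)

      popt, _ = curve_fit(log_vrh, T, lnG, p0=(1.0, 100.0, 0.5),
                          bounds=([-10.0, 1.0, 0.1], [10.0, 1e4, 1.5]))
      print("fitted ln G0, T0, gamma:", popt)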

  10. A novel approach to quench detection for high temperature superconducting coils

    International Nuclear Information System (INIS)

    Song, W.J.; Fang, X.Y.; Fang, J.; Wei, B.; Hou, J.Z.; Liu, L.F.; Lu, K.K.; Li, Shuo

    2015-01-01

    Highlights: • We propose a novel quench detection method for HTS coils based mainly on phase. • We present a theoretical model and a numerical simulation system built in LabVIEW. • Experimental results are shown and analyzed. • A small quench voltage causes an obvious change in phase. • The approach can accurately detect the quench resistance voltage in real time. - Abstract: A novel approach to quench detection for high temperature superconducting (HTS) coils is proposed, which is based mainly on the phase angle between the voltage and current of two coils to detect the quench resistance voltage. The approach is analyzed theoretically and verified experimentally and analytically with MATLAB Simulink and LabVIEW. An analog quench circuit is built in Simulink and a quench alarm system program is written in LabVIEW. A quench detection experiment is further conducted. Sinusoidal AC currents ranging from 19.9 A to 96 A are applied to the HTS coils, whose critical current is 90 A at 77 K. The results of the analog simulation and the experiment are analyzed and show good consistency. It is shown that with increasing current the phase angle grows appreciably, reaching 60° experimentally and 15° analytically when the current reaches the critical value. It is concluded that the approach proposed in this paper meets the required precision and that the quench resistance voltage can be detected in time.
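
    As a minimal, hypothetical illustration of the quantity the method above monitors, the sketch below estimates the phase angle between two sinusoidal records (voltage and current) from their FFT bins at the fundamental frequency. The sampling rate, frequency, and signal amplitudes are arbitrary assumptions, and this is not the authors' LabVIEW implementation.

      import numpy as np

      fs, f0, n = 10000.0, 50.0, 2000      # sampling rate (Hz), fundamental (Hz), samples
      t = np.arange(n) / fs

      current = np.sin(2 * np.pi * f0 * t)                           # reference current
      voltage = 0.8 * np.sin(2 * np.pi * f0 * t + np.deg2rad(15.0))  # 15 deg lead (hypothetical)

      k = int(round(f0 * n / fs))          # FFT bin of the fundamental
      V, I = np.fft.rfft(voltage)[k], np.fft.rfft(current)[k]
      phase_deg = np.rad2deg(np.angle(V) - np.angle(I))
      print(f"estimated phase angle: {phase_deg:.2f} deg")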

  11. A novel approach to quench detection for high temperature superconducting coils

    Energy Technology Data Exchange (ETDEWEB)

    Song, W.J., E-mail: songwenjuan@bjtu.edu.cn [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China); China Electric Power Research Institute, Beijing (China); Fang, X.Y. [Department of Electrical and Computer Engineering, University of Victoria, PO Box 1700, STN CSC, Victoria, BC V8W 2Y2 (Canada); Fang, J., E-mail: fangseer@sina.com [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China); Wei, B.; Hou, J.Z. [China Electric Power Research Institute, Beijing (China); Liu, L.F. [Guangzhou Metro Design & Research Institute Co., Ltd, Guangdong (China); Lu, K.K. [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China); Li, Shuo [College of Information Science and Engineering, Northeastern University, Shenyang (China)

    2015-11-15

    Highlights: • We propose a novel quench detection method for HTS coils based mainly on phase. • We present a theoretical model and a numerical simulation system built in LabVIEW. • Experimental results are shown and analyzed. • A small quench voltage causes an obvious change in phase. • The approach can accurately detect the quench resistance voltage in real time. - Abstract: A novel approach to quench detection for high temperature superconducting (HTS) coils is proposed, which is based mainly on the phase angle between the voltage and current of two coils to detect the quench resistance voltage. The approach is analyzed theoretically and verified experimentally and analytically with MATLAB Simulink and LabVIEW. An analog quench circuit is built in Simulink and a quench alarm system program is written in LabVIEW. A quench detection experiment is further conducted. Sinusoidal AC currents ranging from 19.9 A to 96 A are applied to the HTS coils, whose critical current is 90 A at 77 K. The results of the analog simulation and the experiment are analyzed and show good consistency. It is shown that with increasing current the phase angle grows appreciably, reaching 60° experimentally and 15° analytically when the current reaches the critical value. It is concluded that the approach proposed in this paper meets the required precision and that the quench resistance voltage can be detected in time.

  12. Comparison of different statistical modelling approaches for deriving spatial air temperature patterns in an urban environment

    Science.gov (United States)

    Straub, Annette; Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Geruschkat, Uta; Jacobeit, Jucundus; Kühlbach, Benjamin; Kusch, Thomas; Richter, Katja; Schneider, Alexandra; Umminger, Robin; Wolf, Kathrin

    2017-04-01

    Spatial variations of air temperature of considerable magnitude frequently occur within urban areas. They correspond to varying land use/land cover characteristics and vary with season, time of day and synoptic conditions. These temperature differences have an impact on human health and comfort, directly by inducing thermal stress as well as indirectly by affecting air quality. Therefore, knowledge of the spatial patterns of air temperature in cities and the factors causing them is of great importance, e.g. for urban planners. A multitude of studies have shown statistical modelling to be a suitable tool for generating spatial air temperature patterns. This contribution presents a comparison of different statistical modelling approaches for deriving spatial air temperature patterns in the urban environment of Augsburg, Southern Germany. In Augsburg there exists a measurement network for air temperature and humidity currently comprising 48 stations in the city and its rural surroundings (jointly operated by the Institute of Epidemiology II, Helmholtz Zentrum München, German Research Center for Environmental Health and the Institute of Geography, University of Augsburg). Using different datasets for land surface characteristics (Open Street Map, Urban Atlas), area percentages of different types of land cover were calculated for quadratic buffer zones of different size (25, 50, 100, 250, 500 m) around the stations as well as for source regions of advective air flow, and used as predictors together with additional variables such as sky view factor, ground level and distance from the city centre. Multiple Linear Regression and Random Forest models for different situations, taking into account season, time of day and weather condition, were applied utilizing selected subsets of these predictors in order to model spatial distributions of mean hourly and daily air temperature deviations from a rural reference station. Furthermore, the different model setups were
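
    A minimal sketch of the kind of statistical model compared in this study is given below: a multiple linear regression and a Random Forest relating station temperature deviations to land-cover predictors. The feature names and the synthetic data are hypothetical placeholders, not the Augsburg dataset or the authors' model setups.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_stations = 48

      # Hypothetical predictors: built-up fraction, green fraction, sky view factor, distance to centre
      X = rng.random((n_stations, 4))
      # Hypothetical temperature deviation from the rural reference station (K)
      y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * rng.standard_normal(n_stations)

      for name, model in [("MLR", LinearRegression()),
                          ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
          score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          print(name, round(score, 2))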

  13. Experimental Validation of Various Temperature Models for Semi-Physical Tyre Model Approaches

    Science.gov (United States)

    Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia

    2017-10-01

    With the increasing level of complexity and automation in automotive engineering, the simulation of safety-relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvements in tyre simulation, the need to account for tyre temperature and the resulting changes in tyre characteristics has risen significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To evaluate the range of application of the presented approaches with respect to further implementation in semi-physical tyre models, the main focus lies on a physical parameterisation. Aside from good modelling accuracy, attention is paid to computational time and the complexity of the parameterisation process. To evaluate this process and discuss the results, measurements of a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally the simulation results are compared with the measurement data.

  14. Influence of the temperature and oxygen exposure in red Port wine: A kinetic approach.

    Science.gov (United States)

    Oliveira, Carla Maria; Barros, António S; Silva Ferreira, António César; Silva, Artur M S

    2015-09-01

    Although phenolics are recognized as being related to health benefits by limiting lipid oxidation, in wine they are the primary substrates for oxidation, resulting in quinone by-products with the participation of transition metal ions. Nevertheless, high quality Port wines require a period of aging in either bottle or barrel. During this time, a modification of the sensory properties of the wine, such as the decrease of astringency or the stabilization of color, is attributed to phenolic compounds, mainly anthocyanins and derived pigments. The present work aims to illustrate the oxidation of red Port wine, based on its phenolic composition, under the effect of both thermal and oxygen exposure. A kinetic approach to anthocyanin degradation was also carried out. For this purpose a forced red Port wine aging protocol was performed at four different storage temperatures (20, 30, 35 and 40°C) and two adjusted oxygen saturation levels: no oxygen addition (treatment I) and oxygen addition (treatment II). Three hydroxycinnamic esters, three hydroxycinnamic acids, three hydroxybenzoic acids, two flavan-3-ols, and six anthocyanins were quantified weekly during 63 days, along with oxygen consumption. The most relevant phenolic oxidation markers were anthocyanins and catechin-type flavonoids, which showed the largest decreases during the thermal and oxidative red Port wine process. Both temperature and oxygen treatments affected the rate of phenolic degradation. In addition, temperature seems to be the main influence on the kinetics of phenolic degradation. Copyright © 2015 Elsevier Ltd. All rights reserved.
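
    A common way to formalize the kinetic approach mentioned above is to fit a first-order decay at each storage temperature and then an Arrhenius law to the resulting rate constants. The sketch below shows this with hypothetical concentration data and an assumed activation energy, not the study's measurements.

      import numpy as np

      R = 8.314  # J/(mol K)
      temps_K = np.array([293.15, 303.15, 308.15, 313.15])   # 20, 30, 35, 40 C
      weeks = np.arange(0, 10)                                # weekly sampling up to 63 days

      # Hypothetical anthocyanin data: C(t)/C0 = exp(-k t) with an assumed Ea of 60 kJ/mol
      true_k = 0.02 * np.exp(-60000.0 / R * (1.0 / temps_K - 1.0 / 293.15))
      conc = np.exp(-np.outer(true_k, weeks))                 # one row per temperature

      # First-order rate constants: minus the slope of ln(C/C0) vs time
      ks = np.array([-np.polyfit(weeks, np.log(c), 1)[0] for c in conc])

      # Arrhenius fit: ln k = ln A - Ea / (R T)
      slope, intercept = np.polyfit(1.0 / temps_K, np.log(ks), 1)
      print(f"activation energy ~ {-slope * R / 1000:.1f} kJ/mol")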

  15. Synthesis of Highly Uniform and Compact Lithium Zinc Ferrite Ceramics via an Efficient Low Temperature Approach.

    Science.gov (United States)

    Xu, Fang; Liao, Yulong; Zhang, Dainan; Zhou, Tingchuan; Li, Jie; Gan, Gongwen; Zhang, Huaiwu

    2017-04-17

    LiZn ferrite ceramics with high saturation magnetization (4πMs) and low ferromagnetic resonance line widths (ΔH) represent a very critical class of material for microwave ferrite devices. Many existing approaches emphasize promotion of the grain growth (average size 10-50 μm) of ferrite ceramics to improve the gyromagnetic properties at relatively low sintering temperatures. This paper describes in detail a new strategy for obtaining uniform and compact LiZn ferrite ceramics (average grain size ∼2 μm) with enhanced magnetic performance by suppressing grain growth. LiZn ferrites with the formula Li0.415Zn0.27Mn0.06Ti0.1Fe2.155O4 were prepared by solid-state reaction routes with two new sintering strategies. Interestingly, the results show that uniform, compact, and pure spinel ferrite ceramics were synthesized at a low temperature (∼850 °C) without obvious grain growth. We also find that a fast second sintering treatment (FSST) can further improve their gyromagnetic properties, yielding higher 4πMs and lower ΔH. The two new strategies are facile and efficient for densification of LiZn ferrite ceramics via suppression of grain growth at low temperatures. The sintering strategy reported in this study also provides a referential experience for other ceramics, such as soft magnetic ferrite ceramics or dielectric ceramics.

  16. Predicting critical temperatures of iron(II) spin crossover materials: Density functional theory plus U approach

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yachao, E-mail: yczhang@nano.gznc.edu.cn [Guizhou Provincial Key Laboratory of Computational Nano-Material Science, Guizhou Normal College, Guiyang 550018, Guizhou (China)

    2014-12-07

    A first-principles study of the critical temperatures (T_c) of spin crossover (SCO) materials requires accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed the density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treating the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T_c of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE_HL and the molecular vibrational frequencies in both spin states, then obtain the temperature-dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T_c by exploiting the ΔH/T − T and ΔS − T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bondings involving the ligand atoms and the specific crystal packing style. We find that this effect is largely responsible for the difference in T_c of the two phases. This study shows the applicability of the DFT+U approach for predicting T_c of SCO materials, and provides a clear insight into the subtle influence of crystal packing effects on SCO behavior.
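
    For reference, the critical temperature extracted from the computed enthalpy and entropy changes follows from the condition that the Gibbs free energy difference between the two spin states vanishes; this is the standard thermodynamic relation rather than a formula quoted from the paper:

      \Delta G(T_c) = \Delta H(T_c) - T_c \,\Delta S(T_c) = 0
      \quad\Longrightarrow\quad
      T_c = \frac{\Delta H(T_c)}{\Delta S(T_c)}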

  17. A non-invasive experimental approach for surface temperature measurements on semi-crystalline thermoplastics

    Science.gov (United States)

    Boztepe, Sinan; Gilblas, Remi; de Almeida, Olivier; Le Maoult, Yannick; Schmidt, Fabrice

    2017-10-01

    Most thermoforming processes for thermoplastic polymers and their composites are performed with combined heating and forming stages, in which a precursor is heated prior to forming. This step is carried out in order to improve formability by softening the thermoplastic polymer. Due to the low thermal conductivity and semi-transparency of polymers, infrared (IR) heating is widely used for thermoforming of such materials. Predictive radiation heat transfer models for temperature distributions are therefore critical for optimization of the thermoforming process. One of the key challenges is to build a predictive model that includes the physical background of the radiation heat transfer phenomenon in semi-crystalline thermoplastics, as their microcrystalline structure introduces an optically heterogeneous medium. In addition, the accuracy of a predictive model needs to be validated experimentally, and IR thermography is one of the suitable methods for such a validation as it provides a non-invasive, full-field surface temperature measurement. Although IR cameras provide a non-invasive measurement, obtaining a reliable measurement depends on the optical characteristics of the heated material and the operating spectral band of the IR camera. It is desirable that the surface of the material to be measured has a spectral band in which the material behaves as opaque and that the employed IR camera operates in the corresponding band. In this study, the optical characteristics of the PO-based polymer are discussed and an experimental approach is proposed in order to measure the surface temperature of the PO-based polymer via IR thermography. The preliminary analyses showed that IR thermographic measurements may not simply be performed on PO-based polymers and require a correction method, as their semi-transparent medium makes it challenging to obtain reliable surface temperature measurements.

  18. Understanding uncertainty in temperature effects on vector-borne disease: a Bayesian approach

    Science.gov (United States)

    Johnson, Leah R.; Ben-Horin, Tal; Lafferty, Kevin D.; McNally, Amy; Mordecai, Erin A.; Paaijmans, Krijn P.; Pawar, Samraat; Ryan, Sadie J.

    2015-01-01

    Extrinsic environmental factors influence the distribution and population dynamics of many organisms, including insects that are of concern for human health and agriculture. This is particularly true for vector-borne infectious diseases like malaria, which is a major source of morbidity and mortality in humans. Understanding the mechanistic links between environment and population processes for these diseases is key to predicting the consequences of climate change on transmission and for developing effective interventions. An important measure of the intensity of disease transmission is the reproductive number R0. However, understanding the mechanisms linking R0 and temperature, an environmental factor driving disease risk, can be challenging because the data available for parameterization are often poor. To address this, we show how a Bayesian approach can help identify critical uncertainties in components of R0 and how this uncertainty is propagated into the estimate of R0. Most notably, we find that different parameters dominate the uncertainty at different temperature regimes: bite rate from 15°C to 25°C; fecundity across all temperatures, but especially ~25–32°C; mortality from 20°C to 30°C; parasite development rate at ~15–16°C and again at ~33–35°C. Focusing empirical studies on these parameters and corresponding temperature ranges would be the most efficient way to improve estimates of R0. While we focus on malaria, our methods apply to improving process-based models more generally, including epidemiological, physiological niche, and species distribution models.
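
    A minimal sketch of how parameter uncertainty can be propagated into R0, in the spirit of the Bayesian analysis above, is given below. It uses a simplified Ross-Macdonald-style expression and arbitrary placeholder distributions for the temperature-dependent traits at a single temperature; the functional form, variable names, and all numbers are assumptions for illustration only, not the paper's model or posteriors.

      import numpy as np

      rng = np.random.default_rng(1)
      n_draws = 10_000

      # Placeholder posterior draws of temperature-dependent traits (per day)
      a   = rng.normal(0.30, 0.05, n_draws)    # bite rate
      bc  = rng.beta(5, 5, n_draws)            # transmission probability
      mu  = rng.normal(0.12, 0.02, n_draws)    # mosquito mortality rate
      pdr = rng.normal(0.10, 0.02, n_draws)    # parasite development rate
      m, r = 20.0, 0.01                        # mosquitoes per human, human recovery rate

      # Simplified Ross-Macdonald-style R0 (one common textbook form)
      r0 = np.sqrt(m * a**2 * bc * np.exp(-mu / pdr) / (r * mu))

      print("median R0:", round(np.median(r0), 1))
      # Crude sensitivity screen: correlation of each trait with R0 across the draws
      for name, draws in [("a", a), ("bc", bc), ("mu", mu), ("pdr", pdr)]:
          print(name, round(np.corrcoef(draws, r0)[0, 1], 2))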

  19. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  20. On a closed form approach to the fractional neutron point kinetics equation with temperature feedback

    International Nuclear Information System (INIS)

    Schramm, Marcelo; Bodmann, Bardo E.J.; Vilhena, Marco T.M.B.; Petersen, Claudio Z.; Alvim, Antonio C.M.

    2013-01-01

    Following the quest to find analytical solutions, we extend the methodology applied successfully to time-fractional neutron point kinetics (FNPK) equations by adding the effects of temperature. The FNPK equations with temperature feedback correspond to a nonlinear and stiff system for the neutron density and the concentrations of delayed neutron precursors. These variables determine the behavior of a nuclear reactor power with time and are influenced by the position of control rods, for example. The solutions of kinetics equations provide time information about the dynamics in a nuclear reactor in operation and are useful, for example, to understand the power fluctuations with time that occur during startup or shutdown of the reactor, due to adjustments of the control rods. The inclusion of temperature feedback in the model introduces an estimate of the transient behavior of the power and other variables, which are strongly coupled. Normally, a single value of reactivity is used across the energy spectrum. Especially in the case of a power change, the neutron energy spectrum changes, as do physical parameters such as the average cross sections. However, even knowing the importance of temperature effects on the control of the reactor power, the character of the set of nonlinear equations governing this system makes it difficult to obtain a purely analytical solution. Studies have been published in this sense, using numerical approaches. Here the idea is to consider temperature effects to make the model more realistic and thus solve it in a semi-analytical way. Therefore, the main objective of this paper is to obtain an analytical representation of the fractional neutron point kinetics equations with temperature feedback, without having to resort to the approximations inherent in numerical methods. To this end, we use the decomposition method, which has been successfully used by the authors to solve neutron point kinetics problems. The results obtained will

  1. On a closed form approach to the fractional neutron point kinetics equation with temperature feedback

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, Marcelo; Bodmann, Bardo E.J.; Vilhena, Marco T.M.B., E-mail: marceloschramm@hotmail.com, E-mail: bardo.bodmann@ufrgs.br, E-mail: mtmbvilhena@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Departamento de Engenharia Mecanica; Petersen, Claudio Z., E-mail: claudiopetersen@yahoo.com.br [Universidade Federal de Pelotas (UFPel), RS (Brazil). Departamento de Matematica; Alvim, Antonio C.M., E-mail: alvim@nuclear.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa em Engenharia

    2013-07-01

    Following the quest to find analytical solutions, we extend the methodology applied successfully to time-fractional neutron point kinetics (FNPK) equations by adding the effects of temperature. The FNPK equations with temperature feedback correspond to a nonlinear and stiff system for the neutron density and the concentrations of delayed neutron precursors. These variables determine the behavior of a nuclear reactor power with time and are influenced by the position of control rods, for example. The solutions of kinetics equations provide time information about the dynamics in a nuclear reactor in operation and are useful, for example, to understand the power fluctuations with time that occur during startup or shutdown of the reactor, due to adjustments of the control rods. The inclusion of temperature feedback in the model introduces an estimate of the transient behavior of the power and other variables, which are strongly coupled. Normally, a single value of reactivity is used across the energy spectrum. Especially in the case of a power change, the neutron energy spectrum changes, as do physical parameters such as the average cross sections. However, even knowing the importance of temperature effects on the control of the reactor power, the character of the set of nonlinear equations governing this system makes it difficult to obtain a purely analytical solution. Studies have been published in this sense, using numerical approaches. Here the idea is to consider temperature effects to make the model more realistic and thus solve it in a semi-analytical way. Therefore, the main objective of this paper is to obtain an analytical representation of the fractional neutron point kinetics equations with temperature feedback, without having to resort to the approximations inherent in numerical methods. To this end, we use the decomposition method, which has been successfully used by the authors to solve neutron point kinetics problems. The results obtained will
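
    For orientation, the integer-order counterpart of the system discussed above, the classical point kinetics equations with a simple linear temperature feedback on reactivity, can be written as follows (standard textbook form with six precursor groups, not the fractional formulation derived in the paper):

      \frac{dn(t)}{dt} = \frac{\rho(T) - \beta}{\Lambda}\, n(t) + \sum_{i=1}^{6} \lambda_i C_i(t),
      \qquad
      \frac{dC_i(t)}{dt} = \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t),
      \qquad
      \rho(T) = \rho_0 - \alpha \left[ T(t) - T_0 \right],

    where n is the neutron density, C_i the delayed neutron precursor concentrations, β the total delayed neutron fraction, Λ the neutron generation time, and α the temperature feedback coefficient of reactivity.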

  2. Analysis of the temperature effect on the water retention capacity of soil using a thermodynamic approach

    International Nuclear Information System (INIS)

    Jacinto, A.C.; Ledesma, A.; Villar, M.V.

    2012-01-01

    with temperature were due entirely to changes in the interfacial tension of liquid water against its vapour. However, this correction has consistently failed to account for measured temperature-induced changes in the water retention capacity. From the thermodynamic approach a coefficient that quantifies the influence of temperature on the water retention capacity can be deduced. In this way it is possible to explain the differences between the results deduced from experimental data and those obtained when the capillary principle is applied to evaluate the influence of temperature on the water retention capacity of soils. Additionally, the analysis of the effect of temperature on the retention capacity when thermodynamic concepts are used is formally equivalent to that deduced from the capillary model. Therefore, it is possible to include this effect in the SWRC in the same way as was done traditionally using the capillary model. Expansive clays are being considered in the design of engineered barriers. In this work, experimental data on the water retention capacity obtained on compacted samples of MX-80 and FEBEX bentonite at different temperatures and densities were analysed using an approach based on the thermodynamics of adsorption. By using these concepts it is possible to explain that mechanisms other than the change in surface tension affect the retention capacity of water in soils as temperature increases. Also, the incorporation of the results previously obtained into a specific law used to define the SWRC is discussed in the paper. (authors)

  3. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage, resistance and temperature is examined. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  4. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  5. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  6. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  7. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  8. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions: Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially... Purpose: We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer 1985; Larsen 2001). We apply the method to biological shapes as well as reflectance spectra. Methods: MAF seeks linear combinations of the original variables that maximize autocorrelation between...

  9. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  10. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  11. Solvation behaviour of L-leucine in aqueous ionic liquid at different temperatures: Volumetric approach

    Science.gov (United States)

    Sharma, Samriti; Sandarve, Sharma, Amit K.; Sharma, Meena

    2018-05-01

    For the investigation of the interactions of L-leucine in aqueous solutions of an ionic liquid (1-butyl-3-methylimidazolium tetrafluoroborate, [Bmim][BF4]) at atmospheric pressure over the temperature range 293.15 K to 313.16 K, we use the volumetric approach. From the density data we have calculated the apparent molar volume, VΦ, the limiting apparent molar volume, V0Φ, the slope, Sv, and the partial molar volume of transfer, V0Φ,tr. The values of these parameters have been used for the interpretation of different interactions, such as hydrophilic-hydrophilic, hydrophilic-hydrophobic, ion-hydrophilic, solute-solvent and solute-solute interactions, in the amino acid and ionic liquid solutions.
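
    For completeness, the apparent molar volume referred to above is conventionally obtained from the measured densities by the standard relation below, with m the molality of the solute, M its molar mass, and ρ, ρ0 the densities of the solution and the solvent (in g/cm3); the limiting value V0Φ and the slope Sv then follow from extrapolating VΦ to infinite dilution. This is the textbook definition, not a formula reproduced from the paper.

      V_{\Phi} = \frac{M}{\rho} - \frac{1000\,(\rho - \rho_{0})}{m\,\rho\,\rho_{0}}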

  12. GPU-based local interaction simulation approach for simplified temperature effect modelling in Lamb wave propagation used for damage detection

    International Nuclear Information System (INIS)

    Kijanka, P; Radecki, R; Packo, P; Staszewski, W J; Uhl, T

    2013-01-01

    Temperature has a significant effect on Lamb wave propagation. It is important to compensate for this effect when the method is considered for structural damage detection. The paper explores a newly proposed, very efficient numerical simulation tool for Lamb wave propagation modelling in aluminum plates exposed to temperature changes. A local interaction approach implemented with a parallel computing architecture and graphics cards is used for these numerical simulations. The numerical results are compared with the experimental data. The results demonstrate that the proposed approach could be used efficiently to produce a large database required for the development of various temperature compensation procedures in structural health monitoring applications. (paper)

  13. Hybrid Vibration Control under Broadband Excitation and Variable Temperature Using Viscoelastic Neutralizer and Adaptive Feedforward Approach

    Directory of Open Access Journals (Sweden)

    João C. O. Marra

    2016-01-01

    Vibratory phenomena have always surrounded human life. The need for more knowledge of and command over such phenomena keeps increasing, especially in modern society, where human-machine integration becomes closer day after day. In that context, this work deals with the development and practical implementation of a hybrid (passive-active/adaptive) vibration control system on a metallic beam excited by a broadband signal and under variable temperature, between 5 and 35°C. Since temperature variations directly and considerably affect the performance of the passive control system, composed of a viscoelastic dynamic vibration neutralizer (also called a viscoelastic dynamic vibration absorber), the strategy of using an active-adaptive vibration control system (based on a feedforward approach with the FXLMS algorithm) working together with the passive one has proven to be a good option to compensate for the neutralizer's loss of performance and to generally maintain the overall level of vibration control over the extended temperature range. As an additional gain, the association of both vibration control systems (passive and active-adaptive) has improved the attenuation of vibration levels. Some key steps matured over years of research on this experimental setup are presented in this paper.
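
    Since the active part of the controller above is based on the FXLMS algorithm, a bare-bones single-channel filtered-x LMS loop is sketched below with synthetic signals. The primary path, secondary path, filter length, and step size are arbitrary assumptions (the secondary-path model is taken as exact for simplicity); this is not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n, L, mu = 20000, 32, 2e-3             # samples, adaptive filter length, step size (assumptions)

      x = rng.standard_normal(n)             # broadband reference signal
      p = np.array([0.0, 0.6, -0.3, 0.1])    # hypothetical primary path to the error sensor
      s = np.array([0.5, 0.2])               # hypothetical secondary path; its model assumed exact

      w = np.zeros(L)                        # adaptive FIR controller weights
      xb = np.zeros(L)                       # buffer of recent reference samples
      fxb = np.zeros(L)                      # buffer of recent filtered-reference samples
      yb = np.zeros(s.size)                  # buffer of recent controller outputs
      err = np.zeros(n)

      for k in range(n):
          xb = np.roll(xb, 1); xb[0] = x[k]
          y = w @ xb                         # anti-vibration drive signal
          yb = np.roll(yb, 1); yb[0] = y
          d = p @ xb[:p.size]                # disturbance reaching the error sensor
          e = d + s @ yb                     # residual vibration (error signal)
          fxb = np.roll(fxb, 1); fxb[0] = s @ xb[:s.size]   # reference filtered by the path model
          w -= mu * e * fxb                  # FXLMS weight update
          err[k] = e

      print("mean squared error, first vs last 1000 samples:",
            round(np.mean(err[:1000] ** 2), 4), round(np.mean(err[-1000:] ** 2), 4))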

  14. Attitude sensor alignment calibration for the solar maximum mission

    Science.gov (United States)

    Pitone, Daniel S.; Shuster, Malcolm D.

    1990-01-01

    An earlier heuristic study of the fine attitude sensors for the Solar Maximum Mission (SMM) revealed a temperature dependence of the alignment about the yaw axis of the pair of fixed-head star trackers relative to the fine pointing Sun sensor. Here, new sensor alignment algorithms which better quantify the dependence of the alignments on the temperature are developed and applied to the SMM data. Comparison with the results from the previous study reveals the limitations of the heuristic approach. In addition, some of the basic assumptions made in the prelaunch analysis of the alignments of the SMM are examined. The results of this work have important consequences for future missions with stringent attitude requirements and where misalignment variations due to variations in the temperature will be significant.

  15. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  16. Comparison of fuzzy logic and neural network in maximum power point tracker for PV systems

    Energy Technology Data Exchange (ETDEWEB)

    Ben Salah, Chokri; Ouali, Mohamed [Research Unit on Intelligent Control, Optimization, Design and Optimization of Complex Systems (ICOS), Department of Electrical Engineering, National School of Engineers of Sfax, BP. W, 3038, Sfax (Tunisia)

    2011-01-15

    This paper proposes two methods of maximum power point tracking, using a fuzzy logic controller and a neural network controller, for photovoltaic systems. The two maximum power point tracking controllers receive solar radiation and photovoltaic cell temperature as inputs and estimate the optimum duty cycle corresponding to maximum power as output. The approach is validated on a 100 Wp PVP (two parallel SM50-H panels) connected to a 24 V dc load. The new method gives good maximum power operation of any photovoltaic array under different conditions, such as changing solar radiation and PV cell temperature. From the simulation and experimental results, the fuzzy logic controller can deliver more power than the neural network controller and more power than other methods in the literature. (author)

  17. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study of the temperature, current, and aging dependencies of the maximum available energy. • Study of how various factors affect the relationship between SOE and SOC. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Maximum available energy is estimated by means of a moving-window energy integral. • The robustness and feasibility of the proposed approaches are systematically evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of the battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn_2O_4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate the battery maximum available energy. Experimental results show that the proposed approaches can estimate the battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operating conditions and cell aging levels is systematically evaluated.
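
    As a minimal sketch of the energy bookkeeping implied above, the loop below integrates instantaneous power over time to update a state-of-energy estimate. The capacity value, sampling interval, and synthetic voltage/current traces are hypothetical, and the real method additionally corrects for the temperature, rate, and aging dependencies analyzed in the paper.

      import numpy as np

      dt = 1.0                               # sampling interval in seconds (assumption)
      e_max_wh = 120.0                       # assumed maximum available energy in Wh
      soe = 0.9                              # initial state of energy (fraction)

      # Hypothetical discharge data: terminal voltage (V) and current (A, positive = discharge)
      voltage = np.linspace(4.1, 3.6, 3600)
      current = np.full(3600, 5.0)

      for v, i in zip(voltage, current):
          energy_wh = v * i * dt / 3600.0    # energy drawn in this step (Wh)
          soe -= energy_wh / e_max_wh        # energy-integral SOE update

      print(f"SOE after one hour of discharge: {soe:.3f}")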

  18. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  19. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  20. Thermoelectric cooler concepts and the limit for maximum cooling

    International Nuclear Information System (INIS)

    Seifert, W; Hinsche, N F; Pluschke, V

    2014-01-01

    The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian et al (2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat; the two approaches are thus based on different principles. In this paper we compare the new concepts to the CPM and we reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)

  1. Tribocorrosion in pressurized high temperature water: a mass flow model based on the third body approach

    Energy Technology Data Exchange (ETDEWEB)

    Guadalupe Maldonado, S.

    2014-07-01

    Pressurized water reactors (PWR) used for power generation are operated at elevated temperatures (280-300 °C) and high pressures (120-150 bar). In addition to these harsh environmental conditions, some components of the PWR assemblies are subject to mechanical loading (sliding, vibration and impacts) leading to undesirable and hardly controllable material degradation phenomena. In such situations wear is determined by the complex interplay (tribocorrosion) between mechanical, material and physical-chemical phenomena. Tribocorrosion in PWR conditions is at present little understood, and models need to be developed in order to predict component lifetime over several decades. The goal of this project, carried out in collaboration with the French company AREVA NP, is to develop a predictive model based on a mechanistic understanding of the tribocorrosion of specific PWR components (stainless steel control assemblies, stellite grippers). The approach taken here is to describe degradation in terms of electrochemical and mechanical material flows (the third body concept of tribology) from the metal into the friction film (i.e. the oxidized film forming during rubbing on the metal surface) and from the friction film into the environment, rather than in terms of simple mass loss. The project involves the establishment of mechanistic models describing the individual flows, based on ad hoc tribocorrosion measurements at low temperature. The overall behaviour at high temperature and pressure is investigated using a dedicated tribometer (Aurore) including electrochemical control of the contact during rubbing. Physical laws describing the individual flows according to defined mechanisms and as a function of defined physical parameters were identified based on the obtained experimental results and on literature data. The physical laws were converted into mass flow rates and solved as a differential equation system by considering the mass balance in compartments

  2. A GM (1, 1) Markov Chain-Based Aeroengine Performance Degradation Forecast Approach Using Exhaust Gas Temperature

    OpenAIRE

    Zhao, Ning-bo; Yang, Jia-long; Li, Shu-ying; Sun, Yue-wu

    2014-01-01

    Performance degradation forecast technology for quantitatively assessing degradation states of aeroengine using exhaust gas temperature is an important technology in the aeroengine health management. In this paper, a GM (1, 1) Markov chain-based approach is introduced to forecast exhaust gas temperature by taking the advantages of GM (1, 1) model in time series and the advantages of Markov chain model in dealing with highly nonlinear and stochastic data caused by uncertain factors. In this ap...

  3. Effects of temperature and irradiance on a benthic microalgal community: A combined two-dimensional oxygen and fluorescence imaging approach

    DEFF Research Database (Denmark)

    Hancke, Kasper; Sorrell, Brian Keith; Lund-Hansen, Lars Chresten

    2014-01-01

    The effects of temperature and light on both oxygen (O2) production and gross photosynthesis were resolved in a benthic microalgae community by combining two-dimensional (2D) imaging of O2 and variable chlorophyll a (Chl a) fluorescence. Images revealed a photosynthetically active community...... microbial community, at different temperatures. The present imaging approach demonstrates a great potential to study consequences of environmental effects on photosynthetic activity and O2 turnover in complex phototrophic benthic communities....

  4. Downscaling Meteosat Land Surface Temperature over a Heterogeneous Landscape Using a Data Assimilation Approach

    Directory of Open Access Journals (Sweden)

    Rihab Mechri

    2016-07-01

    Full Text Available A wide range of environmental applications require the monitoring of land surface temperature (LST) at frequent intervals and fine spatial resolutions, conditions that currently available space sensors do not offer simultaneously. To overcome these shortcomings, LST downscaling methods have been developed to derive higher resolution LST from the available satellite data. This research concerns the application of a data assimilation (DA) downscaling approach, the genetic particle smoother (GPS), to disaggregate Meteosat 8 LST time series (3 km × 5 km) at finer spatial resolutions. The methodology was applied over the Crau-Camargue region in Southeastern France for seven months in 2009. The evaluation of the downscaled LSTs has been performed at a moderate resolution using a set of coincident clear-sky MODIS LST images from the Aqua and Terra platforms (1 km × 1 km) and at a higher resolution using Landsat 7 data (60 m × 60 m). The performance of the downscaling has been assessed in terms of the reduction of the biases and the root mean square errors (RMSE) compared to prior model-simulated LSTs. The results showed that GPS allows downscaling of the Meteosat LST product from 3 × 5 km2 to 1 × 1 km2 scales with an RMSE of less than 2.7 K. Finer scale downscaling at Landsat 7 resolution showed larger errors (RMSE around 5 K), explained by land cover errors and inter-calibration issues between sensors. Further methodology improvements are finally suggested.

  5. A density variational approach to nuclear giant resonances at zero and finite temperature

    International Nuclear Information System (INIS)

    Gleissl, P.; Brack, M.; Quentin, P.; Meyer, J.

    1989-02-01

    We present a density functional approach to the description of nuclear giant resonances (GR), using Skyrme type effective interactions. We exploit hereby the theorems of Thouless and others, relating RPA sum rules to static (constrained) Hartree-Fock expectation values. The latter are calculated both microscopically and, where shell effects are small enough to allow it, semiclassically by a density variational method employing the gradient-expanded density functionals of the extended Thomas-Fermi model. We obtain an excellent overall description of both systematics and detailed isotopic dependence of GR energies, in particular with the Skyrme force SkM. For the breathing modes (isoscalar and isovector giant monopole modes), and to some extent also for the isovector dipole mode, the A-dependence of the experimental peak energies is better described by coupling two different modes (corresponding to two different excitation operators) of the same spin and parity and evaluating the eigenmodes of the coupled system. Our calculations are also extended to highly excited nuclei (without angular momentum) and the temperature dependence of the various GR energies is discussed

  6. Prediction of hydrate formation temperature by both statistical models and artificial neural network approaches

    International Nuclear Information System (INIS)

    Zahedi, Gholamreza; Karami, Zohre; Yaghoobi, Hamed

    2009-01-01

    In this study, various estimation methods for hydrate formation temperature (HFT) have been reviewed and two procedures have been presented. In the first method, two general correlations have been proposed for HFT; one correlation has 11 parameters and the second has 18 parameters. In order to obtain the constants in the proposed equations, 203 experimental data points have been collected from the literature. The Engineering Equation Solver (EES) and Statistical Package for the Social Sciences (SPSS) software packages have been employed for statistical analysis of the data. The accuracy of the obtained correlations has been demonstrated by comparison with experimental data and with some recent, commonly used correlations. In the second method, HFT is estimated by an artificial neural network (ANN) approach. In this case, various architectures have been checked using 70% of the experimental data for training of the ANN. Among the various architectures, a multilayer perceptron (MLP) network with the trainlm training algorithm was found to be the best. Comparing the obtained ANN model results with the 30% of unseen data confirms the ANN's excellent estimation performance. It was found that the ANN is more accurate than traditional methods and even than the two proposed correlations for HFT estimation.
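
    A minimal sketch of the ANN route is given below, using scikit-learn's MLPRegressor as a stand-in for the MATLAB MLP with the trainlm algorithm used in the paper; the feature columns and the data are synthetic placeholders.

```python
# Sketch of the ANN route: train a multilayer perceptron on 70% of the data and
# evaluate on the remaining 30%. The paper used a MATLAB MLP with the trainlm
# (Levenberg-Marquardt) algorithm; scikit-learn's MLPRegressor is only a
# stand-in here, and the feature columns and data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform([0.55, 1.0], [1.0, 40.0], size=(203, 2))  # gas gravity, pressure (MPa)
y = 273.0 + 8.0 * np.log(X[:, 1]) + 10.0 * X[:, 0]        # synthetic HFT (K)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on the unseen 30%:", model.score(X_te, y_te))
```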

  7. High-Temperature Phase Equilibria of Duplex Stainless Steels Assessed with a Novel In-Situ Neutron Scattering Approach

    Science.gov (United States)

    Pettersson, Niklas; Wessman, Sten; Hertzman, Staffan; Studer, Andrew

    2017-04-01

    Duplex stainless steels are designed to solidify with ferrite as the parent phase, with subsequent austenite formation occurring in the solid state, implying that, thermodynamically, a fully ferritic range should exist at high temperatures. However, computational thermodynamic tools appear currently to overestimate the austenite stability of these systems, and contradictory data exist in the literature. In the present work, the high-temperature phase equilibria of four commercial duplex stainless steel grades, denoted 2304, 2101, 2507, and 3207, with varying alloying levels were assessed by measurements of the austenite-to-ferrite transformation at temperatures approaching 1673 K (1400 °C) using a novel in-situ neutron scattering approach. All grades became fully ferritic at some point during progressive heating. Higher austenite dissolution temperatures were measured for the higher alloyed grades, and for 3207, the temperature range for a single-phase ferritic structure approached zero. The influence of temperatures in the region of austenite dissolution was further evaluated by microstructural characterization using electron backscattered diffraction of isothermally heat-treated and quenched samples. The new experimental data are compared to thermodynamic calculations, and the precision of databases is discussed.

  8. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
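
    The contrast between maximum-likelihood and finite-temperature maximum-entropy decoding can be illustrated on a tiny Ising instance by brute-force enumeration (no annealer involved); the couplings and field below are random placeholders, not a real decoding problem.

```python
# Brute-force illustration (no annealer) of the decoding contrast: maximum
# likelihood takes the ground state of the Ising cost function, while
# maximum-entropy decoding averages each spin over the Boltzmann distribution
# at temperature T and takes its sign. Couplings and field are random placeholders.
import itertools
import numpy as np

def ising_energy(s, J, h):
    return -0.5 * s @ J @ s - h @ s

def decode(J, h, T):
    states = np.array(list(itertools.product([-1, 1], repeat=len(h))))
    E = np.array([ising_energy(s, J, h) for s in states])
    ml = states[np.argmin(E)]                  # maximum likelihood: ground state
    w = np.exp(-(E - E.min()) / T)
    w /= w.sum()
    me = np.sign(w @ states)                   # maximum entropy: thermal averages
    return ml, me

rng = np.random.default_rng(3)
n = 6
J = rng.normal(size=(n, n)); J = 0.5 * (J + J.T); np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.3, size=n)              # noisy "field" carrying the bits
ml_bits, me_bits = decode(J, h, T=1.0)
print("ML decode    :", ml_bits)
print("MaxEnt decode:", me_bits)
```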

  9. Extremely low temperature behaviour of the thermodynamical properties of gaseous UF6 under an exact quantum approach

    International Nuclear Information System (INIS)

    Amarante, J.A.A. do.

    1979-10-01

    The thermodynamic functions of molecules of type XF6 are calculated under an exact quantum-mechanical approach, which also yields general expressions valid for other types of molecules. The formalism is used to analyse the behavior of gaseous UF6 at very low temperatures (around and below 1 K), where symmetry effects due to Pauli principle lead to results which are very markedly different from those obtained with the semi-classical approximation. It is shown that this approximation becomes sufficiently accurate only for temperatures about ten times the rotational temperature. (Author) [pt

  10. Soft x-ray continuum radiation transmitted through metallic filters: An analytical approach to fast electron temperature measurements

    International Nuclear Information System (INIS)

    Delgado-Aparicio, L.; Hill, K.; Bitter, M.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.

    2010-01-01

    A new set of analytic formulas describes the transmission of soft x-ray continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures in contrast with the solutions obtained when using a transmission approximated by a single-Heaviside function [S. von Goeler et al., Rev. Sci. Instrum. 70, 599 (1999)]. The new analytic formulas can improve the interpretation of the experimental results and thus contribute in obtaining fast temperature measurements in between intermittent Thomson scattering data.

  11. Mapping air temperature using time series analysis of LST : The SINTESI approach

    NARCIS (Netherlands)

    Alfieri, S.M.; De Lorenzi, F.; Menenti, M.

    2013-01-01

    This paper presents a new procedure to map time series of air temperature (Ta) at fine spatial resolution using time series analysis of satellite-derived land surface temperature (LST) observations. The method assumes that air temperature is known at a single (reference) location such as in gridded

  12. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to this problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying variational calculus, i.e. the Pontryagin maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it amenable to the maximum principle. The optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  13. Biochemical, physiological and molecular responses of Ricinus communis seeds and seedlings to different temperatures: a multi-omics approach

    NARCIS (Netherlands)

    Ribeiro de Jesus, P.R.

    2015-01-01

    Biochemical, physiological and molecular responses of Ricinus communis seeds and seedlings to different temperatures: a multi-omics approach

    by Paulo Roberto Ribeiro de Jesus

    The main objective of this thesis was to provide a detailed

  14. Implementation of nondestructive testing and mechanical performance approaches to assess low temperature fracture properties of asphalt binders

    Directory of Open Access Journals (Sweden)

    Salman Hakimzadeh

    2017-05-01

    Full Text Available In the present work, three different asphalt binders were studied to assess their fracture behavior at low temperatures. Fracture properties of the asphalt materials were obtained by conducting compact tension [C(T)] and indirect tensile [ID(T)] strength tests. The mechanical fracture tests were followed by acoustic emission testing to determine the "embrittlement temperature" of the binders, which was used in evaluating thermally induced microdamage in the binders. Results showed that both the nondestructive and the mechanical testing approaches could successfully capture the low-temperature cracking behavior of asphalt materials. It was also observed that using GTR as the binder modifier significantly improved the thermal cracking resistance of the PG64-22 binder. The overall trends of the AE test results were consistent with those of the mechanical tests. Keywords: Thermal cracking, Indirect tensile strength test, Compact tension test, Nondestructive approach, Acoustic emission test, Embrittlement temperature

  15. Temperature impact on yeast metabolism : Insights from experimental and modeling approaches

    NARCIS (Netherlands)

    Braga da Cruz, A.L.

    2013-01-01

    Temperature is an environmental parameter that greatly affects the growth of microorganisms, due to its impact on the activity of all enzymes in the network. This is particularly relevant in habitats where there are large temperature changes, either daily or seasonal. Understanding how organisms

  16. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar; Mariethoz, Gregoire; Evans, Jason; McCabe, Matthew; Sharma, Ashish

    2015-01-01

    precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty year period ranging from 1985

  17. Thermodynamic approach to the synthesis of silicon carbide using tetramethylsilane as the precursor at high temperature

    Science.gov (United States)

    Jeong, Seong-Min; Kim, Kyung-Hun; Yoon, Young Joon; Lee, Myung-Hyun; Seo, Won-Seon

    2012-10-01

    Tetramethylsilane (TMS) is commonly used as a precursor in the production of SiC(β) films at relatively low temperatures. However, because TMS contains much more C than Si, it is difficult to produce solid phase SiC at high temperatures. In an attempt to develop a more efficient TMS-based SiC(α) process, computational thermodynamic simulations were performed under various temperatures, working pressures and TMS/H2 ratios. The findings indicate that each solid phase has a different dependency on the H2 concentration. Consequently, a high H2 concentration results in the formation of a single, solid phase SiC region at high temperatures. Finally, TMS appears to be useful as a precursor for the high temperature production of SiC(α).

  18. Delayed Hydride Cracking in Zr-2.5Nb Tubes with the Direction of An Approach to Temperature

    International Nuclear Information System (INIS)

    Kim, Young Suk; Im, Kyung Soo; Kim, Kang Soo; Ahn, Sang Bok; Cheong, Yong Moo

    2006-01-01

    One of the unique features of delayed hydride cracking (DHC) of zirconium alloys is that the DHC velocity (DHCV) strongly depends on the path taken to the test temperature. Ambler reported that the DHCV of Zr-2.5Nb tubes at temperatures above 180 °C depended upon the direction of approach to the test temperature, and reported the presence of a DHC arrest temperature (TDAT) above which the DHCV decreased when the test temperature was approached by heating. Ambler proposed a hydrogen transfer from the bulk to the crack tip, assuming that the hydrides formed at the crack tip and in the bulk region are fully constrained and partially constrained, respectively. In other words, the terminal solid solubility (TSS) of hydrogen would be governed by the elastic strain energy induced by the precipitating hydrides, leading to a higher TSS in the bulk region than at the crack tip. His assumption that the hydrogen concentration is higher in the bulk region than at the crack tip, due to a higher TSS in the bulk region, is in a way similar to Kim's DHC model. Even though Ambler assumed a different strain energy of the matrix hydrides depending on the direction of approach to the test temperature, the peak temperature, the hydrogen concentration and the hydride phase, a feasible rationale for this assumption is yet to be given. In this study, the path dependence of the DHC velocity of Zr-2.5Nb tubes will be investigated using Kim's DHC model, where the driving force for DHC is the supersaturated hydrogen concentration between the crack tip and the bulk region. To this end, furnace-cooled and water-quenched Zr-2.5Nb specimens were subjected to DHC tests at test temperatures approached either by heating or by cooling. Kim's DHC model predicts that the water-quenched Zr-2.5Nb will show DHC crack growth even at temperatures above 180 °C where the furnace-cooled Zr-2.5Nb will not. This experiment will provide

  19. Delayed Hydride Cracking in Zr-2.5Nb Tubes with the Direction of An Approach to Temperature

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Suk; Im, Kyung Soo; Kim, Kang Soo; Ahn, Sang Bok; Cheong, Yong Moo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    One of the unique features of delayed hydride cracking (DHC) of zirconium alloys is that the DHC velocity (DHCV) strongly depends on the path taken to the test temperature. Ambler reported that the DHCV of Zr-2.5Nb tubes at temperatures above 180 °C depended upon the direction of approach to the test temperature, and reported the presence of a DHC arrest temperature (TDAT) above which the DHCV decreased when the test temperature was approached by heating. Ambler proposed a hydrogen transfer from the bulk to the crack tip, assuming that the hydrides formed at the crack tip and in the bulk region are fully constrained and partially constrained, respectively. In other words, the terminal solid solubility (TSS) of hydrogen would be governed by the elastic strain energy induced by the precipitating hydrides, leading to a higher TSS in the bulk region than at the crack tip. His assumption that the hydrogen concentration is higher in the bulk region than at the crack tip, due to a higher TSS in the bulk region, is in a way similar to Kim's DHC model. Even though Ambler assumed a different strain energy of the matrix hydrides depending on the direction of approach to the test temperature, the peak temperature, the hydrogen concentration and the hydride phase, a feasible rationale for this assumption is yet to be given. In this study, the path dependence of the DHC velocity of Zr-2.5Nb tubes will be investigated using Kim's DHC model, where the driving force for DHC is the supersaturated hydrogen concentration between the crack tip and the bulk region. To this end, furnace-cooled and water-quenched Zr-2.5Nb specimens were subjected to DHC tests at test temperatures approached either by heating or by cooling. Kim's DHC model predicts that the water-quenched Zr-2.5Nb will show DHC crack growth even at temperatures above 180 °C where the furnace-cooled Zr-2.5Nb will not. This experiment

  20. Predicting temperature and moisture distributions in conditioned spaces using the zonal approach

    Energy Technology Data Exchange (ETDEWEB)

    Mendonca, K.C. [Parana Pontifical Catholic Univ., Curitiba (Brazil); Wurtz, E.; Inard, C. [La Rochelle Univ., La Rochelle, Cedex (France). LEPTAB

    2005-07-01

    Moisture interacts with building elements in a number of different ways that impact upon building performance, causing deterioration of building materials as well as contributing to poor indoor air quality. In humid climates, moisture represents one of the major loads in conditioned spaces, so it is important to understand and model moisture transport accurately. This paper discussed an intermediate zonal approach to building a model library in order to predict the whole hygrothermal behaviour of conditioned rooms. The zonal library included two models in order to account for building envelope moisture buffering effects and for the dynamic behaviour of jet airflow in the zonal method. The zonal library was then applied to a case study to show the impact of external humidity on the whole hygrothermal performance of a room equipped with a vertical fan-coil unit. The proposed theory was structured into 3 groups representing 3 building domains: indoor air; envelope; and heating, ventilation and air conditioning (HVAC) systems. The indoor air sub-model related to the indoor air space, where airflow speed was considered to be low. The envelope sub-model related to the radiation exchanges between the envelope and its environment as well as to the heat and mass transfers through the envelope material. The HVAC system sub-model referred to the whole system, including equipment, control and the specific airflow from the equipment. All the models were coupled in SPARK, where the resulting set of non-linear equations was solved simultaneously. A case study of a large office conditioned by a vertical fan-coil unit with a rectangular air supply diffuser was presented. Details of the building's external and internal environment were provided, as well as convective heat and mass transfer coefficients and temperature distributions versus time. Results of the study indicated that understanding building material moisture buffering effects is as important as

  1. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  2. The effects of temperature on service employees' customer orientation: an experimental approach.

    Science.gov (United States)

    Kolb, Peter; Gockel, Christine; Werth, Lioba

    2012-01-01

    Numerous studies have demonstrated how temperature can affect perceptual, cognitive and psychomotor performance (e.g. Hancock, P.A., Ross, J., and Szalma, J., 2007. A meta-analysis of performance response under thermal stressors. Human Factors: The Journal of the Human Factors and Ergonomics Society, 49 (5), 851-877). We extend this research to interpersonal aspects of performance, namely service employees' and salespeople's customer orientation. We combine ergonomics with recent research on social cognition linking physical with interpersonal warmth/coldness. In Experiment 1, a scenario study in the lab, we demonstrate that student participants in rooms with a low temperature showed more customer-oriented behaviour and gave higher customer discounts than participants in rooms with a high temperature - even in zones of thermal comfort. In Experiment 2, we show the existence of alternative possibilities to evoke positive temperature effects on customer orientation in a sample of 126 service and sales employees using a semantic priming procedure. Overall, our results confirm the existence of temperature effects on customer orientation. Furthermore, important implications for services, retail and other settings of interpersonal interactions are discussed. Practitioner Summary: Temperature effects on performance have emerged as a vital research topic. Owing to services' increasing economic importance, we transferred this research to the construct of customer orientation, focusing on performance in service and retail settings. The demonstrated temperature effects are transferable to services, retail and other settings of interpersonal interactions.

  3. Determination of the glass-transition temperature of proteins from a viscometric approach.

    Science.gov (United States)

    Monkos, Karol

    2015-03-01

    All fully hydrated proteins undergo a distinct change in their dynamical properties at the glass-transition temperature Tg. To determine this temperature indirectly for dry albumins, viscosity measurements of aqueous solutions of human, equine, ovine, porcine and rabbit serum albumin have been conducted over a wide range of concentrations and at temperatures ranging from 278 K to 318 K. The viscosity-temperature dependence of the solutions is discussed on the basis of a three-parameter equation resulting from Avramov's model. One of the parameters in Avramov's equation is the glass-transition temperature. For all studied albumins, the Tg of a solution monotonically increases with increasing concentration. The glass-transition temperature of a solution depends both on the Tg of the dissolved dry protein, Tg,p, and on that of water, Tg,w. To obtain Tg,p for each studied albumin, the modified Gordon-Taylor equation was applied; this equation describes the dependence of the Tg of a solution on concentration, with Tg,p and a parameter depending on the strength of the protein-solvent interaction as the fitting parameters. The glass-transition temperatures thus determined for the studied dry albumins are in the range 215.4-245.5 K. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A GM (1, 1) Markov Chain-Based Aeroengine Performance Degradation Forecast Approach Using Exhaust Gas Temperature

    Directory of Open Access Journals (Sweden)

    Ning-bo Zhao

    2014-01-01

    Full Text Available Performance degradation forecasting for quantitatively assessing the degradation state of an aeroengine using exhaust gas temperature is an important technology in aeroengine health management. In this paper, a GM (1, 1) Markov chain-based approach is introduced to forecast exhaust gas temperature, combining the strength of the GM (1, 1) model in time series forecasting with the strength of the Markov chain model in dealing with highly nonlinear and stochastic data caused by uncertain factors. In this approach, the GM (1, 1) model is first used to forecast the trend from limited data samples. The Markov chain model is then integrated into the GM (1, 1) model in order to enhance the forecast performance, reducing the influence of random fluctuations on forecasting accuracy and yielding an accurate nonlinear forecast. As an example, historical exhaust gas temperature monitoring data from a China Southern CFM56 aeroengine are used to verify the forecast performance of the GM (1, 1) Markov chain model. The results show that the model forecasts exhaust gas temperature accurately and effectively reflects the random fluctuation characteristics of exhaust gas temperature over time.
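
    The GM (1, 1) trend-forecast step is a standard, published algorithm and can be sketched as follows; the Markov-chain correction of the residuals and the CFM56 monitoring data are not reproduced here, and the example series is synthetic.

```python
# Sketch of the GM(1,1) trend-forecast step only; the Markov-chain residual
# correction described in the abstract is omitted, and the exhaust-gas-
# temperature series below is synthetic, not the CFM56 monitoring data.
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # developing/grey parameters
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # whitened-equation solution
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[len(x0):]                             # forecast beyond the data

egt = [612.0, 615.5, 618.2, 621.0, 624.4, 627.1]        # synthetic EGT values (degC)
print(gm11_forecast(egt, n_ahead=3))
```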

  5. Physiological and biochemical responses of Ricinus communis seedlings to different temperatures: a metabolomics approach.

    Science.gov (United States)

    Ribeiro, Paulo Roberto; Fernandez, Luzimar Gonzaga; de Castro, Renato Delmondez; Ligterink, Wilco; Hilhorst, Henk W M

    2014-08-12

    Compared with major crops, growth and development of Ricinus communis is still poorly understood. A better understanding of the biochemical and physiological aspects of germination and seedling growth is crucial for the breeding of high yielding varieties adapted to various growing environments. In this context, we analysed the effect of temperature on growth of young R. communis seedlings and we measured primary and secondary metabolites in roots and cotyledons. Three genotypes, recommended to small family farms as cash crop, were used in this study. Seedling biomass was strongly affected by the temperature, with the lowest total biomass observed at 20°C. The response in terms of biomass production for the genotype MPA11 was clearly different from the other two genotypes: genotype MPA11 produced heavier seedlings at all temperatures but the root biomass of this genotype decreased with increasing temperature, reaching the lowest value at 35°C. In contrast, root biomass of genotypes MPB01 and IAC80 was not affected by temperature, suggesting that the roots of these genotypes are less sensitive to changes in temperature. In addition, an increasing temperature decreased the root to shoot ratio, which suggests that biomass allocation between below- and above ground parts of the plants was strongly affected by the temperature. Carbohydrate contents were reduced in response to increasing temperature in both roots and cotyledons, whereas amino acids accumulated to higher contents. Our results show that a specific balance between amino acids, carbohydrates and organic acids in the cotyledons and roots seems to be an important trait for faster and more efficient growth of genotype MPA11. An increase in temperature triggers the mobilization of carbohydrates to support the preferred growth of the aerial parts, at the expense of the roots. A shift in the carbon-nitrogen metabolism towards the accumulation of nitrogen-containing compounds seems to be the main biochemical

  6. The non-linear link between electricity consumption and temperature in Europe: A threshold panel approach

    Energy Technology Data Exchange (ETDEWEB)

    Bessec, Marie [CGEMP, Universite Paris-Dauphine, Place du Marechal de Lattre de Tassigny Paris (France); Fouquau, Julien [LEO, Universite d' Orleans, Faculte de Droit, d' Economie et de Gestion, Rue de Blois, BP 6739, 45067 Orleans Cedex 2 (France)

    2008-09-15

    This paper investigates the relationship between electricity demand and temperature in the European Union. We address this issue by means of a panel threshold regression model on 15 European countries over the last two decades. Our results confirm the non-linearity of the link between electricity consumption and temperature found in more limited geographical areas in previous studies. By distinguishing between North and South countries, we also find that this non-linear pattern is more pronounced in the warm countries. Finally, rolling regressions show that the sensitivity of electricity consumption to temperature in summer has increased in the recent period. (author)

  7. A Mechanistic Design Approach for Graphite Nanoplatelet (GNP) Reinforced Asphalt Mixtures for Low-Temperature Applications

    Science.gov (United States)

    2018-01-01

    This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...

  8. A new temperature and humidity dependent surface site density approach for deposition ice nucleation

    OpenAIRE

    I. Steinke; C. Hoose; O. Möhler; P. Connolly; T. Leisner

    2014-01-01

    Deposition nucleation experiments with Arizona Test Dust (ATD) as a surrogate for mineral dusts were conducted at the AIDA cloud chamber at temperatures between 220 and 250 K. The influence of the aerosol size distribution and the cooling rate on the ice nucleation efficiencies was investigated. Ice nucleation active surface site (INAS) densities were calculated to quantify the ice nucleation efficiency as a function of temperature, humidity and the aerosol ...

  9. Measurement of high-temperature spectral emissivity using integral blackbody approach

    Science.gov (United States)

    Pan, Yijie; Dong, Wei; Lin, Hong; Yuan, Zundong; Bloembergen, Pieter

    2016-11-01

    Spectral emissivity is one of the most critical thermophysical properties of a material for thermal design and analysis. In traditional radiation thermometry in particular, the normal spectral emissivity is very important. We developed a prototype instrument based upon an integral blackbody method to measure a material's spectral emissivity at elevated temperatures. An optimized commercial variable-high-temperature blackbody, a high-speed linear actuator, a linear pyrometer and an in-house designed synchronization circuit were used to implement the system. A sample was placed in a crucible at the bottom of the blackbody furnace, so that the sample and the tube formed a simulated reference blackbody with an effective total emissivity greater than 0.985. During the measurement, a pneumatic cylinder pushed a graphite rod, and then the sample crucible, to the cold opening within hundreds of microseconds. The linear pyrometer monitored the brightness temperature of the sample surface, and the corresponding opto-converted voltage was recorded by a digital multimeter. To evaluate the temperature drop of the sample during the pushing process, a physical model was proposed: the tube was discretized into several isothermal cylindrical rings, the temperature of each ring was measured, and view factors between the sample and the rings were utilized. The actual surface temperature of the sample at the end opening was thereby obtained. Using the measured voltage signal and the calculated actual temperature, the normal spectral emissivity at that temperature was obtained. A graphite sample at 1300 °C was measured to demonstrate the validity of the method.
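
    The radiometric step behind such a measurement can be sketched with the standard Planck-radiance ratio: the normal spectral emissivity follows from the radiances at the measured brightness temperature and at the actual surface temperature. The wavelength and temperatures below are illustrative assumptions, not values taken from the instrument.

```python
# The standard radiometric relation behind the data reduction: the normal
# spectral emissivity is the ratio of Planck radiances at the pyrometer's
# brightness temperature and at the sample's actual surface temperature.
# Wavelength and temperatures are illustrative, not the instrument's values.
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def spectral_emissivity(lam, t_brightness, t_surface):
    """epsilon(lam) = L(lam, T_brightness) / L(lam, T_surface)."""
    return planck_radiance(lam, t_brightness) / planck_radiance(lam, t_surface)

lam = 0.9e-6   # assumed pyrometer working wavelength, m
print(spectral_emissivity(lam, t_brightness=1503.0, t_surface=1573.0))
```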

  10. A new approach to measure the temperature in rapid thermal processing

    Science.gov (United States)

    Yan, Jiang

    This dissertation presents research on a new method to measure the temperature of a silicon wafer, intended mainly for rapid thermal processing (RTP) systems. RTP is a promising technology in semiconductor manufacturing, especially for devices with minimum feature sizes below 0.5 μm. Accurate measurement of the wafer temperature is the key factor in applying RTP to more critical processes in manufacturing. The two methods most commonly used today, thermocouples and pyrometry, both have limitations when applied to RTP, which motivates the study of a new, acoustic-wave-based method of temperature measurement. A test system was designed and built to study the acoustic method; it mainly comprises the transducer unit, circuit hardware, control software, the computer and the chamber. The acoustic wave is generated by a PZT-5H transducer and travels through a quartz rod into the silicon wafer. After traveling a certain distance in the wafer, the acoustic wave is received by other transducers. By measuring the travel time and knowing the travel distance, the velocity of the acoustic wave traveling in the silicon wafer can be calculated. Because the acoustic velocity in silicon decreases as the wafer temperature increases, the temperature of the wafer can then be obtained. Thermocouples were used to check the measurement accuracy of the acoustic method. A temperature map across an 8″ silicon wafer was obtained with a four-transducer sensor unit, and wafer temperatures were measured with the acoustic method under both static and dynamic conditions. The main purpose of the tests was to determine the measurement accuracy of the new method. The goal of the research work regarding the accuracy of the acoustic method is
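
    The core conversion described above can be sketched as follows: travel time over a known path gives the acoustic velocity, and a calibrated velocity-temperature relation then gives the wafer temperature. The linear calibration form and its coefficients are hypothetical placeholders, not values from the dissertation.

```python
# Sketch of the conversion: travel time over a known path gives the acoustic
# velocity, and a calibrated velocity-temperature relation gives the wafer
# temperature. The linear form and the coefficients below are hypothetical
# placeholders, not the calibration reported in the dissertation.
def wafer_temperature(path_length_m, travel_time_s,
                      v_ref_m_s=8400.0, t_ref_c=25.0, dv_dt=-0.5):
    """Invert v(T) = v_ref + dv_dt * (T - t_ref) for the wafer temperature (degC)."""
    velocity = path_length_m / travel_time_s
    return t_ref_c + (velocity - v_ref_m_s) / dv_dt

# Example: 100 mm path, 12.3 us travel time -> velocity ~8130 m/s -> ~565 degC.
print(wafer_temperature(0.100, 12.3e-6))
```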

  11. Low-temperature approach to the renormalization-group study of critical phenomena

    International Nuclear Information System (INIS)

    Suranyi, P.

    1977-01-01

    A new method of exploring the contents of the renormalization-group equations for discrete spins is introduced. The equations are expanded in low-temperature series and the truncated series are used to obtain the critical exponents and critical temperature of a system. The method is demonstrated on the planar triangular Ising lattice and the critical parameters are found to be within a few percent of the exactly known values in third nonvanishing order of approximation

  12. A Pedestrian Approach to Indoor Temperature Distribution Prediction of a Passive Solar Energy Efficient House

    Directory of Open Access Journals (Sweden)

    Golden Makaka

    2015-01-01

    Full Text Available With the increasing energy consumed by buildings to keep the indoor environment within comfort levels, and with ever increasing energy prices, there is a need to design buildings that require minimal energy to keep the indoor environment comfortable, and to predict the indoor temperature at the design stage. In this paper a statistical indoor temperature prediction model was developed. A passive solar house was constructed and its thermal behaviour was simulated using the ECOTECT and DOE computer software. The thermal behaviour of the house was then monitored for a year. The indoor temperature was observed to be within the comfort level for 85% of the total time monitored. The simulation results were compared with the measured results and with those from the prediction model. The statistical prediction model was found to agree (95%) with the measured results, and the simulation results were observed to agree (96%) with the statistical prediction model. The modelled indoor temperature was most sensitive to variations in the outdoor temperature, and the daily mean peaks were found to be more pronounced in summer (5%) than in winter (4%). The developed model can be used to predict the instantaneous indoor temperature for a specific house design.
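
    A statistical prediction of this kind can be sketched as a simple regression of indoor on outdoor temperature, the variable reported as most influential; the single-predictor form and the synthetic data are assumptions, since the thesis model may include further predictors.

```python
# Sketch of a statistical indoor-temperature model: a linear regression of
# indoor on outdoor temperature, the variable reported as most influential.
# The single-predictor form and the data are illustrative assumptions; the
# actual model may include further predictors (e.g. solar radiation).
import numpy as np

rng = np.random.default_rng(0)
t_out = rng.uniform(5.0, 32.0, 200)                       # outdoor temperature, degC
t_in = 18.0 + 0.35 * t_out + rng.normal(0.0, 0.8, 200)    # synthetic indoor readings

slope, intercept = np.polyfit(t_out, t_in, 1)             # fit T_in = a*T_out + b

def predict_indoor(t):
    return slope * t + intercept

print(f"T_in ~ {slope:.2f} * T_out + {intercept:.1f}")
print("Predicted indoor temperature at 28 degC outdoors:",
      round(predict_indoor(28.0), 1))
```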

  13. A serviceability approach for carbon steel piping to intermittent high temperatures

    International Nuclear Information System (INIS)

    Ratiu, M.D.; Moisidis, N.T.

    1996-01-01

    Carbon steel piping (e.g., ASME SA-106, SA-53) is installed in many industrial applications (e.g., diesel generators at NPPs) where the internal gas flow subjects the piping to successive short-time exposures to elevated temperatures of up to 1,100 °F. A typical design of this piping without consideration of creep-fatigue cumulative damage is at least incomplete, if not inappropriate. A design for creep-fatigue, as usually employed for long-term exposure to elevated temperatures, would however be too conservative and would impose replacement of the carbon steel piping with heat-resistant CrMo steel piping. The existing ASME Standard procedures do not explicitly provide acceptance criteria for design qualification to withstand these intermittent exposures to elevated temperatures. The serviceability qualification proposed here is based on the evaluation of the equivalent full temperature cycles expected to be experienced by the exhaust piping during the design operating life of the diesel engine. The proposed serviceability analysis consists of: (a) determination of the permissible stress at elevated temperatures, and (b) estimation of creep-fatigue damage for the total expected cycles of elevated temperature exposure, following the procedure provided in ASME Code Cases N-253-6 and N-47-28

  14. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  15. Prediction of Flow and Temperature Distributions in a High Flux Research Reactor Using the Porous Media Approach

    Directory of Open Access Journals (Sweden)

    Shanfang Huang

    2017-01-01

    Full Text Available High thermal neutron fluxes are needed in some research reactors and for irradiation tests of materials. A High Flux Research Reactor (HFRR) with an inverse flux trap-converter target structure is being developed by the Reactor Engineering Analysis Lab (REAL) at Tsinghua University. This paper studies the safety of the HFRR core by full core flow and temperature calculations using the porous media approach. The thermal nonequilibrium model is used in the porous media energy equation to calculate coolant and fuel assembly temperatures separately. The calculation results show that the coolant temperature keeps increasing along the flow direction, while the fuel temperature increases first and decreases afterwards. As long as the inlet coolant mass flow rate is greater than 450 kg/s, the peak cladding temperatures in the fuel assemblies are lower than the local saturation temperatures and no boiling exists. The flow distribution in the core is homogeneous with a small flow rate variation less than 5% for different assemblies. A large recirculation zone is observed in the outlet region. Moreover, the porous media model is compared with the exact model and found to be much more efficient than a detailed simulation of all the core components.

  16. AEROSOL NUCLEATION AND GROWTH DURING LAMINAR TUBE FLOW: MAXIMUM SATURATIONS AND NUCLEATION RATES. (R827354C008)

    Science.gov (United States)

    An approximate method of estimating the maximum saturation, the nucleation rate, and the total number nucleated per second during the laminar flow of a hot vapour–gas mixture along a tube with cold walls is described. The basis of the approach is that the temperature an...

  17. Approach to the HTGR core outlet temperature measurements in the United States

    International Nuclear Information System (INIS)

    Franklin, R.; Rodriguez, C.

    1982-06-01

    The High Temperature Gas-Cooled Reactor (HTGR) constructed at Fort St. Vrain, Colorado (330 MWe) used Geminol thermocouples to measure the primary coolant temperature at the core outlet. The primary coolant (helium) is heated by the graphite core to temperatures in the range of 700 to 750 °C. The combination of high temperature, high flow rate and radiation at the core outlet area makes it difficult to obtain accurate temperature measurements. The Geminol thermocouples installed in the Fort St. Vrain reactor have provided accurate data over several years of power operation without any failures. The indicated temperature of the core outlet thermocouples agrees with a 'traversing' thermocouple measurement to within ±2 °C. The Geminol thermocouple wire was provided by the Driver-Harris Company and is similar to chromel versus alumel thermocouple wire. Geminol wire is no longer distributed, and on future designs chromel versus alumel wire will be used. The next large HTGR design, which is being performed with funding support from the United States Department of Energy, will incorporate replaceable thermocouples. The thermocouples used in the Fort St. Vrain reactor were permanently installed and large in diameter (6.35 mm) to ensure good reliability. The replaceable thermocouples to be used in the next large reactor will be smaller in diameter (3.18 mm) and will be inserted into the core outlet area through long curved guide tubes that are permanently installed. These guide tubes are as long as 18 meters and must be curved to reach the core outlet regions. Tests were conducted to prove that the thermocouples could be inserted and removed through the long curved guide tubes. (author)

  18. High temperature polymer electrolyte membrane fuel cells: Approaches, status, and perspectives

    DEFF Research Database (Denmark)

    This book is a comprehensive review of high-temperature polymer electrolyte membrane fuel cells (PEMFCs). PEMFCs are the preferred fuel cells for a variety of applications such as automobiles, cogeneration of heat and power units, emergency power and portable electronics. The first 5 chapters...... of and motivated extensive research activity in the field. The last 11 chapters summarize the state-of-the-art of technological development of high temperature-PEMFCs based on acid doped PBI membranes including catalysts, electrodes, MEAs, bipolar plates, modelling, stacking, diagnostics and applications....

  19. A molecular dynamics approach for predicting the glass transition temperature and plasticization effect in amorphous pharmaceuticals.

    Science.gov (United States)

    Gupta, Jasmine; Nunes, Cletus; Jonnalagadda, Sriramakamal

    2013-11-04

    The objectives of this study were as follows: (i) To develop an in silico technique, based on molecular dynamics (MD) simulations, to predict glass transition temperatures (Tg) of amorphous pharmaceuticals. (ii) To computationally study the effect of plasticizer on Tg. (iii) To investigate the intermolecular interactions using radial distribution function (RDF). Amorphous sucrose and water were selected as the model compound and plasticizer, respectively. MD simulations were performed using COMPASS force field and isothermal-isobaric ensembles. The specific volumes of amorphous cells were computed in the temperature range of 440-265 K. The characteristic "kink" observed in volume-temperature curves, in conjunction with regression analysis, defined the Tg. The MD computed Tg values were 367 K, 352 K and 343 K for amorphous sucrose containing 0%, 3% and 5% w/w water, respectively. The MD technique thus effectively simulated the plasticization effect of water; and the corresponding Tg values were in reasonable agreement with theoretical models and literature reports. The RDF measurements revealed strong hydrogen bond interactions between sucrose hydroxyl oxygens and water oxygen. Steric effects led to weak interactions between sucrose acetal oxygens and water oxygen. MD is thus a powerful predictive tool for probing temperature and water effects on the stability of amorphous systems during drug development.
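
    Locating the "kink" in a volume-temperature curve can be sketched with a two-segment linear fit, taking Tg as the intersection of the two best-fit lines; the data below are synthetic stand-ins for the MD output, not the COMPASS simulation results.

```python
# Locating the "kink" in a specific-volume vs. temperature curve with a
# two-segment linear fit: every candidate breakpoint gets two separate line
# fits, and Tg is the intersection of the best pair. The data are synthetic
# stand-ins, not the COMPASS/MD output.
import numpy as np

def tg_from_kink(T, V):
    best = None
    for i in range(3, len(T) - 3):                  # keep >= 3 points per segment
        p1 = np.polyfit(T[:i], V[:i], 1)
        p2 = np.polyfit(T[i:], V[i:], 1)
        sse = (np.sum((np.polyval(p1, T[:i]) - V[:i]) ** 2)
               + np.sum((np.polyval(p2, T[i:]) - V[i:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, p1, p2)
    _, p1, p2 = best
    return (p2[1] - p1[1]) / (p1[0] - p2[0])        # intersection of the two lines

T = np.arange(265.0, 445.0, 5.0)
tg_true = 367.0                                     # value reported for dry sucrose
V = np.where(T < tg_true,
             0.60 + 2e-4 * (T - 265.0),                               # glassy branch
             0.60 + 2e-4 * (tg_true - 265.0) + 6e-4 * (T - tg_true))  # rubbery branch
print(tg_from_kink(T, V + np.random.default_rng(0).normal(0.0, 1e-4, T.size)))
```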

  20. Retrieval of sea surface air temperature from satellite data over Indian Ocean: An empirical approach

    Digital Repository Service at National Institute of Oceanography (India)

    Sathe, P.V.; Muraleedharan, P.M.

    the sea surface air temperature from satellite derived sea surface humidity in the Indian Ocean. Using the insitu data on surface met parameters collected on board O.R.V. Sagar Kanya in the Indian Ocean over a period of 15 years, the relationship between...

  1. Transition to Collisionless Ion-Temperature-Gradient-Driven Plasma Turbulence: A Dynamical Systems Approach

    International Nuclear Information System (INIS)

    Kolesnikov, R.A.; Krommes, J.A.

    2005-01-01

    The transition to collisionless ion-temperature-gradient-driven plasma turbulence is considered by applying dynamical systems theory to a model with 10 degrees of freedom. The study of a four-dimensional center manifold predicts a 'Dimits shift' of the threshold for turbulence due to the excitation of zonal flows and establishes (for the model) the exact value of that shift

  2. Comparison of data-driven and model-driven approaches to brightness temperature diurnal cycle interpolation

    CSIR Research Space (South Africa)

    Van den Bergh, F

    2006-01-01

    Full Text Available This paper presents two new schemes for interpolating missing samples in satellite diurnal temperature cycles (DTCs). The first scheme, referred to here as the cosine model, is an improvement of the model proposed in [2] and combines a cosine...

  3. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    Science.gov (United States)

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.

    2017-12-01

    Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed adopting a non-parametric information theory technique, the Transfer Entropy and its normalized variant. It distinctly quantifies actual information exchanged along with the directional flow of information between any two variables with no bearing on their common history or inputs, unlike correlation, mutual information etc. Measurements of greenhouse gases: CO2, CH4 and N2O; volcanic aerosols; solar activity: UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR); El Niño Southern Oscillation (ENSO) and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. UV (~9%) and ENSO (~12%) act as secondary drivers of variations in the GMTA, while the remaining factors play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
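
    The directional measure used in the study can be sketched with a simple histogram-binned transfer entropy estimate; the binning, single-step lag and synthetic series below are illustrative choices, not the study's normalized estimator or its climate data.

```python
# Histogram-binned plug-in estimate of the transfer entropy
#   TE(X -> Y) = sum p(y', y, x) * log[ p(y' | y, x) / p(y' | y) ],
# the directional quantity used to rank drivers of GMTA. Binning, one-step lag
# and the synthetic series are illustrative choices, not the study's estimator.
import numpy as np

def transfer_entropy(x, y, n_bins=4):
    xb = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
    y_next, y_now, x_now = yb[1:], yb[:-1], xb[:-1]
    p_xyz, _ = np.histogramdd((y_next, y_now, x_now), bins=n_bins)
    p_xyz /= p_xyz.sum()
    p_yz = p_xyz.sum(axis=0, keepdims=True)          # p(y, x)
    p_xy = p_xyz.sum(axis=2, keepdims=True)          # p(y', y)
    p_y = p_xyz.sum(axis=(0, 2), keepdims=True)      # p(y)
    m = p_xyz > 0
    return float(np.sum(p_xyz[m] * np.log((p_xyz * p_y)[m] / (p_yz * p_xy)[m])))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros_like(x)
for t in range(1, len(x)):                           # y is driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print("TE(x -> y):", transfer_entropy(x, y))         # clearly positive
print("TE(y -> x):", transfer_entropy(y, x))         # near zero
```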

  4. Reflection and refraction of a transient temperature field at a plane interface using Cagniard-de Hoop approach.

    Science.gov (United States)

    Shendeleva, M L

    2001-09-01

    An instantaneous line heat source located in the medium consisting of two half-spaces with different thermal properties is considered. Green's functions for the temperature field are derived using the Laplace and Fourier transforms in time and space and their inverting by the Cagniard-de Hoop technique known in elastodynamics. The characteristic feature of the proposed approach consists in the application of the Cagniard-de Hoop method to the transient heat conduction problem. The idea is suggested by the fact that the Laplace transform in time reduces the heat conduction equation to a Helmholtz equation, as for the wave propagation. Derived solutions exhibit some wave properties. First, the temperature field is decomposed into the source field and the reflected field in one half-space and the transmitted field in the other. Second, the laws of reflection and refraction can be deduced for the rays of the temperature field. In this connection the ray concept is briefly discussed. It is shown that the rays, introduced in such a way that they are consistent with Snell's law do not represent the directions of heat flux in the medium. Numerical computations of the temperature field as well as diagrams of rays and streamlines of the temperature field are presented.

  5. Real-time temperature estimation and monitoring of HIFU ablation through a combined modeling and passive acoustic mapping approach

    International Nuclear Information System (INIS)

    Jensen, C R; Cleveland, R O; Coussios, C C

    2013-01-01

    Passive acoustic mapping (PAM) has recently been demonstrated as a method of monitoring focused ultrasound therapy by reconstructing the emissions created by inertially cavitating bubbles (Jensen et al 2012 Radiology 262 252–61). The published method sums the energy emitted by cavitation from the focal region within the tissue and uses a threshold to determine when sufficient energy has been delivered for ablation. The present work builds on this approach to provide high-intensity focused ultrasound (HIFU) treatment monitoring software that displays both real-time temperature maps and a prediction of the ablated tissue region. This is achieved by determining heat deposition from two sources: (i) acoustic absorption of the primary HIFU beam, which is calculated via a nonlinear model, and (ii) absorption of energy from bubble acoustic emissions, which is estimated from measurements. The two heat sources are used as inputs to the bioheat equation, which gives an estimate of the tissue temperature as well as estimates of tissue ablation. The method has been applied to ex vivo ox liver samples; the estimated temperature is compared to the measured temperature and shows good agreement, capturing the effect of cavitation-enhanced heating on the temperature evolution. In conclusion, it is demonstrated that by using PAM and predictions of heating it is possible to produce an evolving estimate of cell death during exposure, in order to guide and monitor ablative HIFU therapy. (paper)
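
    The heating model underlying such a temperature estimate can be sketched with a 1-D explicit finite-difference step of the Pennes bioheat equation, with a source term standing in for absorbed HIFU power plus cavitation-enhanced heating; the tissue properties and Gaussian source are generic placeholders, not the paper's PAM-derived values.

```python
# 1-D explicit finite-difference step of the Pennes bioheat equation,
#   rho*c * dT/dt = k * d2T/dx2 - w_b*c_b*(T - T_a) + Q,
# with Q standing in for absorbed HIFU power plus cavitation-enhanced heating.
# Tissue properties and the Gaussian source are generic placeholders, not the
# paper's PAM-derived values or ex vivo liver parameters.
import numpy as np

rho, c, k = 1050.0, 3600.0, 0.51          # density, heat capacity, conductivity
w_b, c_b, T_a = 0.0, 3600.0, 37.0         # perfusion switched off (ex vivo tissue)
dx, dt = 0.5e-3, 0.05                     # grid spacing (m), time step (s)

x = np.arange(0.0, 0.04, dx)
T = np.full_like(x, 37.0)
Q = 2.0e6 * np.exp(-((x - 0.02) / 2e-3) ** 2)         # focal heat source, W/m^3

for _ in range(int(10.0 / dt)):                       # 10 s of sonication
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                            # crude insulated boundaries
    T += dt / (rho * c) * (k * lap - w_b * c_b * (T - T_a) + Q)

print("Peak temperature after 10 s: %.1f degC" % T.max())
```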

  6. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  7. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    Science.gov (United States)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100 % for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical compositional behaviour of the PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
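
    The following sketch illustrates the kind of global particle swarm search described above, here minimizing the residual of the implicit single-diode equation over synthetic I-V data. It uses the (Iph, I0) formulation of the single-diode model rather than the paper's (Isc, Voc) parameterization, and the function names, bounds, swarm settings and data are illustrative assumptions, not the published GPSO variant.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    VT = 0.02585                                   # thermal voltage at ~300 K (V), assumption

    def sse(params, V, I):
        """Sum of squared residuals of the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(eta*VT)) - 1) - (V + I*Rs)/Rsh,
        evaluated with the measured current on the right-hand side."""
        eta, Iph, I0, Rs, Rsh = params
        pred = Iph - I0 * np.expm1((V + I * Rs) / (eta * VT)) - (V + I * Rs) / Rsh
        return np.sum((pred - I) ** 2)

    def pso(obj, lb, ub, n_particles=40, iters=300, w=0.7, c1=1.5, c2=1.5):
        """Plain global-best particle swarm optimization inside box bounds."""
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        pos = rng.uniform(lb, ub, size=(n_particles, lb.size))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([obj(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lb, ub)
            f = np.array([obj(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Hypothetical I-V samples (volts, amps), generated for illustration only.
    V_meas = np.linspace(0.0, 0.58, 20)
    I_meas = 0.52 - 1e-9 * np.expm1(V_meas / (1.3 * VT)) - V_meas / 120.0

    bounds_lo = [1.0, 0.4, 1e-12, 0.0, 10.0]       # eta, Iph, I0, Rs, Rsh
    bounds_hi = [2.0, 0.6, 1e-6, 0.5, 500.0]
    best, err = pso(lambda p: sse(p, V_meas, I_meas), bounds_lo, bounds_hi)
    print("eta, Iph, I0, Rs, Rsh =", best, " SSE =", err)
    ```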

  8. Ambient temperature and cardiovascular biomarkers in a repeated-measure study in healthy adults: A novel biomarker index approach.

    Science.gov (United States)

    Wu, Shaowei; Yang, Di; Pan, Lu; Shan, Jiao; Li, Hongyu; Wei, Hongying; Wang, Bin; Huang, Jing; Baccarelli, Andrea A; Shima, Masayuki; Deng, Furong; Guo, Xinbiao

    2017-07-01

    and mortality. The biomarker index approach may serve as a novel tool to capture ambient temperature effects. Copyright © 2017. Published by Elsevier Inc.

  9. Retrieval of temperature and pressure using broadband solar occultation: SOFIE approach and results

    Directory of Open Access Journals (Sweden)

    B. T. Marshall

    2011-05-01

    Full Text Available Measurement of atmospheric temperature as a function of pressure, T(P), is key to understanding many atmospheric processes and a prerequisite for retrieving gas mixing ratios and other parameters from solar occultation measurements. This paper gives a brief overview of the solar occultation measurement technique followed by a detailed discussion of the mechanisms that make the measurement sensitive to temperature. Methods for retrieving T(P) using both broadband transmittance and refraction are discussed. Investigations using measurements of broadband transmittance in two CO2 absorption bands (the 4.3 and 2.7 μm bands) and refractive bending are then presented. These investigations include sensitivity studies, simulated retrieval studies, and examples from SOFIE.

  10. Hawking temperature: an elementary approach based on Newtonian mechanics and quantum theory

    Science.gov (United States)

    Pinochet, Jorge

    2016-01-01

    In 1974, the British physicist Stephen Hawking discovered that black holes have a characteristic temperature and are therefore capable of emitting radiation. Given the scientific importance of this discovery, there is a profuse literature on the subject. Nevertheless, the available literature ends up being either too simple, which does not convey the true physical significance of the issue, or too technical, which excludes an ample segment of the audience interested in science, such as physics teachers and their students. The present article seeks to remedy this shortcoming. It develops a simple and plausible argument that provides insight into the fundamental aspects of Hawking’s discovery, which leads to an approximate equation for the so-called Hawking temperature. The exposition is mainly intended for physics teachers and their students, and it only requires elementary algebra, as well as basic notions of Newtonian mechanics and quantum theory.
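
    For reference, the exact expression that the article's heuristic argument approximates is the standard Hawking temperature of a Schwarzschild black hole of mass M,

    $$T_{H}=\frac{\hbar c^{3}}{8\pi G M k_{B}}\approx 6.2\times10^{-8}\,\mathrm{K}\,\frac{M_{\odot}}{M},$$

    so a solar-mass black hole is colder than the cosmic microwave background by many orders of magnitude.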

  11. The approaches of safety design and safety evaluation at HTTR (High Temperature Engineering Test Reactor)

    International Nuclear Information System (INIS)

    Iigaki, Kazuhiko; Saikusa, Akio; Sawahata, Hiroaki; Shinozaki, Masayuki; Tochio, Daisuke; Honma, Fumitaka; Tachibana, Yukio; Iyoku, Tatsuo; Kawasaki, Kozo; Baba, Osamu

    2006-06-01

    Gas cooled reactors have a long history in nuclear development, and the High Temperature Gas Cooled Reactor (HTGR) is expected to supply high temperature energy to the chemical industry and to power generation, with advantages in safety, efficiency, environmental impact and economics. The HTGR design aims to incorporate passive safety equipment. However, the current licensing review guideline for safety evaluation was written for Light Water Reactors (LWRs), so applying it directly to the HTGR requires special consideration. This paper describes the results of an investigation of safety design and safety evaluation practices for the HTGR, a comparison of the safety design and safety evaluation features of the HTGR with those of the LWR, and reflections for the next HTGR based on HTTR operational experience. (author)

  12. Embedding of MEMS pressure and temperature sensors in carbon fiber composites: a manufacturing approach

    Science.gov (United States)

    Javidinejad, Amir; Joshi, Shiv P.

    2000-06-01

    In this paper, the embedding of surface mount pressure and temperature sensors in carbon fiber composites is described. Commercially available surface mount pressure and temperature sensors are used for embedding in composite lay-ups of IM6/HST-7, IM6/3501 and AS4/E7T1-2 prepregs. The fabrication techniques developed here are the focus of this paper and provide a successful embedding procedure for pressure sensors in fibrous composites. The techniques for positioning and insulating the sensor and the lead wires from the conductive carbon prepregs are described and illustrated. Procedural techniques are developed and discussed for isolating the sensor's flow opening from exposure to the prepreg epoxy flow and to fibrous particles during the autoclave curing of the composite laminate. The effects of the autoclave cycle (if any) on the operation of the embedded pressure sensor are discussed.

  13. Towards a comprehensive theory for He II: II. A temperature-dependent field-theoretic approach

    International Nuclear Information System (INIS)

    Chela-Flores, J.; Ghassib, H.B.

    1982-09-01

    New experimental aspects of He II are used as a guide towards a comprehensive theory in which non-zero temperature U(1) and SU(2) gauge fields are incorporated into a gauge hierarchy of effective Lagrangians. We conjecture that an SU(n) gauge-theoretic description of the superfluidity of ⁴He may be obtained in the limit n→∞. We indicate, however, how experiments may be understood in the zeroth, first and second order of the hierarchy. (author)

  14. The Transition to Collisionless Ion-temperature-gradient-driven Plasma Turbulence: A Dynamical Systems Approach

    International Nuclear Information System (INIS)

    Kolesnikov, R.A.; Krommes, J.A.

    2004-01-01

    The transition to collisionless ion-temperature-gradient-driven plasma turbulence is considered by applying dynamical systems theory to a model with ten degrees of freedom. Study of a four-dimensional center manifold predicts a "Dimits shift" of the threshold for turbulence due to the excitation of zonal flows and establishes the exact value of that shift in terms of physical parameters. For insight into fundamental physical mechanisms, the method provides a viable alternative to large simulations.

  15. A simplified approach for evaluating secondary stresses in elevated temperature design

    International Nuclear Information System (INIS)

    Becht, C.

    1983-01-01

    Control of secondary stresses is important for long-term reliability of components, particularly at elevated temperatures where substantial creep damage can occur and result in cracking. When secondary stresses are considered in the design of elevated temperature components, they are often addressed by the criteria contained in Nuclear Code Case N-47 for use with elastic or inelastic analysis. The elastic rules are very conservative as they bound a large range of complex phenomena; because of this conservatism, only components in relatively mild services can be designed in accordance with these rules. The inelastic rules, although more accurate, require complex and costly nonlinear analysis. Elevated temperature shakedown is a recognized phenomenon that has been considered in developing Code rules and simplified methods. This paper develops and examines the implications of using a criterion which specifically limits stresses to the shakedown regime. Creep, fatigue, and strain accumulation are considered. The effect of elastic follow-up on the conservatism of the criterion is quantified by means of a simplified method. The level of conservatism is found to fall between the elastic and inelastic rules of N-47 and, in fact, the incentives for performing complex inelastic analyses appear to be low except in the low cycle regime. The criterion has immediate applicability to non-code components such as vessel internals in the chemical, petroleum, and synfuels industries. It is suggested that such a criterion be considered in future code rule development.

  16. A chemical approach toward low temperature alloying of immiscible iron and molybdenum metals

    Energy Technology Data Exchange (ETDEWEB)

    Nazir, Rabia [Department of Chemistry, Quaid-i-Azam University, Islamabad 45320 (Pakistan); Applied Chemistry Research Centre, Pakistan Council of Scientific and Industrial Research Laboratories Complex, Lahore 54600 (Pakistan); Ahmed, Sohail [Department of Chemistry, Quaid-i-Azam University, Islamabad 45320 (Pakistan); Mazhar, Muhammad, E-mail: mazhar42pk@yahoo.com [Department of Chemistry, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia); Akhtar, Muhammad Javed; Siddique, Muhammad [Physics Division, PINSTECH, P.O. Nilore, Islamabad (Pakistan); Khan, Nawazish Ali [Material Science Laboratory, Department of Physics, Quaid-i-Azam University, Islamabad 45320 (Pakistan); Shah, Muhammad Raza [HEJ Research Institute of Chemistry, University of Karachi, Karachi 75270 (Pakistan); Nadeem, Muhammad [Physics Division, PINSTECH, P.O. Nilore, Islamabad (Pakistan)

    2013-11-15

    Graphical abstract: - Highlights: • Low temperature pyrolysis of [Fe(bipy)₃]Cl₂ and [Mo(bipy)Cl₄] homogeneous powder. • Easy low temperature alloying of immiscible metals like Fe and Mo. • Uniform sized Fe–Mo nanoalloy with particle size of 48–68 nm. • Characterization by EDXRF, AFM, XRPD, magnetometry, ⁵⁷Fe Mössbauer and impedance. • Alloy behaves as almost superparamagnetic obeying simple –R(CPE)– circuit. - Abstract: The present research is based on a feasible low temperature method for the synthesis of a nanoalloy of the immiscible metals iron and molybdenum for technological applications. The nanoalloy has been synthesized by pyrolysis of a homogeneous powder precipitated, from a common solvent, of the two complexes, trisbipyridineiron(II) chloride, [Fe(bipy)₃]Cl₂, and bipyridinemolybdenum(IV) chloride, [Mo(bipy)Cl₄], followed by heating at 500 °C in an inert atmosphere of flowing argon gas. The resulting nanoalloy has been characterized using EDXRF, AFM, XRD, magnetometry, ⁵⁷Fe Mössbauer and impedance spectroscopies. These results showed that under the experimental conditions provided, iron and molybdenum metals, with a known miscibility barrier, alloy together to give a (1:1) single phase material having particle size in the range of 48–66 nm. The magnetism of iron is considerably reduced after alloy formation and shows a trend toward superparamagnetism. The designed chemical synthetic procedure is equally feasible for the fabrication of other immiscible metals.

  17. A chemical approach toward low temperature alloying of immiscible iron and molybdenum metals

    International Nuclear Information System (INIS)

    Nazir, Rabia; Ahmed, Sohail; Mazhar, Muhammad; Akhtar, Muhammad Javed; Siddique, Muhammad; Khan, Nawazish Ali; Shah, Muhammad Raza; Nadeem, Muhammad

    2013-01-01

    Graphical abstract: - Highlights: • Low temperature pyrolysis of [Fe(bipy)₃]Cl₂ and [Mo(bipy)Cl₄] homogeneous powder. • Easy low temperature alloying of immiscible metals like Fe and Mo. • Uniform sized Fe–Mo nanoalloy with particle size of 48–68 nm. • Characterization by EDXRF, AFM, XRPD, magnetometry, ⁵⁷Fe Mössbauer and impedance. • Alloy behaves as almost superparamagnetic obeying simple –R(CPE)– circuit. - Abstract: The present research is based on a feasible low temperature method for the synthesis of a nanoalloy of the immiscible metals iron and molybdenum for technological applications. The nanoalloy has been synthesized by pyrolysis of a homogeneous powder precipitated, from a common solvent, of the two complexes, trisbipyridineiron(II) chloride, [Fe(bipy)₃]Cl₂, and bipyridinemolybdenum(IV) chloride, [Mo(bipy)Cl₄], followed by heating at 500 °C in an inert atmosphere of flowing argon gas. The resulting nanoalloy has been characterized using EDXRF, AFM, XRD, magnetometry, ⁵⁷Fe Mössbauer and impedance spectroscopies. These results showed that under the experimental conditions provided, iron and molybdenum metals, with a known miscibility barrier, alloy together to give a (1:1) single phase material having particle size in the range of 48–66 nm. The magnetism of iron is considerably reduced after alloy formation and shows a trend toward superparamagnetism. The designed chemical synthetic procedure is equally feasible for the fabrication of other immiscible metals.

  18. Effect of direction of approach to temperature on the delayed hydrogen cracking behavior of cold-worked Zr-2.5Nb

    International Nuclear Information System (INIS)

    Ambler, J.F.R.

    1984-01-01

    The delayed hydrogen cracking behavior of cold-worked Zr-2.5Nb at temperatures above about 423 K depends upon the direction of approach to the test temperature. Cooling to the test temperature results in an increase in crack growth rate, da/dt, with increase in temperature, given by the Arrhenius relationship da/dt = 6.86 × 10⁻¹ exp(−71500/RT). Heating from room temperature to the test temperature results in the same increase in da/dt with temperature, but only up to a certain temperature, T_DAT. The temperature T_DAT increases with the amount of hydride precipitated during cooling to room temperature, prior to heating, and with cooling rate. The results obtained can be explained in terms of the Simpson and Puls model of delayed hydrogen cracking, if the hydride precipitated at the crack tip is initially fully constrained and the matrix hydride loses constraint during heating
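
    Taking the cooling-branch fit at face value, it can be evaluated as in the sketch below; the abstract does not state units, so the gas constant in J mol⁻¹ K⁻¹ (with the activation energy read as 71500 J mol⁻¹) and the output units of da/dt are assumptions of this sketch, not values given in the original report.

    ```python
    import numpy as np

    R = 8.314          # gas constant, J/(mol K) -- unit assumption
    Q = 71500.0        # apparent activation energy from the fit, assumed J/mol
    A = 6.86e-1        # pre-exponential factor, in the report's da/dt units

    def crack_growth_rate(T_kelvin):
        """Cooling-branch delayed hydride cracking rate da/dt = A*exp(-Q/(R*T))."""
        return A * np.exp(-Q / (R * np.asarray(T_kelvin, dtype=float)))

    for T in (423.0, 473.0, 523.0):
        print(f"T = {T:.0f} K  ->  da/dt = {crack_growth_rate(T):.3e} (report units)")
    ```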

  19. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  20. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are

  1. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  2. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux thus becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form, so the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the novelty of the results, this approach is interesting because of the optimization procedure itself.

  3. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power, since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is therefore necessary for maximum efficiency. In this work, a Particle Swarm ...

  4. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation of the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained from a Bayesian approach. We show that the GME method is efficient and computationally fast

  5. Effect of temperature rise and ocean acidification on growth of calcifying tubeworm shells (Spirorbis spirorbis): an in situ benthocosm approach

    Science.gov (United States)

    Ni, Sha; Taubner, Isabelle; Böhm, Florian; Winde, Vera; Böttcher, Michael E.

    2018-03-01

    The calcareous tubeworm Spirorbis spirorbis is a widespread serpulid species in the Baltic Sea, where it commonly grows as an epibiont on brown macroalgae (genus Fucus). It lives within a Mg-calcite shell and could be affected by ocean acidification and temperature rise induced by the predicted future atmospheric CO2 increase. However, Spirorbis tubes grow in a chemically modified boundary layer around the algae, which may mitigate acidification. In order to investigate how increasing temperature and rising pCO2 may influence S. spirorbis shell growth we carried out four seasonal experiments in the Kiel Outdoor Benthocosms at elevated pCO2 and temperature conditions. Compared to laboratory batch culture experiments the benthocosm approach provides a better representation of natural conditions for physical and biological ecosystem parameters, including seasonal variations. We find that growth rates of S. spirorbis are significantly controlled by ontogenetic and seasonal effects. The length of the newly grown tube is inversely related to the initial diameter of the shell. Our study showed no significant difference of the growth rates between ambient atmospheric and elevated (1100 ppm) pCO2 conditions. No influence of daily average CaCO3 saturation state on the growth rates of S. spirorbis was observed. We found, however, net growth of the shells even in temporarily undersaturated bulk solutions, under conditions that concurrently favoured selective shell surface dissolution. The results suggest an overall resistance of S. spirorbis growth to acidification levels predicted for the year 2100 in the Baltic Sea. In contrast, S. spirorbis did not survive at mean seasonal temperatures exceeding 24 °C during the summer experiments. In the autumn experiments at ambient pCO2, the growth rates of juvenile S. spirorbis were higher under elevated temperature conditions. The results reveal that S. spirorbis may prefer moderately warmer conditions during their early life stages

  6. Towards a comprehensive theory for He II: A temperature-dependent field-theoretic approach

    International Nuclear Information System (INIS)

    Ghassib, H.B.; Chela-Flores, J.

    1983-07-01

    New experimental aspects of He II, as well as recent developments in particle physics, are invoked to construct the rudiments of a comprehensive theory in which temperature-dependent U(1) and SU(2) gauge fields are incorporated into a hierarchy of effective Lagrangians. It is conjectured that an SU(n) gauge-theoretic description of superfluidity may be obtained in the limit n→infinity. However, it is outlined how experiments can be understood in the zeroth, first and second order of the hierarchy. (author)

  7. temperature overspecification

    Directory of Open Access Journals (Sweden)

    Mehdi Dehghan

    2001-01-01

    Full Text Available Two different finite difference schemes for solving the two-dimensional parabolic inverse problem with temperature overspecification are considered. These schemes are developed for identifying the control parameter which produces, at any given time, a desired temperature distribution at a given point in the spatial domain. The numerical methods discussed are based on the (3,3) alternating direction implicit (ADI) finite difference scheme and the (3,9) alternating direction implicit formula. These schemes are unconditionally stable. The basis of analysis of the finite difference equations considered here is the modified equivalent partial differential equation approach, developed from the 1974 work of Warming and Hyett [17]. This allows direct and simple comparison of the errors associated with the equations as well as providing a means to develop more accurate finite difference schemes. These schemes use less central processor time than the fully implicit schemes for two-dimensional diffusion with temperature overspecification. The alternating direction implicit schemes developed in this report use more CPU time than the fully explicit finite difference schemes, but their unconditional stability is significant. The results of numerical experiments are presented, and the accuracy and the central processor (CPU) times needed for each of the methods are discussed. We also give error estimates in the maximum norm for each of these methods.
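
    The record does not reproduce the schemes themselves. As a generic point of reference for the ADI idea, the sketch below implements one Peaceman-Rachford ADI step for the forward 2-D diffusion problem with homogeneous Dirichlet boundaries; it is not the (3,3) or (3,9) inverse-problem scheme of the paper and omits the control-parameter identification. The helper names `thomas` and `adi_step` are choices of this sketch.

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    def adi_step(u, r):
        """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy),
        with r = alpha*dt/(2*h^2) and a zero (Dirichlet) boundary ring."""
        n = u.shape[0]
        m = n - 2                                  # interior points per direction
        a = np.full(m, -r); b = np.full(m, 1 + 2 * r); c = np.full(m, -r)
        a[0] = c[-1] = 0.0
        half = u.copy()
        # Sweep 1: implicit in x, explicit in y (one tridiagonal system per row).
        for j in range(1, n - 1):
            rhs = r * u[1:-1, j - 1] + (1 - 2 * r) * u[1:-1, j] + r * u[1:-1, j + 1]
            half[1:-1, j] = thomas(a, b, c, rhs)
        new = u.copy()
        # Sweep 2: implicit in y, explicit in x (one tridiagonal system per column).
        for i in range(1, n - 1):
            rhs = r * half[i - 1, 1:-1] + (1 - 2 * r) * half[i, 1:-1] + r * half[i + 1, 1:-1]
            new[i, 1:-1] = thomas(a, b, c, rhs)
        return new

    # Example: a hot square patch diffusing on a 64x64 grid.
    n, alpha, h, dt = 64, 1.0, 1.0 / 63, 1e-4
    u = np.zeros((n, n)); u[24:40, 24:40] = 1.0
    r = alpha * dt / (2 * h * h)
    for _ in range(100):
        u = adi_step(u, r)
    print("max after 100 steps:", u.max())
    ```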

  8. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...

  9. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  10. Semiclassical approach to finite-temperature quantum annealing with trapped ions

    Science.gov (United States)

    Raventós, David; Graß, Tobias; Juliá-Díaz, Bruno; Lewenstein, Maciej

    2018-05-01

    Recently it has been demonstrated that an ensemble of trapped ions may serve as a quantum annealer for the number-partitioning problem [Nat. Commun. 7, 11524 (2016), 10.1038/ncomms11524]. This hard computational problem may be addressed by employing a tunable spin-glass architecture. Following the proposal of the trapped-ion annealer, we study here its robustness against thermal effects; that is, we investigate the role played by thermal phonons. For the efficient description of the system, we use a semiclassical approach, and benchmark it against the exact quantum evolution. The aim is to understand better and characterize how the quantum device approaches a solution of an otherwise difficult to solve NP-hard problem.

  11. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-09

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.
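
    As a minimal illustration of temperature nonequilibrium between materials and radiation (not the paper's mixed-zone closure with interface reconstruction), the sketch below integrates a linear three-temperature relaxation system in which ions and radiation exchange energy only through the electrons; all heat capacities and coupling rates are arbitrary placeholders.

    ```python
    import numpy as np

    # Placeholder heat capacities and coupling coefficients (arbitrary consistent units).
    C_i, C_e, C_r = 1.0, 0.5, 0.1
    g_ei, g_er = 2.0, 0.5          # ion-electron and electron-radiation coupling rates

    def relax(T_i, T_e, T_r, dt=1e-3, steps=5000):
        """Explicit integration of a three-temperature relaxation system:
        ions and radiation exchange energy with the electrons only."""
        hist = []
        for _ in range(steps):
            q_ei = g_ei * (T_e - T_i)          # energy flow electron -> ion
            q_er = g_er * (T_e - T_r)          # energy flow electron -> radiation
            T_i += dt * q_ei / C_i
            T_r += dt * q_er / C_r
            T_e -= dt * (q_ei + q_er) / C_e
            hist.append((T_i, T_e, T_r))
        return np.array(hist)

    out = relax(T_i=1.0, T_e=10.0, T_r=0.5)
    print("final (T_i, T_e, T_r):", out[-1])   # all three approach a common value
    ```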

  12. A GIS Approach to Wind,SST(Sea Surface Temperature) and CHL(Chlorophyll) variations in the Caspian Sea

    Science.gov (United States)

    Mirkhalili, Seyedhamzeh

    2016-07-01

    Chlorophyll is an extremely important bio-molecule, critical in photosynthesis, which allows plants to absorb energy from light. At the base of the ocean food web are single-celled algae and other plant-like organisms known as phytoplankton. Like plants on land, phytoplankton use chlorophyll and other light-harvesting pigments to carry out photosynthesis. Where phytoplankton grow depends on available sunlight, temperature, and nutrient levels. In this research a GIS approach using ARCGIS software and QuikSCAT satellite data was applied to visualize wind, SST (sea surface temperature) and CHL (chlorophyll) variations in the Caspian Sea. Results indicate that the increase in chlorophyll concentration in coastal areas is primarily driven by terrestrial nutrients and does not imply that warmer SST will lead to an increase in chlorophyll concentration and, consequently, phytoplankton abundance.

  13. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    International Nuclear Information System (INIS)

    Chang, Chong

    2016-01-01

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  14. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

    This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band structure models, different doping profiles and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix; the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport model within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², where ħ is the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the

  15. Time-dependent Hartree-Fock approach to nuclear ``pasta'' at finite temperature

    Science.gov (United States)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-05-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell density distribution as a measure to distinguish pasta matter from uniform matter.

  16. Time-Dependent Hartree-Fock Approach to Nuclear Pasta at Finite Temperature

    International Nuclear Information System (INIS)

    Schuetrumpf, B; Maruhn, J A; Klatt, M A; Mecke, K; Reinhard, P-G; Iida, K

    2013-01-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature.

  17. Time-Dependent Hartree-Fock Approach to Nuclear Pasta at Finite Temperature

    Science.gov (United States)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-03-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature.

  18. Room temperature ionic liquids for actinide extraction: a 'green' approach?

    International Nuclear Information System (INIS)

    Mohapatra, P.K.

    2013-01-01

    Extraction of actinides is one of the key issues in the remediation of high level radioactive wastes emanating from the back end of the nuclear fuel cycle. Effective actinide extraction makes the waste benign and ready for disposal as vitrified waste blocks in deep geological repositories. However, conventional solvent extraction methods, though being routinely used for actinide separations, have several disadvantages, which include large VOC (volatile organic compounds) inventory and generation of huge volumes of secondary wastes. Growing concern for the environment has led to the increasing interest in room temperature ionic liquids (RTIL) as an alternative to molecular diluents in myriad applications including synthesis, catalysis, separation and electrochemistry. Out of these, application of RTILs to separation science has increased enormously as can be seen from the rapid rise in the number of publications in this area in the last decade, due to their unique characteristics of high thermal stability and low volatility

  19. The Application of an Army Prospective Payment Model Structured on the Standards Set Forth by the CHAMPUS Maximum Allowable Charges and the Center for Medicare and Medicaid Services: An Academic Approach

    Science.gov (United States)

    2005-04-29

    Final report covering July 2004 to July 2005, dated 29 April 2005. [The record contains only report documentation page fragments and acknowledgments; no abstract is available.]

  20. Equivalent electrical network model approach applied to a double acting low temperature differential Stirling engine

    International Nuclear Information System (INIS)

    Formosa, Fabien; Badel, Adrien; Lottin, Jacques

    2014-01-01

    Highlights: • An equivalent electrical network model of a Stirling engine is proposed. • The model is applied to a membrane low temperature double acting Stirling engine. • The operating conditions (self-startup and steady state behavior) are defined. • An experimental engine is presented and tested. • The model is validated against experimental results. - Abstract: This work presents a network model to simulate the periodic behavior of a double acting free piston type Stirling engine. Each component of the engine is considered independently and its equivalent electrical circuit derived. When assembled in a global electrical network, a global model of the engine is established. Its steady behavior can be obtained by the analysis of the transfer function for one phase from the piston to the expansion chamber. It is then possible to simulate the dynamics (steady state stroke and operating frequency) as well as the thermodynamic performance (output power and efficiency) for given mean pressure, heat source and heat sink temperatures. The motion amplitude, in particular, is determined by the spring-mass properties of the moving parts and the main nonlinear effects, which are taken into account in the model. The thermodynamic features of the model have been validated against the classical isothermal Schmidt analysis for a given stroke. A three-phase low temperature differential double acting free membrane architecture has been built and tested. The experimental results are compared with the model and a satisfactory agreement is obtained. The stroke and operating frequency are predicted with less than 2% error, whereas the output power discrepancy is about 30%. Finally, some optimization routes are suggested to improve the design and maximize the performance, aiming at waste heat recovery applications
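
    As a generic illustration of the equivalent-network idea (not the paper's three-phase membrane engine model), a single mechanical phase, i.e. a moving mass with damping and stiffness driven by a pressure force, maps onto a series R-L-C branch in the force-voltage analogy, and its frequency response follows directly; the numerical values below are illustrative only.

    ```python
    import numpy as np

    # Illustrative single-phase parameters: moving mass m, damping c, stiffness k.
    m, c, k = 0.05, 2.0, 4.0e3        # kg, N s/m, N/m
    # Electrical analog (force -> voltage, velocity -> current): L = m, R = c, C = 1/k.

    freqs = np.linspace(1.0, 100.0, 500)            # Hz
    w = 2 * np.pi * freqs
    Z = c + 1j * w * m + k / (1j * w)               # series "impedance" seen by the force, F/v
    H = 1.0 / (1j * w * Z)                          # displacement per unit force, X/F

    f_peak = freqs[np.argmax(np.abs(H))]
    print(f"response peaks near {f_peak:.1f} Hz "
          f"(undamped natural frequency ~{np.sqrt(k / m) / (2 * np.pi):.1f} Hz)")
    ```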

  1. Short-term preservation of porcine oocytes in ambient temperature: novel approaches.

    Directory of Open Access Journals (Sweden)

    Cai-Rong Yang

    Full Text Available The objective of this study was to evaluate the feasibility of preserving porcine oocytes without freezing. To optimize preservation conditions, porcine cumulus-oocyte complexes (COCs) were preserved in TCM-199, porcine follicular fluid (pFF) and FCS at different temperatures (4°C, 20°C, 25°C, 27.5°C, 30°C and 38.5°C) for 1 day, 2 days or 3 days. After preservation, oocyte morphology, germinal vesicle (GV) rate, actin cytoskeleton organization, cortical granule distribution, mitochondrial translocation and intracellular glutathione level were evaluated. Oocyte maturation was indicated by first polar body emission and spindle morphology after in vitro culture. Strikingly, when COCs were stored at 27.5°C for 3 days in pFF or FCS, more than 60% oocytes were still arrested at the GV stage and more than 50% oocytes matured into MII stages after culture. Almost 80% oocytes showed normal actin organization and cortical granule relocation to the cortex, and approximately 50% oocytes showed diffused mitochondria distribution patterns and normal spindle configurations. While stored in TCM-199, all these criteria decreased significantly. Glutathione (GSH) level in the pFF or FCS group was higher than in the TCM-199 group, but lower than in the non-preserved control group. The preserved oocytes could be fertilized and developed to blastocysts (about 10%) with normal cell number, which is clear evidence for their retaining the developmental potentiality after 3d preservation. Thus, we have developed a simple method for preserving immature pig oocytes at an ambient temperature for several days without evident damage of cytoplasm and keeping oocyte developmental competence.

  2. Short-term preservation of porcine oocytes in ambient temperature: novel approaches.

    Science.gov (United States)

    Yang, Cai-Rong; Miao, De-Qiang; Zhang, Qing-Hua; Guo, Lei; Tong, Jing-Shan; Wei, Yanchang; Huang, Xin; Hou, Yi; Schatten, Heide; Liu, ZhongHua; Sun, Qing-Yuan

    2010-12-07

    The objective of this study was to evaluate the feasibility of preserving porcine oocytes without freezing. To optimize preservation conditions, porcine cumulus-oocyte complexes (COCs) were preserved in TCM-199, porcine follicular fluid (pFF) and FCS at different temperatures (4°C, 20°C, 25°C, 27.5°C, 30°C and 38.5°C) for 1 day, 2 days or 3 days. After preservation, oocyte morphology, germinal vesicle (GV) rate, actin cytoskeleton organization, cortical granule distribution, mitochondrial translocation and intracellular glutathione level were evaluated. Oocyte maturation was indicated by first polar body emission and spindle morphology after in vitro culture. Strikingly, when COCs were stored at 27.5°C for 3 days in pFF or FCS, more than 60% oocytes were still arrested at the GV stage and more than 50% oocytes matured into MII stages after culture. Almost 80% oocytes showed normal actin organization and cortical granule relocation to the cortex, and approximately 50% oocytes showed diffused mitochondria distribution patterns and normal spindle configurations. While stored in TCM-199, all these criteria decreased significantly. Glutathione (GSH) level in the pFF or FCS group was higher than in the TCM-199 group, but lower than in the non-preserved control group. The preserved oocytes could be fertilized and developed to blastocysts (about 10%) with normal cell number, which is clear evidence for their retaining the developmental potentiality after 3d preservation. Thus, we have developed a simple method for preserving immature pig oocytes at an ambient temperature for several days without evident damage of cytoplasm and keeping oocyte developmental competence.

  3. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.
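
    For context, the equations whose thermodynamic emergence is being generalized are the standard Friedmann equations for an FRW universe with scale factor a, Hubble rate $H=\dot a/a$ and spatial curvature k,

    $$H^{2}+\frac{k}{a^{2}}=\frac{8\pi G}{3}\rho,\qquad \dot{H}-\frac{k}{a^{2}}=-4\pi G\,(\rho+p);$$

    in the paper's analysis the GUP-motivated entropy-area law modifies these relations so that the energy density ρ is bounded by a maximum value near the Planck density.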

  4. The Influence of Temperature on Time-Dependent Deformation and Failure in Granite: A Mesoscale Modeling Approach

    Science.gov (United States)

    Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick

    2017-09-01

    An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. We anticipate that the

  5. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  6. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  7. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  8. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  9. Growth of uniform nanoparticles of platinum by an economical approach at relatively low temperature

    KAUST Repository

    Shah, M.A.

    2012-01-01

    Current chemical methods of synthesis have shown limited success in the fabrication of nanomaterials, as they involve environmentally malignant chemicals. Environmentally friendly synthesis requires alternative solvents, and it is expected that the use of soft, green approaches may overcome these obstacles. Water, which is regarded as a benign solvent, has been used in the present work for the preparation of platinum nanoparticles. The average particle diameter is in the range of ∼13±5 nm and the particles are largely agglomerated. The advantages of preparing nanoparticles with this method include ease, flexibility and cost effectiveness. The prospects of the process are bright, and the technique could be extended to prepare many other important metal and metal oxide nanostructures. © 2012 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved.

  10. Growth of uniform nanoparticles of platinum by an economical approach at relatively low temperature

    KAUST Repository

    Shah, M.A.

    2012-06-01

    Current chemical methods of synthesis have shown limited success in the fabrication of nanomaterials, as they involve environmentally malignant chemicals. Environmentally friendly synthesis requires alternative solvents, and it is expected that the use of soft, green approaches may overcome these obstacles. Water, which is regarded as a benign solvent, has been used in the present work for the preparation of platinum nanoparticles. The average particle diameter is in the range of ∼13±5 nm and the particles are largely agglomerated. The advantages of preparing nanoparticles with this method include ease, flexibility and cost effectiveness. The prospects of the process are bright, and the technique could be extended to prepare many other important metal and metal oxide nanostructures. © 2012 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved.

  11. Solubility of magnetite in high temperature water and an approach to generalized solubility computations

    International Nuclear Information System (INIS)

    Dinov, K.; Ishigure, K.; Matsuura, C.; Hiroishi, D.

    1993-01-01

    Magnetite solubility in pure water was measured at 423 K in a fully teflon-covered autoclave system. A fairly good agreement was found to exist between the experimental data and calculation results obtained from the thermodynamical model, based on the assumption of Fe3O4 dissolution and Fe2O3 deposition reactions. A generalized thermodynamical approach to the solubility computations under complex conditions on the basis of minimization of the total system Gibbs free energy was proposed. The forms of the chemical equilibria were obtained for various systems initially defined and successfully justified by the subsequent computations. A [Fe3+]T-[Fe2+]T phase diagram was introduced as a tool for systematic understanding of the magnetite dissolution phenomena in pure water and under oxidizing and reducing conditions. (orig.)

  12. Transient regimes during high-temperature deformation of a bulk metallic glass: A free volume approach

    International Nuclear Information System (INIS)

    Bletry, M.; Guyot, P.; Brechet, Y.; Blandin, J.J.; Soubeyroux, J.L.

    2007-01-01

    The homogeneous deformation of a zirconium-based bulk metallic glass is investigated in the glass transition range. Compression and stress-relaxation tests have been conducted. The stress-strain curves are modeled in the framework of the free volume theory, including transient phenomena (overshoot and undershoot). This approach allows several physical parameters (activation volume, flow defect creation and relaxation coefficients) to be determined from a mechanical experiment. This model is able to rationalize the dependency of the stress overshoot on relaxation time. It is shown that, due to the relationship between flow defect concentration and free volume, it is impossible to determine the equilibrium flow defect concentration. However, the relative variation of the flow defect concentration is always the same, and all the model parameters depend on the equilibrium flow defect concentration. The methodology presented in this paper should, in the future, allow the consistency of the free volume model to be assessed.

  13. Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach

    International Nuclear Information System (INIS)

    Schuetrumpf, B; Maruhn, J A; Klatt, M A; Mecke, K; Reinhard, P-G; Iida, K

    2016-01-01

    We present simulations of neutron-rich matter at sub-nuclear densities, like supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV employing a full Skyrme interaction in a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Further we observe gyroid-like structures, discussed e.g. in [2], which are formed spontaneously choosing a certain value of the simulation box length. The ρ-T-map of pasta shapes is basically consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition we propose the variance in the cell-density distribution as a measure to distinguish pasta matter from uniform matter. (paper)

  14. Fusion of MODIS and landsat-8 surface temperature images: a new approach.

    Science.gov (United States)

    Hazaymeh, Khaled; Hassan, Quazi K

    2015-01-01

    Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing the temporal resolution of Landsat-8 land surface temperature (LST) images by fusing LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS); and implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consisted of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) utilizing the above-mentioned relationship as a function of a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. It revealed that strong linear relationships (i.e., r2, slopes, and intercepts were in the ranges 0.93-0.94, 0.94-0.99, and 2.97-20.07) existed between the two consecutive MODIS LST images. We evaluated the synthetic LST images qualitatively and found high visual agreement with the actual Landsat-8 LST images. In addition, we conducted quantitative evaluations of these synthetic images and found strong agreement with the actual Landsat-8 LST images. For example, r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84-0.90, 0.061-0.080, and 0.003-0.004, respectively.
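
    A minimal sketch of the two-step linear fusion idea described above, assuming the MODIS and Landsat-8 LST grids are already co-registered to a common resolution; the array values and the ordinary least-squares fit are illustrative stand-ins, not the STI-FM implementation.

      import numpy as np

      def sti_fm_predict(modis_t1, modis_t2, landsat_t1):
          """Predict a synthetic Landsat-8 LST image at time 2.

          Step 1: fit a linear relation modis_t2 ~ a * modis_t1 + b over all pixels.
          Step 2: apply that relation to the Landsat-8 LST image acquired at time 1.
          """
          a, b = np.polyfit(modis_t1.ravel(), modis_t2.ravel(), deg=1)   # slope, intercept
          return a * landsat_t1 + b                                      # synthetic LST at time 2

      # toy 3x3 example with LST in kelvin
      modis_t1 = np.array([[300., 302., 305.], [298., 301., 303.], [299., 300., 304.]])
      modis_t2 = modis_t1 + 2.0                    # pretend time 2 is uniformly 2 K warmer
      landsat_t1 = modis_t1 + np.random.normal(0, 0.5, modis_t1.shape)
      print(sti_fm_predict(modis_t1, modis_t2, landsat_t1))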

  15. Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach

    Science.gov (United States)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2016-01-01

    We present simulations of neutron-rich matter at sub-nuclear densities, like supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV employing a full Skyrme interaction in a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Further we observe gyroid-like structures, discussed e.g. in [2], which are formed spontaneously choosing a certain value of the simulation box length. The ρ-T-map of pasta shapes is basically consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition we propose the variance in the cell-density distribution as a measure to distinguish pasta matter from uniform matter.

  16. Pattern formation and filamentation in low temperature, magnetized plasmas - a numerical approach

    Science.gov (United States)

    Menati, Mohamad; Konopka, Uwe; Thomas, Edward

    2017-10-01

    In low-temperature discharges under the influence of a high magnetic field, pattern and filament formation in the plasma has been reported by different groups. The phenomena present themselves as bright plasma columns (filaments) oriented parallel to the magnetic field lines in the high magnetic field regime. The plasma structure can filament into different shapes, from single columns to spirals and bright rings when viewed from the top. In spite of the extensive experimental observations, the observed effects lack a detailed theoretical and numerical description. In an attempt to numerically explain the plasma filamentation, we present a simplified model for the plasma discharge and power deposition into the plasma. Based on the model, 2-D and 3-D codes are being developed that solve Poisson's equation along with the fluid equations to obtain a self-consistent description of the plasma. The model and preliminary results applied to specific plasma conditions will be presented. This work was supported by the US Dept. of Energy and NSF, DE-SC0016330, PHY-1613087.
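
    As a rough illustration of the numerical core mentioned above, the sketch below runs a bare-bones Jacobi iteration for Poisson's equation on a 2-D grid; the grid size, charge distribution and zero-Dirichlet boundaries are placeholders, and the authors' codes additionally couple this step to the fluid equations.

      import numpy as np

      def solve_poisson_jacobi(rho, h, n_iter=5000):
          """Jacobi iteration for the 2-D Poisson equation  laplacian(phi) = -rho.

          Dirichlet boundaries (phi = 0 on the edges) are assumed for simplicity;
          the right-hand side is evaluated before assignment, so each sweep uses
          the previous iterate everywhere (a true Jacobi step).
          """
          phi = np.zeros_like(rho)
          for _ in range(n_iter):
              phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                        phi[1:-1, 2:] + phi[1:-1, :-2] +
                                        h * h * rho[1:-1, 1:-1])
          return phi

      # toy example: a single localized charge column in the middle of a 65x65 grid
      rho = np.zeros((65, 65))
      rho[32, 32] = 1.0
      phi = solve_poisson_jacobi(rho, h=1.0)
      print(phi[32, 30:35])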

  17. A practical approach to temperature effects in dissociative electron attachment cross sections using local complex potential theory

    International Nuclear Information System (INIS)

    Sugioka, Yuji; Takayanagi, Toshiyuki

    2012-01-01

    Highlights: ► Dissociative electron attachment cross sections for polyatomic molecules are calculated by a simple theoretical approach. ► Temperature effects can be reasonably reproduced with the present model. ► All the degrees-of-freedom are taken into account in the present dynamics approach. -- Abstract: We propose a practical computational scheme to obtain temperature dependence of dissociative electron attachment cross sections to polyatomic molecules within a local complex potential theory formalism. First we perform quantum path-integral molecular dynamics simulations on the potential energy surface for the neutral molecule in order to sample initial nuclear configurations as well as momenta. Classical trajectories are subsequently integrated on the potential energy surface for the anionic state and survival probabilities are simultaneously calculated along the obtained trajectories. We have applied this simple scheme to dissociative electron attachment processes to H2O and CF3Cl, for which several previous studies are available from both the experimental and theoretical sides.

  18. A practical approach to temperature effects in dissociative electron attachment cross sections using local complex potential theory

    Energy Technology Data Exchange (ETDEWEB)

    Sugioka, Yuji [Department of Chemistry, Saitama University, 255 Shimo-Okubo, Sakura-ku, Saitama City, Saitama 338-8570 (Japan); Takayanagi, Toshiyuki, E-mail: tako@mail.saitama-u.ac.jp [Department of Chemistry, Saitama University, 255 Shimo-Okubo, Sakura-ku, Saitama City, Saitama 338-8570 (Japan)

    2012-09-11

    Highlights: ► Dissociative electron attachment cross sections for polyatomic molecules are calculated by a simple theoretical approach. ► Temperature effects can be reasonably reproduced with the present model. ► All the degrees-of-freedom are taken into account in the present dynamics approach. -- Abstract: We propose a practical computational scheme to obtain temperature dependence of dissociative electron attachment cross sections to polyatomic molecules within a local complex potential theory formalism. First we perform quantum path-integral molecular dynamics simulations on the potential energy surface for the neutral molecule in order to sample initial nuclear configurations as well as momenta. Classical trajectories are subsequently integrated on the potential energy surface for the anionic state and survival probabilities are simultaneously calculated along the obtained trajectories. We have applied this simple scheme to dissociative electron attachment processes to H2O and CF3Cl, for which several previous studies are available from both the experimental and theoretical sides.

  19. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  20. Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices

    Directory of Open Access Journals (Sweden)

    Luis L. Bonilla

    2016-07-01

    Full Text Available Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments, but its accuracy and range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time-periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the solutions produced by numerically solving both types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research avenues are discussed.

  1. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  2. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden failure of pumps. Determination of the maximum water hammer is one of the most important technical and economic considerations that engineers and designers of pumping stations and conveyance pipelines should take into account. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  3. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.
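
    A toy illustration of the "most frequent gene tree" idea, assuming each single-gene tree has already been reduced to a canonical topology string; the topology strings and the helper function below are hypothetical and do not reproduce the authors' pipeline.

      from collections import Counter

      # hypothetical canonical topology strings, one per orthologous gene tree
      gene_tree_topologies = [
          "((A,B),(C,D));",
          "((A,B),(C,D));",
          "((A,C),(B,D));",
          "((A,B),(C,D));",
          "((A,D),(B,C));",
      ]

      def maximum_gene_support_tree(topologies):
          """Return the most frequent topology and its support count."""
          counts = Counter(topologies)
          topology, support = counts.most_common(1)[0]
          return topology, support

      mgs_tree, support = maximum_gene_support_tree(gene_tree_topologies)
      print(mgs_tree, f"supported by {support} of {len(gene_tree_topologies)} gene trees")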

  4. Low-temperature abatement of toluene over Mn-Ce oxides catalysts synthesized by a modified hydrothermal approach

    Science.gov (United States)

    Du, Jinpeng; Qu, Zhenping; Dong, Cui; Song, Lixin; Qin, Yuan; Huang, Na

    2018-03-01

    Mn-Ce oxide catalysts were synthesized by a novel method combining redox precipitation and a hydrothermal approach. The results indicate that the ratio between manganese and cerium plays a crucial role in the formation of the catalysts, and the textural properties as well as the catalytic activity are markedly affected. Mn0.6Ce0.4O2 possesses a predominant catalytic activity in the oxidation of toluene: over 70% of toluene is converted at 200 °C, and the complete conversion temperature is 210 °C. The formation of a Mn-Ce solid solution markedly improves the surface area as well as the pore volume of the Mn-Ce oxide catalyst, and Mn0.6Ce0.4O2 possesses the largest surface area of 298.5 m2/g. The abundant Ce3+ and Mn3+ on the Mn0.6Ce0.4O2 catalyst facilitate the formation of oxygen vacancies and improve the transfer of oxygen in the catalyst. Meanwhile, it is found that cerium in the Mn-Ce oxide plays a key role in the adsorption of toluene, while manganese is proved to be crucial in its oxidation; the cooperation between manganese and cerium improves the catalytic reaction process. In addition, the reaction process was investigated by in situ DRIFT measurements, and it is found that the adsorbed toluene is oxidized to benzyl alcohol as the temperature rises to around 80-120 °C, which can be further oxidized to benzoic acid. The benzoic acid can then be decomposed to formate and/or carbonate species as the temperature rises further, forming CO2 and H2O. In addition, the phenol formed as a by-product can be further oxidized into CO2 and H2O when the temperature is high enough.

  5. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  6. Dynamical renormalization group approach to transport in ultrarelativistic plasmas: The electrical conductivity in high temperature QED

    International Nuclear Information System (INIS)

    Boyanovsky, Daniel; Vega, Hector J. de; Wang Shangyung

    2003-01-01

    The dc electrical conductivity of an ultrarelativistic QED plasma is studied in real time by implementing the dynamical renormalization group. The conductivity is obtained from the real-time dependence of a dissipative kernel closely related to the retarded photon polarization. Pinch singularities in the imaginary part of the polarization are manifest as secular terms that grow in time in the perturbative expansion of this kernel. The leading secular terms are studied explicitly and it is shown that they are insensitive to the anomalous damping of hard fermions as a result of a cancellation between self-energy and vertex corrections. The resummation of the secular terms via the dynamical renormalization group leads directly to a renormalization group equation in real time, which is the Boltzmann equation for the (gauge invariant) fermion distribution function. A direct correspondence between the perturbative expansion and the linearized Boltzmann equation is established, allowing a direct identification of the self-energy and vertex contributions to the collision term. We obtain a Fokker-Planck equation in momentum space that describes the dynamics of the departure from equilibrium to leading logarithmic order in the coupling. This equation determines that the transport time scale is given by t_tr = 24π/(e⁴ T ln(1/e)). The solution of the Fokker-Planck equation approaches asymptotically the steady-state solution as ∼exp(-t/(4.038... t_tr)). The steady-state solution leads to the conductivity σ = 15.698 T/(e² ln(1/e)) to leading logarithmic order. We discuss the contributions beyond leading logarithms as well as beyond the Boltzmann equation. The dynamical renormalization group provides a link between linear response in quantum field theory and kinetic theory

  7. Dependency of Delayed Hydride Crack Velocity on the Direction of an Approach to Test Temperatures in Zirconium Alloys

    International Nuclear Information System (INIS)

    Kim, Young Suk; Kim, Kang Soo; Im, Kyung Soo; Ahn, Sang Bok; Cheong, Yong Moo

    2005-01-01

    Recently, Kim proposed a new DHC model where a driving force for the DHC is a supersaturated hydrogen concentration as a result of a hysteresis of the terminal solid solubility (TSS) of hydrogen in zirconium alloys upon a heating and a cooling. This model was demonstrated to be valid through a model experiment where the prior plastic deformation facilitated nucleation of the reoriented hydrides, thus reducing the supersaturated hydrogen concentration at the plastic zone ahead of the crack tip and causing hydrogen to move to the crack tip from the bulk region. Thus, an approach to the test temperature by a cooling is required to create a supersaturation of hydrogen, which is a driving force for the DHC of zirconium alloys. However, despite the absence of the supersaturation of hydrogen due to an approach to the test temperature by a heating, DHC is observed to occur in zirconium alloys at test temperatures below 180 °C. As to this DHC phenomenon, Kim proposed that stress-induced transformation from γ-hydrides to δ-hydrides is likely to be the cause, based on Root's observation that the γ-hydride is a stable phase at temperatures lower than 180 °C. In other words, the hydrides formed at the crack tip would be δ-hydrides due to the stress-induced transformation while the bulk region still maintains the initial hydride phase or γ-hydrides. It should be noted that Ambler has also assumed the crack tip hydrides to be δ-hydrides. When the δ-hydrides or ZrH1.66 are precipitated at the crack tip due to the transformation of the γ-hydrides or ZrH, the crack tip will have a decreased concentration of dissolved hydrogen in zirconium, considering the atomic ratio of hydrogen and zirconium in the γ- and δ-hydrides. In contrast, due to no stress-induced transformation of hydrides, the bulk region maintains the initial concentration of dissolved hydrogen. Hence, there develops a difference in the hydrogen concentration, or ΔC, between the bulk and the

  8. Dependency of Delayed Hydride Crack Velocity on the Direction of an Approach to Test Temperatures in Zirconium Alloys

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Suk; Kim, Kang Soo; Im, Kyung Soo; Ahn, Sang Bok; Cheong, Yong Moo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    Recently, Kim proposed a new DHC model where a driving force for the DHC is a supersaturated hydrogen concentration as a result of a hysteresis of the terminal solid solubility (TSS) of hydrogen in zirconium alloys upon a heating and a cooling. This model was demonstrated to be valid through a model experiment where the prior plastic deformation facilitated nucleation of the reoriented hydrides, thus reducing the supersaturated hydrogen concentration at the plastic zone ahead of the crack tip and causing hydrogen to move to the crack tip from the bulk region. Thus, an approach to the test temperature by a cooling is required to create a supersaturation of hydrogen, which is a driving force for the DHC of zirconium alloys. However, despite the absence of the supersaturation of hydrogen due to an approach to the test temperature by a heating, DHC is observed to occur in zirconium alloys at test temperatures below 180 °C. As to this DHC phenomenon, Kim proposed that stress-induced transformation from γ-hydrides to δ-hydrides is likely to be the cause, based on Root's observation that the γ-hydride is a stable phase at temperatures lower than 180 °C. In other words, the hydrides formed at the crack tip would be δ-hydrides due to the stress-induced transformation while the bulk region still maintains the initial hydride phase or γ-hydrides. It should be noted that Ambler has also assumed the crack tip hydrides to be δ-hydrides. When the δ-hydrides or ZrH1.66 are precipitated at the crack tip due to the transformation of the γ-hydrides or ZrH, the crack tip will have a decreased concentration of dissolved hydrogen in zirconium, considering the atomic ratio of hydrogen and zirconium in the γ- and δ-hydrides. In contrast, due to no stress-induced transformation of hydrides, the bulk region maintains the initial concentration of dissolved hydrogen. Hence, there develops a difference in the

  9. Future Temperatures and Precipitations in the Arid Northern-Central Chile: A Multi-Model Downscaling Approach

    Science.gov (United States)

    Souvignet, M.; Heinrich, J.

    2010-03-01

    for maximum, minimum temperature and precipitation in the research area based on four different General Circulation Models (GCMs). On the one hand, the Statistical Downscaling Model (SDSM) was used. This model is based on a multiple linear regression method and is best described as a hybrid of the stochastic weather generator and transfer function methods. One common advantage of statistical downscaling is that it ensures the maintenance of local spatial and temporal variability in generating realistic data time series. On the other hand, and for comparison purposes, the Change Factor method was used. This methodology is relatively straightforward and ideal for rapid climate change assessment. The outputs of the HadCM3, CGCM3.1, GFDL-CM2 and MRI-CGCM2.3.2 A1 and B2 scenarios were downscaled with both methodologies and thereafter compared by means of several hydro-meteorological indices for a 55-year period (2045-2099). Preliminary results indicate that local temperatures are expected to rise in the region, whereas precipitation may decrease. However, minimum and maximum temperatures might increase at a faster rate in higher altitude areas. In addition, the Cordillera mountain range may encounter longer winters with a dramatic decrease of icing days (Tmaxrate. Results indicate potential strong inter-seasonal and inter-annual perturbations in rainfall in the region. Consequently, the Norte Chico will possibly see its streamflow strongly impacted, with a resulting high variability at the seasonal and inter-annual level. A probabilistic analysis of the projections of the four GCMs provided a better representation of the uncertainties linked with downscaled scenarios. Whereas maximum and minimum temperatures were accurately simulated by both downscaling methods, precipitation simulations returned weaker results. SDSM proved to have a poor ability to simulate extreme rainfall events and few conclusions could be drawn with respect to future occurrences of ENSO phenomena.
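
    A minimal sketch of the Change Factor method mentioned above, assuming monthly mean GCM output for a baseline and a future period plus an observed station climatology; the arrays and the additive-versus-multiplicative convention shown are illustrative only.

      import numpy as np

      def change_factor(obs, gcm_baseline, gcm_future, variable="temperature"):
          """Apply monthly change factors from a GCM to an observed climatology.

          Temperatures use additive factors (future - baseline); precipitation uses
          multiplicative factors (future / baseline), a common convention.
          """
          obs = np.asarray(obs, dtype=float)              # 12 observed monthly means
          delta_add = gcm_future - gcm_baseline           # additive change factors
          delta_mul = np.divide(gcm_future, gcm_baseline,
                                out=np.ones_like(gcm_future), where=gcm_baseline != 0)
          return obs + delta_add if variable == "temperature" else obs * delta_mul

      # toy monthly example (12 values each)
      obs_tmax = np.linspace(18, 30, 12)                  # observed monthly Tmax, deg C
      gcm_base = np.linspace(17, 29, 12)
      gcm_fut = gcm_base + 2.5                            # GCM projects +2.5 deg C warming
      print(change_factor(obs_tmax, gcm_base, gcm_fut))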

  10. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    Science.gov (United States)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events in terms of surface air temperature and precipitation with an annual reoccurrence period. In order to represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) had worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) had produced public domain datasets of such indices for data from a variety of sources, including output from global climate models (GCM) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that may fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices, and could be more or less biased by the adopted algorithm. Such biases will in turn be propagated to the final results of the indices. The CLIMDEX project used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, there are still some problems remaining in the CLIMDEX datasets, namely the overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases and inconsistency between the algorithms applied to the ER indices and to the duration indices. We now present new results for the six indices through a semiparametric quantile regression approach for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addressed the aforementioned issues and came out with consistent results. The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP

  11. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm originally suggested by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  12. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  13. Estimating minimum and maximum air temperature using MODIS ...

    Indian Academy of Sciences (India)

    in a wide range of applications in areas of ecology, hydrology ... stations, thus attracting researchers to make use ... simpler because of the lack of solar radiation effect .... water from the snow packed Himalayan region to ... tribution System (LAADS) webdata archive cen- ..... ing due to greenhouse gases is different for the air.

  14. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  15. Evolution of Western Mediterranean Sea Surface Temperature between 1985 and 2005: a complementary study in situ, satellite and modelling approaches

    Science.gov (United States)

    Troupin, C.; Lenartz, F.; Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Ouberdous, M.; Beckers, J.-M.

    2009-04-01

    In order to evaluate the variability of the sea surface temperature (SST) in the Western Mediterranean Sea between 1985 and 2005, an integrated approach combining geostatistical tools and modelling techniques has been set up. The objectives are: to underline the capability of each tool to capture characteristic phenomena, to compare and assess the quality of their outputs, and to infer an interannual trend from the results. Diva (Data Interpolating Variational Analysis, Brasseur et al. (1996) Deep-Sea Res.) was applied to a collection of in situ data gathered from various sources (World Ocean Database 2005, Hydrobase2, Coriolis and MedAtlas2), from which duplicates and suspect values were removed. This provided monthly gridded fields in the region of interest. Heterogeneous temporal data coverage was taken into account by computing and removing the annual trend, provided by the Diva detrending tool. A heterogeneous correlation length was applied through an advection constraint. The statistical technique DINEOF (Data Interpolation with Empirical Orthogonal Functions, Alvera-Azc

  16. Quantum inelastic electron-vibration scattering in molecular wires: Landauer-like versus Green's function approaches and temperature effects

    International Nuclear Information System (INIS)

    Ness, H

    2006-01-01

    In this paper, we consider the problem of inelastic electron transport in molecular systems in which both electronic and vibrational degrees of freedom are considered on the quantum level. The electronic transport properties of the corresponding molecular nanojunctions are obtained by means of a non-perturbative Landauer-like multi-channel inelastic scattering technique. The connections between this approach and other Green's function techniques that are useful in particular cases are studied in detail. The validity of the wide-band approximation, the effects of the lead self-energy and the dynamical polaron shift are also studied for a wide range of parameters. As a practical application of the method, we consider the effects of the temperature on the conductance properties of molecular breakjunctions in relation to recent experiments

  17. A Splash to Nano-Sized Inorganic Energy-Materials by the Low-Temperature Molecular Precursor Approach.

    Science.gov (United States)

    Driess, Matthias; Panda, Chakadola; Menezes, Prashanth Wilfried

    2018-05-07

    The low-temperature synthesis of inorganic materials and their interfaces at the atomic and molecular level provides numerous opportunities for the design and improvement of inorganic materials in heterogeneous catalysis for sustainable chemical energy conversion and other energy-saving areas. Using suitable molecular precursors for functional inorganic nanomaterial synthesis allows facile control over particle size distribution and stoichiometry, and leads to the desired chemical and physical properties. This minireview outlines some advantages of the molecular precursor approach in light of selected recent developments in molecule-to-nanomaterial synthesis for renewable energy applications, relevant for the oxygen evolution reaction (OER), the hydrogen evolution reaction (HER) and overall water splitting. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
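
    As a worked illustration of the deterministic bound described first above, the sketch below converts a net injected volume into a maximum seismic moment (M0 = G·ΔV) and a moment magnitude via the standard Hanks-Kanamori relation; the shear modulus and injected volume are placeholder values, not figures from the abstract.

      import math

      def mcgarr_max_magnitude(shear_modulus_pa, injected_volume_m3):
          """Deterministic upper bound on induced-earthquake size (McGarr-style).

          The maximum seismic moment M0 (N*m) is taken as shear modulus times net
          injected volume; Mw then follows the Hanks-Kanamori relation.
          """
          m0 = shear_modulus_pa * injected_volume_m3
          mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
          return m0, mw

      # placeholder values: G = 30 GPa, net injected volume = 100,000 m^3
      m0, mw = mcgarr_max_magnitude(30e9, 1.0e5)
      print(f"M0 = {m0:.3e} N*m, Mw = {mw:.2f}")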

  19. Geographic coordinates in the ten-day maximum and mean air temperature estimation in the State of Rio Grande do Sul, Brazil

    Directory of Open Access Journals (Sweden)

    Alberto Cargnelutti Filho

    2008-12-01

    Full Text Available The objective of this research was to estimate the ten-day maximum (Tx) and mean (Tm) air temperature using altitude and the geographic coordinates latitude and longitude for the State of Rio Grande do Sul, Brazil. Normal ten-day maximum and mean air temperatures of 41 counties in the State of Rio Grande do Sul, from 1945 to 1974, were used. Correlation analysis and parameter estimation of multiple linear regression equations were performed for each of the 36 ten-day periods of the year, using Tx and Tm as dependent variables and altitude, latitude and longitude as independent variables. The models were validated with Pearson's linear correlation coefficient between estimated and observed Tx and Tm, calculated for ten counties of the State using the 1975 to 2004 series of meteorological observations as an independent data set. The ten-day maximum and mean air temperature may be estimated from the altitude and the geographic coordinates latitude and longitude at any location and in any ten-day period in the State of Rio Grande do Sul.

  20. Chemometric evaluation of temperature-dependent surface-enhanced Raman spectra of riboflavin: What is the best multivariate approach to describe the effect of temperature?

    Science.gov (United States)

    Kokaislová, Alžběta; Kalhousová, Milena; Gráfová, Michaela; Matějka, Pavel

    2014-10-01

    Riboflavin is an essential nutrient involved in energetic metabolism. It is used as a pharmacologically active substance in the treatment of several diseases. From an analytical point of view, riboflavin can be used as an active part of sensors for substances with affinity to riboflavin molecules. In biological environments, metal substrates coated with riboflavin are exposed to temperatures that are different from room temperature. Hence, it is important to describe the influence of temperature on adsorbed molecules of riboflavin, especially on the orientation of the molecules towards the metal surface and on the stability of the adsorbed molecular layer. Surface-enhanced Raman scattering (SERS) spectroscopy is a useful tool for investigation of the architecture of molecular layers adsorbed on metal surfaces because the spectral features in SERS spectra change with varying orientation of molecules towards the metal surface, as well as with changes in mutual interactions among adsorbed molecules. In this study, riboflavin was adsorbed on electrochemically prepared massive silver substrates that were exposed to temperature changes according to four different temperature programs. Raman spectra measured at different temperatures were compared considering positions of spectral bands, their intensities, bandwidths and the variability of all these parameters. It was found that an increase of the substrate temperature up to 50 °C does not lead to any observable decomposition of riboflavin molecules, but changes of band intensity ratios within individual spectra are apparent. To distinguish sources of variability besides changes in band intensities and widths, Principal Component Analysis (PCA) was applied. Discriminant Analysis (DA) was used to explore whether the SERS spectra can be separated according to temperature. The results of Partial Least Squares (PLS) regression demonstrate the possibility to predict the sample temperature using SERS spectral features. Results of all performed experiments and

  1. LEAST SQUARE APPROACH FOR ESTIMATING OF LAND SURFACE TEMPERATURE FROM LANDSAT-8 SATELLITE DATA USING RADIATIVE TRANSFER EQUATION

    Directory of Open Access Journals (Sweden)

    Y. Jouybari-Moghaddam

    2017-09-01

    Full Text Available Land Surface Temperature (LST) is one of the significant variables measured by remotely sensed data, and it is applied in many environmental and Geoscience studies. The main aim of this study is to develop an algorithm to retrieve the LST from Landsat-8 satellite data using the Radiative Transfer Equation (RTE). LST can in principle be retrieved from the RTE, but since the RTE contains two unknown parameters, LST and surface emissivity, estimating LST from the RTE alone is an underdetermined problem. In this study, in order to solve this problem, an approach is proposed in which an equation set includes two RTEs based on the Landsat-8 thermal bands (i.e., bands 10 and 11) and two additional equations based on the relation between the Normalized Difference Vegetation Index (NDVI) and the emissivity of the Landsat-8 thermal bands, using simulated data for the Landsat-8 bands. The iterative least squares approach was used for solving the equation set. The LST derived from the proposed algorithm is evaluated using a simulated dataset built with MODTRAN. The result shows the Root Mean Squared Error (RMSE) is less than 1.18 K. Therefore, the proposed algorithm can be a suitable and robust method to retrieve the LST from Landsat-8 satellite data.

  2. Least Square Approach for Estimating of Land Surface Temperature from LANDSAT-8 Satellite Data Using Radiative Transfer Equation

    Science.gov (United States)

    Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.

    2017-09-01

    Land Surface Temperature (LST) is one of the significant variables measured by remotely sensed data, and it is applied in many environmental and Geoscience studies. The main aim of this study is to develop an algorithm to retrieve the LST from Landsat-8 satellite data using the Radiative Transfer Equation (RTE). LST can in principle be retrieved from the RTE, but since the RTE contains two unknown parameters, LST and surface emissivity, estimating LST from the RTE alone is an underdetermined problem. In this study, in order to solve this problem, an approach is proposed in which an equation set includes two RTEs based on the Landsat-8 thermal bands (i.e., bands 10 and 11) and two additional equations based on the relation between the Normalized Difference Vegetation Index (NDVI) and the emissivity of the Landsat-8 thermal bands, using simulated data for the Landsat-8 bands. The iterative least squares approach was used for solving the equation set. The LST derived from the proposed algorithm is evaluated using a simulated dataset built with MODTRAN. The result shows the Root Mean Squared Error (RMSE) is less than 1.18 K. Therefore, the proposed algorithm can be a suitable and robust method to retrieve the LST from Landsat-8 satellite data.
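
    A heavily simplified sketch of the iterative least-squares idea, assuming observed radiances for the two thermal bands and a linear NDVI-emissivity relation; the downwelling radiances, emissivity coefficients and the reduced radiative-transfer surrogate are invented stand-ins for the paper's full RTE formulation, scipy's general-purpose solver replaces the authors' iteration, and the K1/K2 values are nominal Landsat-8 TIRS constants of the kind found in scene metadata.

      import numpy as np
      from scipy.optimize import least_squares

      # nominal Landsat-8 TIRS thermal constants (band 10, band 11)
      K1 = {"b10": 774.8853, "b11": 480.8883}
      K2 = {"b10": 1321.0789, "b11": 1201.1442}

      def planck_radiance(T, band):
          """Band-effective Planck radiance (inverse of the brightness-temperature formula)."""
          return K1[band] / (np.exp(K2[band] / T) - 1.0)

      def residuals(x, L_obs, L_down, ndvi, emis_coeff):
          """Equation set: two simplified RTEs plus two NDVI-emissivity relations."""
          T, e10, e11 = x
          r1 = e10 * planck_radiance(T, "b10") + (1 - e10) * L_down["b10"] - L_obs["b10"]
          r2 = e11 * planck_radiance(T, "b11") + (1 - e11) * L_down["b11"] - L_obs["b11"]
          a, b = emis_coeff                        # hypothetical linear NDVI-emissivity relation
          r3 = e10 - (a + b * ndvi)
          r4 = e11 - (a + b * ndvi)
          return [r1, r2, r3, r4]

      # toy "observation": synthesize radiances for a known surface state
      ndvi, emis_coeff = 0.45, (0.95, 0.04)
      true_T, true_e = 305.0, 0.95 + 0.04 * 0.45
      L_down = {"b10": 2.0, "b11": 1.7}            # placeholder downwelling radiances
      L_obs = {band: true_e * planck_radiance(true_T, band) + (1 - true_e) * L_down[band]
               for band in ("b10", "b11")}

      fit = least_squares(residuals, x0=[290.0, 0.97, 0.97],
                          bounds=([250.0, 0.9, 0.9], [340.0, 1.0, 1.0]),
                          args=(L_obs, L_down, ndvi, emis_coeff))
      print("LST = %.2f K, e10 = %.3f, e11 = %.3f" % tuple(fit.x))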

  3. Effect of water levels, soil covers and environment on maximum soil temperature in strawberry crop in the field and in a greenhouse

    Directory of Open Access Journals (Sweden)

    Regina C. de M. Pires

    2004-12-01

    Full Text Available Soil temperature is an important parameter in strawberry cropping because it affects vegetative development, plant health and yield. The aim of this work was to evaluate the effect of different water levels and bed covers, in field conditions and in a greenhouse, on the maximum soil temperature in strawberry crop. Two experiments were carried out, one in a greenhouse and the other in the open field, at Atibaia - SP, Brazil, in a 2 x 3 factorial scheme (soil covers and irrigation levels) in randomized blocks with five replications. The soil covers were black and clear polyethylene films. Trickle (drip) irrigation was applied whenever the soil water potential, measured by tensiometers installed at 10 cm depth, reached -0.010 (N1), -0.035 (N2) or -0.070 (N3) MPa. Soil temperature was recorded by thermographs with sensors installed at 5 cm depth. The cropping environment, the soil cover and the irrigation levels all influenced the maximum soil temperature. The soil temperature under the different covers depended not only on the physical characteristics of the plastic but also on how it was installed on the bed. The maximum soil temperature increased as the soil water potential at the moment of irrigation decreased.

  4. On the maximum Q in feedback controlled subignited plasmas

    International Nuclear Information System (INIS)

    Anderson, D.; Hamnen, H.; Lisak, M.

    1990-01-01

    High Q operation in a feedback-controlled subignited fusion plasma requires the operating temperature to be close to the ignition temperature. In the present work we discuss technological and physical effects which may restrict this temperature difference. The investigation is based on a simplified, but still accurate, 0-D analytical analysis of the maximum Q of a subignited system. Particular emphasis is given to sawtooth oscillations, which complicate the interpretation of diagnostic neutron emission data in terms of plasma temperatures and may imply an inherent lower bound on the temperature deviation from the ignition point. The estimated maximum Q is found to be marginal (Q = 10-20) from the point of view of a fusion reactor. (authors)

  5. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  6. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  7. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  8. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield can be disassembled into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  9. Using a network-based approach and targeted maximum likelihood estimation to evaluate the effect of adding pre-exposure prophylaxis to an ongoing test-and-treat trial.

    Science.gov (United States)

    Balzer, Laura; Staples, Patrick; Onnela, Jukka-Pekka; DeGruttola, Victor

    2017-04-01

    Several cluster-randomized trials are underway to investigate the implementation and effectiveness of a universal test-and-treat strategy on the HIV epidemic in sub-Saharan Africa. We consider nesting studies of pre-exposure prophylaxis within these trials. Pre-exposure prophylaxis is a general strategy where high-risk HIV- persons take antiretrovirals daily to reduce their risk of infection from exposure to HIV. We address how to target pre-exposure prophylaxis to high-risk groups and how to maximize power to detect the individual and combined effects of universal test-and-treat and pre-exposure prophylaxis strategies. We simulated 1000 trials, each consisting of 32 villages with 200 individuals per village. At baseline, we randomized the universal test-and-treat strategy. Then, after 3 years of follow-up, we considered four strategies for targeting pre-exposure prophylaxis: (1) all HIV- individuals who self-identify as high risk, (2) all HIV- individuals who are identified by their HIV+ partner (serodiscordant couples), (3) highly connected HIV- individuals, and (4) the HIV- contacts of a newly diagnosed HIV+ individual (a ring-based strategy). We explored two possible trial designs, and all villages were followed for a total of 7 years. For each village in a trial, we used a stochastic block model to generate bipartite (male-female) networks and simulated an agent-based epidemic process on these networks. We estimated the individual and combined intervention effects with a novel targeted maximum likelihood estimator, which used cross-validation to data-adaptively select from a pre-specified library the candidate estimator that maximized the efficiency of the analysis. The universal test-and-treat strategy reduced the 3-year cumulative HIV incidence by 4.0% on average. The impact of each pre-exposure prophylaxis strategy on the 4-year cumulative HIV incidence varied by the coverage of the universal test-and-treat strategy with lower coverage resulting in a larger
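
    A compact sketch of drawing a bipartite (male-female) network from a stochastic block model, of the kind used to simulate each village above; the block sizes and connection probabilities are invented, and the epidemic process and targeted maximum likelihood estimation steps are not shown.

      import numpy as np

      rng = np.random.default_rng(0)

      def bipartite_sbm(n_male, n_female, block_prob, blocks_male, blocks_female):
          """Return a binary n_male x n_female adjacency matrix drawn from a bipartite
          stochastic block model: the edge probability depends only on the block
          memberships of the two endpoints."""
          p = block_prob[np.ix_(blocks_male, blocks_female)]   # per-pair edge probabilities
          return (rng.random((n_male, n_female)) < p).astype(int)

      # two blocks on each side ("high activity" = 0, "low activity" = 1)
      block_prob = np.array([[0.10, 0.02],
                             [0.02, 0.005]])
      blocks_male = rng.integers(0, 2, size=100)
      blocks_female = rng.integers(0, 2, size=100)
      adj = bipartite_sbm(100, 100, block_prob, blocks_male, blocks_female)
      print("partnerships:", adj.sum())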

  10. A unique approach to demonstrating that apical bud temperature specifically determines leaf initiation rate in the dicot Cucumis sativus

    NARCIS (Netherlands)

    Savvides, Andreas; Dieleman, Anja; Ieperen, van Wim; Marcelis, Leo F.M.

    2016-01-01

    Main conclusion: Leaf initiation rate is largely determined by the apical bud temperature even when apical bud temperature largely deviates from the temperature of other plant organs. We have long known that the rate of leaf initiation (LIR) is highly sensitive to temperature, but previous studies

  11. Non-invasive Estimation of Temperature during Physiotherapeutic Ultrasound Application Using the Average Gray-Level Content of B-Mode Images: A Metrological Approach.

    Science.gov (United States)

    Alvarenga, André V; Wilkens, Volker; Georg, Olga; Costa-Félix, Rodrigo P B

    2017-09-01

    Healing therapies that make use of ultrasound are based on raising the temperature in biological tissue. However, it is not possible to heal impaired tissue simply by applying a high dose of ultrasound. The temperature of the tissue is ultimately the physical quantity that has to be assessed to minimize the risk of undesired injury. Invasive temperature measurement techniques are easy to use, despite the fact that they are detrimental to human well-being. Another approach to assessing a rise in tissue temperature is to derive the material's general response to temperature variations from ultrasonic parameters. In this article, a method for evaluating temperature variations is described. The method is based on the analytical study of an ultrasonic image, in which gray-level variations are correlated to the temperature variations in a tissue-mimicking material. The physical assumption is that temperature variations induce wave propagation changes that modify the backscattered ultrasound signal, which are expressed in the ultrasonographic images. For a temperature variation of about 15°C, the expanded uncertainty for a coverage probability of 0.95 was found to be 2.5°C in the heating regime and 1.9°C in the cooling regime. It is possible to use the model proposed in this article in a straightforward manner to monitor temperature variation during a physiotherapeutic ultrasound application, provided the tissue-mimicking material approach is transferred to actual biological tissue. The novelty of such an approach resides in the metrology-based investigation outlined here, as well as in its ease of reproducibility. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
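
    A minimal sketch of the underlying idea, not the authors' validated procedure: average the gray level inside a region of interest of successive B-mode frames and map its change to a temperature change through a phantom-derived calibration slope. The linear relation and the coefficient k_cal below are assumptions for illustration only.

        import numpy as np

        def roi_mean_gray(frame, rows, cols):
            """Average gray level inside a rectangular region of interest."""
            r0, r1 = rows
            c0, c1 = cols
            return float(frame[r0:r1, c0:c1].mean())

        def delta_temperature(frames, rows, cols, k_cal):
            """Temperature change relative to the first (baseline) frame.

            k_cal is a phantom-derived calibration slope in deg C per gray level;
            a linear relation is assumed here purely for illustration.
            """
            baseline = roi_mean_gray(frames[0], rows, cols)
            return np.array([k_cal * (roi_mean_gray(f, rows, cols) - baseline)
                             for f in frames])

        # Synthetic 8-bit B-mode frames standing in for real data.
        rng = np.random.default_rng(1)
        frames = [np.clip(rng.normal(100 + 2 * i, 5, (128, 128)), 0, 255) for i in range(5)]
        print(delta_temperature(frames, (40, 80), (40, 80), k_cal=0.5))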

  12. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.

  13. Nanoscale multiphase phase field approach for stress- and temperature-induced martensitic phase transformations with interfacial stresses at finite strains

    Science.gov (United States)

    Basak, Anup; Levitas, Valery I.

    2018-04-01

    A thermodynamically consistent, novel multiphase phase field approach for stress- and temperature-induced martensitic phase transformations at finite strains and with interfacial stresses has been developed. The model considers a single order parameter to describe the austenite↔martensite transformations, and another N order parameters describing the N variants, constrained to a plane in an N-dimensional order parameter space. In the free energy model, the coexistence of three or more phases at a single material point (multiphase junction) and the deviation of each variant-variant transformation path from a straight line are penalized. Some shortcomings of the existing models are resolved. Three different kinematic models (KMs) for the transformation deformation gradient tensor are assumed: (i) in KM-I the transformation deformation gradient tensor is a linear function of the Bain tensors for the variants; (ii) in KM-II the natural logarithm of the transformation deformation gradient is taken as a linear combination of the natural logarithms of the Bain tensors multiplied by the interpolation functions; (iii) in KM-III it is derived using the twinning equation from the crystallographic theory. The instability criteria for all the phase transformations have been derived for all the kinematic models, and their comparative study is presented. A large strain finite element procedure has been developed and used for studying the evolution of some complex microstructures in nanoscale samples under various loading conditions. Also, the stresses within variant-variant boundaries, the sample size effect, the effect of penalizing the triple junctions, and twinned microstructures have been studied. The present approach can be extended to the study of grain growth, solidification, para↔ferroelectric transformations, and diffusive phase transformations.
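
    For orientation only, the sketch below contrasts KM-I (a linear mixture of Bain tensors) with KM-II (a mixture taken in the space of matrix logarithms); the Bain stretches and interpolation weights are placeholders, not material data, and the finite element machinery of the paper is not reproduced.

        import numpy as np
        from scipy.linalg import expm, logm

        # Illustrative Bain (transformation stretch) tensors for two tetragonal
        # variants; the stretch values are placeholders, not a real material.
        U1 = np.diag([1.10, 0.95, 0.95])
        U2 = np.diag([0.95, 1.10, 0.95])
        bains = [U1, U2]

        def km1(weights, bains):
            """KM-I: transformation deformation gradient as a linear mixture."""
            return sum(w * U for w, U in zip(weights, bains))

        def km2(weights, bains):
            """KM-II: mixture taken in the space of matrix logarithms."""
            L = sum(w * logm(U) for w, U in zip(weights, bains))
            return np.real(expm(L))          # imaginary round-off discarded

        w = [0.3, 0.7]          # interpolation-function values for the two variants
        Ft_km1 = km1(w, bains)
        Ft_km2 = km2(w, bains)
        print("det KM-I :", np.linalg.det(Ft_km1))
        print("det KM-II:", np.linalg.det(Ft_km2))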

  14. Local approach: fracture at high temperature in an austenitic stainless steel submitted to thermomechanical loadings. Calculations and experimental validations

    International Nuclear Information System (INIS)

    Poquillon, D.

    1997-10-01

    Usually, for the integrity assessment of defective components, well established rules are used: the global approach to fracture. A more fundamental way to deal with these problems is based on the local approach to fracture. In this study, we choose this way and perform numerical simulations of intergranular crack initiation and intergranular crack propagation. This type of damage can be found in components of fast breeder reactors made of 316 L austenitic stainless steel which operate at high temperatures. This study deals with methods partly coupling the behaviour and the damage for crack growth in specimens submitted to various thermomechanical loadings. A new numerical method based on finite element computations and a damage model relying on quantitative observations of grain boundary damage is proposed. Numerical results of crack initiation and growth are compared with a number of experimental data obtained in previous studies. Creep and creep-fatigue crack growth are studied. Various specimen geometries are considered: Compact Tension specimens and axisymmetric notched bars tested under isothermal (600 deg C) conditions, and tubular structures containing a circumferential notch tested under thermal shock. Adaptive re-meshing and/or node release techniques are used and compared. In order to broaden our knowledge of stress triaxiality effects on creep intergranular damage, new experiments are defined and conducted on sharply notched tubular specimens in torsion. These isothermal (600 deg C) Mode II creep tests reveal severe intergranular damage and creep crack initiation. Calculated damage fields at the crack tip are compared with the experimental observations. The good agreement between calculations and experimental data shows that the damage criterion used can improve the accuracy of life prediction of components submitted to intergranular creep damage. (author)

  15. Conformational temperature-dependent behavior of a histone H2AX: a coarse-grained Monte Carlo approach via knowledge-based interaction potentials.

    Directory of Open Access Journals (Sweden)

    Miriam Fritsche

    Full Text Available Histone proteins are not only important due to their vital role in cellular processes such as DNA compaction, replication and repair but also show intriguing structural properties that might be exploited for bioengineering purposes such as the development of nano-materials. Based on their biological and technological implications, it is interesting to investigate the structural properties of proteins as a function of temperature. In this work, we study the spatial response dynamics of the histone H2AX, consisting of 143 residues, by a coarse-grained bond fluctuation model for a broad range of normalized temperatures. A knowledge-based interaction matrix is used as input for the residue-residue Lennard-Jones potential. We find a variety of equilibrium structures including global globular configurations at low normalized temperature (T* = 0.014), combinations of segmental globules and elongated chains (T* = 0.016, 0.017), predominantly elongated chains (T* = 0.019, 0.020), as well as universal SAW conformations at high normalized temperature (T* ≥ 0.023). The radius of gyration of the protein exhibits a non-monotonic temperature dependence with a maximum at a characteristic temperature (Tc* = 0.019), where a crossover occurs, on increasing T*, from a positive (stretching, T* ≤ Tc*) to a negative (contraction, T* ≥ Tc*) thermal response.
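
    The two quantities discussed above, the radius of gyration and a residue-residue Lennard-Jones energy weighted by a knowledge-based interaction matrix, can be sketched as follows; the random-walk coordinates and the random symmetric matrix stand in for the lattice conformations and the published contact potentials, so the numbers are purely illustrative.

        import numpy as np

        def radius_of_gyration(coords):
            """Rg of a coarse-grained chain (one bead per residue)."""
            center = coords.mean(axis=0)
            return np.sqrt(((coords - center) ** 2).sum(axis=1).mean())

        def lj_energy(coords, eps, sigma=1.0, cutoff=2.5):
            """Pairwise 12-6 Lennard-Jones energy with residue-specific depths eps[i, j]."""
            n = len(coords)
            e = 0.0
            for i in range(n - 2):                 # skip bonded neighbours
                d = np.linalg.norm(coords[i + 2:] - coords[i], axis=1)
                mask = d < cutoff
                sr6 = (sigma / d[mask]) ** 6
                e += np.sum(4.0 * eps[i, i + 2:][mask] * (sr6 ** 2 - sr6))
            return e

        rng = np.random.default_rng(2)
        n_res = 143                                # chain length of histone H2AX
        coords = np.cumsum(rng.normal(size=(n_res, 3)), axis=0)   # toy random-walk chain
        eps = rng.uniform(0.5, 1.5, (n_res, n_res))
        eps = (eps + eps.T) / 2                    # symmetric "knowledge-based" matrix
        print("Rg =", radius_of_gyration(coords), " E_LJ =", lj_energy(coords, eps))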

  16. Genetic algorithms optimized fuzzy logic control for the maximum power point tracking in photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Larbes, C.; Ait Cheikh, S.M.; Obeidi, T.; Zerguerras, A. [Laboratoire des Dispositifs de Communication et de Conversion Photovoltaique, Departement d' Electronique, Ecole Nationale Polytechnique, 10, Avenue Hassen Badi, El Harrach, Alger 16200 (Algeria)

    2009-10-15

    This paper presents an intelligent control method for the maximum power point tracking (MPPT) of a photovoltaic system under variable temperature and irradiance conditions. First, for the purpose of comparison and because of its proven good performance, the perturbation and observation (P and O) technique is briefly introduced. A fuzzy logic controller based MPPT (FLC) is then proposed, which shows better performance than the P and O based approach. The proposed FLC has also been improved using genetic algorithms (GA) for optimisation. The different development stages are presented, and the optimized fuzzy logic MPPT controller (OFLC) is then simulated and evaluated, showing further improved performance. (author)

  17. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived based on conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
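
    The paper's dependence formulation removes explicit constraints, so the sketch below shows only the standard constrained entropy-maximizing (doubly constrained gravity) trip distribution that it generalizes, solved by iterative proportional balancing; origins, destinations and costs are made-up values.

        import numpy as np

        def maxent_trip_distribution(origins, destinations, cost, beta, n_iter=200):
            """Doubly constrained entropy-maximizing model:
            T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij),
            with the balancing factors A_i, B_j found iteratively."""
            A = np.ones(len(origins))
            B = np.ones(len(destinations))
            F = np.exp(-beta * cost)                       # deterrence matrix
            for _ in range(n_iter):
                A = 1.0 / (F @ (B * destinations))
                B = 1.0 / (F.T @ (A * origins))
            return (A * origins)[:, None] * (B * destinations)[None, :] * F

        O = np.array([1000.0, 500.0])                      # trips produced per origin
        D = np.array([800.0, 700.0])                       # trips attracted per destination
        c = np.array([[1.0, 3.0],
                      [2.0, 1.5]])                         # travel cost (e.g., hours)
        T = maxent_trip_distribution(O, D, c, beta=0.8)
        print(T.round(1), T.sum(axis=1), T.sum(axis=0))    # row sums ~ O, column sums ~ D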

  18. Combined analysis of steady state and transient transport by the maximum entropy method

    Energy Technology Data Exchange (ETDEWEB)

    Giannone, L.; Stroth, U.; Koellermeyer, J. [Association Euratom-Max-Planck-Institut fuer Plasmaphysik, Garching (Germany)]; and others

    1996-04-01

    A new maximum entropy approach has been applied to analyse three types of transient transport experiments. For sawtooth propagation experiments in the ASDEX Upgrade and ECRH power modulation and power-switching experiments in the Wendelstein 7-AS Stellarator, either the time evolution of the temperature perturbation or the phase and amplitude of the modulated temperature perturbation are used as non-linear constraints to the χe profile to be fitted. Simultaneously, the constraints given by the equilibrium temperature profile for steady-state power balance are fitted. In the maximum entropy formulation, the flattest χe profile consistent with the constraints is found. It was found that χe determined from sawtooth propagation was greater than the power balance value by a factor of five in the ASDEX Upgrade. From power modulation experiments, employing the measurements of four modulation frequencies simultaneously, the power deposition profile as well as the χe profile could be determined. A comparison of the predictions of a time-independent χe model and a power-dependent χe model is made. The power-switching experiments show that the χe profile must change within a millisecond to a new value consistent with the power balance value at the new input power. Neither power deposition broadening due to suprathermal electrons nor temperature or gradient dependences of χe can explain this observation. (author).

  19. Stochastic identification of temperature effects on the dynamics of a smart composite beam: assessment of multi-model and global model approaches

    International Nuclear Information System (INIS)

    Hios, J D; Fassois, S D

    2009-01-01

    The temperature effects on the dynamics of a smart composite beam are experimentally studied via conventional multi-model and novel global model identification approaches. The multi-model approaches are based on non-parametric and parametric VARX representations, whereas the global model approaches are based on novel constant coefficient pooled (CCP) and functionally pooled (FP) VARX parametric representations. The analysis indicates that the obtained multi-model and global model representations are in rough overall agreement. Nevertheless, the latter simultaneously use all available data records offering more compact descriptions of the dynamics, improved numerical robustness and estimation accuracy, which is reflected in significantly reduced modal parameter uncertainties. Although the CCP-VARX representations provide only 'averaged' descriptions of the structural dynamics over temperature, their FP-VARX counterparts allow for the explicit, analytical modeling of temperature dependence exhibiting a 'smooth' deterministic dependence of the dynamics on temperature which is compatible with the physics of the problem. In accordance with previous studies, the obtained natural frequencies decrease with temperature in a weakly nonlinear or approximately linear fashion. The damping factors are less affected, although their dependence on temperature may be of a potentially more complex nature

  20. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to data interpretation. To this end, we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
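
    A compact sketch of the idea under stated assumptions (a synthetic 2D profile whose field is exactly homogeneous about a single source, analytically supplied derivatives, and an illustrative choice of Nmax): solve the Euler equation by least squares with the structural index fixed, so that using the highest allowable index returns the limiting depth.

        import numpy as np

        # Synthetic profile over a source at (x0, z0), z positive downward.  The field
        # is homogeneous of degree -N about the source, so Euler's equation
        # (x - x0) df/dx + (z - z0) df/dz = -N (f - B) holds exactly with B = 0.
        x0_t, z0_t, N_t, c = 50.0, 12.0, 1.0, 1.0e3
        x = np.linspace(0.0, 100.0, 201)                   # observations at z = 0
        r2 = (x - x0_t) ** 2 + z0_t ** 2
        f = c / r2 ** (N_t / 2)
        f_x = -N_t * c * (x - x0_t) / r2 ** (N_t / 2 + 1)
        f_z = N_t * c * z0_t / r2 ** (N_t / 2 + 1)

        def euler_depth(x, f, f_x, f_z, N):
            """Least-squares Euler solution (x0, z0, B) for an assumed structural index N."""
            G = np.column_stack([f_x, f_z, np.full_like(f, N)])
            rhs = x * f_x + N * f
            x0, z0, B = np.linalg.lstsq(G, rhs, rcond=None)[0]
            return x0, z0, B

        print("true index   :", euler_depth(x, f, f_x, f_z, N=1.0))  # recovers z0 = 12
        print("maximum index:", euler_depth(x, f, f_x, f_z, N=2.0))  # larger index, deeper (limiting) estimate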

  1. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were

  2. The asymptotic behaviour of the maximum likelihood function of Kriging approximations using the Gaussian correlation function

    CSIR Research Space (South Africa)

    Kok, S

    2012-07-01

    Full Text Available continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...

  3. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give...... different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...... that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy....
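
    The estimate referred to above is the matched-load formula, not the paper's nonlinear model: for open-circuit voltage Voc and short-circuit current Isc, the internal resistance is roughly Voc/Isc and the maximum power is about Voc·Isc/4, delivered when the load equals the internal resistance. The readings below are placeholders.

        def teg_max_power(v_oc, i_sc):
            """Matched-load estimate: internal resistance R = Voc/Isc, and the maximum
            power Voc**2 / (4*R) = Voc*Isc/4 is delivered when the load resistance
            equals R.  The paper analyses how switch-mode effects make the measured
            Voc and Isc (and hence this estimate) deviate by about 10%."""
            r_internal = v_oc / i_sc
            p_max = v_oc * i_sc / 4.0
            return r_internal, p_max

        # Placeholder readings for a module under a fixed temperature difference.
        r, p = teg_max_power(v_oc=4.2, i_sc=1.5)
        print(f"internal resistance ~ {r:.2f} ohm, estimated maximum power ~ {p:.2f} W")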

  4. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed......, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop...... the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability...

  5. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost......-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature dependent properties of TE...... materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap

  6. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    The solar energy is used as a power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. With the changing of the sun's illumination, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of solar radiation, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires the linguistic control rules for the maximum power point; a mathematical model is not required, and therefore the implementation of this control method in a real control system is easy. In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of the Microchip's microcontroller unit control card and
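
    A minimal sketch of the perturb-and-observe hill-climbing loop described above, run against a toy concave power-voltage curve; the panel model, step size and limits are illustrative only, and the fuzzy controller itself is not reproduced here.

        import numpy as np

        def panel_power(v):
            """Toy concave P-V curve standing in for a real panel at fixed irradiance."""
            return np.maximum(0.0, 60.0 * v * (1.0 - v / 40.0)) / 10.0

        def perturb_and_observe(v0=10.0, dv=0.5, steps=60):
            """Classic P&O: keep perturbing in the direction that increased power,
            reverse direction otherwise.  The operating point oscillates around the MPP."""
            v, p_prev, direction = v0, panel_power(v0), +1.0
            for _ in range(steps):
                v += direction * dv
                p = panel_power(v)
                if p < p_prev:             # power dropped: reverse the perturbation
                    direction = -direction
                p_prev = p
            return v, p_prev

        v_mpp, p_mpp = perturb_and_observe()
        print(f"operating point ~ {v_mpp:.1f} V, power ~ {p_mpp:.2f} (arb. units)")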

  7. Equations of viscous flow of silicate liquids with different approaches for universality of high temperature viscosity limit

    Directory of Open Access Journals (Sweden)

    Ana F. Kozmidis-Petrović

    2014-06-01

    Full Text Available The Vogel-Fulcher-Tammann (VFT), Avramov and Milchev (AM), as well as Mauro, Yue, Ellison, Gupta and Allan (MYEGA) functions of viscous flow are analysed when the compositionally independent high temperature viscosity limit is introduced instead of the compositionally dependent parameter η∞. Two different approaches are adopted. In the first approach, it is assumed that each model should have its own (average) high-temperature viscosity parameter η∞. In that case, η∞ is different for each of these three models. In the second approach, it is assumed that the high-temperature viscosity is a truly universal value, independent of the model. In this case, the parameter η∞ would be the same and would have the same value: log η∞ = −1.93 dPa·s for all three models. 3D diagrams can successfully predict the difference in behaviour of the viscous functions when the average or the universal high temperature limit is applied in the calculations. The values of the AM functions depend, to a greater extent, on whether the average or the universal value for η∞ is used, which is not the case with the VFT model. Our tests and values of the standard error of estimate (SEE) show that there are no general rules as to whether the average or the universal high temperature viscosity limit should be applied to get the best agreement with the experimental functions.
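
    A sketch of the second approach for the VFT form only: fix the universal high-temperature limit log η∞ = −1.93 dPa·s and re-fit the remaining VFT parameters to measured viscosities. The data points below are synthetic placeholders, not the glasses analysed in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        LOG_ETA_INF = -1.93          # universal high-temperature limit, log10(eta / dPa.s)

        def vft_universal(T, B, T0):
            """VFT with the high-temperature limit fixed at the universal value:
            log10 eta(T) = log10(eta_inf) + B / (T - T0)."""
            return LOG_ETA_INF + B / (T - T0)

        # Synthetic "measured" viscosities generated from known parameters plus noise.
        rng = np.random.default_rng(3)
        T = np.linspace(800.0, 1600.0, 25)                     # kelvin
        log_eta = vft_universal(T, B=4500.0, T0=500.0) + rng.normal(0.0, 0.05, T.size)

        (B_fit, T0_fit), _ = curve_fit(vft_universal, T, log_eta, p0=(4000.0, 450.0))
        residual = log_eta - vft_universal(T, B_fit, T0_fit)
        see = np.sqrt(np.sum(residual ** 2) / (T.size - 2))    # standard error of estimate
        print(f"B = {B_fit:.0f} K, T0 = {T0_fit:.0f} K, SEE = {see:.3f}")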

  8. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  9. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  10. A Novel Temperature Measurement Approach for a High Pressure Dielectric Barrier Discharge Using Diode Laser Absorption Spectroscopy (Preprint)

    National Research Council Canada - National Science Library

    Leiweke, R. J; Ganguly, B. N

    2006-01-01

    A tunable diode laser absorption spectroscopic technique is used to measure both electronically excited state production efficiency and gas temperature rise in a dielectric barrier discharge in argon...

  11. Maximum Power Point Tracking Based on Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    Nimrod Vázquez

    2015-01-01

    Full Text Available Solar panels have become a good choice for generating and supplying electricity in commercial and residential applications. The generated power starts with the solar cells, which exhibit a complex relationship between solar irradiation, temperature, and output power. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering only the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature of the PV system are considered as part of a sliding surface for the proposed maximum power point tracking; that is, a sliding mode controller is applied. The obtained results show a good dynamic response, in contrast to traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to ensure a low steady-state error.

  12. Combining multiple approaches and optimized data resolution for an improved understanding of stream temperature dynamics of a forested headwater basin in the Southern Appalachians

    Science.gov (United States)

    Belica, L.; Mitasova, H.; Caldwell, P.; McCarter, J. B.; Nelson, S. A. C.

    2017-12-01

    Thermal regimes of forested headwater streams continue to be an area of active research as climatic, hydrologic, and land cover changes can influence water temperature, a key aspect of aquatic ecosystems. Widespread monitoring of stream temperatures has provided an important data source, yielding insights on the temporal and spatial patterns and the underlying processes that influence stream temperature. However, small forested streams remain challenging to model due to the high spatial and temporal variability of stream temperatures and the climatic and hydrologic conditions that drive them. Technological advances and increased computational power continue to provide new tools and measurement methods and have allowed spatially explicit analyses of dynamic natural systems at greater temporal resolutions than previously possible. With the goal of understanding how current stream temperature patterns and processes may respond to changing land cover and hydroclimatological conditions, we combined high-resolution, spatially explicit geospatial modeling with deterministic heat flux modeling approaches, using data sources that ranged from traditional hydrological and climatological measurements to emerging remote sensing techniques. Initial analyses of stream temperature monitoring data indicated that high temporal resolution (5 minutes) and fine measurement resolution were needed to guide field data collection for further heat flux modeling. By integrating multiple approaches and optimizing data resolution for the processes being investigated, small but ecologically significant differences in stream thermal regimes were revealed. In this case, multi-approach research contributed to the identification of the dominant mechanisms driving stream temperature in the study area and advanced our understanding of the current thermal fluxes and how they may change as environmental conditions change in the future.

  13. Alloy by design : A materials genome approach to advanced high strength stainless steels for low and high temperature applications

    NARCIS (Netherlands)

    Lu, Q.; Xu, W.; Van der Zwaag, S.

    2016-01-01

    We report a computational 'alloy by design' approach which can significantly accelerate the design process and substantially reduce the development costs. This approach allows simultaneous optimization of alloy composition and heat treatment parameters based on the integration of thermodynamic,

  14. Combined quantum-mechanical and Calphad approach to description of heat capacity of pure elements below room temperature

    Czech Academy of Sciences Publication Activity Database

    Pavlů, J.; Řehák, Petr; Vřešťál, Jan; Šob, Mojmír

    2015-01-01

    Vol. 51, No. 1 (2015), pp. 161-171. ISSN 0364-5916. Institutional support: RVO:68081723. Keywords: Einstein temperature * Heat capacity * Low temperature * Pure elements * SGTE data * Zero Kelvin. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 2.129, year: 2015

  15. Relative effects of climate change and wildfires on stream temperatures: A simulation modeling approach in a Rocky Mountain watershed

    Science.gov (United States)

    Lisa Holsinger; Robert E. Keane; Daniel J. Isaak; Lisa Eby; Michael K. Young

    2014-01-01

    Freshwater ecosystems are warming globally from the direct effects of climate change on air temperature and hydrology and the indirect effects on near-stream vegetation. In fire-prone landscapes, vegetative change may be especially rapid and cause significant local stream temperature increases but the importance of these increases relative to broader changes associated...

  16. Ranking site vulnerability to increasing temperatures in southern Appalachian brook trout streams in Virginia: An exposure-sensitivity approach

    Science.gov (United States)

    Bradly A. Trumbo; Keith H. Nislow; Jonathan Stallings; Mark Hudy; Eric P. Smith; Dong-Yun Kim; Bruce Wiggins; Charles A. Dolloff

    2014-01-01

    Models based on simple air temperature–water temperature relationships have been useful in highlighting potential threats to coldwater-dependent species such as Brook Trout Salvelinus fontinalis by predicting major losses of habitat and substantial reductions in geographic distribution. However, spatial variability in the relationship between changes...

  17. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated daily maximum mixing depth at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.

  18. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
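
    A rough sketch of the Toeplitz/Levinson step, using scipy's Toeplitz solver in place of an explicit Levinson recursion; the autocorrelation comes from a synthetic trace, and the full receiver-function workflow (rotation, cross-correlation with the radial component, stabilization) is not reproduced.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def prediction_error_filter(trace, order):
            """Solve the Toeplitz normal equations R a = r for the prediction
            coefficients a and return the prediction-error filter [1, -a]."""
            n = len(trace)
            # Biased autocorrelation estimate at lags 0 .. order.
            r = np.array([np.dot(trace[:n - k], trace[k:]) / n for k in range(order + 1)])
            a = solve_toeplitz(r[:-1], r[1:])     # first column r[0..order-1], rhs r[1..order]
            return np.concatenate(([1.0], -a))

        rng = np.random.default_rng(4)
        trace = np.convolve(rng.normal(size=500), [1.0, 0.6, 0.3], mode="same")  # synthetic trace
        pef = prediction_error_filter(trace, order=10)
        whitened = np.convolve(trace, pef, mode="same")    # spectrally flattened trace
        print("filter length:", len(pef), " output variance:", whitened.var().round(3))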

  19. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
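
    For comparison with the maximum entropy results, the sketch below shows the Richardson-Lucy form of the maximum likelihood expectation maximisation update for Poisson data, which is the simplest (non-Bayesian) version of the comparison method mentioned above; the point spread function and counts are synthetic.

        import numpy as np
        from scipy.signal import fftconvolve

        def mlem_deconvolve(counts, psf, n_iter=50):
            """Richardson-Lucy / ML-EM update for Poisson data:
            f <- f * ( psf_flipped * (counts / (psf * f)) ),
            approximately preserving total counts for a normalized PSF."""
            psf = psf / psf.sum()
            psf_flip = psf[::-1, ::-1]
            f = np.full_like(counts, counts.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = fftconvolve(f, psf, mode="same")
                ratio = counts / np.maximum(blurred, 1e-12)
                f *= fftconvolve(ratio, psf_flip, mode="same")
            return f

        rng = np.random.default_rng(5)
        truth = np.zeros((64, 64))
        truth[30:34, 20:24] = 50.0                              # small hot source
        g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
        psf = np.outer(g, g)                                    # Gaussian blur kernel
        image = rng.poisson(fftconvolve(truth, psf / psf.sum(), mode="same").clip(0))
        estimate = mlem_deconvolve(image.astype(float), psf)
        print("total counts:", image.sum(), "->", estimate.sum().round(1))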

  20. Atoms in molecules, an axiomatic approach. I. Maximum transferability

    Science.gov (United States)

    Ayers, Paul W.

    2000-12-01

    Central to chemistry is the concept of transferability: the idea that atoms and functional groups retain certain characteristic properties in a wide variety of environments. Providing a completely satisfactory mathematical basis for the concept of atoms in molecules, however, has proved difficult. The present article pursues an axiomatic basis for the concept of an atom within a molecule, with particular emphasis devoted to the definition of transferability and the atomic description of Hirshfeld.

  1. Adaptive Statistical Language Modeling; A Maximum Entropy Approach

    Science.gov (United States)

    1994-04-19

    t= A’S OAKLAND DODGERS BASEBALL CATCHER ATHLETICS INNING GAMES GAME DAVE LEAGUE SERIES TEAM SEASON FRANCISCO BAY SAN PARK BALL RUNS A.’S -= A.’S...by the MI-3g Measure ’EM -- ’EM YOU SEASON GAME GAMES LEAGUE TEAM GUYS I BASEBALL COACH TEAM’S FOOTBALL WON HERE ME SEASONS TEAMS MY CHAMPIONSHIP ’N

  2. Temperature dependence of the electronic structure of La2CuO4 in the multielectron LDA+GTB approach

    International Nuclear Information System (INIS)

    Makarov, I. A.; Ovchinnikov, S. G.

    2015-01-01

    The band structure of La2CuO4 in antiferromagnetic and paramagnetic phases is calculated at finite temperatures by the multielectron LDA+GTB method. The temperature dependence of the band spectrum and the spectral weight of Hubbard fermions is caused by a change in the occupation numbers of local multielectron spin-split terms in the antiferromagnetic phase. A decrease in the magnetization of the sublattice with temperature gives rise to new bands near the bottom of the conduction band and the top of the valence band. It is shown that the band gap decreases with increasing temperature, but La2CuO4 remains an insulator in the paramagnetic phase as well. These results are consistent with measurements of the red shift of the absorption edge in La2CuO4 with increasing temperature

  3. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results....... Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges....

  4. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
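
    The same maximization can be done numerically for an assumed single-diode panel model (parameters below are placeholders, not the panel studied): locate the voltage where dP/dV = 0 and read off the current and power at the maximum power point.

        import numpy as np
        from scipy.optimize import brentq

        # Placeholder single-diode parameters (not the panel used in the paper).
        I_L, I_0, n, V_T, N_s = 5.0, 1e-9, 1.3, 0.0257, 36     # A, A, -, V, cells in series

        def current(v):
            """Single-diode model without series/shunt resistance."""
            return I_L - I_0 * (np.exp(v / (n * V_T * N_s)) - 1.0)

        def power(v):
            return v * current(v)

        def dP_dV(v, h=1e-4):
            """Central-difference derivative of the power curve."""
            return (power(v + h) - power(v - h)) / (2.0 * h)

        v_oc = n * V_T * N_s * np.log(I_L / I_0 + 1.0)          # open-circuit voltage
        v_mp = brentq(dP_dV, 1e-3, v_oc - 1e-3)                 # voltage where dP/dV = 0
        print(f"Voc = {v_oc:.2f} V, Vmp = {v_mp:.2f} V, "
              f"Imp = {current(v_mp):.2f} A, Pmax = {power(v_mp):.1f} W")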

  5. Temperature evolution during compaction of pharmaceutical powders.

    Science.gov (United States)

    Zavaliangos, Antonios; Galen, Steve; Cunningham, John; Winstead, Denita

    2008-08-01

    A numerical approach to the prediction of temperature evolution in tablet compaction is presented here. It is based on a coupled thermomechanical finite element analysis and a calibrated Drucker-Prager Cap model. This approach is capable of predicting transient temperatures during compaction, which cannot be assessed by experimental techniques due to inherent test limitations. Model predictions are validated with infrared (IR) temperature measurements of the top tablet surface after ejection and match well with experiments. The dependence of the temperature fields on speed and degree of compaction is naturally captured. The estimated transient temperatures reach their maximum at the end of compaction, at the center of the tablet and close to the die wall next to the powder/die interface.

  6. Temperature reconstruction from dripwater hydrochemistry, speleothem fabric and speleothem δ13C: towards an integrated approach in temperate climate caves

    Science.gov (United States)

    Borsato, Andrea; Frisia, Silvia; Johnston, Vanessa; Spötl, Christoph

    2017-04-01

    Accurate reconstruction of past climate records from speleothem minerals requires a thorough understanding of both the environmental and hydrologic conditions underpinning their formation. These conditions likely influenced how speleothems incorporate the chemical signals that are used as climate proxies. Thus, a thorough investigation of environmental and hydrologic parameters is a pre-requisite to gain robust palaeoclimate reconstructions from stalagmites. Here, we present a systematic study of soil, dripwater and speleothems in temperate climate caves at different altitudes, which allowed the assessment of how mean annual air temperature in the infiltration area (MATinf) influences vegetation cover, soil pCO2 and, eventually, the pCO2 of karst water and cave air. Our study demonstrates that for caves developed in pure carbonate rocks, the soil and aquifer pCO2 are directly related to the MATinf (Borsato et al., 2015). It is well known that soil and aquifer pCO2 control carbonate dissolution and the carbonate-carbonic acid system. By establishing a relationship between dripwater pCO2 and MATinf, we show that dripwater Ca content and calcite saturation state (SIcc) are correlated with MATinf when unaffected by Prior Calcite Precipitation. In particular, dripwater saturation (SIcc = 0) is reached at a MATinf of 4.4°C in our study area. This MATinf delineates a 'speleothem limit', above which speleothems composed of sparitic calcite should not form (Borsato et al., 2016). In fact, sparitic calcite speleothems do not form today in caves beyond this limit. The second element of the approach is calcite δ13C in speleothems that were not significantly influenced by kinetic fractionation: a linear correlation between calcite δ13C and MATinf was obtained for modern sparitic speleothems that formed at isotopic equilibrium (Johnston et al., 2013). The combination of these two approaches (present-day dripwater SIcc and calcite δ13C in sparitic speleothems) can be used to reconstruct the past MATinf for high

  7. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  8. An Integrated Approach to Estimate Instantaneous Near-Surface Air Temperature and Sensible Heat Flux Fields during the SEMAPHORE Experiment.

    Science.gov (United States)

    Bourras, Denis; Eymard, Laurence; Liu, W. Timothy; Dupuis, Hélène

    2002-03-01

    A new technique was developed to retrieve near-surface instantaneous air temperatures and turbulent sensible heat fluxes using satellite data during the Structure des Echanges Mer-Atmosphere, Proprietes des Heterogeneites Oceaniques: Recherche Experimentale (SEMAPHORE) experiment, which was conducted in 1993 under mainly anticyclonic conditions. The method is based on a regional, horizontal atmospheric temperature advection model whose inputs are wind vectors, sea surface temperature fields, air temperatures around the region under study, and several constants derived from in situ measurements. The intrinsic rms error of the method is 0.7°C in terms of air temperature and 9 W m−2 for the fluxes, both at 0.16° × 0.16° and 1.125° × 1.125° resolution. The retrieved air temperature and flux horizontal structures are in good agreement with fields from two operational general circulation models. The application to SEMAPHORE data involves the First European Remote Sensing Satellite (ERS-1) wind fields, Advanced Very High Resolution Radiometer (AVHRR) SST fields, and European Centre for Medium-Range Weather Forecasts (ECMWF) air temperature boundary conditions. The rms errors obtained by comparing the estimations with research vessel measurements are 0.3°C and 5 W m−2.

  9. Land surface temperature as an indicator of the unsaturated zone thickness: A remote sensing approach in the Atacama Desert.

    Science.gov (United States)

    Urqueta, Harry; Jódar, Jorge; Herrera, Christian; Wilke, Hans-G; Medina, Agustín; Urrutia, Javier; Custodio, Emilio; Rodríguez, Jazna

    2018-01-15

    Land surface temperature (LST) seems to be related to the temperature of shallow aquifers and the unsaturated zone thickness (∆Zuz). That relationship is valid when the study area fulfils certain characteristics: a) there should be no downward moisture fluxes in the unsaturated zone, b) the soil composition, in terms of both the different horizon materials and their corresponding thermal and hydraulic properties, must be as homogeneous and isotropic as possible, c) flat and regular topography, and d) steady state groundwater temperature with a spatially homogeneous temperature distribution. A night time Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image and temperature field measurements are used to test the validity of the relationship between LST and ∆Zuz at the Pampa del Tamarugal, which is located in the Atacama Desert (Chile) and meets the above required conditions. The results indicate that there is a relation between the land surface temperature and the unsaturated zone thickness in the study area. Moreover, the field measurements of soil temperature indicate that shallow aquifers dampen both the daily and the seasonal amplitude of the temperature oscillation generated by the local climate conditions. Despite empirically observing the relationship between the LST and ∆Zuz in the study zone, such a relationship cannot be applied to directly estimate ∆Zuz using temperatures from nighttime thermal satellite images. To this end, it is necessary to consider the soil thermal properties, the soil surface roughness and the unseen water and moisture fluxes (e.g., capillarity and evaporation) that typically occur in the subsurface. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A hybrid stochastic hierarchy equations of motion approach to treat the low temperature dynamics of non-Markovian open quantum systems

    Science.gov (United States)

    Moix, Jeremy M.; Cao, Jianshu

    2013-10-01

    The hierarchical equations of motion technique has found widespread success as a tool to generate the numerically exact dynamics of non-Markovian open quantum systems. However, its application to low temperature environments remains a serious challenge due to the need for a deep hierarchy that arises from the Matsubara expansion of the bath correlation function. Here we present a hybrid stochastic hierarchical equation of motion (sHEOM) approach that alleviates this bottleneck and leads to a numerical cost that is nearly independent of temperature. Additionally, the sHEOM method generally converges with fewer hierarchy tiers allowing for the treatment of larger systems. Benchmark calculations are presented on the dynamics of two level systems at both high and low temperatures to demonstrate the efficacy of the approach. Then the hybrid method is used to generate the exact dynamics of systems that are nearly impossible to treat by the standard hierarchy. First, exact energy transfer rates are calculated across a broad range of temperatures revealing the deviations from the Förster rates. This is followed by computations of the entanglement dynamics in a system of two qubits at low temperature spanning the weak to strong system-bath coupling regimes.

  11. Projections of Temperature-Attributable Premature Deaths in 209 U.S. Cities Using a Cluster-Based Poisson Approach

    Science.gov (United States)

    Schwartz, Joel D.; Lee, Mihye; Kinney, Patrick L.; Yang, Suijia; Mills, David; Sarofim, Marcus C.; Jones, Russell; Streeter, Richard; St. Juliana, Alexis; Peers, Jennifer

    2015-01-01

    Background: A warming climate will affect future temperature-attributable premature deaths. This analysis is the first to project these deaths at a near national scale for the United States using city and month-specific temperature-mortality relationships. Methods: We used Poisson regressions to model temperature-attributable premature mortality as a function of daily average temperature in 209 U.S. cities by month. We used climate data to group cities into clusters and applied an Empirical Bayes adjustment to improve model stability and calculate cluster-based month-specific temperature-mortality functions. Using data from two climate models, we calculated future daily average temperatures in each city under Representative Concentration Pathway 6.0. Holding population constant at 2010 levels, we combined the temperature data and cluster-based temperature-mortality functions to project city-specific temperature-attributable premature deaths for multiple future years which correspond to a single reporting year. Results within the reporting periods are then averaged to account for potential climate variability and reported as a change from a 1990 baseline in the future reporting years of 2030, 2050 and 2100. Results: We found temperature-mortality relationships that vary by location and time of year. In general, the largest mortality response during hotter months (April - September) was in July in cities with cooler average conditions. The largest mortality response during colder months (October-March) was at the beginning (October) and end (March) of the period. Using data from two global climate models, we projected a net increase in premature deaths, aggregated across all 209 cities, in all future periods compared to 1990. However, the magnitude and sign of the change varied by cluster and city. Conclusions: We found increasing future premature deaths across the 209 modeled U.S. cities using two climate model projections, based on constant temperature
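
    A toy version of the modelling step, not the study's cluster-based Empirical Bayes procedure: fit a Poisson regression of daily deaths on daily mean temperature for one simulated city-month series, then predict deaths under a uniformly warmed temperature series while holding population constant.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)

        # Simulated July data for one city: daily mean temperature (deg C) and deaths.
        temp = rng.normal(28.0, 3.0, 310)                  # ten Julys of daily values
        true_rate = np.exp(2.0 + 0.03 * (temp - 28.0))     # log-linear temperature effect
        deaths = rng.poisson(true_rate)

        X = sm.add_constant(temp)
        model = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
        print(model.params)                                # intercept and temperature slope

        # Attributable change for a uniform +2 deg C warming, population held fixed.
        warmer = sm.add_constant(temp + 2.0)
        extra = model.predict(warmer).sum() - model.predict(X).sum()
        print(f"additional expected July deaths under +2 deg C: {extra:.1f}")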

  12. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

    Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy was developed. Shannon's entropy is used as the allocation measure, and its implications concerning the Modern Portfolio Theory are also discussed. In particular, the methodology is tested by making a systematic comparison to: (1) the mean-variance (Markowitz) approach and (2) the mean-VaR approach (capital allocations based on the Value at Risk concept). In principle, such confrontations show the plausibility and effectiveness of the developed method.
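
    A minimal sketch under stated assumptions (long-only weights, a target expected return, placeholder return estimates), not the paper's full methodology: choose the weights that maximize Shannon entropy subject to full investment and the return target.

        import numpy as np
        from scipy.optimize import minimize

        mu = np.array([0.08, 0.12, 0.05, 0.10])      # placeholder expected returns
        target = 0.09                                # required portfolio return

        def neg_entropy(w, eps=1e-12):
            """Negative Shannon entropy of the weight vector (to be minimized)."""
            return np.sum(w * np.log(w + eps))

        n = len(mu)
        constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                       {"type": "eq", "fun": lambda w: w @ mu - target}]
        bounds = [(0.0, 1.0)] * n                    # long-only weights
        res = minimize(neg_entropy, x0=np.full(n, 1.0 / n), bounds=bounds,
                       constraints=constraints, method="SLSQP")
        print("maximum-entropy weights:", res.x.round(3))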

  13. A paired apatite and calcite clumped isotope thermometry approach to estimating Cambro-Ordovician seawater temperatures and isotopic composition

    Science.gov (United States)

    Bergmann, Kristin D.; Finnegan, Seth; Creel, Roger; Eiler, John M.; Hughes, Nigel C.; Popov, Leonid E.; Fischer, Woodward W.

    2018-03-01

    The secular increase in δ18O values of both calcitic and phosphatic marine fossils through early Phanerozoic time suggests either that (1) early Paleozoic surface temperatures were high, in excess of 40 °C (tropical MAT), (2) the δ18O value of seawater has increased by 7-8‰ VSMOW through Paleozoic time, or (3) diagenesis has altered secular trends in early Paleozoic samples. Carbonate clumped isotope analysis, in combination with petrographic and elemental analysis, can deconvolve fluid composition from temperature effects and therefore determine which of these hypotheses best explains the secular δ18O increase. Clumped isotope measurements of a suite of calcitic and phosphatic marine fossils from late Cambrian- to Middle-late Ordovician-aged strata, the first paired fossil study of its kind, document tropical sea surface temperatures close to modern values (26-38 °C) and seawater oxygen isotope ratios similar to today's ratios.

  14. Finite temperature magnon spectra in yttrium iron garnet from a mean field approach in a tight-binding model

    Science.gov (United States)

    Shen, Ka

    2018-04-01

    We study magnon spectra at finite temperature in yttrium iron garnet using a tight-binding model with nearest-neighbor exchange interaction. The spin reduction due to thermal magnon excitation is taken into account via a mean-field approximation to the local spin and is found to be different at the two sets of iron atoms. The resulting temperature dependence of the spin wave gap shows good agreement with experiment. We find that only two magnon modes are relevant to the ferromagnetic resonance.
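    The paper's calculation is a tight-binding magnon model for YIG; as a much simpler illustration of a temperature-dependent mean-field spin reduction, the sketch below solves the standard Weiss/Brillouin self-consistency for the local spin expectation value, with placeholder exchange parameters and units in which k_B = 1.

        # Generic Weiss mean-field sketch (not the paper's tight-binding model):
        # solve <S_z> = S * B_S(S * zJ * <S_z> / T) self-consistently, where B_S
        # is the Brillouin function. S and zJ below are placeholders.
        import numpy as np

        def brillouin(S, x):
            """Brillouin function B_S(x); the small-x limit is treated as 0."""
            if x < 1e-8:
                return 0.0
            a = (2 * S + 1) / (2 * S)
            b = 1 / (2 * S)
            return a / np.tanh(a * x) - b / np.tanh(b * x)

        def mean_spin(S, zJ, T, n_iter=200):
            """Fixed-point iteration for the thermal average <S_z> at temperature T."""
            m = S                                  # start fully polarized
            for _ in range(n_iter):
                m = S * brillouin(S, S * zJ * m / T)
            return m

        for T in (10.0, 100.0, 200.0, 300.0):
            print(T, mean_spin(S=2.5, zJ=40.0, T=T))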

  15. Food crops face rising temperatures: An overview of responses, adaptive mechanisms, and approaches to improve heat tolerance

    OpenAIRE

    Neeru Kaushal; Kalpna Bhandari; Kadambot H.M. Siddique; Harsh Nayyar

    2016-01-01

    Rising temperatures are causing heat stress in various agricultural crops, limiting their growth and metabolism and leading to significant losses of yield potential worldwide. Heat stress adversely affects normal plant growth and development, depending on the sensitivity of each crop species. Each crop species has its own range of temperature maxima and minima at different developmental stages, beyond which these processes are inhibited. The reproductive stage is on the whole more sens...

  16. A Bayesian approach to infer the radial distribution of temperature and anisotropy in the transition zone from seismic data

    Science.gov (United States)

    Drilleau, M.; Beucler, E.; Mocquet, A.; Verhoeven, O.; Moebs, G.; Burgos, G.; Montagner, J.

    2013-12-01

    Mineralogical transformations and matter transfers within the Earth's mantle make the 350-1000 km depth range (considered here as the mantle transition zone) highly heterogeneous and anisotropic. Most of the 3-D global tomographic models are anchored on small perturbations from 1-D models such as PREM, and are then interpreted in terms of temperature and composition distributions. However, the degree of heterogeneity in the transition zone can be strong enough that the very concept of a 1-D reference seismic model may be called into question. To avoid the use of any seismic reference model, we developed a Markov chain Monte Carlo algorithm to directly interpret surface wave dispersion curves in terms of temperature and radial anisotropy distributions, for a given composition of the mantle. These interpretations are based on laboratory measurements of elastic moduli and the Birch-Murnaghan equation of state. One original feature of the algorithm is its ability to explore both smoothly varying models and first-order discontinuities, using C1 Bézier curves, which interpolate the randomly chosen values of depth, temperature and radial anisotropy. This parameterization generates a self-adapting exploration of the parameter space while reducing the computing time. Using a Bayesian exploration, the probability distributions on temperature and anisotropy are governed by the uncertainties on the data set. The method was successfully applied to both synthetic data and real dispersion curves. Surface wave measurements along the Vanuatu-California path suggest a strong anisotropy above 400 km depth which decreases below, and a monotonous temperature distribution between 350 and 1000 km depth. In contrast, a negative shear wave anisotropy of about 2% is found at the top of the transition zone below Eurasia. Considering compositions ranging from piclogite to pyrolite, the overall temperature profile and temperature gradient are higher for the continental path than for the oceanic
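    The Bézier parameterization mentioned above builds temperature (or anisotropy) profiles from a small number of randomly drawn anchor points. The sketch below evaluates a single cubic Bézier segment between two assumed (depth, temperature) anchors; the actual algorithm chains such segments with matched tangents for C1 continuity and explores the anchor values with a Markov chain Monte Carlo sampler.

        # Evaluate one cubic Bezier segment between two assumed (depth, temperature)
        # anchor points; the interior control points set the end slopes.
        import numpy as np

        def cubic_bezier(p0, p1, p2, p3, t):
            """Points on a cubic Bezier curve for parameter values t in [0, 1]."""
            t = np.asarray(t)[:, None]
            return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                    + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

        p0 = np.array([350.0, 1750.0])            # (depth km, temperature K), assumed
        p3 = np.array([660.0, 1900.0])
        p1 = p0 + np.array([100.0, 30.0])         # tangent leaving p0
        p2 = p3 - np.array([100.0, 30.0])         # tangent arriving at p3

        print(cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 5)))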

  17. A novel approach to determine the effect of irrigation on temperature and failure of Ni-Ti endodontic rotary files

    Science.gov (United States)

    Mousavi, Sayed Ali; Kargar-Dehnavi, Vida; Mousavi, Sayed Amir

    2012-01-01

    Background: Nickel-titanium (Ni-Ti) rotary instrument files are important devices in endodontics for root canal preparation. Ni-Ti file breakage is a critical and problematic issue, and irrigation techniques are applied to decrease the risk of file failure in the root canal. The aim of the present study was to compare the temperature gradient change produced by different irrigation solutions used with a Ni-Ti rotary instrument system during root canal preparation and to define their effects on file failure. Materials and Methods: A novel computerized instrumentation setup was used; thirty standard files (ProFile #25/.04) were divided into three groups and subjected to a filing test in the root canal. Changes in tooth temperature under constant instrumentation conditions were measured with a custom-designed computerized experimental apparatus using a temperature sensor bonded to the apical hole. A rotary instrument was used for canal preparation in three series of solutions, and the temperature changes obtained with each solution were compared. Finally, file failure was monitored at each step of the test. Results: Comparisons between groups were performed using ANOVA (t) tests once the samples were shown to be normally distributed; significant differences in temperature change were found for instruments immersed in 5% NaOCl compared with the water group, and for instruments immersed in water compared with the no-solution group. Conclusion: Immersing the file in 5% NaOCl decreased the temperature gradient and reduced instrument failure. PMID:23087732
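    The group comparison described above is, in essence, an analysis of variance across irrigation conditions. The sketch below runs a one-way ANOVA on made-up temperature-change values for three hypothetical groups; it mirrors the type of test reported, not the study's data.

        # One-way ANOVA on invented temperature-change data for three irrigation
        # conditions; illustrates the kind of comparison reported in the study.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        naocl = rng.normal(loc=1.5, scale=0.4, size=10)        # 5% NaOCl group
        water = rng.normal(loc=2.5, scale=0.4, size=10)        # water group
        no_solution = rng.normal(loc=4.0, scale=0.5, size=10)  # no-solution group

        f_stat, p_value = stats.f_oneway(naocl, water, no_solution)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")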

  18. A multi-scale approach of mechanical and transport properties of cementitious materials under rises of temperature

    International Nuclear Information System (INIS)

    Caratini, G.

    2012-01-01

    Modern industrial activities (storage of nuclear waste, geothermal wells, nuclear power plants, ...) can subject cementitious materials to extreme conditions, for example temperatures above 200 °C. This level of temperature induces dehydration phenomena in the cement paste, particularly affecting the C-S-H hydrates that provide its mechanical cohesion. The effects of these temperatures on the mechanical and transport properties are the subject of this thesis. Understanding these effects requires taking into account the heterogeneous, porous, multi-scale nature of these materials. To do so, micro-mechanics and homogenization tools based on the solution of the Eshelby problem were used. Moreover, to support this multi-scale modelling, mechanical tests based on the theory of porous media were conducted. Measurements of the bulk modulus, permeability and porosity under confining pressure were used to investigate the degradation mechanisms of these materials under thermal loads up to 400 °C. (author)
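    Eshelby-based homogenization schemes of the kind used in this work estimate effective elastic moduli from the matrix properties and the pore volume fraction. The sketch below applies the classical Mori-Tanaka estimate for an isotropic matrix with spherical pores, using placeholder moduli; it illustrates the general approach rather than the thesis's specific multi-scale model.

        # Mori-Tanaka estimate of effective bulk and shear moduli for an isotropic
        # matrix with spherical pores; matrix moduli and porosities are placeholders.
        def mori_tanaka_spherical_pores(k_m, mu_m, phi):
            """Effective (drained) moduli for matrix (k_m, mu_m) and porosity phi."""
            k_eff = k_m * (1.0 - phi) / (1.0 + 3.0 * k_m * phi / (4.0 * mu_m))
            mu_eff = mu_m * (1.0 - phi) / (
                1.0 + phi * 6.0 * (k_m + 2.0 * mu_m) / (9.0 * k_m + 8.0 * mu_m))
            return k_eff, mu_eff

        # Hypothetical cement-paste moduli (GPa); the higher porosity mimics the
        # extra pore space created by dehydration at high temperature.
        print(mori_tanaka_spherical_pores(k_m=14.0, mu_m=9.0, phi=0.25))
        print(mori_tanaka_spherical_pores(k_m=14.0, mu_m=9.0, phi=0.40))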

  19. Selective Sensing of Gas Mixture via a Temperature Modulation Approach: New Strategy for Potentiometric Gas Sensor Obtaining Satisfactory Discriminating Features.

    Science.gov (United States)

    Li, Fu-An; Jin, Han; Wang, Jinxia; Zou, Jie; Jian, Jiawen

    2017-03-12

    A new strategy to discriminate four types of hazardous gases is proposed in this research. By modulating the operating temperature and processing the response signal with a pattern recognition algorithm, a gas sensor consisting of a single sensing electrode, i.e., a ZnO/In₂O₃ composite, is designed to differentiate NO₂, NH₃, C₃H₆ and CO at levels of 50-400 ppm. Results indicate that with 15 wt.% ZnO added to In₂O₃, the sensor fabricated at 900 °C shows optimal sensing characteristics in detecting all the studied gases. Moreover, with the aid of the principal component analysis (PCA) algorithm, the sensor operating in temperature modulation mode demonstrates acceptable discrimination features. These satisfactory discrimination features suggest that gas mixtures could be differentiated efficiently by operating a single-electrode sensor in temperature modulation mode.
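    The discrimination step relies on principal component analysis of the response pattern collected over a temperature modulation cycle. The sketch below projects a placeholder response matrix onto its first two principal components; in practice each row would be a measured response profile for one exposure to NO₂, NH₃, C₃H₆ or CO.

        # PCA on a placeholder response matrix: each row stands for one gas exposure,
        # each column for the sensor response at one step of the temperature cycle.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        responses = rng.normal(size=(40, 12))     # placeholder EMF readings

        pca = PCA(n_components=2)
        scores = pca.fit_transform(responses)     # 2-D coordinates per exposure
        print(pca.explained_variance_ratio_)
        print(scores[:5])                         # points that would be plotted per gas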

  20. Sorption isotherms modeling approach of rice-based instant soup mix stored under controlled temperature and humidity

    Directory of Open Access Journals (Sweden)

    Yogender Singh

    2015-12-01

    Full Text Available Moisture sorption isotherms of a rice-based instant soup mix were determined at temperatures of 15-45°C and relative humidities from 0.11 to 0.86 using the standard gravimetric static method. The experimental sorption curves were fitted with five equations: Chung-Pfost, GAB, Henderson, Kuhn, and Oswin. The sorption isotherms of the soup mix decreased with increasing temperature and exhibited type II behavior according to the BET classification. The GAB, Henderson, Kuhn, and Oswin models were found to be the most suitable for describing the sorption curves. The isosteric heat of sorption of water was determined from the equilibrium data at different temperatures; it decreased as moisture content increased and was found to be a polynomial function of moisture content. The study provides information and data useful for large-scale commercial production of the soup mix and is of great importance for combating protein-energy malnutrition in underdeveloped and developing countries.
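    The GAB model, one of the equations found suitable here, relates equilibrium moisture content M to water activity aw as M = M0*C*K*aw / [(1 - K*aw)(1 - K*aw + C*K*aw)]. The sketch below fits this form to invented moisture data with non-linear least squares; the data and fitted coefficients are illustrative only, not the paper's results.

        # Fit the GAB isotherm to invented moisture data with non-linear least squares.
        import numpy as np
        from scipy.optimize import curve_fit

        def gab(aw, m0, c, k):
            """GAB model: equilibrium moisture content versus water activity aw."""
            return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

        aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.64, 0.75, 0.86])
        moisture = np.array([3.1, 4.4, 5.2, 6.3, 7.5, 9.4, 12.8, 18.9])  # hypothetical

        params, _ = curve_fit(gab, aw, moisture, p0=[5.0, 10.0, 0.8])
        m0, c, k = params
        print(f"M0 = {m0:.2f}, C = {c:.2f}, K = {k:.3f}")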