Sample records for maximum temperatures approaching

  1. Maximum Temperature Detection System for Integrated Circuits

    Frankiewicz, Maciej; Kos, Andrzej


    The paper describes the structure and measurement results of a system that detects the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature (PTAT) sensors, a temperature-processing path and a digital part designed in VHDL. The analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
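
    Purely as an illustration of the dynamic frequency scaling idea that the described power management relies on, here is a minimal control-loop sketch; the thresholds, frequencies and function name are hypothetical and not taken from the paper.

    ```python
    # Illustrative sketch of dynamic frequency scaling driven by an on-chip
    # maximum-temperature reading; thresholds and frequencies are hypothetical.
    def select_clock_mhz(t_max_c: float) -> float:
        """Map the detected on-chip maximum temperature to a clock frequency."""
        if t_max_c < 60.0:      # cool: run at full speed
            return 200.0
        elif t_max_c < 85.0:    # warm: scale the frequency down linearly
            return 200.0 - (t_max_c - 60.0) * 4.0
        else:                   # hot: drop to a safe minimum frequency
            return 100.0

    print(select_clock_mhz(72.5))  # -> 150.0 MHz
    ```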

  2. Methodological aspects of a pattern-scaling approach to produce global fields of monthly means of daily maximum and minimum temperature

    Kremser, S.; Bodeker, G. E.; Lewis, J.


    A Climate Pattern-Scaling Model (CPSM) that simulates global patterns of climate change, for a prescribed emissions scenario, is described. A CPSM works by quantitatively establishing the statistical relationship between a climate variable at a specific location (e.g. daily maximum surface temperature, Tmax) and one or more predictor time series (e.g. global mean surface temperature, Tglobal) - referred to as the "training" of the CPSM. This training uses a regression model to derive fit coefficients that describe the statistical relationship between the predictor time series and the target climate variable time series. Once that relationship has been determined, and given the predictor time series for any greenhouse gas (GHG) emissions scenario, the change in the climate variable of interest can be reconstructed - referred to as the "application" of the CPSM. The advantage of using a CPSM rather than a typical atmosphere-ocean global climate model (AOGCM) is that the predictor time series required by the CPSM can usually be generated quickly using a simple climate model (SCM) for any prescribed GHG emissions scenario and then applied to generate global fields of the climate variable of interest. The training can be performed either on historical measurements or on output from an AOGCM. Using model output from 21st century simulations has the advantage that the climate change signal is more pronounced than in historical data and therefore a more robust statistical relationship is obtained. The disadvantage of using AOGCM output is that the CPSM training might be compromised by any AOGCM inadequacies. For the purposes of exploring the various methodological aspects of the CPSM approach, AOGCM output was used in this study to train the CPSM. These investigations of the CPSM methodology focus on monthly mean fields of daily temperature extremes (Tmax and Tmin). The methodological aspects of the CPSM explored in this study include (1) investigation of the advantage
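
    As a minimal illustration of the training/application split described above, the sketch below fits a single-predictor linear regression between a global-mean predictor and a local climate variable and then applies it to a new scenario; the variable names, the one-predictor linear form and the toy data are assumptions for illustration only.

    ```python
    # Minimal sketch of the pattern-scaling idea: train a regression of a local
    # climate variable (e.g. monthly-mean Tmax at one grid cell) on a predictor
    # time series (e.g. global-mean temperature), then apply it to a new scenario.
    import numpy as np

    def train_cpsm(t_global: np.ndarray, t_local: np.ndarray) -> tuple[float, float]:
        """Return (intercept, slope) of the local response to the global predictor."""
        slope, intercept = np.polyfit(t_global, t_local, deg=1)
        return intercept, slope

    def apply_cpsm(coeffs: tuple[float, float], t_global_scenario: np.ndarray) -> np.ndarray:
        """Reconstruct the local variable for any scenario's predictor time series."""
        intercept, slope = coeffs
        return intercept + slope * t_global_scenario

    # Toy example: a cell that warms 1.4 times faster than the global mean, plus noise.
    rng = np.random.default_rng(0)
    t_global = np.linspace(0.0, 3.0, 100)                  # global-mean warming (°C)
    t_local = 0.5 + 1.4 * t_global + rng.normal(0, 0.2, 100)
    coeffs = train_cpsm(t_global, t_local)
    print(apply_cpsm(coeffs, np.array([2.0, 4.0])))        # projected local anomalies
    ```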

  3. Dynamical maximum entropy approach to flocking

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.


    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  4. Recommended Maximum Temperature For Mars Returned Samples

    Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.


    The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) 4He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.

  5. MB Distribution and its application using maximum entropy approach

    Bhadra Suman


    The Maxwell-Boltzmann distribution with a maximum entropy approach has been used to study the variation of political temperature and heat in a locality. We have observed that the political temperature rises without generating any political heat when political parties increase their attractiveness by intense publicity, but voters do not shift their loyalties. It has also been shown that political heat is generated and political entropy increases with political temperature remaining constant when parties do not change their attractiveness, but voters shift their loyalties (to more attractive parties).

  6. 16 CFR 1505.8 - Maximum acceptable material temperatures.


    ... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...

  7. CMB Maximum Temperature Asymmetry Axis: Alignment with Other Cosmic Asymmetries

    Mariano, Antonio


    We use a global pixel-based estimator to identify the axis of the residual Maximum Temperature Asymmetry (MTA) (after the dipole subtraction) of the WMAP 7-year Internal Linear Combination (ILC) CMB temperature sky map. The estimator is based on considering the temperature differences between opposite pixels in the sky at various angular resolutions (4 degrees - 15 degrees) and selecting the axis that maximizes this difference. We consider three large-scale Healpix resolutions (N_{side}=16 (3.7 degrees), N_{side}=8 (7.3 degrees) and N_{side}=4 (14.7 degrees)). We compare the direction and magnitude of this asymmetry with three other cosmic asymmetry axes (α dipole, Dark Energy Dipole and Dark Flow) and find that the four asymmetry axes are abnormally close to each other. We compare the observed MTA axis with the corresponding MTA axes of 10^4 Gaussian isotropic simulated ILC maps (based on LCDM). The fraction of simulated ILC maps that reproduces the observed magnitude of the MTA asymmetry and alignment wit...

  8. A New Detection Approach Based on the Maximum Entropy Model

    DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua


    The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machine (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.

  9. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.


    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than the baseline and achieve the highest F-score for the fine-grained English All-Words subtask.

  10. A Unified Maximum Likelihood Approach to Document Retrieval.

    Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex


    Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete object function that estimates both document and query parameters in accordance with all available feedback data. (AEF)

  11. Impact of soil moisture on extreme maximum temperatures in Europe

    Kirien Whan


    Land-atmosphere interactions play an important role for hot temperature extremes in Europe. Dry soils may amplify such extremes through feedbacks with evapotranspiration. While previous observational studies generally focused on the relationship between precipitation deficits and the number of hot days, we investigate here the influence of soil moisture (SM) on summer monthly maximum temperatures (TXx) using water balance model-based SM estimates (driven with observations) and temperature observations. Generalized extreme value distributions are fitted to TXx using SM as a covariate. We identify a negative relationship between SM and TXx, whereby a 100 mm decrease in model-based SM is associated with a 1.6 °C increase in TXx in Southern-Central and Southeastern Europe. Dry SM conditions result in a 2–4 °C increase in the 20-year return value of TXx compared to wet conditions in these two regions. In contrast with SM impacts on the number of hot days (NHD), where low and high surface-moisture conditions lead to different variability, we find a mostly linear dependency of the 20-year return value on surface-moisture conditions. We attribute this difference to the non-linear relationship between TXx and NHD that stems from the threshold-based calculation of NHD. Furthermore, the employed SM data and the Standardized Precipitation Index (SPI) are only weakly correlated in the investigated regions, highlighting the importance of evapotranspiration and runoff for resulting SM. Finally, in a case study for the hot 2003 summer we illustrate that if 2003 spring conditions in Southern-Central Europe had been as dry as in the more recent 2011 event, temperature extremes in summer would have been higher by about 1 °C, further enhancing the already extreme conditions which prevailed in that year.
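
    A minimal sketch of the kind of covariate-dependent GEV fit described above, assuming NumPy/SciPy; the parameterization, starting values and synthetic data are illustrative and not the authors' exact setup.

    ```python
    # Sketch: fit a GEV to monthly maximum temperatures (TXx) with the location
    # parameter depending linearly on a soil-moisture covariate (SM).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import genextreme

    def fit_gev_with_covariate(txx, sm):
        """Fit TXx ~ GEV(mu0 + mu1*SM, sigma, shape) by maximum likelihood."""
        def nll(params):
            mu0, mu1, log_sigma, shape = params
            loc = mu0 + mu1 * sm
            return -np.sum(genextreme.logpdf(txx, shape, loc=loc, scale=np.exp(log_sigma)))
        start = np.array([txx.mean(), 0.0, np.log(txx.std()), 0.1])
        return minimize(nll, start, method="Nelder-Mead")

    # Synthetic example: drier soils (lower SM, in mm) shift hot extremes upward.
    rng = np.random.default_rng(1)
    sm = rng.uniform(50, 250, size=300)
    txx = genextreme.rvs(0.1, loc=35.0 - 0.016 * sm, scale=1.2, random_state=rng)
    res = fit_gev_with_covariate(txx, sm)
    mu0, mu1, log_sigma, shape = res.x
    print(f"location change per 100 mm of SM: {100 * mu1:.2f} °C")
    ```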

  12. Triadic conceptual structure of the maximum entropy approach to evolution.

    Herrmann-Pillath, Carsten; Salthe, Stanley N


    Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information-carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.

  13. Multitime maximum principle approach of minimal submanifolds and harmonic maps

    Udriste, Constantin


    Some optimization problems coming from Differential Geometry, such as the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains in which multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach of multitime variational calculus. Section 5 (Section 6) proves that the minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...

  14. Collective behaviours in the stock market -- A maximum entropy approach

    Bury, Thomas


    Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definition of risk, etc.). This lack of any characteristic scale and such elaborated behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance but maximum entropy models are able to explain both scale invariance and collective behaviours. The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rules design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial...

  15. Operational forecasting of daily temperatures in the Valencia Region. Part I: maximum temperatures in summer.

    Gómez, I.; Estrela, M.


    Extreme temperature events have a great impact on human society. Knowledge of summer maximum temperatures is very useful for both the general public and organisations whose workers have to operate in the open, e.g. railways, roadways, tourism, etc. Moreover, summer maximum daily temperatures are considered a parameter of interest and concern since persistent heat-waves can affect areas as diverse as public health, energy consumption, etc. Thus, an accurate forecasting of these temperatures could help to predict heat-wave conditions and permit the implementation of strategies aimed at minimizing the negative effects that high temperatures have on human health. The aim of this work is to evaluate the skill of the RAMS model in determining daily maximum temperatures during summer over the Valencia Region. For this, we have used the real-time configuration of this model currently running at the CEAM Foundation. To carry out the model verification process, we have analysed not only the global behaviour of the model for the whole Valencia Region, but also its behaviour for the individual stations distributed within this area. The study has been performed for the summer forecast period of 1 June - 30 September, 2007. The results obtained are encouraging and indicate a good agreement between the observed and simulated maximum temperatures. Moreover, the model captures quite well the temperatures in the extreme heat episodes. Acknowledgement. This work was supported by "GRACCIE" (CSD2007-00067, Programa Consolider-Ingenio 2010), by the Spanish Ministerio de Educación y Ciencia, contract number CGL2005-03386/CLI, and by the Regional Government of Valencia Conselleria de Sanitat, contract "Simulación de las olas de calor e invasiones de frío y su regionalización en la Comunidad Valenciana" ("Heat wave and cold invasion simulation and their regionalization at Valencia Region"). The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (Valencia, Spain).
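
    As a small illustration of the verification statistics typically used in this kind of model evaluation (bias, MAE, RMSE), assuming NumPy; the numbers below are made up.

    ```python
    # Simple verification-metric sketch for comparing forecast vs. observed daily
    # maximum temperatures, of the kind used when evaluating a model against stations.
    import numpy as np

    def verify(obs: np.ndarray, fcst: np.ndarray) -> dict:
        err = fcst - obs
        return {
            "bias": float(err.mean()),                  # mean error (°C)
            "mae": float(np.abs(err).mean()),           # mean absolute error (°C)
            "rmse": float(np.sqrt((err ** 2).mean())),  # root-mean-square error (°C)
        }

    obs = np.array([33.1, 35.4, 36.8, 38.2, 34.0])
    fcst = np.array([32.5, 36.0, 37.5, 37.0, 34.6])
    print(verify(obs, fcst))
    ```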

  16. A simple approach for maximum heat recovery calculations

    Jezowski, J. (Wroclaw Technical Univ. (PL). Inst. of Chemical Engineering and Heating Equipment); Friedler, F. (Hungarian Academy of Sciences, Egyetem (HU). Research Inst. for Technical Chemistry)


    This paper addresses the problem of calculating the maximum heat energy recovery for a given set of process streams. Simple, straightforward algorithms of calculations are presented that account for tasks with multiple utilities, forbidden matches and nonpoint utilities. A new way of applying the so-called dual-stream approach to reduce utility usage for tasks with forbidden matches is also given in this paper. The calculation methods do not require computer programs and mathematical programming application. They give the user a proper insight into a problem to understand heat integration as well as to recognize options and traps in heat exchanger network synthesis. (author).

  17. Estimating minimum and maximum air temperature using MODIS data over Indo-Gangetic Plain

    D B Shah; M R Pandya; H J Trivedi; A R Jani


    Spatially distributed air temperature data are required for climatological, hydrological and environmental studies. However, high spatial resolution patterns of air temperature are not available from meteorological stations due to their sparse network. The objective of this study was to estimate high spatial resolution minimum air temperature (Tmin) and maximum air temperature (Tmax) over the Indo-Gangetic Plain using Moderate Resolution Imaging Spectroradiometer (MODIS) data and India Meteorological Department (IMD) ground station data. Tmin was estimated by establishing an empirical relationship between IMD Tmin and night-time MODIS Land Surface Temperature (Ts), while Tmax was estimated using the Temperature-Vegetation Index (TVX) approach. The TVX approach is based on the linear relationship between Ts and Normalized Difference Vegetation Index (NDVI) data, where Tmax is estimated by extrapolating the NDVI-Ts regression line to the maximum NDVI value (NDVImax) for effective full vegetation cover. The present study also proposed a methodology to estimate NDVImax using IMD-measured Tmax for the Indo-Gangetic Plain. Comparison of MODIS-estimated Tmin with IMD-measured Tmin showed a mean absolute error (MAE) of 1.73°C and a root mean square error (RMSE) of 2.2°C. Analysis in the study for Tmax estimation showed that the calibrated NDVImax performed well, with an MAE of 1.79°C and RMSE of 2.16°C.
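
    A minimal sketch of the TVX extrapolation step described above, assuming NumPy; the window values and the NDVImax constant are illustrative assumptions.

    ```python
    # Sketch of the TVX idea: within a moving window, regress land surface temperature
    # (Ts) on NDVI and extrapolate the line to NDVImax to estimate the air temperature
    # at full vegetation cover.
    import numpy as np

    def tvx_tmax(ndvi_window: np.ndarray, ts_window: np.ndarray, ndvi_max: float = 0.85) -> float:
        """Extrapolate the NDVI-Ts regression line to NDVImax."""
        slope, intercept = np.polyfit(ndvi_window, ts_window, deg=1)
        return slope * ndvi_max + intercept

    # Toy window of MODIS-like pixels: Ts decreases as NDVI increases.
    ndvi = np.array([0.25, 0.32, 0.40, 0.48, 0.55, 0.63])
    ts = np.array([44.0, 42.1, 40.5, 38.8, 37.2, 35.5])     # °C
    print(f"TVX-estimated Tmax: {tvx_tmax(ndvi, ts):.1f} °C")
    ```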

  18. MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India

    K Ramesh; R Anitha


    In this study, a Multivariate Adaptive Regression Spline (MARS) based lead-seven-day minimum and maximum surface air temperature prediction system is modelled for the station Chennai, India. To emphasize the effectiveness of the proposed system, a comparison is made with models created using the statistical learning technique Support Vector Machine Regression (SVMr). The analysis highlights that the prediction accuracy of the MARS models for the minimum temperature forecast is promising for the short-term forecast (lead days 1 to 3), with a mean absolute error (MAE) of less than 1°C, while the prediction efficiency and skill degrade in the medium-term forecast (lead days 4 to 7), with MAE slightly above 1°C. The MAE of the maximum temperature forecast is a little higher than that of the minimum temperature forecast, varying from 0.87°C for lead day one to 1.27°C for lead day seven with the MARS approach. The statistical error analysis emphasizes that the MARS models perform well, with an average 0.2°C reduction in MAE over the SVMr models for all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lead time increases with both approaches.

  19. Triadic Conceptual Structure of the Maximum Entropy Approach to Evolution

    Herrmann-Pillath, Carsten


    Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution. Following recent contributions to the naturalization of Peircean semiosis, we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference device...

  20. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    Sohail, Muhammad Sadiq


    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  1. Delocalized Epidemics on Graphs: A Maximum Entropy Approach

    Sahneh, Faryad Darabi; Scoglio, Caterina


    The susceptible-infected-susceptible (SIS) epidemic process on complex networks can show metastability, resembling an endemic equilibrium. In a general setting, the metastable state may involve a large portion of the network, or it can be localized on small subgraphs of the contact network. Localized infections are not interesting because a true outbreak concerns network-wide invasion of the contact graph rather than localized infection of certain sites within the contact network. Existing approaches to the localization phenomenon suffer from a major drawback: they fully rely on the steady-state solution of mean-field approximate models in the neighborhood of their phase transition point, where their approximation accuracy is worst, as statistical physics tells us. We propose a dispersion entropy measure that quantifies the localization of infections in a generic contact graph. Formulating a maximum entropy problem, we find an upper bound for the dispersion entropy of the possible metastable state in the exa...

  2. Asymmetrical Change Characteristics of Maximum and Minimum Temperatures in Shangqiu in Recent 50 Years


    [Objective] The research aimed to analyze the temporal and spatial variation characteristics of temperature in Shangqiu City during 1961-2010. [Method] Based on temperature data from eight meteorological stations in Shangqiu during 1961-2010, and by using the trend analysis method, the temporal and spatial evolution characteristics of annual average temperature, annual average maximum and minimum temperatures, annual extreme maximum and minimum temperatures, and daily range of annual average temperature in Shangqiu City were analy...

  3. Decadal trends in Red Sea maximum surface temperature

    Chaidez, Veronica


    Ocean warming is a major consequence of climate change, with the surface of the ocean having warmed by 0.11 °C decade⁻¹ over the last 50 years, and it is estimated to continue to warm by an additional 0.6 - 2.0 °C before the end of the century¹. However, there is considerable variability in the rates experienced by different ocean regions, so understanding regional trends is important to inform on possible stresses for marine organisms, particularly in warm seas where organisms may be already operating in the high end of their thermal tolerance. Although the Red Sea is one of the warmest ecosystems on earth, its historical warming trends and thermal evolution remain largely understudied. We characterized the Red Sea's thermal regimes at the basin scale, with a focus on the spatial distribution and changes over time of sea surface temperature maxima, using remotely sensed sea surface temperature data from 1982 - 2015. The overall rate of warming for the Red Sea is 0.17 ± 0.07 °C decade⁻¹, while the northern Red Sea is warming between 0.40 and 0.45 °C decade⁻¹, all exceeding the global rate. Our findings show that the Red Sea is warming fast, which may in the future challenge its organisms and communities.

  4. Asymmetric variability between maximum and minimum temperatures in Northeastern Tibetan Plateau: Evidence from tree rings


    Ecological systems in the headwaters of the Yellow River, characterized by harsh natural environmental conditions, are very vulnerable to climatic change. In recent decades, this area has attracted considerable public attention because of its deteriorating environmental conditions. Based on tree-ring samples from the Xiqing Mountain and A'nyêmagên Mountains at the headwaters of the Yellow River in the Northeastern Tibetan Plateau, we reconstructed the minimum temperatures in the winter half year over the last 425 years and the maximum temperatures in the summer half year over the past 700 years in this region. The minimum temperature in the winter half year showed a relatively stable trend during 1578-1940, followed by an abrupt warming trend since 1941. However, there is no significant warming trend for the maximum temperature in the summer half year over the 20th century. Asymmetric variation patterns between the minimum and maximum temperatures were observed in this study over the past 425 years. During the past 425 years, there are similar variation patterns between the minimum and maximum temperatures; however, the minimum temperatures vary about 25 years earlier than the maximum temperatures. If such variation patterns between the minimum and maximum temperatures over the past 425 years continue over the next 30 years, the maximum temperature in this region will increase significantly.

  5. Asymmetric variability between maximum and minimum temperatures in Northeastern Tibetan Plateau: Evidence from tree rings

    Jacoby; GORDON


    Ecological systems in the headwaters of the Yellow River, characterized by harsh natural environmental conditions, are very vulnerable to climatic change. In recent decades, this area has attracted considerable public attention because of its deteriorating environmental conditions. Based on tree-ring samples from the Xiqing Mountain and A'nyêmagên Mountains at the headwaters of the Yellow River in the Northeastern Tibetan Plateau, we reconstructed the minimum temperatures in the winter half year over the last 425 years and the maximum temperatures in the summer half year over the past 700 years in this region. The minimum temperature in the winter half year showed a relatively stable trend during 1578-1940, followed by an abrupt warming trend since 1941. However, there is no significant warming trend for the maximum temperature in the summer half year over the 20th century. Asymmetric variation patterns between the minimum and maximum temperatures were observed in this study over the past 425 years. During the past 425 years, there are similar variation patterns between the minimum and maximum temperatures; however, the minimum temperatures vary about 25 years earlier than the maximum temperatures. If such variation patterns between the minimum and maximum temperatures over the past 425 years continue over the next 30 years, the maximum temperature in this region will increase significantly.

  6. A new global reconstruction of temperature changes at the Last Glacial Maximum

    J. D. Annan


    Some recent compilations of proxy data both on land and ocean (MARGO Project Members, 2009; Bartlein et al., 2011; Shakun et al., 2012) have provided a new opportunity for an improved assessment of the overall climatic state of the Last Glacial Maximum. In this paper, we combine these proxy data with the ensemble of structurally diverse state-of-the-art climate models which participated in the PMIP2 project (Braconnot et al., 2007) to generate a spatially complete reconstruction of surface air (and sea surface) temperatures. We test a variety of approaches, and show that multiple linear regression performs well for this application. Our reconstruction is significantly different from, and more accurate than, previous approaches, and we obtain an estimated global mean cooling of 4.0 ± 0.8 °C (95% CI).

  7. On the magnitude of temperature decrease in the equatorial regions during the Last Glacial Maximum

    王宁练; 姚檀栋; 施雅风; L. G. Thompson; J. Cole-Dai; P.-N. Lin; M. E. Davis


    Based on data on temperature changes revealed by various palaeothermometric proxy indices, it is found that the magnitude of temperature decrease increased with altitude in the equatorial regions during the Last Glacial Maximum. The direct cause of this phenomenon was the change in temperature lapse rate, which was about (0.1 ± 0.05) °C/100 m larger in the equatorial regions during the Last Glacial Maximum than at present. Moreover, the analyses show that CLIMAP possibly underestimated the sea surface temperature decrease in the equatorial regions during the Last Glacial Maximum.

  8. Adaptive Statistical Language Modeling: A Maximum Entropy Approach


    recognition systems were built that could recognize vowels or digits, but they could not be successfully extended to handle more realistic language... maximum likelihood of generating the training data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely

  9. A probabilistic approach to the concept of Probable Maximum Precipitation

    Papalexiou, S. M.; Koutsoyiannis, D.


    The concept of Probable Maximum Precipitation (PMP) is based on the assumptions that (a) there exists an upper physical limit of the precipitation depth over a given area at a particular geographical location at a certain time of year, and (b) that this limit can be estimated based on deterministic considerations. The most representative and widespread estimation method of PMP is the so-called moisture maximization method. This method maximizes observed storms assuming...
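
    For reference, the moisture maximization mentioned above is conventionally written as a simple scaling of an observed storm by precipitable water (a generic textbook form, not taken from this paper):

    ```latex
    P_{\mathrm{max}} = P_{\mathrm{obs}} \, \frac{W_{\mathrm{max}}}{W_{\mathrm{obs}}}
    ```

    where $P_{\mathrm{obs}}$ is the observed storm precipitation, $W_{\mathrm{obs}}$ the precipitable water during the storm, and $W_{\mathrm{max}}$ the climatological maximum precipitable water for that location and season.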

  10. Observed Abrupt Changes in Minimum and Maximum Temperatures in Jordan in the 20th Century

    Mohammad M. Samdi


    This study examines changes in annual and seasonal mean (minimum and maximum) temperature variations in Jordan during the 20th century. The analyses focus on the time series records at the Amman Airport Meteorological (AAM) station. The occurrence of abrupt changes and trends was examined using cumulative sum charts (CUSUM), bootstrapping and the Mann-Kendall rank test. Statistically significant abrupt changes and trends have been detected. Major change points in the mean minimum (night-time) and mean maximum (day-time) temperatures occurred in 1957 and 1967, respectively. A minor change point in the annual mean maximum temperature also occurred in 1954, which is in essential agreement with the detected change in minimum temperature. The analysis showed a significant warming trend after the years 1957 and 1967 for the minimum and maximum temperatures, respectively. The analysis of maximum temperatures shows a significant warming trend after the year 1967 for the summer season, with a rate of temperature increase of 0.038°C/year. The analysis of minimum temperatures shows a significant warming trend after the year 1957 for all seasons. Temperature and rainfall data from other stations in the country have been considered and showed similar changes.
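
    A minimal implementation of the Mann-Kendall rank test used above; the normal approximation without tie correction is a simplifying assumption made here for brevity.

    ```python
    # Minimal Mann-Kendall trend test for a monotonic trend in an annual series.
    import math
    import numpy as np

    def mann_kendall(x: np.ndarray) -> tuple[float, float]:
        """Return (S statistic, two-sided p-value)."""
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        if s > 0:
            z = (s - 1) / math.sqrt(var_s)
        elif s < 0:
            z = (s + 1) / math.sqrt(var_s)
        else:
            z = 0.0
        p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
        return float(s), p

    # Toy annual maximum-temperature series with a warming trend of 0.02 °C/year.
    rng = np.random.default_rng(2)
    years = np.arange(1950, 2000)
    tmax = 30.0 + 0.02 * (years - 1950) + rng.normal(0, 0.3, len(years))
    print(mann_kendall(tmax))
    ```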

  11. Detailed analysis of an endoreversible fuel cell: Maximum power and optimal operating temperature determination

    Vaudrey, A; Lanzetta, F; Glises, R


    Producing useful electrical work while consuming chemical energy, a fuel cell has to reject heat to its surroundings. However, as for any other type of engine, this thermal energy cannot be exchanged isothermally in finite time through finite areas. As has already been done for various types of systems, we study the fuel cell within the finite-time thermodynamics framework and define an endoreversible fuel cell. Considering different types of heat transfer laws, we obtain an optimal value of the operating temperature, corresponding to a maximum produced power. This analysis is a first step towards a thermodynamic approach to the design of thermal management devices that takes into account the performance of the whole system.

  12. Downscaling Maximum Temperatures to Subkilometer Resolutions in the Shenandoah National Park of Virginia, USA

    Temple R. Lee


    Downscaling future temperature projections to mountainous regions is vital for many applications, including ecological and water resource management. In this study, we demonstrate a method to downscale maximum temperatures to subkilometer resolutions using the Parameter-elevation Regression on Independent Slopes Model (PRISM). We evaluate the downscaling method with observations from a network of temperature sensors deployed along western and eastern slopes of Virginia’s Shenandoah National Park in the southern Appalachian Mountains. We find that the method overestimates mean July maximum temperatures by about 2°C (4°C) along the western (eastern) slopes. Based on this knowledge, we introduce corrections to generate maps of current and future maximum temperatures in the Shenandoah National Park.

  13. Variability of maximum and mean average temperature across Libya (1945-2009)

    Ageena, I.; Macdonald, N.; Morse, A. P.


    Spatial and temporal variability in daily maximum and mean average daily temperature, monthly maximum and mean average monthly temperature for nine coastal stations during the period 1956-2009 (54 years), and annual maximum and mean average temperature for coastal and inland stations for the period 1945-2009 (65 years) across Libya are analysed. During the period 1945-2009, significant increases in maximum temperature (0.017 °C/year) and mean average temperature (0.021 °C/year) are identified at most stations. Significant warming in annual maximum temperature (0.038 °C/year) and mean average annual temperature (0.049 °C/year) is observed at almost all study stations during the last 32 years (1978-2009). The results show that Libya has witnessed significant warming since the middle of the twentieth century, which will have a considerable impact on societies and the ecology of the North Africa region if increases continue at current rates.

  14. The Hengill geothermal area, Iceland: variation of temperature gradients deduced from the maximum depth of seismogenesis

    Foulger, G.R.


    Given a uniform lithology and strain rate and a full seismic data set, the maximum depth of earthquakes may be viewed to a first order as an isotherm. These conditions are approached at the Hengill geothermal area, S. Iceland, a dominantly basaltic area. The temperature at which seismic failure ceases for the strain rates likely at the Hengill geothermal area is determined by analogy with oceanic crust, and is about 650 ± 50°C. The topographies of the top and bottom of the seismogenic layer were mapped using 617 earthquakes. The thickness of the seismogenic layer is roughly constant and about 3 km. A shallow, aseismic, low-velocity volume within the spreading plate boundary that crosses the area occurs above the top of the seismogenic layer and is interpreted as an isolated body of partial melt. The base of the seismogenic layer has a maximum depth of about 6.5 km beneath the spreading axis and deepens to about 7 km beneath a transform zone in the south of the area. -from Author

  15. A Maximum Likelihood Approach to Least Absolute Deviation Regression

    Yinbo Li


    Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
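
    The key building block of the reduction described above is the weighted median, which minimizes a weighted sum of absolute deviations and is the MLE of location under Laplacian noise; here is a minimal sketch of that step only (not the full edge-line descent algorithm of the paper).

    ```python
    # Weighted median: the minimizer of sum_i w_i * |x_i - theta| over theta.
    import numpy as np

    def weighted_median(x: np.ndarray, w: np.ndarray) -> float:
        order = np.argsort(x)
        x_sorted, w_sorted = x[order], w[order]
        cum = np.cumsum(w_sorted)
        # Smallest x whose cumulative weight reaches half of the total weight.
        return float(x_sorted[np.searchsorted(cum, 0.5 * w_sorted.sum())])

    x = np.array([1.0, 2.0, 3.0, 10.0])
    w = np.array([1.0, 1.0, 1.0, 0.5])
    print(weighted_median(x, w))   # 2.0: the large outlier barely moves the estimate
    ```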

  16. A hybrid solar panel maximum power point search method that uses light and temperature sensors

    Ostrowski, Mariusz


    Solar cells have low efficiency and non-linear characteristics. To increase the output power, solar cells are connected in more complex structures. Solar panels consist of series-connected solar cells with a few bypass diodes, to avoid the negative effects of partial shading conditions. Solar panels are connected to a special device named the maximum power point tracker. This device adapts the output power from the solar panels to the load requirements and also has a built-in algorithm to track the maximum power point of the solar panels. Bypass diodes may cause the appearance of local maxima on the power-voltage curve when the panel surface is illuminated irregularly. In this case traditional maximum power point tracking algorithms can find only a local maximum power point. In this article a hybrid maximum power point search algorithm is presented. The main goal of the proposed method is a combination of two algorithms: a method that uses temperature sensors to track the maximum power point in partial shading conditions and a method that uses an illumination sensor to track the maximum power point in equal illumination conditions. In comparison to other methods, the proposed algorithm uses correlation functions to determine the relationship between the values of the illumination and temperature sensors and the corresponding values of current and voltage at the maximum power point. In partial shading conditions the algorithm calculates local maximum power points based on the temperature values and the correlation function, then measures the power at each of the calculated points, chooses the one with the biggest value, and from that point runs the perturb and observe search algorithm. In the case of equal illumination the algorithm calculates the maximum power point based on the illumination value and the correlation function, and from that point runs the perturb and observe algorithm. In addition, the proposed method uses a special coefficient modification of the correlation functions algorithm. This sub
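
    For reference, a minimal perturb and observe step of the kind the hybrid method falls back on, as mentioned above; the toy power-voltage curve and the step size are assumptions, not values from the paper.

    ```python
    # Perturb and observe (P&O): keep perturbing the operating voltage in the same
    # direction while power increases, reverse when it drops.
    def perturb_and_observe(v_prev, p_prev, v_now, p_now, step=0.1):
        """Return the next operating voltage to try."""
        if p_now > p_prev:
            direction = 1.0 if v_now > v_prev else -1.0   # keep moving the same way
        else:
            direction = -1.0 if v_now > v_prev else 1.0   # power dropped: reverse
        return v_now + direction * step

    def panel_power(v):
        return max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)    # toy P-V curve peaking at 17 V

    v_prev, v_now = 10.0, 10.1
    for _ in range(100):
        v_next = perturb_and_observe(v_prev, panel_power(v_prev), v_now, panel_power(v_now))
        v_prev, v_now = v_now, v_next
    print(round(v_now, 1))   # oscillates around the 17 V maximum power point
    ```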

  17. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Mroczka Janusz


    Photovoltaic panels have non-linear current-voltage characteristics and produce the maximum power at only one point, called the maximum power point. In the case of uniform illumination a single solar panel shows only one maximum power, which is also the global maximum power point. In the case of an irregularly illuminated photovoltaic panel, many local maxima on the power-voltage curve can be observed, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is in uniform insolation conditions. Then an appropriate strategy for tracking the maximum power point is chosen using a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.

  18. How do GCMs represent daily maximum and minimum temperatures in La Plata Basin?

    Bettolli, M. L.; Penalba, O. C.; Krieger, P. A.


    This work focuses on the southern La Plata Basin region, which is one of the most important agriculture and hydropower producing regions worldwide. Extreme climate events such as cold and heat waves and frost events have a significant socio-economic impact. It is a big challenge for global climate models (GCMs) to simulate regional patterns, temporal variations and the distribution of temperature on a daily basis. Taking into account the present and future relevance of the region for the economy of the countries involved, it is very important to analyze maximum and minimum temperatures for model evaluation and development. This kind of study is also the basis for a great deal of the statistical downscaling methods in a climate change context. The aim of this study is to analyze the ability of the GCMs to reproduce the observed daily maximum and minimum temperatures in the southern La Plata Basin region. To this end, daily fields of maximum and minimum temperatures from a set of 15 GCMs were used. The outputs corresponding to the historical experiment for the reference period 1979-1999 were obtained from the WCRP CMIP5 (World Climate Research Programme Coupled Model Intercomparison Project Phase 5). In order to compare daily temperature values in the southern La Plata Basin region as generated by GCMs to those derived from observations, daily maximum and minimum temperatures were used from the gridded dataset generated by the Claris LPB Project ("A Europe-South America Network for Climate Change Assessment and Impact Studies in La Plata Basin"). Additionally, reference station data were included in the study. The analysis was focused on austral winter (June, July, August) and summer (December, January, February). The study was carried out by analyzing the performance of the 15 GCMs, as well as their ensemble mean, in simulating the probability distribution function (pdf) of maximum and minimum temperatures, including mean values, variability, skewness, etc., and regional

  19. Ambient maximum temperature as a function of Salmonella food poisoning cases in the Republic of Macedonia

    Vladimir Kendrovski


    Background: Higher temperatures have been associated with higher salmonellosis notifications worldwide. Aims: The objective of this paper is to assess the seasonal pattern of Salmonella cases among humans. Material and Methods: The relationship between ambient maximum temperature and reports of confirmed cases of Salmonella in the Republic of Macedonia and Skopje during the summer months (i.e. June, July, August and September) from 1998 through 2008 was investigated. The monthly number of reported Salmonella cases and ambient maximum temperatures for Skopje were related to the national number of cases and temperatures recorded during the same timeframe using regression analyses. The Poisson regression model was adopted for the analysis of the data. Results: While a decreasing tendency was registered at the national level, the analysis for Skopje showed an increasing tendency in the registration of new Salmonella cases. Reported incidents of salmonellosis were positively associated (P<0.05) with temperature during the summer months. For an increase of 1°C in the maximum monthly mean temperature in Skopje, the salmonellosis incidence increased by 5.2% per month. Conclusions: The incidence of Salmonella cases in the Macedonian population varies seasonally: the highest values of the Seasonal Index for Salmonella cases were registered in the summer months, i.e. June, July, August and September.
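
    A sketch of a Poisson regression of monthly case counts on maximum monthly mean temperature, assuming statsmodels; the data are synthetic and only the model form (log link, cases ~ temperature) follows the abstract.

    ```python
    # Poisson GLM: percentage change in expected cases per +1 °C is exp(beta) - 1.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    temp = rng.uniform(24, 38, size=44)                 # summer-month max temps (°C)
    lam = np.exp(1.0 + 0.05 * temp)                     # ~5% more cases per +1 °C
    cases = rng.poisson(lam)

    X = sm.add_constant(temp)
    fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
    beta = fit.params[1]
    print(f"{100 * (np.exp(beta) - 1):.1f}% change in cases per +1 °C")
    ```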

  20. Effects of Body Weight and Water Temperature on Maximum Food Consumption of Juvenile Sebastodes fuscescens (Houttuyn)

    谢松光; 杨红生; 周毅; 张福绥


    The maximum rate of food consumption (Cmax) was determined for juvenile Sebastodes fuscescens (Houttuyn) at water temperatures of 10, 15, 20 and 25°C. The relationship of Cmax to body weight (W) at each temperature was described by a power equation: lnCmax = a + b lnW. Covariance analysis revealed a significant interaction of temperature and body weight. The relationship of the adjusted Cmax to water temperature (T) was described by a quadratic equation: Cmax = -0.369 + 0.456T - 0.0117T². The optimal feeding temperature calculated from this equation was 19.5°C. The coefficients of the multiple regression relating Cmax to body weight (W) and water temperature (T) are given in Table 2.
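
    The optimal feeding temperature quoted above follows directly from the fitted quadratic by setting its derivative to zero:

    ```latex
    \frac{dC_{\max}}{dT} = 0.456 - 2(0.0117)\,T = 0
    \quad\Longrightarrow\quad
    T_{\mathrm{opt}} = \frac{0.456}{0.0234} \approx 19.5\ ^{\circ}\mathrm{C}
    ```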

  1. Trends in Mean Annual Minimum and Maximum Near Surface Temperature in Nairobi City, Kenya

    George Lukoye Makokha


    This paper examines the long-term urban modification of mean annual conditions of near surface temperature in Nairobi City. Data from four weather stations situated in Nairobi were collected from the Kenya Meteorological Department for the period from 1966 to 1999 inclusive. The data included mean annual maximum and minimum temperatures, and were first subjected to a homogeneity test before analysis. Both linear regression and the Mann-Kendall rank test were used to discern the mean annual trends. Results show that the change of temperature over the thirty-four-year study period is higher for minimum temperature than for maximum temperature. The warming trends began earlier and are more significant at the urban stations than at the sub-urban stations, an indication of the spread of urbanisation from the built-up Central Business District (CBD) to the suburbs. The established significant warming trends in minimum temperature, which are likely to reach higher proportions in future, pose serious challenges for climate and urban planning of the city. In particular, the effect of increased minimum temperature on human physiological comfort, building and urban design, wind circulation and air pollution needs to be incorporated in future urban planning programmes of the city.

  2. Dynamic Performance of Maximum Power Point Trackers in TEG Systems Under Rapidly Changing Temperature Conditions

    Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.


    Characterization of thermoelectric generators (TEG) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated in several applications were evaluated; the results showed temperature variation up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady-state for different input temperatures and a maximum temperature of 401°C. Using electrical data from the characterization of the oxide module, a solar array simulator was used to emulate a TEG. A trapezoidal temperature profile with different gradients was used on the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty accurately tracking under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG

  3. Evaluation of maximum allowable temperature inside basket of dry storage module for CANDU spent fuel

    Lee, Kyung Ho; Yoon, Jeong Hyoun; Chae, Kyoung Myoung; Choi, Byung Il; Lee, Heung Young; Song, Myung Jae [Nuclear Environment Technology Institute, Taejon (Korea, Republic of); Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)


    This study provides a maximum allowable fuel temperature through a preliminary evaluation of the UO2 weight gain that may occur on a failed (breached sheathing) element of a fuel bundle. Intact bundles would not be affected as the UO2 would not be in contact with the air in the fuel storage basket. The analysis is made for the MACSTOR/KN-400 to be operated in Wolsong ambient air temperature conditions. The design basis fuel is a 6-year cooled fuel bundle that, on average, has reached a burnup of 7,800 MWd/MTU. The fuel bundle considered for analysis is assumed to have a high burnup of 12,000 MWd/MTU and be located in a hot basket. The MACSTOR/KN-400 has the same air circuit as the MACSTOR and the air circuit will require a slightly higher temperature difference to exit the increased heat load. The maximum temperature of a high burnup bundle stored in the new MACSTOR/KN-400 is expected to be about 9 °C higher than the fuel temperature of the MACSTOR at an equivalent constant ambient temperature. This temperature increase will in turn increase the UO2 weight gain from 0.06% (MACSTOR for Wolsong conditions) to an estimated 0.13% weight gain for the MACSTOR/KN-400. Compared to an acceptable UO2 weight gain of 0.6%, we are thus expecting to maintain a very acceptable safety factor of 4 to 5 for the new module against unacceptable stresses in the fuel sheathing. Based on the UO2 weight gain, the maximum allowable fuel temperature was shown to be 164 °C.

  4. The maximum efficiency of nano heat engines depends on more than temperature

    Woods, Mischa; Ng, Nelly; Wehner, Stephanie

    Sadi Carnot's theorem regarding the maximum efficiency of heat engines is considered to be of fundamental importance in the theory of heat engines and thermodynamics. Here, we show that at the nano and quantum scale, this law needs to be revised in the sense that more information about the bath other than its temperature is required to decide whether maximum efficiency can be achieved. In particular, we derive new fundamental limitations of the efficiency of heat engines at the nano and quantum scale that show that the Carnot efficiency can only be achieved under special circumstances, and we derive a new maximum efficiency for others. A preprint can be found at arXiv:1506.02322 [quant-ph]. Singapore's MOE Tier 3A Grant & STW, Netherlands.
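
    For reference, the macroscopic bound that the abstract argues must be revised at the nano and quantum scale is the standard Carnot efficiency:

    ```latex
    \eta_{\mathrm{Carnot}} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
    ```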

  5. Temperature dependence of attitude sensor coalignments on the Solar Maximum Mission (SMM)

    Pitone, D. S.; Eudell, A. H.; Patt, F. S.


    The temperature correlation of the relative coalignment between the fine-pointing sun sensor and fixed-head star trackers measured on the Solar Maximum Mission (SMM) is analyzed. An overview of the SMM, including mission history and configuration, is given. Possible causes of the misalignment variation are discussed, with focus placed on spacecraft bending due to solar-radiation pressure, electronic or mechanical changes in the sensors, uncertainty in the attitude solutions, and mounting-plate expansion and contraction due to thermal effects. Yaw misalignment variation from the temperature profile is assessed, and suggestions for spacecraft operations are presented, involving methods to incorporate flight measurements of the temperature-versus-alignment function and its variance in operational procedures and the spacecraft structure temperatures in the attitude telemetry record.

  6. On the estimation of the curvatures and bending rigidity of membrane networks via a local maximum-entropy approach

    Fraternali, Fernando; Marcelli, Gianluca


    We present a meshfree method for the curvature estimation of membrane networks based on the Local Maximum Entropy approach recently presented in (Arroyo and Ortiz, 2006). A continuum regularization of the network is carried out by balancing the maximization of the information entropy corresponding to the nodal data, with the minimization of the total width of the shape functions. The accuracy and convergence properties of the given curvature prediction procedure are assessed through numerical applications to benchmark problems, which include coarse grained molecular dynamics simulations of the fluctuations of red blood cell membranes (Marcelli et al., 2005; Hale et al., 2009). We also provide an energetic discrete-to-continuum approach to the prediction of the zero-temperature bending rigidity of membrane networks, which is based on the integration of the local curvature estimates. The Local Maximum Entropy approach is easily applicable to the continuum regularization of fluctuating membranes, and the predict...

  7. Intensification of the meridional temperature gradient in the Great Barrier Reef following the Last Glacial Maximum.

    Felis, Thomas; McGregor, Helen V; Linsley, Braddock K; Tudhope, Alexander W; Gagan, Michael K; Suzuki, Atsushi; Inoue, Mayuri; Thomas, Alexander L; Esat, Tezer M; Thompson, William G; Tiwari, Manish; Potts, Donald C; Mudelsee, Manfred; Yokoyama, Yusuke; Webster, Jody M


    Tropical south-western Pacific temperatures are of vital importance to the Great Barrier Reef (GBR), but the role of sea surface temperatures (SSTs) in the growth of the GBR since the Last Glacial Maximum remains largely unknown. Here we present records of Sr/Ca and δ(18)O for Last Glacial Maximum and deglacial corals that show a considerably steeper meridional SST gradient than the present day in the central GBR. We find a 1-2 °C larger temperature decrease between 17° and 20°S about 20,000 to 13,000 years ago. The result is best explained by the northward expansion of cooler subtropical waters due to a weakening of the South Pacific gyre and East Australian Current. Our findings indicate that the GBR experienced substantial meridional temperature change during the last deglaciation, and serve to explain anomalous deglacial drying of northeastern Australia. Overall, the GBR developed through significant SST change and may be more resilient than previously thought.

  8. Sea-surface temperatures around the Australian margin and Indian Ocean during the Last Glacial Maximum

    Barrows, Timothy T.; Juggins, Steve


    We present new last glacial maximum (LGM) sea-surface temperature (SST) maps for the oceans around Australia based on planktonic foraminifera assemblages. To provide the most reliable SST estimates we use the modern analog technique, the revised analog method, and artificial neural networks in conjunction with an expanded modern core top database. All three methods produce similar quality predictions and the root mean squared error of the consensus prediction (the average of the three) under cross-validation is only ±0.77 °C. We determine LGM SST using data from 165 cores, most of which have good age control from oxygen isotope stratigraphy and radiocarbon dates. The coldest SST occurred at 20,500±1400 cal yr BP, predating the maximum in oxygen isotope records at 18,200±1500 cal yr BP. During the LGM interval we observe cooling within the tropics of up to 4 °C in the eastern Indian Ocean, and mostly between 0 and 3 °C elsewhere along the equator. The high latitudes cooled by the greatest degree, a maximum of 7-9 °C in the southwest Pacific Ocean. Our maps improve substantially on previous attempts by making higher quality temperature estimates, using more cores, and improving age control.
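    As a hedged illustration of the consensus idea described above, the sketch below averages three hypothetical transfer-function predictions (stand-ins for the modern analog technique, revised analog method, and artificial neural network estimates) and reports the RMSE of the consensus against observed core-top SSTs; the arrays and function name are illustrative, not the authors' code.

```python
import numpy as np

def consensus_rmse(pred_mat, pred_ram, pred_ann, observed_sst):
    """Average three SST estimates (stand-ins for MAT, RAM and ANN transfer
    functions) into a consensus prediction and report its root mean squared
    error against the observed core-top SSTs."""
    consensus = np.mean([pred_mat, pred_ram, pred_ann], axis=0)
    rmse = np.sqrt(np.mean((consensus - observed_sst) ** 2))
    return consensus, rmse

# Hypothetical predictions from three methods for five core tops (deg C)
mat = np.array([26.1, 24.8, 18.2, 12.5, 27.0])
ram = np.array([25.7, 25.2, 18.6, 12.1, 26.6])
ann = np.array([26.4, 24.5, 17.9, 12.8, 27.3])
obs = np.array([26.0, 25.0, 18.0, 12.3, 26.9])

consensus, rmse = consensus_rmse(mat, ram, ann, obs)
print(consensus, rmse)
```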

  9. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi


Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords RFID tag cardinality estimation maximum likelihood detection error...
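    The abstract is truncated, so the following is only a toy sketch of ML cardinality estimation under an assumed simplified model: each tag is detected in each of several independent reader sessions with a known detection probability, and the tag-set size N is chosen to maximize the likelihood of the number of distinct tags observed. The model and the function ml_tag_count are assumptions for illustration, not the paper's estimator.

```python
import numpy as np
from scipy.stats import binom

def ml_tag_count(distinct_detected, n_sessions, p_detect, n_max=5000):
    """Grid-search ML estimate of the tag-set cardinality N under a toy model:
    each tag is detected in each reader session independently with probability
    p_detect, so the number of distinct tags seen over n_sessions sessions is
    Binomial(N, 1 - (1 - p_detect)**n_sessions)."""
    q = 1.0 - (1.0 - p_detect) ** n_sessions      # P(tag seen at least once)
    candidates = np.arange(distinct_detected, n_max + 1)
    log_lik = binom.logpmf(distinct_detected, candidates, q)
    return int(candidates[np.argmax(log_lik)])

print(ml_tag_count(distinct_detected=180, n_sessions=3, p_detect=0.6))
```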

  10. DSC “peak temperature” versus “maximum slope temperature” in determining TSSD temperature

    Khatamian, D.


One of the concerns of the nuclear industry is the deleterious effect of hydrogen on the structural integrity of reactor core components due to delayed hydride cracking (DHC). The DHC process occurs when the hydrogen concentration exceeds the terminal solid solubility (TSS) in the component. Thus, accurate knowledge of TSS is necessary to predict the lifetime of the components. Differential scanning calorimetry (DSC) is normally used to measure the hydrogen TSS in zirconium alloys. There is a measurable change in the amount of heat absorbed by the specimen when the hydrides dissolve. The hydride dissolution process does not exhibit a well-defined "sharp" change in the heat-flow signal at the transition temperature. A typical DSC heat-flow curve for hydride dissolution has three distinct features: "peak temperature" (PT), "maximum slope temperature" (MST) and "completion temperature". The present investigation aims to identify the part of the heat-flow signal that most closely corresponds to the TSS temperature for hydride dissolution (T_TSSD). Coupons were cut from a Zr-2.5Nb specimen, which had been previously hydrided using an electrolytic cell to create a surface hydride layer approximately 20 μm thick on all sides of the specimen. The coupons were then annealed isothermally at various temperatures to establish T_TSSD under equilibrium conditions. Subsequently the hydride layer was removed and the coupons were analyzed for the TSSD temperature using DSC. The PT and MST for each DSC run were determined and compared to the annealing temperature of the coupon. The results show that the annealing temperature (the equilibrium T_TSSD) is much closer to the DSC PT than to any other feature of the heat-flow curve.

  11. Improved Determination of the Location of the Temperature Maximum in the Corona

    Lemaire, J. F.; Stegen, K.


The most used method to calculate the coronal electron temperature [Te(r)] from a coronal density distribution [ne(r)] is the scale-height method (SHM). We introduce a novel method that is a generalization of a method introduced by Alfvén (Ark. Mat. Astron. Fys. 27, 1, 1941) to calculate Te(r) for a corona in hydrostatic equilibrium: the "HST" method. All of the methods discussed here require given electron-density distributions [ne(r)], which can be derived from white-light (WL) eclipse observations. The new "DYN" method determines the unique solution of Te(r) for which Te(r → ∞) → 0 when the solar corona expands radially, as realized in hydrodynamical solar-wind models. The SHM and DYN methods give comparable distributions for Te(r). Both have a maximum [Tmax] whose value ranges between 1 and 3 MK. However, the peak of temperature is located at a different altitude in the two cases. Close to the Sun, where the expansion velocity is subsonic (r < 1.3 R⊙), the DYN method gives the same results as the HST method. The effects of the other free parameters on the DYN temperature distribution are presented in the last part of this study. Our DYN method is a new tool to evaluate the range of altitudes where the heating rate is maximum in the solar corona when the electron-density distribution is obtained from WL coronal observations.
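    A minimal sketch of the scale-height idea (SHM) is given below, assuming a fully ionized corona with mean molecular weight μ ≈ 0.6 and hydrostatic equilibrium: the temperature follows from the local gravity and the density scale height of the observed ne(r) profile. The density profile used here is purely illustrative, not an eclipse observation.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m
K_B = 1.381e-23      # J/K
M_H = 1.673e-27      # kg
MU = 0.6             # mean molecular weight of a fully ionized corona (assumed)

def scale_height_temperature(r, n_e):
    """Scale-height estimate of the coronal temperature:
    T(r) ~ (mu * m_H / k_B) * g(r) * H_n(r), with the density scale height
    H_n = -(d ln n_e / dr)^-1 taken from the given density profile."""
    dlnn_dr = np.gradient(np.log(n_e), r)
    g = G * M_SUN / r**2
    return -(MU * M_H / K_B) * g / dlnn_dr

# Hypothetical electron-density profile falling off with heliocentric distance
r = np.linspace(1.1, 3.0, 200) * R_SUN
n_e = 1e14 * (R_SUN / r) ** 6            # m^-3, illustrative only
T = scale_height_temperature(r, n_e)
print(T.max() / 1e6, "MK (peak of this illustrative scale-height profile)")
```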

  12. The evolution of photospheric temperature in nova V2676 Oph toward the formation of C2 and CN during its near-maximum phase

    Kawakita, Hideyo; Arai, Akira; Fujii, Mitsugu


The molecular formation of C2 and CN in the dust-forming classical nova V2676 Oph occurs during its near-maximum phase. We investigated the temporal evolution of the photospheric temperature of the nova as it approached the epoch of molecular formation during its early phase. The effective temperature of the nova around the maximum decreased from ˜7000 K to ˜5000 K over the course of ˜3 d. These temperatures provided conditions favorable to the molecular formation of C2 and CN in V2676 Oph.

  13. Comparative High Field Magneto-transport Of Rare Earth Oxypnictides With Maximum Transition Temperatures

    Balakirev, Fedor F [Los Alamos National Laboratory; Migliori, A [MPA-NHMFL; Riggs, S [NHMFL-FSU; Hunte, F [NHMFL-FSU; Gurevich, A [NHMFL-FSU; Larbalestier, D [NHMFL-FSU; Boebinger, G [NHMFL-FSU; Jaroszynski, J [NHMFL-FSU; Ren, Z [CHINA; Lu, W [CHINA; Yang, J [CHINA; Shen, X [CHINA; Dong, X [CHINA; Zhao, Z [CHINA; Jin, R [ORNL; Sefat, A [ORNL; Mcguire, M [ORNL; Sales, B [ORNL; Christen, D [ORNL; Mandrus, D [ORNL


We compare the magnetotransport of three iron-arsenide-based compounds ReFeAsO (Re = La, Sm, Nd) in very high DC and pulsed magnetic fields up to 45 and 54 T, respectively. Each sample studied exhibits a superconducting transition temperature near the maximum reported to date for that particular compound. While high magnetic fields do not suppress the superconducting state appreciably, the resistivity, Hall coefficient, and critical magnetic fields, taken together, suggest that the phenomenology and superconducting parameters of the oxypnictide superconductors bridge the gap between MgB2 and YBCO.

  14. Probing Ionic Liquid Aqueous Solutions Using Temperature of Maximum Density Isotope Effects

    Mohammad Tariq


This work is a new development of an extensive research program that is investigating for the first time shifts in the temperature of maximum density (TMD) of aqueous solutions caused by ionic liquid solutes. In the present case we have compared the shifts caused by three ionic liquid solutes with a common cation (1-ethyl-3-methylimidazolium) coupled with acetate, ethylsulfate and tetracyanoborate anions, in normal and deuterated water solutions. The observed differences are discussed in terms of the nature of the corresponding anion-water interactions.

  15. The Paleocene - Eocene Thermal Maximum: Temperature and Ecology in the Tropics

    Frieling, J.; Gebhardt, H.; Adekeye, O. A.; Akande, S. O.; Reichart, G. J.; Middelburg, J. J. B. M.; Schouten, S.; Huber, M.; Sluijs, A.


Various records across the Paleocene-Eocene Thermal Maximum (PETM) have established approximately 5 °C of additional surface and deep ocean warming, superimposed on the already warm latest Paleocene. The PETM is further characterized by a global negative stable carbon isotope excursion (CIE), poleward migration of thermophilic biota, ocean acidification, increased weathering, photic zone euxinia and an intensified hydrological cycle. Reconstructed temperatures for the PETM in mid and high latitudes regularly exceed modern open marine tropical temperatures. Constraints on absolute tropical temperatures are, however, limited. We studied the PETM in a sediment section from the Nigerian sector of the Dahomey Basin, deposited on the shelf near the equator. We estimate sea surface temperatures by paired analyses of TEX86, and Mg/Ca and δ18O of foraminifera from the Shagamu Quarry. These indicate Paleocene temperatures of ~33 °C and, based on TEX86, a rise in SST of 4 °C during the PETM. During the PETM, intermittent photic zone euxinia developed, based on the presence of the biomarker isorenieratane. Interestingly, during peak warmth, dinoflagellate cyst abundances and diversity are remarkably low. From our new data and evidence from modern dinoflagellate experiments, we conclude that thermal stress was the main driver of this observation. We infer that endothermal and most ectothermal nektonic and planktonic marine eukaryotic organisms could not have lived in the surface waters in this part of the tropics during the PETM.

  16. Single Temperature Sensor Superheat Control Using a Novel Maximum Slope-seeking Method

    Vinther, Kasper; Rasmussen, Henrik; Izadi-Zamanabadi, Roozbeh;


    Superheating of refrigerant in the evaporator is an important aspect of safe operation of refrigeration systems. The level of superheat is typically controlled by adjusting the flow of refrigerant using an electronic expansion valve, where the superheat is calculated using measurements from...... a pressure and a temperature sensor. In this paper we show, through extensive testing, that the superheat or filling of the evaporator can actually be controlled using only a single temperature sensor. This can either reduce commissioning costs by lowering the necessary amount of sensors or add fault...... tolerance in existing systems if a sensor fails (e.g. pressure sensor). The solution is based on a novel maximum slope-seeking control method, where a perturbation signal is added to the valve opening degree, which gives additional information about the system for control purposes. Furthermore, the method...

  17. Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008

    S. Federico


Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high-horizontal-resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).

Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the sensitivity of the statistics to the verifying analysis and to show the quality of the OI analyses for different background fields.

Two case studies, the first one with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered

  18. Scaling of maximum probability density functions of velocity and temperature increments in turbulent systems

    Huang, Y X; Zhou, Q; Qiu, X; Shang, X D; Lu, Z M; Liu, and Y L


In this paper, we introduce a new way to estimate the scaling parameter of a self-similar process by considering the maximum probability density function (pdf) of its increments. We prove this for $H$-self-similar processes in general and experimentally investigate it for turbulent velocity and temperature increments. We consider a turbulent velocity database from an experimental homogeneous and nearly isotropic turbulent channel flow, and a temperature data set obtained near the sidewall of a Rayleigh-Bénard convection cell, where the turbulent flow is driven by buoyancy. For the former database, it is found that the maximum value of the increment pdf $p_{\max}(\tau)$ is in good agreement with a lognormal distribution. We also obtain a scaling exponent $\alpha\simeq 0.37$, which is consistent with the scaling exponent for the first-order structure function reported in other studies. For the latter, we obtain a scaling exponent $\alpha_{\theta}\simeq0.33$. This index value is consistent with the Kolmogorov-Ob...
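    A hedged sketch of the estimation idea: for each time lag, build the increment histogram, take its maximum as an estimate of p_max(τ), and fit the slope in log-log coordinates. The Brownian-motion test case below should return an exponent close to 0.5; it only illustrates the generic procedure, not the authors' turbulence analysis.

```python
import numpy as np

def max_pdf_scaling_exponent(x, lags, bins=100):
    """Estimate alpha in p_max(tau) ~ tau^(-alpha): for each lag tau, build
    the increment histogram (a density estimate), take its maximum, then fit
    a straight line in log-log coordinates."""
    p_max = []
    for tau in lags:
        incr = x[tau:] - x[:-tau]
        density, _ = np.histogram(incr, bins=bins, density=True)
        p_max.append(density.max())
    slope, _ = np.polyfit(np.log(lags), np.log(p_max), 1)
    return -slope

# Illustrative check on ordinary Brownian motion, where alpha should be ~0.5
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(200_000))
print(max_pdf_scaling_exponent(walk, lags=[2, 4, 8, 16, 32, 64]))
```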

  19. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua


    -free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  20. Maximum growing season temperature in Western Europe: multi proxy reconstructions in Fontainebleau from 1596 to 2000

    N. Etien


In this study, we have combined a Burgundy grape harvest date record with new δ18O measurements conducted on timber and living-tree cellulose from Fontainebleau castle and forest. Our reconstruction is expected to provide a reference series for the variability of growing season temperature (from April to September) in Western Europe from 1596 to 2000. We have estimated an uncertainty of 0.55°C on individual growing season maximum temperature reconstructions. We are able to assess this uncertainty, which is not the case for many documentary sources (diaries, etc.), and not even the case for early instrumental temperature data.

We compare our data with a number of independent temperature estimates for Europe and the Northern Hemisphere. The comparison between our reconstruction and Manley's mean growing season Central England Temperature (CET) data provides an independent control of the quality of the CET data. We show that our reconstruction preserves more variance back in time, because it was not distorted/averaged by statistical/homogenisation methods.

Further work will be conducted to compare the δ18O data from wood cellulose provided by transects of different tree species in Europe obtained within the EC ISONET project and the French ANR Program ESCARSEL, and to analyse the spatial and temporal coherency between δ18O records. The decadal variability will also be compared with other precipitation δ18O records, such as those obtained from benthic ostracods from deep peri-Alpine lakes or simulated by regional atmospheric models equipped with the modelling of water stable isotopes.

  1. Temperature profiles of ethanol tolerance: effects of ethanol on the minimum and the maximum temperatures for growth of the yeasts Saccharomyces cerevisiae and Kluyveromyces fragilis

    Sa-Correia, I.; Van Uden, N.


Difficulties experienced by brewers with yeast performance in the brewing of lager at low temperatures have led the authors to study the effect of ethanol on the minimum temperature for growth (Tmin). It was found that both the maximum temperature for growth (Tmax) and Tmin were adversely affected by ethanol and that ethanol tolerance prevailed at intermediate temperatures.

  3. Shifts in the temperature of maximum density (TMD) of ionic liquid aqueous solutions.

    Tariq, M; Esperança, J M S S; Soromenho, M R C; Rebelo, L P N; Lopes, J N Canongia


This work investigates for the first time shifts in the temperature of maximum density (TMD) of water caused by ionic liquid solutes. A vast amount of high-precision volumetric data (more than 6000 equilibrated, static, high-precision density determinations corresponding to ∼90 distinct ionic liquid aqueous solutions of 28 different types of ionic liquid) allowed us to analyze the TMD shifts for different homologous series or similar sets of ionic solutes and explain the overall effects in terms of hydrophobic, electrostatic and hydrogen-bonding contributions. The differences between the observed TMD shifts are discussed taking into account the different types of possible solute-water interactions that can modify the structure of the aqueous phase. The results also reveal different insights concerning the nature of the ions that constitute typical ionic liquids and are consistent with previous results that established hydrophobic and hydrophilic scales for ionic liquid ions based on their specific interactions with water and other probe molecules.

  4. An empirical method for estimating probability density functions of gridded daily minimum and maximum temperature

    Lussana, C.


The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs), with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure has been realised using the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density, otherwise the analysis could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, it is possible to deliver concise, local-scale information to users, such as: TX extremely cold/hot or TN extremely cold/hot.
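    As a hedged illustration of how a skewed, Gaussian-based PDF can be fitted to gridded TX values and used to flag extremes, the sketch below uses scipy's skew-normal distribution as a stand-in for the asymmetric PDF described in the abstract; the synthetic data and the percentile thresholds are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import skewnorm

# Hypothetical daily maximum temperatures (degrees C) for one grid cell
rng = np.random.default_rng(1)
tx = rng.normal(loc=24.0, scale=4.0, size=3000) + rng.exponential(1.5, size=3000)

# Fit an asymmetric (skewed) Gaussian-type PDF; skewnorm is used here as a
# stand-in for the skewed distribution described in the abstract.
a, loc, scale = skewnorm.fit(tx)

# Flag "extremely hot" / "extremely cold" days via tail quantiles of the fit
hot_threshold = skewnorm.ppf(0.99, a, loc, scale)
cold_threshold = skewnorm.ppf(0.01, a, loc, scale)
print(f"TX extremely hot above {hot_threshold:.1f} C, "
      f"extremely cold below {cold_threshold:.1f} C")
```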

  5. Optimal Multi-Level Thresholding Based on Maximum Tsallis Entropy via an Artificial Bee Colony Approach

    Yudong Zhang


This paper proposes a global multi-level thresholding method for image segmentation. As a criterion for this, the traditional method uses the Shannon entropy, which originates from information theory and considers the gray-level image histogram as a probability distribution, while we apply the Tsallis entropy as a generalized information-theoretic entropy formalism. For the algorithm, we use the artificial bee colony approach, since execution of an exhaustive algorithm would be too time-consuming. The experiments demonstrate that: (1) the Tsallis entropy is superior to traditional maximum entropy thresholding, maximum between-class variance thresholding, and minimum cross entropy thresholding; (2) the artificial bee colony is more rapid than either genetic algorithm or particle swarm optimization. Therefore, our approach is effective and rapid.
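    The sketch below illustrates the Tsallis-entropy criterion itself for a single threshold, found by exhaustive search on a toy bimodal histogram; the paper replaces the exhaustive search with an artificial bee colony for the multi-level case, which is not reproduced here.

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Single-threshold Tsallis-entropy segmentation by exhaustive search.
    The paper uses an artificial bee colony to avoid exhaustive search in the
    multi-level case; this sketch only illustrates the entropy criterion."""
    p = hist / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, len(p) - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1.0 - np.sum((p[:t][p[:t] > 0] / pa) ** q)) / (q - 1.0)
        sb = (1.0 - np.sum((p[t:][p[t:] > 0] / pb) ** q)) / (q - 1.0)
        s = sa + sb + (1.0 - q) * sa * sb   # Tsallis pseudo-additivity
        if s > best_s:
            best_t, best_s = t, s
    return best_t

# Bimodal toy histogram standing in for a gray-level image histogram
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 5000)])
hist, _ = np.histogram(pixels, bins=256, range=(0, 255))
print(tsallis_threshold(hist.astype(float)))
```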

  6. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Average Annual Daily Maximum Temperature, 2002

    U.S. Geological Survey, Department of the Interior — This data set represents the average monthly maximum temperature in Celsius multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...

  7. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    Hyland, D. C.


    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  8. Climate change uncertainty for daily minimum and maximum temperatures: a model inter-comparison

    Lobell, D; Bonfils, C; Duffy, P


Several impacts of climate change may depend more on changes in mean daily minimum (Tmin) or maximum (Tmax) temperatures than daily averages. To evaluate uncertainties in these variables, we compared projections of Tmin and Tmax changes by 2046-2065 for 12 climate models under an A2 emission scenario. Average modeled changes in Tmax were slightly lower in most locations than Tmin, consistent with historical trends exhibiting a reduction in diurnal temperature ranges. However, while average changes in Tmin and Tmax were similar, the inter-model variability of Tmin and Tmax projections exhibited substantial differences. For example, inter-model standard deviations of June-August Tmax changes were more than 50% greater than for Tmin throughout much of North America, Europe, and Asia. Model differences in cloud changes, which exert relatively greater influence on Tmax during summer and Tmin during winter, were identified as the main source of uncertainty disparities. These results highlight the importance of considering separately projections for Tmax and Tmin when assessing climate change impacts, even in cases where average projected changes are similar. In addition, impacts that are most sensitive to summertime Tmin or wintertime Tmax may be more predictable than suggested by analyses using only projections of daily average temperatures.
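    A minimal sketch of the comparison described above: given one projected change per model, the mean change and the inter-model standard deviation are computed separately for Tmin and Tmax, and their spread ratio is reported. The numbers below are placeholders, not actual climate-model output.

```python
import numpy as np

# Hypothetical projected JJA changes (degrees C) for one grid cell,
# one value per climate model (12 models, as in the study design)
rng = np.random.default_rng(3)
d_tmin = rng.normal(2.3, 0.4, 12)   # stand-in for model Tmin changes
d_tmax = rng.normal(2.4, 0.7, 12)   # stand-in for model Tmax changes

print("mean change  Tmin %.2f  Tmax %.2f" % (d_tmin.mean(), d_tmax.mean()))
print("inter-model std  Tmin %.2f  Tmax %.2f" % (d_tmin.std(ddof=1),
                                                 d_tmax.std(ddof=1)))
print("Tmax/Tmin spread ratio %.2f" % (d_tmax.std(ddof=1) / d_tmin.std(ddof=1)))
```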

  9. Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach.

    Dukka Bahadur, K C; Tomita, Etsuji; Suzuki, Jun'ichi; Akutsu, Tatsuya


    "Protein Side-chain Packing" has an ever-increasing application in the field of bio-informatics, dating from the early methods of homology modeling to protein design and to the protein docking. However, this problem is computationally known to be NP-hard. In this regard, we have developed a novel approach to solve this problem using the notion of a maximum edge-weight clique. Our approach is based on efficient reduction of protein side-chain packing problem to a graph and then solving the reduced graph to find the maximum clique by applying an efficient clique finding algorithm developed by our co-authors. Since our approach is based on deterministic algorithms in contrast to the various existing algorithms based on heuristic approaches, our algorithm guarantees of finding an optimal solution. We have tested this approach to predict the side-chain conformations of a set of proteins and have compared the results with other existing methods. We have found that our results are favorably comparable or better than the results produced by the existing methods. As our test set contains a protein of 494 residues, we have obtained considerable improvement in terms of size of the proteins and in terms of the efficiency and the accuracy of prediction.

  10. New results on equatorial thermospheric winds and the midnight temperature maximum

    Meriwether, J.; Faivre, M.; Fesen, C. [Clemson Univ., SC (United States). Dept. of Physics and Astronomy; Sherwood, P. [Interactive Technology, Waban, MA (United States); Veliz, O. [Inst. Geofisica del Peru, Lima (Peru). Radio Observatorio de Jicamarca


Optical observations of thermospheric winds and temperatures determined with high-resolution measurements of Doppler shifts and Doppler widths of the OI 630-nm equatorial nightglow emission have been made with improved accuracy at Arequipa, Peru (16.4° S, 71.4° W) with an imaging Fabry-Perot interferometer. An observing procedure previously used at Arecibo Observatory was applied to achieve increased spatial and temporal sampling of the thermospheric wind and temperature with the selection of eight azimuthal directions, equally spaced from 0° to 360°, at a zenith angle of 60°. By assuming the equivalence of longitude and local time, the data obtained using this technique are analyzed to determine the mean neutral wind speeds and mean horizontal gradients of the wind field in the zonal and meridional directions. The new temperature measurements obtained with the improved instrumental accuracy clearly show the midnight temperature maximum (MTM) peak with amplitudes of 25 to 200 K in all directions observed on most nights. The horizontal wind field maps calculated from the mean winds and gradients show that the MTM peak is always preceded by an equatorward wind surge lasting 1-2 h. The results also show, for winter events, a meridional wind abatement after the MTM peak. On one occasion, near the September equinox, a reversal was observed during the poleward transit of the MTM over Arequipa. Analysis inferring vertical winds from the observed convergence yielded inconsistent results, calling into question the validity of this calculation for the MTM structure at equatorial latitudes during solar minimum. Comparison of the observations with the predictions of the NCAR general circulation model indicates that the model fails to reproduce the observed amplitude by a factor of 5 or more. This is attributed in part to the lack of adequate spatial resolution in the model, as the MTM phenomenon takes place within a scale of 300-500 km and ~45 min in local time. The

  11. Variability and trends in daily minimum and maximum temperatures and in the diurnal temperature range in Lithuania, Latvia and Estonia in 1951-2010

    Jaagus, Jaak; Briede, Agrita; Rimkus, Egidijus; Remm, Kalle


Spatial distribution and trends in mean and absolute maximum and minimum temperatures and in the diurnal temperature range were analysed at 47 stations in the eastern Baltic region (Lithuania, Latvia and Estonia) during 1951-2010. The dependence of the studied variables on geographical factors (latitude, the Baltic Sea, land elevation) is discussed. Statistically significant increasing trends in maximum and minimum temperatures were detected for March, April, July, August and annual values. At the majority of stations, an increase was also detected in February and May for maximum temperature and in January and May for minimum temperature. Warming was slightly higher in the northern part of the study area, i.e. in Estonia. Trends in the diurnal temperature range differ seasonally. The highest increasing trend was revealed in April and, at some stations, also in May, July and August. Negative and mostly insignificant changes have occurred in January, February, March and June. The annual temperature range has not changed.

  12. On the Trend of the Annual Mean, Maximum, and Minimum Temperature and the Diurnal Temperature Range in the Armagh Observatory, Northern Ireland, Dataset, 1844-2012

    Wilson, Robert M.


Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.

  13. Minimum redundancy maximum relevance feature selection approach for temporal gene expression data.

    Radovic, Milos; Ghalwash, Mohamed; Filipovic, Nenad; Obradovic, Zoran


Feature selection, aiming to identify a subset of features among a possibly large set of features that are relevant for predicting a response, is an important preprocessing step in machine learning. In gene expression studies this is not a trivial task for several reasons, including the potentially temporal character of the data. However, most feature selection approaches developed for microarray data cannot handle multivariate temporal data without prior data flattening, which results in loss of temporal information. We propose a temporal minimum redundancy - maximum relevance (TMRMR) feature selection approach, which is able to handle multivariate temporal data without prior data flattening. In the proposed approach we compute the relevance of a gene by averaging F-statistic values calculated across individual time steps, and we compute the redundancy between genes by using a dynamic time warping approach. The proposed method is evaluated on three temporal gene expression datasets from human viral challenge studies. The obtained results show that the proposed method outperforms alternatives widely used in gene expression studies. In particular, the proposed method achieved an improvement in accuracy in 34 out of 54 experiments, while the other methods outperformed it in no more than 4 experiments. We developed a filter-based feature selection method for temporal gene expression data based on maximum relevance and minimum redundancy criteria. The proposed method incorporates temporal information by combining relevance, which is calculated as an average F-statistic value across different time steps, with redundancy, which is calculated by employing a dynamic time warping approach. As evident in our experiments, incorporating the temporal information into the feature selection process leads to the selection of more discriminative features.
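    The sketch below is a hedged, simplified rendering of the TMRMR idea: relevance is the F-statistic averaged over time steps, redundancy is derived from a dynamic-time-warping distance between mean gene trajectories, and genes are selected greedily. The exact redundancy term and data handling in the paper may differ; the synthetic dataset is illustrative only.

```python
import numpy as np
from scipy.stats import f_oneway

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two 1-D trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def tmrmr_select(X, y, n_select):
    """Greedy temporal mRMR-style selection on X[sample, gene, time]:
    relevance = F-statistic averaged over time steps, redundancy = inverse
    of the DTW distance between mean gene trajectories (an assumption of
    this sketch, not necessarily the paper's exact redundancy term)."""
    n_genes, n_time = X.shape[1], X.shape[2]
    classes = [y == c for c in np.unique(y)]
    relevance = np.array([
        np.mean([f_oneway(*(X[c, g, t] for c in classes)).statistic
                 for t in range(n_time)])
        for g in range(n_genes)])
    mean_traj = X.mean(axis=0)                      # per-gene mean trajectory
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        scores = []
        for g in range(n_genes):
            if g in selected:
                scores.append(-np.inf)
                continue
            red = np.mean([1.0 / (1.0 + dtw_distance(mean_traj[g], mean_traj[s]))
                           for s in selected])
            scores.append(relevance[g] - red)
        selected.append(int(np.argmax(scores)))
    return selected

# Tiny synthetic dataset: 20 samples, 8 genes, 5 time points, binary labels
rng = np.random.default_rng(4)
X = rng.standard_normal((20, 8, 5))
y = np.array([0] * 10 + [1] * 10)
X[y == 1, 0, :] += 1.5                              # gene 0 carries signal
print(tmrmr_select(X, y, n_select=3))
```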

  14. Extreme maximum temperature events and their relationships with large-scale modes: potential hazard on the Iberian Peninsula

    Merino, Andrés; Martín, M. L.; Fernández-González, S.; Sánchez, J. L.; Valero, F.


The aim of this paper is to analyze the spatiotemporal distribution of maximum temperatures on the Iberian Peninsula (IP) by using various extreme maximum temperature indices. Thresholds for determining temperature extreme event (TEE) severity are defined using the 99th percentiles of daily temperature time series for the period 1948 to 2009. The synoptic-scale fields of such events were analyzed in order to better understand the related atmospheric processes. The results indicate that the regions with a higher risk of maximum temperatures are located in the river valleys of the southwest and northeast of the IP, while the Cantabrian coast and mountain ranges are characterized by lower risk. The TEEs were classified, by means of several synoptic fields (sea level pressure, temperature, and geopotential height at 850 and 500 hPa), into four clusters that largely explain their spatiotemporal distribution on the IP. The results of this study show that TEEs mainly occur in association with a ridge extending from subtropical areas. The relationships of TEEs with teleconnection patterns, such as the North Atlantic Oscillation (NAO), Western Mediterranean Oscillation (WeMO), and Mediterranean Oscillation (MO), showed that the interannual variability of extreme maximum temperatures is largely controlled by the dominant phase of the WeMO in all seasons except winter, when the NAO prevails. The MO pattern is less relevant to maximum temperature variability. The correct identification of the synoptic patterns linked with the most extreme temperature events associated with each cluster will assist the prediction of events that can pose a natural hazard, thereby providing useful information for decision making and warning systems.

  15. A 368-year maximum temperature reconstruction based on tree-ring data in the northwestern Sichuan Plateau (NWSP), China

    Zhu, Liangjun; Zhang, Yuandong; Li, Zongshan; Guo, Binde; Wang, Xiaochun


We present a reconstruction of July-August mean maximum temperature variability based on a chronology of tree-ring widths over the period AD 1646-2013 in the northern part of the northwestern Sichuan Plateau (NWSP), China. A regression model explains 37.1 % of the variance of July-August mean maximum temperature during the calibration period from 1954 to 2012. Compared with nearby temperature reconstructions and gridded land surface temperature data, our temperature reconstruction has high spatial representativeness. Seven major cold periods were identified (1708-1711, 1765-1769, 1818-1821, 1824-1828, 1832-1836, 1839-1842, and 1869-1877), and three major warm periods occurred in 1655-1668, 1719-1730, and 1858-1859. The typical Little Ice Age climate is also well represented in our reconstruction and clearly ended with climatic amelioration at the end of the 19th century. The 17th and 19th centuries were cold with more extreme cold years, while the 18th and 20th centuries were warm with fewer extreme cold years. Moreover, the 20th-century rapid warming is not obvious in the NWSP mean maximum temperature reconstruction, which implies that mean maximum temperature may play an important and distinct role in global change as a unique temperature indicator. Multi-taper method (MTM) spectral analysis revealed significant periodicities of 170-, 49-114-, 25-32-, 5.7-, 4.6-4.7-, 3.0-3.1-, 2.5-, and 2.1-2.3-year quasi-cycles at the 95 % confidence level in our reconstruction. Overall, mean maximum temperature variability in the NWSP may be associated with global land-sea atmospheric circulation (e.g., ENSO, PDO, or AMO) as well as solar and volcanic forcing.
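    A minimal sketch of the calibration step implied by the abstract, assuming a simple linear transfer function between a ring-width chronology and instrumental July-August mean maximum temperature; the synthetic series and the resulting R² are placeholders that only illustrate the mechanics of calibration and reconstruction.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical ring-width index chronology and instrumental July-August mean
# maximum temperatures over a calibration period (values are illustrative)
rng = np.random.default_rng(5)
ring_width = rng.normal(1.0, 0.15, 59)                      # e.g. 1954-2012
t_obs = 22.0 + 6.0 * (ring_width - 1.0) + rng.normal(0, 0.9, 59)

fit = linregress(ring_width, t_obs)
print(f"variance explained R^2 = {fit.rvalue**2:.3f}")      # cf. 37.1 % in the paper

# Apply the transfer function to the full-length chronology (AD 1646-2013)
full_chronology = rng.normal(1.0, 0.15, 368)
t_reconstructed = fit.intercept + fit.slope * full_chronology
print(t_reconstructed[:5])
```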

  16. A maximum likelihood approach to estimating articulator positions from speech acoustics

    Hogden, J.


This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping, the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read.

  17. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Mat Jan, Nur Amalina; Shabri, Ani


The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1,0), t1 = 1,2,3,4 methods for the LN3 and P3 distributions. The performance of the TL-moments (t1,0), t1 = 1,2,3,4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution were the most appropriate for most of the stations of the annual maximum streamflow series in Johor, Malaysia.

  18. The Modern Temperature-Accelerated Dynamics Approach.

    Zamora, Richard J; Uberuaga, Blas P; Perez, Danny; Voter, Arthur F


    Accelerated molecular dynamics (AMD) is a class of MD-based methods used to simulate atomistic systems in which the metastable state-to-state evolution is slow compared with thermal vibrations. Temperature-accelerated dynamics (TAD) is a particularly efficient AMD procedure in which the predicted evolution is hastened by elevating the temperature of the system and then recovering the correct state-to-state dynamics at the temperature of interest. TAD has been used to study various materials applications, often revealing surprising behavior beyond the reach of direct MD. This success has inspired several algorithmic performance enhancements, as well as the analysis of its mathematical framework. Recently, these enhancements have leveraged parallel programming techniques to enhance both the spatial and temporal scaling of the traditional approach. We review the ongoing evolution of the modern TAD method and introduce the latest development: speculatively parallel TAD.

  19. Estimation of the minimum and maximum substrate temperatures for diamond growth from hydrogen-hydrocarbon gas mixtures

    Zhang, Yafei; Zhang, Fangqing; Chen, Guanghua


It is proposed in this paper that the minimum substrate temperature for diamond growth from hydrogen-hydrocarbon gas mixtures is determined by the packing arrangements of hydrocarbon fragments at the surface, and that the maximum substrate temperature is limited by reconstruction of the diamond growth surface, which can be prevented by saturating the surface dangling bonds with atomic hydrogen. Theoretical calculations have been carried out using a formula proposed by Dryburgh [J. Crystal Growth 130 (1993) 305], and the results show that diamond can be deposited at substrate temperatures ranging from ≈ 400 to ≈ 1200°C by low-pressure chemical vapor deposition. This is consistent with experimental observations.

  20. Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya

    J C Joshi; Tankeshwar Kumar; Sunita Srivastava; Divya Sachdeva


Maximum and minimum temperatures are used in avalanche forecasting models for snow avalanche hazard mitigation over the Himalaya. The present work is part of the development of a Hidden Markov Model (HMM) based avalanche forecasting system for the Pir-Panjal and Great Himalayan mountain ranges of the Himalaya. In this work, HMMs have been developed for forecasting maximum and minimum temperatures for Kanzalwan in the Pir-Panjal range and Drass in the Great Himalayan range with a lead time of two days. The HMMs have been developed using meteorological variables collected from these stations during the past 20 winters from 1992 to 2012. The meteorological variables have been used to define the observations and states of the models and to compute the model parameters (initial state, state transition and observation probabilities). The model parameters have been used in the Forward and Viterbi algorithms to generate temperature forecasts. To improve the model forecasts, the model parameters have been optimised using the Baum-Welch algorithm. The models have been compared with persistence forecasts by root mean square error (RMSE) analysis using independent data from two winters (2012-13, 2013-14). The HMM for maximum temperature has shown a 4-12% and 17-19% improvement in the forecast over the persistence forecast for day-1 and day-2, respectively. For minimum temperature, it has shown 6-38% and 5-12% improvement for day-1 and day-2, respectively.
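    As a hedged illustration of the HMM machinery mentioned above, the sketch below implements a plain Viterbi decoder for a discrete-observation HMM with toy "warm spell"/"cold spell" states and three temperature categories; the probabilities are placeholders, not the parameters estimated from the 20-winter dataset, and the Baum-Welch re-estimation step is omitted.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-observation HMM
    (initial probabilities pi, transition matrix A, emission matrix B)."""
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)          # prev -> current
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy two-state model ("warm spell" / "cold spell") with three observation
# categories of daily maximum temperature (low / medium / high); the numbers
# are placeholders, not the parameters estimated in the paper.
pi = np.array([0.6, 0.4])
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
B = np.array([[0.1, 0.3, 0.6],
              [0.6, 0.3, 0.1]])
observations = [2, 2, 1, 0, 0, 1, 2]
print(viterbi(observations, pi, A, B))
```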

  1. Computational Amide I Spectroscopy for Refinement of Disordered Peptide Ensembles: Maximum Entropy and Related Approaches

    Reppert, Michael; Tokmakoff, Andrei

The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
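    A minimal sketch of maximum-entropy reweighting against a single spectroscopic restraint is shown below: frame weights take the exponential form implied by entropy maximization, and the multiplier is tuned so that the reweighted average of a calculated observable matches an experimental target. The per-frame Amide I frequencies and the target value are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_reweight(calc_obs, target):
    """Maximum-entropy reweighting of an MD ensemble against one restraint:
    find weights w_i ~ exp(lambda * s_i) (the max-ent form) such that the
    reweighted ensemble average of the calculated observable s_i matches the
    experimental target."""
    s = np.asarray(calc_obs, dtype=float)
    s_c = s - s.mean()                         # center to avoid overflow

    def weighted_mean(lam):
        w = np.exp(lam * s_c)
        w /= w.sum()
        return np.sum(w * s)

    lam = brentq(lambda x: weighted_mean(x) - target, -1.0, 1.0)
    w = np.exp(lam * s_c)
    return w / w.sum()

# Hypothetical per-frame Amide I frequencies (cm^-1) from a frequency map
rng = np.random.default_rng(6)
frame_freqs = rng.normal(1650.0, 5.0, 2000)
weights = maxent_reweight(frame_freqs, target=1652.0)
print(np.sum(weights * frame_freqs))           # ~1652, matches the restraint
```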

  2. Maximum-likelihood approaches reveal signatures of positive selection in IL genes in mammals.

    Neves, Fabiana; Abrantes, Joana; Steinke, John W; Esteves, Pedro J


ILs are part of the immune system and are involved in multiple biological activities. ILs have been shown to evolve under positive selection; however, little information exists regarding which codons are specifically selected. By using different codon-based maximum-likelihood (ML) approaches, signatures of positive selection in mammalian ILs were searched for. Sequences of 46 ILs were retrieved from publicly available databases of mammalian genomes to detect signatures of positive selection in individual codons. Evolutionary analyses were conducted under two ML frameworks, the HyPhy package implemented in the Datamonkey web server and CODEML implemented in PAML. Signatures of positive selection were found in 28 ILs: IL-1A and B; IL-2, IL-4 to IL-10, IL-12A and B; IL-14 to IL-17A and C; IL-18, IL-20 to IL-22, IL-25, IL-26, IL-27B, IL-31, IL-34, IL-36A and G. The number of codons under positive selection varied between 1 and 15. No evidence of positive selection was detected in IL-13; IL-17B and F; IL-19, IL-23, IL-24, IL-27A; or IL-29. Most mammalian ILs have sites evolving under positive selection, which may be explained by the multitude of biological processes in which ILs are involved. The results obtained raise hypotheses concerning the functions of the ILs, which should be pursued by using mutagenesis and crystallographic approaches.

  3. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid


This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission using bivariate as well as multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require the use of conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences which are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission both in the bivariate model and in the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions.

  4. Dynamic Performance of Maximum Power Point Trackers in TEG Systems Under Rapidly Changing Temperature Conditions

    Man, E. A.; Sera, D.; Mathe, L.


    systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because...

  5. North American paleoclimate reconstructions for the Last Glacial Maximum using an inverse modeling through iterative forward modeling approach applied to pollen data

    Izumi, Kenji; Bartlein, Patrick J.


    The inverse modeling through iterative forward modeling (IMIFM) approach was used to reconstruct Last Glacial Maximum (LGM) climates from North American fossil pollen data. The approach was validated using modern pollen data and observed climate data. While the large-scale LGM temperature IMIFM reconstructions are similar to those calculated using conventional statistical approaches, the reconstructions of moisture variables differ between the two approaches. We used two vegetation models, BIOME4 and BIOME5-beta, with the IMIFM approach to evaluate the effects on the LGM climate reconstruction of differences in water use efficiency, carbon use efficiency, and atmospheric CO2 concentrations. Although lower atmospheric CO2 concentrations influence pollen-based LGM moisture reconstructions, they do not significantly affect temperature reconstructions over most of North America. This study implies that the LGM climate was very cold but not very much drier than present over North America, which is inconsistent with previous studies.
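    The sketch below is a toy rendering of the IMIFM loop: candidate climates are pushed through a forward model that predicts pollen-type scores, and the candidate minimizing the misfit to the observed assemblage is retained. The forward model here is a made-up stand-in for BIOME4/BIOME5-beta, and the climate variables and grids are illustrative.

```python
import numpy as np

def toy_forward_model(mtco, gdd5):
    """Stand-in forward model: maps a candidate climate (mean temperature of
    the coldest month, growing degree days) to pollen-type scores. BIOME4 /
    BIOME5-beta play this role in the actual IMIFM approach."""
    return np.array([np.exp(-((mtco + 15.0) / 8.0) ** 2),        # boreal type
                     np.exp(-((gdd5 - 2500.0) / 900.0) ** 2)])   # temperate type

def imifm_reconstruct(observed_pollen, mtco_grid, gdd5_grid):
    """Inverse modelling through iterative forward modelling: run the forward
    model over a grid of candidate climates and keep the candidate whose
    simulated pollen scores best match the observed assemblage."""
    best, best_misfit = None, np.inf
    for mtco in mtco_grid:
        for gdd5 in gdd5_grid:
            misfit = np.sum((toy_forward_model(mtco, gdd5) - observed_pollen) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (mtco, gdd5), misfit
    return best

observed = np.array([0.7, 0.3])                    # hypothetical pollen scores
print(imifm_reconstruct(observed,
                        mtco_grid=np.arange(-35, 5, 0.5),
                        gdd5_grid=np.arange(200, 4500, 50)))
```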

  6. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang


Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidences. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with a continuous 15-week lag time, in relation to variation in dengue cases under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  7. Climate Prediction Center (CPC) U.S. Daily Maximum Air Temperature Observations

    National Oceanic and Atmospheric Administration, Department of Commerce — Observational reports of daily air temperature (1200 UTC to 1200 UTC) are made by members of the NWS Automated Surface Observing Systems (ASOS) network; NWS...

  8. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago


The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  9. Effect of temperature-dependent surface heat transfer coefficient on the maximum surface stress in ceramics during quenching

    Shao, Y. F.; Song, F.; Jiang, C. P.; Xu, X. H.; Wei, J. C.; Zhou, Z. L.


We study the difference in the maximum stress on a cylinder surface, σmax, obtained using the measured surface heat transfer coefficient hm instead of its average value ha during quenching. At quenching temperatures of 200, 300, 400, 500, 600 and 800°C, the maximum surface stress calculated with hm is always smaller than that calculated with ha, except in the case of 800°C, while the time to reach σmax calculated with hm is always earlier than that calculated with ha. This is inconsistent with the traditional view that σmax increases with increasing Biot number and the time to reach σmax decreases with increasing Biot number. Other temperature-dependent properties have only a small effect on the trend of their mutual ratios with quenching temperature. The difference between the two maximum surface stresses is caused by the dramatic variation of hm with temperature, which needs to be considered in engineering analysis.

  10. Maximum Efficiency of Thermoelectric Heat Conversion in High-Temperature Power Devices

    V. I. Khvesyuk


Modern trends in aircraft engineering are moving toward fifth-generation vehicles. The features of fifth-generation aircraft motivate the use of new high-performance onboard power supply systems. The operating temperature of the outer walls of engines is 800-1000 K. This corresponds to a radiation heat flux of 10 kW/m2. The thermal energy, including radiation from the engine wall, may potentially be converted into electricity. The main objective of this paper is to analyze whether it is possible to use high-efficiency thermoelectric conversion of heat into electricity. The paper considers issues such as working processes, choice of materials, and optimization of thermoelectric conversion. It presents the analysis results for operating conditions of thermoelectric generators (TEGs) used in advanced high-temperature power devices. A high-temperature heat source is a favorable factor for the thermoelectric conversion of heat. It is shown that for existing thermoelectric materials a theoretical conversion efficiency can reach the level of 15-20% at temperatures up to 1500 K and available values of the Ioffe parameter ZT = 2-3 (Z is the figure of merit, T is the temperature). To ensure the required temperature regime and high-efficiency thermoelectric conversion simultaneously, it is necessary to have a certain match between the TEG power, the temperatures of the hot and cold surfaces, and the heat transfer coefficient of the cooling system. The paper discusses a concept of a radiation absorber on the TEG hot surface. The analysis has demonstrated a number of possibilities for highly efficient conversion using TEGs in high-temperature power devices. This work has been implemented with the support of the Ministry of Education and Science of the Russian Federation, project No. 1145 (the programme "Organization of Research Engineering Activities").
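    The sketch below evaluates the standard constant-property expression for the maximum conversion efficiency of a thermoelectric generator (the Carnot factor times a ZT-dependent reduction term) for ZT values in the range quoted above; the hot- and cold-side temperatures are illustrative choices, not the paper's operating points.

```python
import math

def teg_max_efficiency(t_hot, t_cold, zt):
    """Standard constant-property expression for the maximum efficiency of a
    thermoelectric generator: the Carnot factor multiplied by a reduction
    term that depends on the figure of merit ZT (taken here at the mean
    junction temperature)."""
    carnot = 1.0 - t_cold / t_hot
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Illustrative hot/cold temperatures (K) and ZT values in the quoted range
for zt in (1.0, 2.0, 3.0):
    print(f"ZT = {zt:.0f}: eta_max = {teg_max_efficiency(1000.0, 400.0, zt):.1%}")
```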

  11. Temperature affects maximum H-reflex amplitude but not homosynaptic postactivation depression.

    Racinais, Sébastien; Cresswell, Andrew G


    This study aimed to determine the effect of hyperthermia on the transmission efficacy of the Ia-afferent spinal pathway. Recruitment curves of the Hoffmann reflex (H-reflex) and compound motor potential (M-wave), along with homosynaptic postactivation depression (HPAD) recovery curves, were obtained in 14 volunteers in two controlled ambient temperatures that resulted in significantly different core temperatures (CON, core temperature ∼37.3°C; and HOT, core temperature ∼39.0°C). Electromyographic responses were obtained from the soleus (SOL) and medial gastrocnemius (MG) muscles following electrical stimulation of the tibial nerve at varying intensities and paired pulse frequencies (0.07-10 Hz). Results showed that the maximal amplitude of the H-reflex was reached for a similar intensity of stimulation in CON and HOT (both muscles P > 0.47), with a similar associated M-wave (both muscles P > 0.69), but was significantly decreased in HOT as compared to CON (-23% in SOL, -32% in MG). The HPAD recovery curve was not affected by the elevated core temperature (both muscles P > 0.23). Taken together, these results suggest that hyperthermia can alter neuromuscular transmission across the neuromuscular junction and/or muscle membrane as well as the transmission efficacy of the Ia-afferent pathway, albeit the latter not via an increase in HPAD.

  12. Analysis of Rayleigh waves with circular wavefront: a maximum likelihood approach

    Maranò, Stefano; Hobiger, Manuel; Bergamo, Paolo; Fäh, Donat


    Analysis of Rayleigh waves is an important task in seismology and geotechnical investigations. In fact, properties of Rayleigh waves such as velocity and polarization are important observables that carry information about the structure of the subsoil. Applications analysing Rayleigh waves include active and passive seismic surveys. In active surveys, there is a controlled source of seismic energy and the sensors are typically placed near the source. In passive surveys, there is no controlled source; rather, seismic waves from ambient vibrations are analysed and the sources are assumed to be far outside the array, simplifying the analysis by the assumption of plane waves. Whenever the source is in the proximity of the array of sensors, or even within the array, it is necessary to model the wave propagation accounting for the circular wavefront. In addition, it is also necessary to model the amplitude decay due to geometrical spreading. This is the case of active seismic surveys in which sensors are located near the seismic source. In this work, we propose a maximum likelihood (ML) approach for the analysis of Rayleigh waves generated at a near source. Our statistical model accounts for the curvature of the wavefront and amplitude decay due to geometrical spreading. Using our method, we show applications on real data of the retrieval of Rayleigh wave dispersion and ellipticity. We employ arrays with arbitrary geometry. Furthermore, we show how it is possible to combine active and passive surveys. This enables us to enlarge the analysable frequency range and therefore the depths investigated. We retrieve properties of Rayleigh waves from both active and passive surveys and show the excellent agreement of the results from the two surveys. In our approach we use the same array of sensors for both the passive and the active survey. This greatly simplifies the logistics necessary to perform a survey.
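
    To make the modelling ingredients concrete, a single-frequency Rayleigh wave radiated from a nearby source can be written schematically at distance r from a sensor with cylindrical geometrical spreading (a simplified scalar sketch; the paper's statistical model additionally handles the three-component elliptical particle motion and measurement noise):

        u(r, t) \;\approx\; \frac{A}{\sqrt{r}}\,\cos\!\big(\omega t - \kappa(\omega)\,r + \varphi\big),
        \qquad \kappa(\omega) = \frac{\omega}{v_R(\omega)},

    so that, unlike the plane-wave case, both the phase term \kappa r and the 1/\sqrt{r} amplitude decay depend on the source-sensor distance and must be estimated jointly with the wavenumber.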

  13. A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations

    Lee, Tai-Sung; Radak, Brian K.; Pabis, Anna; York, Darrin M.


    A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and is suitable to be used together with other methods to tackle the free energy estimation problem. PMID:23457427
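
    The maximum-likelihood principle described above can be illustrated with a small self-contained sketch that tabulates F(x) on a grid and fits it to umbrella-sampled data by minimizing the negative log-likelihood of the biased samples. The grid parameterization, harmonic biases, reduced units (beta = 1) and the toy double-well data are assumptions for illustration; VFEP itself uses a variational analytic representation of the profile rather than a tabulated one.

        import numpy as np
        from scipy.optimize import minimize

        beta = 1.0                                  # 1/kT in reduced units (assumption)
        grid = np.linspace(-2.0, 2.0, 41)           # grid on which F(x) is tabulated

        def neg_log_likelihood(F_vals, samples, centers, kspring):
            """Negative log-likelihood of umbrella-sampled data given a tabulated profile F."""
            nll = 0.0
            for x_i, x0, k in zip(samples, centers, kspring):
                bias_grid = 0.5 * k * (grid - x0) ** 2
                # window partition function via trapezoidal quadrature on the grid
                lnZ = np.log(np.trapz(np.exp(-beta * (F_vals + bias_grid)), grid))
                F_x = np.interp(x_i, grid, F_vals)
                nll += np.sum(beta * (F_x + 0.5 * k * (x_i - x0) ** 2) + lnZ)
            return nll

        def fit_profile(samples, centers, kspring):
            """Maximum-likelihood estimate of F on the grid (defined up to a constant)."""
            res = minimize(neg_log_likelihood, np.zeros_like(grid),
                           args=(samples, centers, kspring), method="L-BFGS-B",
                           bounds=[(-50.0, 50.0)] * grid.size)
            return res.x - res.x.min()

        # toy usage: 5 windows, 30 points each, drawn here from the biased Boltzmann weights
        rng = np.random.default_rng(0)
        true_F = lambda x: 4.0 * (x ** 2 - 1.0) ** 2          # double-well test profile
        centers, kspring = np.linspace(-1.5, 1.5, 5), [20.0] * 5
        samples = []
        for x0, k in zip(centers, kspring):
            w = np.exp(-beta * (true_F(grid) + 0.5 * k * (grid - x0) ** 2))
            samples.append(rng.choice(grid, size=30, p=w / w.sum()))
        F_hat = fit_profile(samples, centers, kspring)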

  14. A Maximum Entropy Approach to Assess Debonding in Honeycomb aluminum Plates

    Viviana Meruane


    Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and data is processed in a period of time that is comparable to the one of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data of an aluminum honeycomb panel under different damage scenarios.

  15. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    Knobles, D P; Sagers, J D; Koch, R A


    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.

  16. A bioinspired approach for a multizone temperature control system

    Pantoja, A; Quijano, N [Departamento de Ingeniería Eléctrica y Electrónica, Universidad de los Andes, Bogotá (Colombia)]; Leirens, S [La Pillayre, 63160 Montmorin (France)]


    Bioinspired design approaches seek to exploit nature in order to construct optimal solutions for engineering problems such as uniform temperature control in multizone systems. The ideal free distribution (IFD) is a concept from behavioural ecology, which describes the arrangement of individuals in different habitats such that, at equilibrium, all habitats are equally suitable. Here, we relax the IFD's main assumptions using the standing-crop idea to introduce dynamics into the supplies of each habitat. Then, we make an analogy with a multizone thermal system to propose a controller based on the replicator dynamics model, in order to obtain a maximum uniform temperature subject to constant power injection. In addition, we analytically show that the equilibrium point of the controlled system is asymptotically stable. Finally, some practical results obtained with a testbed and comparisons with the theoretical results are presented.
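
    A minimal sketch of the control idea follows (an assumed four-zone first-order thermal plant and gains, not the authors' testbed): a fixed total power P is shared among zones according to replicator dynamics whose fitness favours the coldest zones, so the shares, and eventually the zone temperatures, equalize.

        import numpy as np

        N, P, T_amb, dt = 4, 40.0, 20.0, 1.0
        C = np.array([50.0, 60.0, 45.0, 55.0])      # thermal capacitances (assumed)
        k = np.array([0.8, 1.0, 0.6, 0.9])          # heat-loss coefficients (assumed)

        T = np.full(N, T_amb)                        # zone temperatures
        x = np.full(N, 1.0 / N)                      # power shares (the "population" state)

        for step in range(5000):
            # zone dynamics: dT/dt = (P*x_i - k_i*(T_i - T_amb)) / C_i
            T += dt * (P * x - k * (T - T_amb)) / C
            # replicator dynamics with a small gain so the allocation adapts slowly
            f = -T                                   # fitness: colder zones are "more suitable"
            x += dt * 0.01 * x * (f - np.dot(x, f))
            x = np.clip(x, 1e-6, None); x /= x.sum() # keep the shares on the simplex

        print(np.round(T, 2), np.round(x, 3))        # temperatures approach a common value

    At the replicator equilibrium every active zone has the same fitness, i.e. the same temperature, which is the uniform temperature reachable with the constant injected power.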

  17. County-Level Climate Uncertainty for Risk Assessments: Volume 4 Appendix C - Historical Maximum Near-Surface Air Temperature.

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.


    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation, because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  18. Experimental determination of a critical temperature for maximum anaerobic digester biogas production

    Sichilalu, S


    … fission of methanogenic bacteria. The temperature was varied over several days and the biogas production was recorded every 24 hours (1 day). Based on the experimental setup, the results show a higher biogas production proportional to the rise...

  19. Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)☆

    Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther


    The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated. PMID:23948335

  1. Effect of temperature on maximum swimming speed and cost of transport in juvenile European sea bass (Dicentrarchus labrax).

    Claireaux, Guy; Couturier, Christine; Groison, Anne-Laure


    This study is an attempt to gain an integrated understanding of the interactions between temperature, locomotion activity and metabolism in the European sea bass (Dicentrarchus labrax). To our knowledge this study is among the few that have investigated the influence of the seasonal changes in water temperature on swimming performance in fish. Using a Brett-type swim-tunnel respirometer, the relationship between oxygen consumption and swimming speed was determined in fish acclimatised to 7, 11, 14, 18, 22, 26 and 30°C. The corresponding maximum swimming speed (U_max), optimal swimming speed (U_opt), active (AMR) and standard (SMR) metabolic rates as well as aerobic metabolic scope (MS) were calculated. Using simple mathematical functions, these parameters were modelled as a function of water temperature and swimming speed. Both SMR and AMR were positively related to water temperature up to 24°C. Above 24°C, SMR and AMR levelled off and MS tended to decrease. We found a tight relationship between AMR and U_max and observed that raising the temperature increased AMR and increased swimming ability. However, although fish swam faster at high temperature, the net cost of transport (COT_net) at a given speed was not influenced by the elevation of the water temperature. Although U_opt doubled between 7°C and 30°C (from 0.3 to 0.6 m/s), the metabolic rate at U_opt represented a relatively constant fraction of the animal's active metabolic rate (40-45%). A proposed model integrates the effects of water temperature on the interaction between metabolism and swimming performance. In particular, the controlling effect of temperature on AMR is shown to be the key factor limiting the maximal swimming speed of sea bass.
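
    The quantities in this abstract are related through the standard swim-tunnel formulation (shown only to fix notation; the specific functional forms fitted by the authors are not reproduced here):

        \dot{M}O_2(U) \;=\; SMR + a\,U^{c}, \qquad
        COT(U) \;=\; \frac{\dot{M}O_2(U)}{U}, \qquad
        U_{opt} \;=\; \arg\min_U COT(U),

    with U_max defined by \dot{M}O_2(U_{max}) = AMR, the aerobic metabolic scope MS = AMR - SMR, and a, c fitted constants.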

  2. The effect of maximum-allowable payload temperature on the mass of a multimegawatt space-based platform

    Dobranich, D.


    Calculations were performed to determine the mass of a space-based platform as a function of the maximum-allowed operating temperature of the electrical equipment within the platform payload. Two computer programs were used in conjunction to perform these calculations. The first program was used to determine the mass of the platform reactor, shield, and power conversion system. The second program was used to determine the mass of the main and secondary radiators of the platform. The main radiator removes the waste heat associated with the power conversion system and the secondary radiator removes the waste heat associated with the platform payload. These calculations were performed for both Brayton and Rankine cycle platforms with two different types of payload cooling systems: a pumped-loop system (a heat exchanger with a liquid coolant) and a refrigerator system. The results indicate that increases in the maximum-allowed payload temperature offer significant platform mass savings for both the Brayton and Rankine cycle platforms with either the pumped-loop or refrigerator payload cooling systems. Therefore, with respect to platform mass, the development of high temperature electrical equipment would be advantageous. 3 refs., 24 figs., 7 tabs.
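
    The reason higher allowed payload temperatures save mass is essentially the Stefan–Boltzmann scaling of the secondary radiator (an illustrative first-order relation, not the report's detailed model, which also couples the radiators to the Brayton or Rankine power-conversion cycle):

        A_{rad} \;\approx\; \frac{Q_{payload}}{\varepsilon\,\sigma\,\big(T_{rad}^{4} - T_{sink}^{4}\big)},

    so a modest rise in the allowed payload (and hence radiator) temperature T_rad sharply reduces the radiating area, and with it the radiator mass, needed to reject a fixed waste heat load Q_payload.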

  3. The Impacts of Maximum Temperature and Climate Change to Current and Future Pollen Distribution in Skopje, Republic of Macedonia

    Vladimir Kendrovski


    BACKGROUND. The goal of the present paper was to assess the impact of the current and future burden of ambient temperature on pollen distributions in Skopje. METHODS. In the study we evaluated the correlation between the concentration of pollen grains in the atmosphere of Skopje and maximum temperature during the vegetation periods of 1996, 2003, 2007 and 2009, as the current burden in the context of climate change. For our analysis we selected 9 representatives of the phytoallergen groups (trees, grasses, weeds). The concentration of pollen grains was monitored by a Lanzoni volumetric pollen trap. The correlation between the concentration of pollen grains in the atmosphere and the selected meteorological variable from weekly monitoring was studied with the help of linear regression and correlation coefficients. RESULTS. The prevalence of sensibilization to standard pollen allergens in Skopje during the same period increased from 16.9% in 1996 to 19.8% in 2009. We detected differences in the onset of flowering and in the timing of the maximum and the end of the pollen seasons. The pollen distribution and the associated risk increase in 3 main periods: early spring, spring and summer, which are the main cause of allergies during these seasons. The largest increase of air temperature due to climate change in Skopje is expected in the summer season. CONCLUSION. The impacts of climate change, by increasing temperature in the next decades, will very likely include impacts on pollen production and differences in the current pollen season. [TAF Prev Med Bull 2012; 11(1): 35-40]

  4. Maximum Potential of the Car Cabin Temperature in the Outdoor Parking Conditions as a Source of Energy in Thermoelectric Generator

    Sunawar, A.; Garniwa, I.


    Cars work by converting heat energy into mechanical energy, but much of the heat is wasted rather than transformed into mechanical energy; studies have been conducted that convert this waste heat into electrical energy using the thermoelectric principle. However, there are many other energies that can be harnessed from the car: for example, when the car is parked in the sun or driven in strong sunshine, the temperature in the cabin can reach 80 degrees Celsius. This heat can be harmful to humans and children entering the vehicle, as well as to goods stored in the cabin, which may release toxic and dangerous vapours when heated. The danger can be prevented by reducing the heat in the cabin and transforming it into other forms of energy such as electricity. With a temperature difference of 40 degrees across the cold side of the module, a single thermoelectric module can deliver up to 0.17 W; if a block of modules is built, the energy produced is enough to lower the temperature and charge batteries for further cooling. This study uses an experimental method to determine the maximum temperature drop achievable in the car cabin.

  5. Reconstructing temperatures in the Maritime Alps, Italy, since the Last Glacial Maximum using cosmogenic noble gas paleothermometry

    Tremblay, Marissa; Spagnolo, Matteo; Ribolini, Adriano; Shuster, David


    The Gesso Valley, located in the southwestern-most, Maritime portion of the European Alps, contains an exceptionally well-preserved record of glacial advances during the late Pleistocene and Holocene. Detailed geomorphic mapping, geochronology of glacial deposits, and glacier reconstructions indicate that glaciers in this Mediterranean region responded to millennial scale climate variability differently than glaciers in the interior of the European Alps. This suggests that the Mediterranean Sea somehow modulated the climate of this region. However, since glaciers respond to changes in temperature and precipitation, both variables were potentially influenced by proximity to the Sea. To disentangle the competing effects of temperature and precipitation changes on glacier size, we are constraining past temperature variations in the Gesso Valley since the Last Glacial Maximum (LGM) using cosmogenic noble gas paleothermometry. The cosmogenic noble gases 3He and 21Ne experience diffusive loss from common minerals like quartz and feldspars at Earth surface temperatures. Cosmogenic noble gas paleothermometry utilizes this open-system behavior to quantitatively constrain thermal histories of rocks during exposure to cosmic ray particles at the Earth's surface. We will present measurements of cosmogenic 3He in quartz sampled from moraines in the Gesso Valley with LGM, Bühl stadial, and Younger Dryas ages. With these 3He measurements and experimental data quantifying the diffusion kinetics of 3He in quartz, we will provide a preliminary temperature reconstruction for the Gesso Valley since the LGM. Future work on samples from younger moraines in the valley system will be used to fill in details of the more recent temperature history.

  6. Derivation of some new distributions in statistical mechanics using maximum entropy approach

    Ray Amritansu


    The maximum entropy principle has earlier been used to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.) and Intermediate Statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of the knowledge of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from the other in the way in which the constraints are specified. In the present paper, we have derived some new distributions similar to the B.E. and F.D. distributions of statistical mechanics by using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.
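
    For orientation, the standard maximum-entropy route to these distributions maximizes the occupation-number entropy subject to fixed particle number and energy (a textbook summary of the starting point, not the new derivations of the paper):

        S \;=\; -\sum_i \Big[\, n_i \ln n_i \pm (1 \mp n_i)\ln(1 \mp n_i) \,\Big],
        \qquad \sum_i n_i = N, \quad \sum_i n_i \varepsilon_i = E,

    whose stationarity conditions with Lagrange multipliers \alpha and \beta give n_i = \big(e^{\alpha + \beta\varepsilon_i} \pm 1\big)^{-1}, i.e. the Fermi-Dirac (upper sign) and Bose-Einstein (lower sign) distributions.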

  7. Application of the Maximum Entropy/optimal Projection Control Design Approach for Large Space Structures

    Hyland, D. C.


    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modelling and reduced order control design method for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed and the application of the methodology to several large space structure (LSS) problems of representative complexity is illustrated.

  8. Last Glacial Maximum sea surface temperature and sea-ice extent in the Pacific sector of the Southern Ocean

    Benz, Verena; Esper, Oliver; Gersonde, Rainer; Lamy, Frank; Tiedemann, Ralf


    Sea surface temperatures and sea-ice extent are most critical variables to evaluate the Southern Ocean paleoceanographic evolution in relation to the development of the global carbon cycle, atmospheric CO2 and ocean-atmosphere circulation. Here we present diatom transfer function-based summer sea surface temperature (SSST) and winter sea-ice (WSI) estimates from the Pacific sector of the Southern Ocean to bridge a gap in information that has to date hampered a well-established reconstruction of the last glacial Southern Ocean at circum-Antarctic scale. We studied the Last Glacial Maximum (LGM) at the EPILOG time slice (19,000-23,000 calendar years before present) in 17 cores and consolidated our LGM picture of the Pacific sector taking into account published data from its warmer regions. Our data display a distinct east-west differentiation with a rather stable WSI edge north of the Pacific-Antarctic Ridge in the Ross Sea sector and a more variable WSI extent over the Amundsen Abyssal Plain. The zone of maximum cooling (>4 K) during the LGM is in the present Subantarctic Zone and bounded to its south by the 4 °C isotherm. The isotherm is in the SSST range prevailing at the modern Antarctic Polar Front, representing a circum-Antarctic feature, and marks the northern edge of the glacial Antarctic Circumpolar Current (ACC). The northward deflection of colder than modern surface waters along the South American continent led to a significant cooling of the glacial Humboldt Current surface waters (4-8 K), which affected the temperature regimes as far north as tropical latitudes. The glacial reduction of ACC temperatures may also have resulted in significant cooling in the Atlantic and Indian Southern Ocean, thus enhancing thermal differentiation of the Southern Ocean and Antarctic continental cooling. The comparison with numerical temperature and sea-ice simulations yields discrepancies, especially concerning the estimates of the sea-ice fields, but some simulations

  9. Planktonic foraminiferal Mg/Ca as a proxy for past oceanic temperatures: a methodological overview and data compilation for the Last Glacial Maximum

    Barker, Stephen; Cacho, Isabel; Benway, Heather; Tachikawa, Kazuyo


    As part of the Multi-proxy Approach for the Reconstruction of the Glacial Ocean (MARGO) incentive, published and unpublished temperature reconstructions for the Last Glacial Maximum (LGM) based on planktonic foraminiferal Mg/Ca ratios have been synthesised and made available in an online database. Development and applications of Mg/Ca thermometry are described in order to illustrate the current state of the method. Various attempts to calibrate foraminiferal Mg/Ca ratios with temperature, including culture, trap and core-top approaches have given very consistent results although differences in methodological techniques can produce offsets between laboratories which need to be assessed and accounted for where possible. Dissolution of foraminiferal calcite at the sea-floor generally causes a lowering of Mg/Ca ratios. This effect requires further study in order to account and potentially correct for it if dissolution has occurred. Mg/Ca thermometry has advantages over other paleotemperature proxies including its use to investigate changes in the oxygen isotopic composition of seawater and the ability to reconstruct changes in the thermal structure of the water column by use of multiple species from different depth and or seasonal habitats. Presently available data are somewhat limited to low latitudes where they give fairly consistent values for the temperature difference between Late Holocene and the LGM (2-3.5 °C). Data from higher latitudes are more sparse, and suggest there may be complicating factors when comparing between multi-proxy reconstructions.
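
    Mg/Ca palaeothermometry rests on calibrations that are commonly expressed in exponential form (a generic statement of the calibration equation; the constants A and B are species-, cleaning- and laboratory-dependent and must be taken from the calibration studies discussed above):

        \mathrm{Mg/Ca} \;=\; B\,e^{A\,T}
        \quad\Longrightarrow\quad
        T \;=\; \frac{1}{A}\,\ln\!\left(\frac{\mathrm{Mg/Ca}}{B}\right),

    which also makes explicit why a dissolution-driven lowering of Mg/Ca biases the reconstructed temperature low unless it is corrected for.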

  10. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi


    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  11. The SIS and SIR stochastic epidemic models: a maximum entropy approach.

    Artalejo, J R; Lopez-Herrero, M J


    We analyze the dynamics of infectious disease spread by formulating the maximum entropy (ME) solutions of the susceptible-infected-susceptible (SIS) and the susceptible-infected-removed (SIR) stochastic models. Several scenarios providing helpful insight into the use of the ME formalism for epidemic modeling are identified. The ME results are illustrated with respect to several descriptors, including the number of recovered individuals and the time to extinction. An application to infectious data from outbreaks of extended spectrum beta lactamase (ESBL) in a hospital is also considered.

  12. An Exact Solution Approach for the Maximum Multicommodity K-splittable Flow Problem

    Gamst, Mette; Petersen, Bjørn


    This talk concerns the NP-hard Maximum Multicommodity k-splittable Flow Problem (MMCkFP) in which each commodity may use at most k paths between its origin and its destination. A new branch-and-cut-and-price algorithm is presented. The master problem is a two-index formulation of the MMCkFP and the pricing problem is the shortest path problem with forbidden paths. A new branching strategy forcing and forbidding the use of certain paths is developed. The new branch-and-cut-and-price algorithm is computationally evaluated and compared to results from the literature. The new algorithm shows very...

  13. Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory

    Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O


    We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.

  14. Prediction of CO Concentration and Maximum Smoke Temperature beneath Ceiling in Tunnel Fire with Different Aspect Ratio

    S. Gannouni


    In a tunnel fire, the production of smoke and toxic gases remains the principal harmful factor for users. Heat is not considered a major direct danger to users, since temperatures at the level of the occupants do not reach untenable conditions until after a relatively long time, except near the fire source. However, the temperatures under the ceiling can exceed the threshold conditions and can thus cause structural collapse of the infrastructure. This paper presents a numerical analysis of the smoke hazard in tunnel fires with different aspect ratios by large eddy simulation. Results show that the CO concentration increases as the aspect ratio decreases and decreases with the longitudinal ventilation velocity. CFD-predicted maximum smoke temperatures are compared to the values calculated using the model of Li et al. and then compared with those given by the empirical equation proposed by Kurioka et al. A reasonably good agreement has been obtained. The backlayering length decreases as the ventilation velocity increases, and this decrease follows a good exponential decay. The dimensionless interface height and the region of bad visibility increase with the aspect ratio of the tunnel cross-sectional geometry.

  15. A maximum entropy approach to separating noise from signal in bimodal affiliation networks

    Dianati, Navid


    In practice, many empirical networks, including co-authorship and collocation networks are unimodal projections of a bipartite data structure where one layer represents entities, the second layer consists of a number of sets representing affiliations, attributes, groups, etc., and an inter-layer link indicates membership of an entity in a set. The edge weight in the unimodal projection, which we refer to as a co-occurrence network, counts the number of sets to which both end-nodes are linked. Interpreting such dense networks requires statistical analysis that takes into account the bipartite structure of the underlying data. Here we develop a statistical significance metric for such networks based on a maximum entropy null model which preserves both the frequency sequence of the individuals/entities and the size sequence of the sets. Solving the maximum entropy problem is reduced to solving a system of nonlinear equations for which fast algorithms exist, thus eliminating the need for expensive Monte-Carlo sam...

  16. Adiabatic magnetocaloric temperature change in polycrystalline gadolinium – A new approach highlighting reversibility

    Mohammadreza Ghahremani


    The adiabatic temperature change (ΔT) during the magnetization and demagnetization processes of bulk gadolinium is directly measured for several applied magnetic fields in the temperature range 285 K to 305 K. During the magnetization process, ΔT measurements display the same maximum for each applied field when plotted against the initial temperature (Ti). However, during the demagnetization process, the maximum ΔT varies for each applied field. This discrepancy between the magnetization and demagnetization measurements appears inconsistent with the reversibility of the magnetocaloric effect. A new approach is undertaken to highlight the reversibility of the magnetocaloric effect by plotting ΔT against the average temperature change (Tavg) instead of Ti. The value of Tavg which corresponds to the maximum ΔT is found to increase linearly with the applied magnetic field, consistently for both the magnetization and demagnetization measurements. Solving the linear-fitting equations of these measurements gives a new, and more precise, Curie temperature measurement. This new approach confirmed that the relationship between the maximum adiabatic temperature change (ΔTpeak) and the applied magnetic field is perfectly linear.

  17. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    Gu, Fei; Wu, Hao


    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.

  18. Achieving maximum sustainable yield in mixed fisheries: a management approach for the North Sea demersal fisheries

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.


    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative ... ranges to combine long-term single-stock targets with flexible, short-term, mixed-fisheries management requirements applied to the main North Sea demersal stocks. It is shown that sustained fishing at the upper bound of the range may lead to unacceptable risks when technical interactions occur ... An objective method is suggested that provides an optimal set of fishing mortality within the range, minimizing the risk of total allowable catch mismatches among stocks captured within mixed fisheries, and addressing explicitly the trade-offs between the most and least productive stocks.

  19. A maximum-entropy approach to the adiabatic freezing of a supercooled liquid.

    Prestipino, Santi


    I employ the van der Waals theory of Baus and co-workers to analyze the fast, adiabatic decay of a supercooled liquid in a closed vessel with which the solidification process usually starts. By imposing a further constraint on either the system volume or pressure, I use the maximum-entropy method to quantify the fraction of liquid that is transformed into solid as a function of undercooling and of the amount of a foreign gas that could possibly be also present in the test tube. Upon looking at the implications of thermal and mechanical insulation for the energy cost of forming a solid droplet within the liquid, I identify one situation where the onset of solidification inevitably occurs near the wall in contact with the bath.
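
    The basic bookkeeping behind the frozen fraction can be seen from a simple adiabatic energy balance (a lever-rule style estimate for orientation only; the paper's maximum-entropy treatment with the van der Waals theory and the volume or pressure constraints is considerably more detailed):

        x_s\,L_f \;\approx\; c_p\,\Delta T_u
        \quad\Longrightarrow\quad
        x_s \;\approx\; \frac{c_p\,\Delta T_u}{L_f},

    i.e. the latent heat released by the solidified fraction x_s is what warms the supercooled liquid (undercooling \Delta T_u, specific heat c_p, latent heat of fusion L_f) back toward the melting point.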

  20. Inferring kinetic pathways, rates, and force dependence from nonprocessive optical tweezers experiments: a maximum likelihood approach

    Kalafut, Bennett; Visscher, Koen


    Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.

  1. Estimation of Road Vehicle Speed Using Two Omnidirectional Microphones: A Maximum Likelihood Approach

    López-Valcarce Roberto


    We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified cross-correlators and therefore it is well suited to DSP implementation, performing well with preliminary field data.

  2. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Shu Cai


    Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion.
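
    For reference, in the single-source case the deterministic ML criterion concentrates to maximizing the projection of the sample covariance onto the steering vector, which can be evaluated by a simple grid search (a baseline sketch under assumed names and a half-wavelength ULA, not the SOS/SDP reformulation proposed in the paper):

        import numpy as np

        def ml_doa_single_source(X, d_over_lambda=0.5, grid_deg=np.arange(-90, 90.1, 0.1)):
            """Grid search of the concentrated ML criterion for one source on a ULA.
            X: M x N matrix of array snapshots (M sensors, N snapshots)."""
            M, N = X.shape
            R = X @ X.conj().T / N                         # sample covariance
            m = np.arange(M)
            best_theta, best_val = None, -np.inf
            for theta in grid_deg:
                a = np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(theta)))
                val = np.real(a.conj() @ R @ a) / M        # projection of R onto the steering vector
                if val > best_val:
                    best_val, best_theta = val, theta
            return best_theta

        # toy usage: one 10-degree source in white noise, 8-element half-wavelength ULA, 20 snapshots
        rng = np.random.default_rng(1)
        M, N, theta0 = 8, 20, 10.0
        a0 = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta0)))
        s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        X = np.outer(a0, s) + 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
        print(ml_doa_single_source(X))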

  3. Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach

    Rodman, Leiba [Department of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795 (United States)]; Spitkovsky, Ilya M. [Department of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795 (United States); Division of Science and Mathematics, New York University Abu Dhabi, Saadiyat Island, P.O. Box 129188, Abu Dhabi (United Arab Emirates)]; Szkoła, Arleta; Weis, Stephan [Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, D-04103 Leipzig (Germany)]


    We study the continuity of an abstract generalization of the maximum-entropy inference—a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry we prove, amongst others, the existence of discontinuities of the maximizer at limits of extremal points not being extremal points themselves and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for 3 × 3 matrices.

  4. Maximum Fidelity

    Kinkhabwala, Ali


    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

  5. Degree-Based Approach for the Maximum Clique Problem

    胡新; 王丽珍; 何瓦特; 姚华传


    The maximum clique problem (MCP) is a significant problem in computer science because of its complexity, its challenging nature, and its extensive applications in data mining and other fields. This paper puts forward a new degree-based approach to finding the maximum clique in a given graph G. Exploiting the fact that the vertices of a maximum clique tend to have relatively high degrees, the new approach searches for the maximum clique recursively, starting from the vertex of highest degree in G. To further improve the efficiency of the algorithm, three pruning strategies based on the characteristics of the graph and of the maximum clique are proposed. The approach is proved to be correct and complete, with time complexity O(1.442^n) and space complexity O(n^2). Finally, an empirical study verifies the effectiveness and efficiency of the new approach.

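    The degree-first idea can be sketched as a generic branch-and-bound that always expands the highest-degree candidate and prunes branches that cannot beat the incumbent (a simplified illustration; the paper's three specific pruning strategies are not reproduced here):

        def max_clique(adj):
            """Degree-ordered branch-and-bound: adj is {vertex: set(neighbours)}.
            Candidates are expanded in decreasing-degree order and a branch is cut
            when even taking every remaining candidate could not beat the incumbent."""
            best = []

            def expand(clique, candidates):
                nonlocal best
                if len(clique) + len(candidates) <= len(best):   # simple size bound
                    return
                if not candidates:
                    best = list(clique)
                    return
                for v in sorted(candidates, key=lambda u: len(adj[u]), reverse=True):
                    if len(clique) + len(candidates) <= len(best):
                        return
                    expand(clique + [v], candidates & adj[v])
                    candidates = candidates - {v}

            expand([], set(adj))
            return best

        # toy graph whose maximum clique is {0, 1, 2}
        g = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 4}, 3: {0}, 4: {2}}
        print(max_clique(g))
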
  6. Maximum energy product at elevated temperatures for hexagonal strontium ferrite (SrFe12O19) magnet

    Park, J; Hong, YK; Kim, SG; Kim, S; Liyanage, LSI; Lee, J; Lee, W; Abo, GS; Hur, KH; An, SY


    The electronic structure of hexagonal strontium ferrite (SrFe12O19) was calculated based on density functional theory (DFT) and the generalized gradient approximation (GGA). The GGA+U method was used to improve the description of the localized Fe 3d electrons. Three different effective U (Ueff) values of 3.7, 7.0, and 10.3 eV were used to calculate three sets of exchange integrals for 21 excited states. We then calculated the temperature dependence of the magnetic moments m(T) for the five sublattices (2a, 2b, 12k, 4f(1), and 4f(2)) using the exchange integrals. The m(T) of the five sublattices are interrelated through the nearest neighbors, where the spins are mostly antiferromagnetically coupled. The five sublattice m(T) were used to obtain the saturation magnetization Ms(T) of SrFe12O19, which is in good agreement with the experimental values. The temperature dependence of the maximum energy product ((BH)max(T)) was calculated using the calculated Ms(T).

  7. Optimisation of sea surface current retrieval using a maximum cross correlation technique on modelled sea surface temperature

    Heuzé, Céline; Eriksson, Leif; Carvajal, Gisela


    Using sea surface temperature from satellite images to retrieve sea surface currents is not a new idea, but so far its operational near-real time implementation has not been possible. Validation studies are too region-specific or uncertain, due to the errors induced by the images themselves. Moreover, the sensitivity of the most common retrieval method, the maximum cross correlation, to the three parameters that have to be set is unknown. Using model outputs instead of satellite images, biases induced by this method are assessed here, for four different seas of Western Europe, and the best of nine settings and eight temporal resolutions are determined. For all regions, tracking a small 5 km pattern from the first image over a large 30 km region around its original location on a second image, separated from the first image by 6 to 9 hours returned the most accurate results. Moreover, for all regions, the problem is not inaccurate results but missing results, where the velocity is too low to be picked by the retrieval. The results are consistent both with limitations caused by ocean surface current dynamics and with the available satellite technology, indicating that automated sea surface current retrieval from sea surface temperature images is feasible now, for search and rescue operations, pollution confinement or even for more energy efficient and comfortable ship navigation.
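
    A bare-bones version of the maximum cross correlation step reads as follows (a sketch with assumed window sizes in grid cells; the study's 5 km template, 30 km search region and 6-9 h separation map onto these parameters only through the model grid spacing):

        import numpy as np

        def mcc_displacement(sst0, sst1, y, x, tmpl=5, search=15):
            """Maximum cross correlation: find where a small SST pattern from the first
            field reappears in the second field. Returns the displacement (grid cells)
            and the peak normalized correlation."""
            t = sst0[y:y+tmpl, x:x+tmpl]
            t = (t - t.mean()) / (t.std() + 1e-12)
            best, best_r = (0, 0), -np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + tmpl > sst1.shape[0] or xx + tmpl > sst1.shape[1]:
                        continue
                    c = sst1[yy:yy+tmpl, xx:xx+tmpl]
                    c = (c - c.mean()) / (c.std() + 1e-12)
                    r = np.mean(t * c)                      # normalized cross correlation
                    if r > best_r:
                        best_r, best = r, (dy, dx)
            return best, best_r

    The retrieved displacement divided by the time separation between the two fields gives the surface current estimate; displacements below one grid cell correspond to the low velocities that the abstract reports as missing results.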

  8. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: 30-Year Average Annual Maximum Temperature, 1971-2000

    U.S. Geological Survey, Department of the Interior — This data set represents the 30-year (1971-2000) average annual maximum temperature in Celsius multiplied by 100 compiled for every catchment of NHDPlus for the...

  9. Estimation of Land Surface Temperature through Blending MODIS and AMSR-E Data with the Bayesian Maximum Entropy Method

    Xiaokang Kou


    Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs retrieved from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of the merged LSTs reaches 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0-5 cm soil temperatures. The results showed that the accuracy of the merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence the fusion performance and still require further studies.

  10. New methodology to estimate Arctic sea ice concentration from SMOS combining brightness temperature differences in a maximum-likelihood estimator

    Gabarro, Carolina; Turiel, Antonio; Elosegui, Pedro; Pla-Resina, Joaquim A.; Portabella, Marcos


    Monitoring sea ice concentration is required for operational and climate studies in the Arctic Sea. Technologies used so far for estimating sea ice concentration have some limitations, for instance the impact of the atmosphere, the physical temperature of ice, and the presence of snow and melting. In the last years, L-band radiometry has been successfully used to study some properties of sea ice, remarkably sea ice thickness. However, the potential of satellite L-band observations for obtaining sea ice concentration had not yet been explored. In this paper, we present preliminary evidence showing that data from the Soil Moisture Ocean Salinity (SMOS) mission can be used to estimate sea ice concentration. Our method, based on a maximum-likelihood estimator (MLE), exploits the marked difference in the radiative properties of sea ice and seawater. In addition, the brightness temperatures of 100 % sea ice and 100 % seawater, as well as their combined values (polarization and angular difference), have been shown to be very stable during winter and spring, so they are robust to variations in physical temperature and other geophysical parameters. Therefore, we can use just two sets of tie points, one for summer and another for winter, for calculating sea ice concentration, leading to a more robust estimate. After analysing the full year 2014 in the entire Arctic, we have found that the sea ice concentration obtained with our method is well determined as compared to the Ocean and Sea Ice Satellite Application Facility (OSI SAF) dataset. However, when thin sea ice is present (ice thickness ≲ 0.6 m), the method underestimates the actual sea ice concentration. Our results open the way for a systematic exploitation of SMOS data for monitoring sea ice concentration, at least for specific seasons. Additionally, SMOS data can be synergistically combined with data from other sensors to monitor pan-Arctic sea ice conditions.
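
    Under a Gaussian-error assumption, the maximum-likelihood estimate of the ice concentration from a two-component (ice/water) linear mixing of brightness temperatures has a closed weighted-least-squares form, sketched below with made-up tie points (the operational method uses SMOS polarization and incidence-angle differences and season-specific tie points, which are not reproduced here):

        import numpy as np

        def sic_ml(tb_obs, tb_ice, tb_water, sigma):
            """Closed-form ML (Gaussian errors) estimate of sea ice concentration C from
            tb_obs = C*tb_ice + (1-C)*tb_water + noise, clipped to [0, 1].  All inputs are
            arrays over the channels used; the tie points are assumed inputs."""
            tb_obs, tb_ice, tb_water, sigma = map(np.asarray, (tb_obs, tb_ice, tb_water, sigma))
            d = (tb_ice - tb_water) / sigma                # sensitivity of each channel to C
            r = (tb_obs - tb_water) / sigma                # observed departure from open water
            c_hat = np.dot(d, r) / np.dot(d, d)            # weighted least squares = ML estimate
            return float(np.clip(c_hat, 0.0, 1.0))

        # toy usage with two channels and made-up tie points (K)
        print(sic_ml(tb_obs=[150.0, 40.0], tb_ice=[230.0, 10.0],
                     tb_water=[100.0, 60.0], sigma=[3.0, 3.0]))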

  11. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Yu, Hwa-Lung; Wang, Chih-Hsin


    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method can allow researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework can allow researchers to assimilate the site-specific secondary information where the observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  12. Maximum shortening velocity of lymphatic muscle approaches that of striated muscle.

    Zhang, Rongzhen; Taucer, Anne I; Gashev, Anatoliy A; Muthuchamy, Mariappan; Zawieja, David C; Davis, Michael J


    Lymphatic muscle (LM) is widely considered to be a type of vascular smooth muscle, even though LM cells uniquely express contractile proteins from both smooth muscle and cardiac muscle. We tested the hypothesis that LM exhibits an unloaded maximum shortening velocity (Vmax) intermediate between that of smooth muscle and cardiac muscle. Single lymphatic vessels were dissected from the rat mesentery, mounted in a servo-controlled wire myograph, and subjected to isotonic quick release protocols during spontaneous or agonist-evoked contractions. After maximal activation, isotonic quick releases were performed at both the peak and plateau phases of contraction. Vmax was 0.48 ± 0.04 lengths (L)/s at the peak: 2.3 times higher than that of mesenteric arteries and 11.4 times higher than mesenteric veins. In cannulated, pressurized lymphatic vessels, shortening velocity was determined from the maximal rate of constriction [rate of change in internal diameter (-dD/dt)] during spontaneous contractions at optimal preload and minimal afterload; peak -dD/dt exceeded that obtained during any of the isotonic quick release protocols (2.14 ± 0.30 L/s). Peak -dD/dt declined with pressure elevation or activation using substance P. Thus, isotonic methods yielded Vmax values for LM in the mid to high end (0.48 L/s) of those recorded for phasic smooth muscle (0.05-0.5 L/s), whereas isobaric measurements yielded values (>2.0 L/s) that overlapped the midrange of values for cardiac muscle (0.6-3.3 L/s). Our results challenge the dogma that LM is classical vascular smooth muscle, and its unusually high Vmax is consistent with the expression of cardiac muscle contractile proteins in the lymphatic vessel wall.
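
    The unloaded shortening velocity reported here is conventionally obtained by fitting the isotonic release data with a Hill-type hyperbolic force-velocity relation and extrapolating to zero load (shown to make the extrapolation explicit; the abstract does not state the authors' exact fitting function, so this form is an assumption):

        (F + a)(v + b) \;=\; (F_0 + a)\,b
        \quad\Longrightarrow\quad
        V_{max} \;=\; v\big|_{F=0} \;=\; \frac{b\,F_0}{a},

    where F_0 is the isometric force and a, b are the fitted Hill constants.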

  13. A seqlet-based maximum entropy Markov approach for protein secondary structure prediction

    DONG; Qiwen; WANG; Xiaolong; LIN; Lei; GUAN; Yi


    A novel method for predicting the secondary structures of proteins from amino acid sequence is presented. Protein secondary structure seqlets, analogous to words in natural language, are extracted. These seqlets capture the relationship between amino acid sequence and protein secondary structure and together form a protein secondary structure dictionary; the dictionary is organism-specific. Protein secondary structure prediction is formulated as an integrated word segmentation and part-of-speech tagging problem. A word lattice is used to represent the results of the word segmentation, and the maximum entropy model is used to calculate the probability of a seqlet being tagged as a certain secondary structure type. The method is Markovian in the seqlets, permitting efficient exact calculation of the posterior probability distribution over all possible word segmentations and their tags by the Viterbi algorithm. The optimal segmentation and its tags are returned as the result of protein secondary structure prediction. The method is applied to predict the secondary structures of proteins of four organisms and compared with the PHD method. The results show that the performance of this method is higher than that of PHD by about 3.9% in Q3 accuracy and 4.6% in SOV accuracy. Combining the method with locally similar protein sequences obtained by BLAST gives better predictions. The method is also tested on the 50 CASP5 target proteins, with a Q3 accuracy of 78.9% and an SOV accuracy of 77.1%. A web server for protein secondary structure prediction has been constructed and is available at http://www.insun.hit.edu.cn:81/demos/biology/index.html.
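    The tagging step can be pictured as a Viterbi search over tag sequences. The toy sketch below uses hypothetical seqlets, emission log-probabilities and transition weights (not the authors' dictionary or trained maximum entropy model) to decode helix/strand/coil (H/E/C) tags over a fixed segmentation.

```python
TAGS = ["H", "E", "C"]   # helix, strand, coil

# Hypothetical log-probabilities of each tag for three seqlets, standing in
# for the maximum entropy model's P(tag | seqlet, context).
emission = [
    {"H": -0.2, "E": -2.0, "C": -1.5},   # seqlet 1
    {"H": -0.4, "E": -1.8, "C": -1.2},   # seqlet 2
    {"H": -2.2, "E": -1.9, "C": -0.3},   # seqlet 3
]
# Hypothetical tag-to-tag transition log-weights (Markov assumption over seqlets).
transition = {(a, b): (0.0 if a == b else -1.0) for a in TAGS for b in TAGS}

def viterbi(emission, transition):
    # delta[t][tag] = best log-score of any tag path ending in `tag` at seqlet t
    delta = [dict(emission[0])]
    back = [{}]
    for t in range(1, len(emission)):
        delta.append({})
        back.append({})
        for tag in TAGS:
            best_prev = max(TAGS, key=lambda p: delta[t - 1][p] + transition[(p, tag)])
            delta[t][tag] = delta[t - 1][best_prev] + transition[(best_prev, tag)] + emission[t][tag]
            back[t][tag] = best_prev
    # Trace back the optimal tag sequence.
    last = max(TAGS, key=lambda tag: delta[-1][tag])
    path = [last]
    for t in range(len(emission) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

print(viterbi(emission, transition))   # -> ['H', 'H', 'C'] for the toy scores above
```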

  14. Maximum ADPE Approach for a High Rate CCSDS Return Link Processing System

    Krimchansky, Alexander; Moe, Brian; Erickson, David


    The Earth Observing System Data and Operations System (EDOS), a multi-mission data processing and distribution system for the Earth Observing System, is considered. The EDOS was based on the Consultative Committee for Space Data Systems (CCSDS) protocols. The development included the challenge of developing and demonstrating a 150 Mbps CCSDS return link processing capability for the support of the first EDOS delivery. The approach used general-purpose automated data processing equipment (ADPE) and minimized the use of customized hardware. The way in which the system was developed is described, and the principal design decisions and the performance benchmark results are presented.

  15. Hybrid Evolutionary Approaches to Maximum Lifetime Routing and Energy Efficiency in Sensor Mesh Networks.

    Rahat, Alma A M; Everson, Richard M; Fieldsend, Jonathan E


    Mesh network topologies are becoming increasingly popular in battery-powered wireless sensor networks, primarily because of the extension of network range. However, multihop mesh networks suffer from higher energy costs, and the routing strategy employed directly affects the lifetime of nodes with limited energy resources. Hence when planning routes there are trade-offs to be considered between individual and system-wide battery lifetimes. We present a multiobjective routing optimisation approach using hybrid evolutionary algorithms to approximate the optimal trade-off between the minimum lifetime and the average lifetime of nodes in the network. In order to accomplish this combinatorial optimisation rapidly, our approach prunes the search space using k-shortest path pruning and a graph reduction method that finds candidate routes promoting long minimum lifetimes. When arbitrarily many routes from a node to the base station are permitted, optimal routes may be found as the solution to a well-known linear program. We present an evolutionary algorithm that finds good routes when each node is allowed only a small number of paths to the base station. On a real network deployed in the Victoria & Albert Museum, London, these solutions, using only three paths per node, are able to achieve minimum lifetimes of over 99% of the optimum linear program solution's time to first sensor battery failure.

  16. Estimating Daily Maximum and Minimum Land Air Surface Temperature Using MODIS Land Surface Temperature Data and Ground Truth Data in Northern Vietnam

    Phan Thanh Noi


    This study aims to quantitatively evaluate the land surface temperature (LST) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous study of Vietnam has integrated both TERRA and AQUA LST, daytime and nighttime, for Ta estimation (i.e., using four MODIS LST datasets). In addition, to find out which variables are the most effective at describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared and principal component analysis (PCA), on 14 variables (including the four LST products, NDVI, elevation, latitude, longitude, day length in hours, Julian day and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations depend on time and regional topography. The best results for Ta-max and Ta-min estimation were achieved when we combined both the daytime and nighttime LST of TERRA and AQUA with data from the topography analysis.

  17. Low-temperature hopping dynamics with energy disorder: renormalization group approach.

    Velizhanin, Kirill A; Piryatinski, Andrei; Chernyak, Vladimir Y


    We formulate a real-space renormalization group (RG) approach for efficient numerical analysis of the low-temperature hopping dynamics in energy-disordered lattices. The approach explicitly relies on the time-scale separation of the trapping/escape dynamics. This time-scale separation allows the hopping dynamics to be treated as a hierarchical process, with the RG step being a transformation between the levels of the hierarchy. We apply the proposed RG approach to analyze hopping dynamics in one- and two-dimensional lattices with varying degrees of energy disorder, and find the approach to be accurate at low temperatures and computationally much faster than brute-force direct diagonalization. Applicability criteria of the proposed approach with respect to the time-scale separation and the maximum number of hierarchy levels are formulated. RG flows of the energy distribution and pre-exponential factors of the Miller-Abrahams model are analyzed.

  18. Assessment of future changes in the maximum temperature at selected stations in Iran based on HADCM3 and CGCM3 models

    Abbasnia, Mohsen; Tavousi, Taghi; Khosravi, Mahmood


    Identification and assessment of climate change in the coming decades, with the aim of appropriate environmental planning for adaptation and mitigation, are quite necessary. In this study, maximum temperature changes in Iran were comparatively examined for two future periods (2041-2070 and 2071-2099), based on two general circulation model outputs (CGCM3 and HADCM3) and under existing emission scenarios (A2, A1B, B1 and B2). For this purpose, after examining the ability of the SDSM statistical downscaling method to simulate the observational period (1981-2010), the daily maximum temperature of future decades was downscaled, taking uncertainty into account, at seven synoptic stations representative of the climate of Iran. In the uncertainty analysis of the model-scenario combinations, it was found that the CGCM3 model under scenario B1 had the best performance in simulating future maximum temperature among all of the examined scenario-models. The findings also showed that the maximum temperature at the study stations will increase by between 1°C and 2°C by the middle and the end of the 21st century. This maximum temperature change is also more severe in the HADCM3 model than in the CGCM3 model.

  19. Critical flow-storm approach to total maximum daily load(TMDL) development: an analytical conceptual model

    Harry X.ZHANG; Shaw L.YU


    One of the key challenges in the total maximum daily load (TMDL) development process is how to define the critical condition for a receiving waterbody. The main concern in using a continuous simulation approach is the absence of any guarantee that the most critical condition will be captured during the selected representative hydrologic period, given the scarcity of long-term continuous data. The objectives of this paper are to clearly address the critical condition in the TMDL development process and to compare continuous and event-based approaches to defining the critical condition during TMDL development for a waterbody impacted by both point and nonpoint source pollution. A practical, event-based critical flow-storm (CFS) approach was developed to explicitly address the critical condition as a combination of a low stream flow and a storm event of a selected magnitude, both having certain frequencies of occurrence. This paper illustrated the CFS concept and provided its theoretical basis using a derived analytical conceptual model. The CFS approach clearly defined the critical condition, obtained reasonable results and could be considered as an alternative method in TMDL development.

  20. Kinetics of hydrolysis of 1-benzoyl-1,2,4-triazole in aqueous solution as a function of temperature near the temperature of maximum density, and the isochoric controversy

    Blandamer, MJ; Buurma, NJ; Engberts, JBFN; Reis, JCR; Buurma, Niklaas J.; Reis, João C.R.


    At temperatures above and below the temperature of maximum density, TMD, for water at ambient pressure, pairs of temperatures exist at which the molar volumes of water are equal. First-order rate constants for the pH-independent hydrolysis of 1-benzoyl-1,2,4-triazole in aqueous solution at pairs of

  1. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions.

    Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah


    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are often not sufficiently robust with respect to fast-changing environmental conditions, efficiency, steady-state accuracy, and tracking dynamics. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the maximum power point accurately. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate its accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neuro-fuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland-Altman test, with more than 95 percent acceptability.
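    A minimal sketch of the idea, assuming scikit-learn is available: a random forest regressor is fitted on (irradiance, module temperature) pairs to predict the voltage at the maximum power point. The training data below are synthetic placeholders, not the 300,000-sample dataset used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: irradiance (W/m^2), module temperature (degC)
# and a placeholder MPP-voltage relation standing in for measured data.
G = rng.uniform(100, 1000, 2000)
T = rng.uniform(15, 65, 2000)
V_mpp = 30.0 + 0.002 * G - 0.12 * (T - 25.0) + rng.normal(0, 0.2, G.size)

X = np.column_stack([G, T])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, V_mpp)

# At run time, two fast sensors supply the current irradiance and temperature,
# and the predicted MPP voltage is sent to the converter as the setpoint.
print(model.predict([[750.0, 40.0]]))
```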

  2. Maximum entropy approach for batch-arrival queue under N policy with an un-reliable server and single vacation

    Ke, Jau-Chuan; Lin, Chuen-Horng


    We consider the M[x]/G/1 queueing system, in which the server operates under an N policy and takes a single vacation. As soon as the system becomes empty, the server leaves for a vacation of random length V. When he returns from the vacation and the system size is greater than or equal to a threshold value N, he starts to serve the waiting customers. If he finds fewer customers than N, he waits in the system until the system size reaches or exceeds N. The server is subject to breakdowns according to a Poisson process, and his repair time obeys an arbitrary distribution. We use the maximum entropy principle to derive approximate formulas for the steady-state probability distributions of the queue length. We perform a comparative analysis between the approximate results and established exact results for various batch size, vacation time, service time and repair time distributions. We demonstrate that the maximum entropy approach is efficient enough for practical purposes and is a feasible method for approximating the solution of complex queueing systems.
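    For intuition only (not the authors' derivation): when the only constraints are normalization and a known mean queue length, the maximum entropy distribution on the non-negative integers is geometric, as the short check below shows for a hypothetical mean.

```python
import numpy as np

L_mean = 3.2                   # hypothetical known mean queue length (constraint)
x = L_mean / (1.0 + L_mean)    # maxent solution p_n = (1 - x) x**n has mean x/(1-x)

n = np.arange(0, 50)
p = (1.0 - x) * x**n

print("sum of p_n      :", p.sum())          # ~1 (normalization constraint)
print("mean queue size :", (n * p).sum())    # ~3.2 (mean constraint)
```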

  3. Dynamical Evolution of the Inner Heliosphere Approaching Solar Activity Maximum: Interpreting Ulysses Observations Using a Global MHD Model. Appendix 1

    Riley, Pete; Mikic, Z.; Linker, J. A.


    In this study we describe a series of MHD simulations covering the time period from 12 January 1999 to 19 September 2001 (Carrington Rotation 1945 to 1980). This interval coincided with: (1) the Sun's approach toward solar maximum; and (2) Ulysses' second descent to the southern polar regions, rapid latitude scan, and arrival into the northern polar regions. We focus on the evolution of several key parameters during this time, including the photospheric magnetic field, the computed coronal hole boundaries, the computed velocity profile near the Sun, and the plasma and magnetic field parameters at the location of Ulysses. The model results provide a global context for interpreting the often complex in situ measurements. We also present a heuristic explanation of stream dynamics to describe the morphology of interaction regions at solar maximum and contrast it with the picture that resulted from Ulysses' first orbit, which occurred during more quiescent solar conditions. The simulation results described here are available at:

  4. Comparison of eastern tropical Pacific TEX86 and Globigerinoides ruber Mg/Ca derived sea surface temperatures: Insights from the Holocene and Last Glacial Maximum

    Hertzberg, Jennifer E.; Schmidt, Matthew W.; Bianchi, Thomas S.; Smith, Richard W.; Shields, Michael R.; Marcantonio, Franco


    The use of the TEX86 temperature proxy has thus far come to differing results as to whether TEX86 temperatures are representative of surface or subsurface conditions. In addition, although TEX86 temperatures might reflect sea surface temperatures based on core-top (Holocene) values, this relationship might not hold further back in time. Here, we investigate the TEX86 temperature proxy by comparing TEX86 temperatures to Mg/Ca temperatures of multiple species of planktonic foraminifera for two sites in the eastern tropical Pacific (on the Cocos and Carnegie Ridges) across the Holocene and Last Glacial Maximum. Core-top and Holocene TEX86H temperatures at both study regions agree well, within error, with the Mg/Ca temperatures of Globigerinoides ruber, a surface dwelling planktonic foraminifera. However, during the Last Glacial Maximum, TEX86H temperatures are more representative of upper thermocline temperatures, and are offset from G. ruber Mg/Ca temperatures by 5.8 °C and 2.9 °C on the Cocos Ridge and Carnegie Ridge, respectively. This offset between proxies cannot be reconciled by using different TEX86 temperature calibrations, and instead, we suggest that the offset is due to a deeper export depth of GDGTs at the LGM. We also compare the degree of glacial cooling at both sites based on both temperature proxies, and find that TEX86H temperatures greatly overestimate glacial cooling, especially on the Cocos Ridge. This study has important implications for applying the TEX86 paleothermometer in the eastern tropical Pacific.

  5. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal based on maximum entropy modelling approach

    Jorge Pereira


    Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts on several domains described as resulting from such processes. A better understanding of the processes, the identification of more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modelling might help to identify areas that are more susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modelling approach, considered one of the correlative modelling techniques with better predictive performance. Models whose validation is based on independent data sets show better performance, an evaluation based on the AUC of the ROC accuracy measure.

  6. Sharp reduction in maximum fuel temperatures during loss of coolant accidents in a PBMR DPP-400 core, by means of optimised placement of neutron poisons

    Serfontein, Dawid E., E-mail:


    In a preceding study, coupled neutronics and thermo-hydraulic simulations were performed with the VSOP-A diffusion code for the standard 9.6 wt% enriched 9 g uranium fuel spheres in the 400 MWth Pebble Bed Modular Reactor Demonstration Power Plant. The axial power profile peaked at about a third from the top of the fuel core and the radial profile peaked directly adjacent to the central graphite reflector. The maximum temperature during a Depressurised Loss of Coolant (DLOFC) incident was 1581.0 °C, which is close to the limit of 1600 °C above which the leakage of radioactive fission products through the TRISO coatings around the fuel kernels may become unacceptable. This may present licensing challenges and also limits the total power output of the reactor. In this article the results of an optimisation study of the axial and radial power profiles for this reactor are reported. The main aim was to minimise the maximum DLOFC temperature. Reducing the maximum equilibrium temperature during normal operation was a lesser aim. Minimising the maximum DLOFC temperature was achieved by placing an optimised distribution of 10B neutron poison in the central reflector. The standard power profiles are sub-optimal with respect to the passive leakage of decay heat during a DLOFC. Since the radial power profile peaks directly adjacent to the central reflector, the distance that the decay heat needs to be conducted toward the outside of the reactor and the ultimate heat sink is at a maximum. The sharp axial power profile peak means that most of the decay power is concentrated in a small part of the core volume, thereby sharply increasing the required outward heat flux in this hotspot region. Both these features sharply increase the maximum DLOFC temperatures in this hotspot. Therefore the axial distribution of the neutron poisons in the central reflector was optimised so as to push the equilibrium power density profile radially outward and to suppress the axial power peak

  7. Temperature of critical clusters in nucleation theory: generalized Gibbs' approach.

    Schmelzer, Jürn W P; Boltachev, Grey Sh; Abyzov, Alexander S


    According to the classical Gibbs' approach to the description of thermodynamically heterogeneous systems, the temperature of the critical clusters in nucleation is the same as the temperature of the ambient phase, i.e., with respect to temperature the conventional macroscopic equilibrium conditions are assumed to be fulfilled. In contrast, the generalized Gibbs' approach [J. W. P. Schmelzer, G. Sh. Boltachev, and V. G. Baidakov, J. Chem. Phys. 119, 6166 (2003); and ibid. 124, 194503 (2006)] predicts that critical clusters (having commonly spatial dimensions in the nanometer range) have, as a rule, a different temperature as compared with the ambient phase. The existence of a curved interface may lead, consequently, to an equilibrium coexistence of different phases with different temperatures similar to differences in pressure as expressed by the well-known Laplace equation. Employing the generalized Gibbs' approach, it is demonstrated that, for the case of formation of droplets in a one-component vapor, the temperature of the critical droplets can be shown to be higher as compared to the vapor. In this way, temperature differences between critically sized droplets and ambient vapor phase, observed in recent molecular dynamics simulations of argon condensation by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)], can be given a straightforward theoretical interpretation. It is shown as well that - employing the same model assumptions concerning bulk and interfacial properties of the system under consideration - the temperature of critical bubbles in boiling is lower as compared to the bulk liquid.

  8. Improving soil moisture profile reconstruction from ground-penetrating radar data: a maximum likelihood ensemble filter approach

    A. P. Tran


    The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model and petrophysical relationships link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows the information of the whole soil moisture profile to be used for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of the soil type on the data assimilation by considering three soil types, namely, loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand, but improves it significantly for the clay soil. The proposed approach appears promising for improving real-time prediction of soil moisture profiles as well as for providing effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.
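    A stripped-down ensemble update in the same spirit (a generic stochastic ensemble Kalman update rather than the maximum likelihood ensemble filter itself): the discretized soil moisture profile is the state, and a hypothetical linear depth-weighting stands in for the radar electromagnetic and petrophysical forward models.

```python
import numpy as np

rng = np.random.default_rng(1)

n_layers, n_ens = 8, 50
# Prior ensemble of soil moisture profiles (hypothetical open-loop forecast).
ensemble = 0.20 + 0.05 * rng.standard_normal((n_layers, n_ens))

# Hypothetical linear observation operator: the GPR "sees" a depth-weighted
# average of the profile (in reality a radar + petrophysical model is used).
H = np.linspace(2.0, 0.5, n_layers)
H /= H.sum()
obs = 0.27          # synthetic GPR-derived observation
obs_var = 1e-4

# Stochastic ensemble Kalman update of each member.
Hx = H @ ensemble                                    # predicted observations
P_hh = np.var(Hx, ddof=1) + obs_var                  # innovation variance
P_xh = (ensemble - ensemble.mean(axis=1, keepdims=True)) @ (Hx - Hx.mean()) / (n_ens - 1)
K = P_xh / P_hh                                      # Kalman gain, one value per layer
perturbed = obs + np.sqrt(obs_var) * rng.standard_normal(n_ens)
ensemble += np.outer(K, perturbed - Hx)

print("updated mean profile:", ensemble.mean(axis=1).round(3))
```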


    Djeison Cesar Batista


    Thermal rectification of wood was developed in the 1940s and has been widely studied and applied in Europe. In Brazil, research on this technique is still scarce, but it has gained attention recently. The aim of this study was to evaluate the influence of the time and temperature of rectification on the reduction of maximum swelling of Eucalyptus grandis wood. According to the results obtained, it is possible to achieve reductions of about 50% in the maximum volumetric swelling of Eucalyptus grandis wood. The best results were obtained at a thermal rectification temperature of 230°C rather than 200°C. The factor temperature was more significant than time, since there was no significant difference between the times used (1, 2 and 3 hours). There was no significant interaction between the factors time and temperature.

  10. A basin-scale approach to estimating stream temperatures of tributaries to the lower Klamath River, California

    Flint, L.E.; Flint, A.L.


    Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6 °C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
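    A minimal version of such a regression (with synthetic data and illustrative coefficients, not those of the study): daily maximum stream temperature is regressed on seasonal harmonics plus daily maximum air temperature.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily records standing in for a tributary's measurements.
doy = np.arange(1, 366)                                   # day of year
ta_max = 15 + 10 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 2, doy.size)
tw_max = 4 + 0.6 * ta_max + 3 * np.sin(2 * np.pi * (doy - 120) / 365) + rng.normal(0, 0.8, doy.size)

# Design matrix: intercept, seasonal harmonics, and maximum air temperature.
X = np.column_stack([
    np.ones_like(doy, dtype=float),
    np.sin(2 * np.pi * doy / 365),
    np.cos(2 * np.pi * doy / 365),
    ta_max,
])
coef, *_ = np.linalg.lstsq(X, tw_max, rcond=None)
pred = X @ coef
print("RMSE (degC):", np.sqrt(np.mean((pred - tw_max) ** 2)).round(2))
```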

  11. Maximum power point search method for photovoltaic panels which uses a light sensor in the conditions of real shading and temperature

    Mroczka, Janusz; Ostrowski, Mariusz


    Disadvantages of photovoltaic panels are their low efficiency and non-linear current-voltage characteristic. Their output depends on sun exposure and temperature, so it is necessary to apply maximum power point tracking systems. Trackers used in photovoltaic systems differ from each other in the speed and accuracy of tracking. Typically, in order to determine the maximum power point, trackers use measurements of current and voltage. The perturb and observe algorithm and the incremental conductance method are frequent in the literature. The drawback of these solutions is the need to search the entire current-voltage curve, resulting in a significant loss of power under fast-changing lighting conditions. Modern solutions use an additional measurement of temperature, short-circuit current or open-circuit voltage in order to determine the starting point of one of the above methods, which decreases the tracking time. For this paper, a sequence of simulations and tests under real shading and temperature conditions was performed for the investigated method, which uses an additional light sensor to increase the speed of the perturb and observe algorithm under fast-changing illumination conditions. Because of the non-linearity of the light sensor and the photovoltaic panel, and the influence of temperature on the sensor and panel characteristics, we cannot directly determine the relationship between them. For this reason, the tested method is divided into two steps. In the first step, the algorithm uses the correlation curve between the light sensor and the current at the maximum power point to determine the starting current, from which the perturb and observe algorithm is run. When the maximum power point is reached, in the second step, the difference between the starting point and the actual maximum power point is calculated and, on this basis, the coefficients of the correlation curve are modified.
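    A schematic of the two-step idea under stated assumptions: a calibration curve (hypothetical here) maps the light-sensor reading to a starting current near the maximum power point, after which a standard perturb and observe loop refines the operating point; the panel model is a simple placeholder.

```python
# Placeholder PV model: power as a function of operating current for a given
# irradiance (a smooth curve with a single maximum, purely illustrative).
def pv_power(current, irradiance):
    i_mpp = 0.008 * irradiance        # "true" MPP current (placeholder)
    p_max = 0.3 * irradiance          # "true" MPP power (placeholder)
    return max(0.0, p_max - 50.0 * (current - i_mpp) ** 2)

# Step 1: hypothetical calibration curve relating the light-sensor reading to
# the current at the maximum power point (deliberately slightly off).
def start_current_from_sensor(sensor_reading):
    return 0.0075 * sensor_reading

# Step 2: perturb and observe, started from the sensor-derived current.
def track(irradiance, sensor_reading, step=0.02, iters=40):
    i = start_current_from_sensor(sensor_reading)
    p = pv_power(i, irradiance)
    direction = 1.0
    for _ in range(iters):
        i_new = i + direction * step
        p_new = pv_power(i_new, irradiance)
        if p_new < p:                  # power dropped: reverse the perturbation
            direction = -direction
        i, p = i_new, p_new
    return i, p

print(track(irradiance=800.0, sensor_reading=800.0))   # ends oscillating near the MPP
```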

  12. A first-principles approach to finite temperature elastic constants

    Wang, Y; Wang, J J; Zhang, H; Manga, V R; Shang, S L; Chen, L-Q; Liu, Z-K [Department of Materials Science and Engineering, Pennsylvania State University, University Park, PA 16802 (United States)


    A first-principles approach to calculating the elastic stiffness coefficients at finite temperatures was proposed. It is based on the assumption that the temperature dependence of elastic stiffness coefficients mainly results from volume change as a function of temperature; it combines the first-principles calculations of elastic constants at 0 K and the first-principles phonon theory of thermal expansion. Its applications to elastic constants of Al, Cu, Ni, Mo, Ta, NiAl, and Ni3Al from 0 K up to their respective melting points show excellent agreement between the predicted values and existing experimental measurements.
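    The central assumption can be captured in a few lines: if an elastic constant such as C11 has been computed at 0 K for several volumes, and V(T) is known from the quasiharmonic phonon calculation, then C11(T) is approximated by evaluating the 0 K curve at V(T). All numbers below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical 0 K elastic constant C11 (GPa) computed at several fixed volumes (A^3/atom).
volumes_0K = np.array([15.8, 16.0, 16.2, 16.4, 16.6])
c11_0K     = np.array([118.0, 112.0, 106.5, 101.5, 96.8])

# Hypothetical V(T) from the quasiharmonic (phonon) thermal expansion.
temps  = np.array([0.0, 300.0, 600.0, 900.0])
v_of_T = np.array([16.0, 16.12, 16.27, 16.45])

# C11(T) ~ C11_0K(V(T)): evaluate the 0 K curve at the thermal volume.
c11_of_T = np.interp(v_of_T, volumes_0K, c11_0K)
for T, c in zip(temps, c11_of_T):
    print(f"T = {T:6.0f} K   C11 ~ {c:6.1f} GPa")
```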

  13. Comparison of binning approaches for pulsed photothermal temperature profiling

    Milanič, Matija; Majaron, Boris


    In experiments and numerical simulations of pulsed photothermal temperature profiling, we compare three signal binning approaches. In uniform binning, n subsequent signal data points are averaged; quadratic binning follows from the characteristics of thermal diffusion; and geometric binning uses a geometric progression. Our experiment was performed on collagen gel samples with absorbing layers located at various subsurface depths. From the measured PPTR signals, laser-induced temperature profiles were reconstructed using a spectrally composite kernel. The simulated PPTR signals, computed for temperature profiles resembling the experimental ones, contain noise with characteristics consistent with our experimental system. In addition, we simulated the PPTR signal of a biopsy-defined port-wine stain skin geometry. In PPTR temperature profiling of collagen gel samples, quadratic binning results in optimal reconstructions for shallow absorbing structures, while uniform binning performs optimally for deeper absorbing structures. Overall, geometric binning yields the least accurate reconstructions, especially for deeper absorbing layers.
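    The three binning schemes can be written down directly. The sketch below (with a synthetic decay signal standing in for a PPTR signal) averages the signal within uniform bins, bins whose edges grow quadratically, and bins following a geometric progression.

```python
import numpy as np

def bin_signal(signal, edges):
    """Average `signal` within the index intervals defined by `edges`."""
    return np.array([signal[a:b].mean() for a, b in zip(edges[:-1], edges[1:]) if b > a])

n = 1000
t = np.arange(n)
signal = np.exp(-t / 300.0) + 0.01 * np.random.default_rng(3).standard_normal(n)

# Uniform binning: a fixed number of consecutive samples per bin.
uniform_edges = np.arange(0, n + 1, 10)

# Quadratic binning: bin edges proportional to the square of the bin index.
quadratic_edges = np.unique(np.round(np.linspace(0, np.sqrt(n), 33) ** 2).astype(int))

# Geometric binning: bin edges following a geometric progression.
geometric_edges = np.concatenate([[0], np.unique(np.round(np.geomspace(1, n, 33)).astype(int))])

for name, edges in [("uniform", uniform_edges),
                    ("quadratic", quadratic_edges),
                    ("geometric", geometric_edges)]:
    binned = bin_signal(signal, edges)
    print(f"{name:9s}: {len(binned)} bins")
```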

  14. A New Approach to Identify Optimal Properties of Shunting Elements for Maximum Damping of Structural Vibration Using Piezoelectric Patches

    Park, Junhong; Palumbo, Daniel L.


    The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics that are designed to dissipate vibration energy through a resistive element. In past efforts most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to invariant points when based on den Hartog's invariant points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties, including effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.

  15. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    M. F. Müller


    We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modelling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.

  16. Comparison of an assumption-free Bayesian approach with Optimal Sampling Schedule to a maximum a posteriori Approach for Personalizing Cyclophosphamide Dosing.

    Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E


    Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on five-sample serum PK monitoring in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesize that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements while improving the accuracy of results. We retrospectively analyzed previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, individual PK parameters were estimated by Bayesian estimation based on the MAP approach to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws, at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA were significantly different from those of the MAP approach and averaged a 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows accurate individualized dosing of CY with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.

  17. The Role of Concurrent Chemical and Physical Processes in Determining the Maximum Use Temperature of Thermosetting Polymers for Aerospace Applications


    All samples were melted, blended, and degassed for 30 min prior to cure in silicone molds under N2, with cure schedules as indicated for the "BADCy" and "LECy" resins. The DSC technique is limited by the difficulty of separating signals due to cure and degradation at very high temperatures. The resins are readily catalyzed to cure at reasonable temperatures, providing a wide and tunable processing window.

  18. Technology and education: First approach for measuring temperature with Arduino

    Carrillo, Alejandro


    This poster session presents some ideas and approaches for understanding the concepts of thermal equilibrium, temperature and heat, in order to build a harmonious and responsible man-nature relationship, emphasizing the interaction between science and technology without neglecting the relationship of the environment and society, as an approach to sustainability. The development of practices is proposed that involve the use of modern technology, of easy access and low cost, to measure temperature. We believe that the Arduino microcontroller and some temperature sensors can open the doors of innovation to carry out such practices. In this work we present some results of simple practices presented to a population of students between 16 and 17 years of age. The practices in this proposal are: the zeroth law of thermodynamics and the concept of temperature, calibration of thermometers, and measurement of temperature during heating and cooling of three different substances under the same physical conditions. Finally, the students are asked to devise an application that involves measuring temperature and other physical parameters. Some suggestions are: determining the temperature at which we take some food, measuring the temperature difference between different rooms of a house, housing constructions that favour optimal conditions, measuring the temperature of different regions, measuring temperature through different colour filters, solar activity and UV, and proposing applications to understand current problems such as global warming. It is concluded that the Arduino practices and electrical sensors broaden the cultural horizon of the students while awakening their interest in understanding their operation, basic physics and its application from a modern perspective.
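    One way to bring such measurements into a classroom analysis, assuming the Arduino streams raw 10-bit ADC readings from an LM35-type sensor over USB: a short host-side script (hypothetical port name and message format, using the pyserial package) converts the readings to degrees Celsius.

```python
import serial  # pyserial

# Hypothetical setup: the Arduino sketch prints one raw ADC value (0-1023) per line.
PORT = "/dev/ttyACM0"      # adjust to the actual port (e.g. "COM3" on Windows)
ser = serial.Serial(PORT, 9600, timeout=2)

def adc_to_celsius(adc_value, vref=5.0):
    """LM35 outputs 10 mV per degC; the Arduino ADC maps 0..vref V to 0..1023."""
    volts = adc_value * vref / 1023.0
    return volts * 100.0

try:
    for _ in range(20):                 # read 20 samples
        line = ser.readline().decode(errors="ignore").strip()
        if line.isdigit():
            print(f"{adc_to_celsius(int(line)):.1f} degC")
finally:
    ser.close()
```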

  19. Temperature of critical clusters in nucleation theory: Generalized Gibbs' approach

    Schmelzer, Jürn W. P.; Boltachev, Grey Sh.; Abyzov, Alexander S.


    According to the classical Gibbs' approach to the description of thermodynamically heterogeneous systems, the temperature of the critical clusters in nucleation is the same as the temperature of the ambient phase, i.e., with respect to temperature the conventional macroscopic equilibrium conditions are assumed to be fulfilled. In contrast, the generalized Gibbs' approach [J. W. P. Schmelzer, G. Sh. Boltachev, and V. G. Baidakov, J. Chem. Phys. 119, 6166 (2003), 10.1063/1.1602066; J. W. P. Schmelzer, G. Sh. Boltachev, and V. G. Baidakov, J. Chem. Phys. 124, 194503 (2006)], 10.1063/1.2196412 predicts that critical clusters (having commonly spatial dimensions in the nanometer range) have, as a rule, a different temperature as compared with the ambient phase. The existence of a curved interface may lead, consequently, to an equilibrium coexistence of different phases with different temperatures similar to differences in pressure as expressed by the well-known Laplace equation. Employing the generalized Gibbs' approach, it is demonstrated that, for the case of formation of droplets in a one-component vapor, the temperature of the critical droplets can be shown to be higher as compared to the vapor. In this way, temperature differences between critically sized droplets and ambient vapor phase, observed in recent molecular dynamics simulations of argon condensation by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)], 10.1063/1.2752154, can be given a straightforward theoretical interpretation. It is shown as well that - employing the same model assumptions concerning bulk and interfacial properties of the system under consideration - the temperature of critical bubbles in boiling is lower as compared to the bulk liquid.

  20. Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach.

    Várnai, Csilla; Burkoff, Nikolas S; Wild, David L


    Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of some experimentally observed data with respect to the model parameters, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at
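    A toy illustration of the contrastive divergence idea on a one-parameter energy model (not the coarse-grained protein force field itself): the maximum likelihood gradient is the difference between data and model averages of dE/dtheta, and CD approximates the model average with a few MCMC steps started from the data.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Observed" data, standing in for native protein conformations.
data = rng.normal(0.0, 1.5, size=500)        # true variance ~2.25 -> theta* ~ 0.22

def dE_dtheta(x):                            # energy model E_theta(x) = theta * x**2
    return x ** 2

def cd_model_samples(x0, theta, k=5, step=0.5):
    """k Metropolis steps per chain, each chain started from a data point (CD-k)."""
    x = x0.copy()
    for _ in range(k):
        prop = x + step * rng.standard_normal(x.shape)
        accept = rng.random(x.shape) < np.exp(theta * (x ** 2 - prop ** 2))
        x = np.where(accept, prop, x)
    return x

theta, lr = 1.0, 0.02
for _ in range(300):
    model = cd_model_samples(data, theta)
    # Approximate log-likelihood gradient: <dE/dtheta>_model - <dE/dtheta>_data
    theta += lr * (dE_dtheta(model).mean() - dE_dtheta(data).mean())

print(f"estimated theta = {theta:.3f}  (ML target ~ {1 / (2 * data.var()):.3f})")
```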

  1. Staying cool in a changing landscape: the influence of maximum daily ambient temperature on grizzly bear habitat selection.

    Pigeon, Karine E; Cardinal, Etienne; Stenhouse, Gordon B; Côté, Steeve D


    To fulfill their needs, animals are constantly making trade-offs among limiting factors. Although there is growing evidence about the impact of ambient temperature on habitat selection in mammals, the role of environmental conditions and thermoregulation on apex predators is poorly understood. Our objective was to investigate the influence of ambient temperature on habitat selection patterns of grizzly bears in the managed landscape of Alberta, Canada. Grizzly bear habitat selection followed a daily and seasonal pattern that was influenced by ambient temperature, with adult males showing stronger responses than females to warm temperatures. Cutblocks aged 0-20 years provided an abundance of forage but were on average 6 °C warmer than mature conifer stands and 21- to 40-year-old cutblocks. When ambient temperatures increased, the relative change (odds ratio) in the probability of selection for 0- to 20-year-old cutblocks decreased during the hottest part of the day and increased during cooler periods, especially for males. Concurrently, the probability of selection for 21- to 40-year-old cutblocks increased on warmer days. Following plant phenology, the odds of selecting 0- to 20-year-old cutblocks also increased from early to late summer while the odds of selecting 21- to 40-year-old cutblocks decreased. Our results demonstrate that ambient temperatures, and therefore thermal requirements, play a significant role in habitat selection patterns and behaviour of grizzly bears. In a changing climate, large mammals may increasingly need to adjust spatial and temporal selection patterns in response to thermal constraints.

  2. A spatio-temporal statistical model of maximum daily river temperatures to inform the management of Scotland's Atlantic salmon rivers under climate change.

    Jackson, Faye L; Fryer, Robert J; Hannah, David M; Millar, Colin P; Malcolm, Iain A


    The thermal suitability of riverine habitats for cold-water adapted species may be reduced under climate change. Riparian tree planting is a practical climate change mitigation measure, but it is often unclear where to focus effort for maximum benefit. Recent developments in data collection, monitoring and statistical methods have facilitated the development of increasingly sophisticated river temperature models capable of predicting spatial variability at large scales appropriate to management. In parallel, improvements in temporal river temperature models have increased the accuracy of temperature predictions at individual sites. This study developed a novel large-scale spatio-temporal model of maximum daily river temperature (Twmax) for Scotland that predicts variability in both river temperature and climate sensitivity. Twmax was modelled as a linear function of maximum daily air temperature (Tamax), with the slope and intercept allowed to vary as a smooth function of day of the year (DoY) and further modified by landscape covariates including elevation, channel orientation and riparian woodland. Spatial correlation in Twmax was modelled at two scales: (1) the river network and (2) the region. Temporal correlation was addressed through an autoregressive (AR1) error structure for observations within sites. Additional site-level variability was modelled with random effects. The resulting model was used to map (1) spatial variability in predicted Twmax under current (but extreme) climate conditions, (2) the sensitivity of rivers to climate variability and (3) the effects of riparian tree planting. These visualisations provide innovative tools for informing fisheries and land-use management under current and future climate. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
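    The fixed-effects structure can be illustrated with a small varying-coefficient fit (synthetic data, simple harmonic smooths instead of the study's smoothers, and no spatial or AR1 error structure): both the intercept and the Tamax slope are allowed to change with day of year.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily data for one site.
doy = np.tile(np.arange(1, 366), 3).astype(float)
ta = 8 + 9 * np.sin(2 * np.pi * (doy - 110) / 365) + rng.normal(0, 2, doy.size)
slope_true = 0.5 + 0.2 * np.cos(2 * np.pi * doy / 365)
tw = 3 + 2 * np.sin(2 * np.pi * doy / 365) + slope_true * ta + rng.normal(0, 0.5, doy.size)

# Harmonic basis in day of year, interacted with Tamax so that both the
# intercept and the slope vary smoothly through the year.
s, c = np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365)
X = np.column_stack([np.ones_like(doy), s, c, ta, ta * s, ta * c])
beta, *_ = np.linalg.lstsq(X, tw, rcond=None)

pred = X @ beta
print("R^2:", 1 - np.var(tw - pred) / np.var(tw))
```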

  3. Assessment of Maximum Possible Urbanization Influences on Land Temperature Data by Comparison of Land and Marine Data around Coasts

    Philip D. Jones


    Global surface temperature trends, based on land and marine data, show warming of about 0.8 °C over the last 100 years. This rate of warming is sometimes questioned because of the existence of Urban Heat Islands (UHIs). In this study we compare the rate of temperature change estimated from measurements of land and marine temperatures for the same grid squares using 5° by 5° latitude/longitude grid-box datasets. For 1951–2009 the 'land' average warmed by 0.02 °C decade−1 relative to the 'sea surface temperature' (SST) average. There were regional contrasts in the trends of land/sea temperature differences: the land warmed at a greater rate compared to the SST for regions north of 20°S, but the opposite occurred further south. Given strong forcing of the climate system, we would expect the land to change more rapidly than the ocean, so the differences represent an upper limit to the urbanization effect.

  4. Temperature Control in Spark Plasma Sintering: An FEM Approach

    G. Molénat


    Powder consolidation assisted by pulsed current and uniaxial pressure, namely Spark Plasma Sintering (SPS), is increasingly popular. One limitation, however, lies in the difficulty of controlling the sample temperature during compaction. The aim of this work is to present a computational method for the assembly temperature based on the finite element method (FEM). Computed temperatures have been compared with experimental data for three different dies filled with three materials with different electrical conductivities (TiAl, SiC, Al2O3). The results obtained are encouraging: the difference between computed and experimental values is less than 5%. This suggests that the FEM approach can serve as a predictive tool for selecting the right control temperatures in the SPS machine.

  5. Maximum voltage gradient technique for optimization of ablation for typical atrial flutter with zero-fluoroscopy approach.

    Deutsch, Karol; Śledź, Janusz; Mazij, Mariusz; Ludwik, Bartosz; Labus, Michał; Karbarz, Dariusz; Pasicka, Bernadetta; Chrabąszcz, Michał; Śledź, Arkadiusz; Klank-Szafran, Monika; Vitali-Sendoz, Laura; Kameczura, Tomasz; Śpikowski, Jerzy; Stec, Piotr; Ujda, Marek; Stec, Sebastian


    Radiofrequency catheter ablation (RFCA) is an established effective method for the treatment of typical cavo-tricuspid isthmus (CTI)-dependent atrial flutter (AFL). The introduction of 3-dimensional electro-anatomic systems enables RFCA without fluoroscopy (No-X-Ray [NXR]). The aim of this study was to evaluate the feasibility and effectiveness of CTI RFCA during implementation of the NXR approach and the maximum voltage-guided (MVG) technique for ablation of AFL. Data were obtained from a prospective standardized multicenter ablation registry. Consecutive patients with a first RFCA for CTI-dependent AFL were recruited. Two navigation approaches (NXR and fluoroscopy-based as low as reasonably achievable [ALARA]) and 2 mapping and ablation techniques (MVG and the pull-back technique [PBT]) were assessed. NXR + MVG (n = 164; age: 63.7 ± 9.5; 30% women), NXR + PBT (n = 55; age: 63.9 ± 10.7; 39% women), ALARA + MVG (n = 36; age: 64.2 ± 9.6; 39% women), and ALARA + PBT (n = 205; age: 64.7 ± 9.1; 30% women) were compared, respectively. All groups used a simplified 2-catheter femoral approach with 8-mm gold-tip catheters (Osypka AG, Germany, or Biotronik, Germany) and 15 min of observation. The MVG technique was performed using step-by-step application by mapping the largest atrial signals within the CTI. Bidirectional block in the CTI was achieved in 99% of all patients (P = NS between groups). In the NXR + MVG and NXR + PBT groups, the procedure time decreased (45.4 ± 17.6 and 47.2 ± 15.7 min vs. 52.6 ± 23.7 and 59.8 ± 24.0 min, P < .01) as compared to the ALARA + MVG and ALARA + PBT subgroups. In the NXR + MVG and NXR + PBT groups, 91% and 98% of the procedures were performed with complete elimination of fluoroscopy. The NXR approach was associated with a significant reduction in fluoroscopy exposure (from 0.2 ± 1.1 [NXR + PBT] and 0.3 ± 1.6 [NXR + MVG] to 7.7 ± 6.0 min [ALARA + MVG] and 9

  6. Surface temperature evolution and the location of maximum and average surface temperature of a lithium-ion pouch cell under variable load profiles

    Goutam, Shovon; Timmermans, Jean-Marc; Omar, Noshin;


    The cathode of the studied cell is nickel, manganese and cobalt (NMC) based and the anode is graphite based. In order to measure the surface temperature, a thermal infrared (IR) camera and contact thermocouples were used. A fairly uniform temperature distribution was observed over the cell surface in the case of continuous charge and discharge up to 100A...

  7. A Reconstruction of Temperature and δ18O Data Since the Last Glacial Maximum Using Soil and Gastropods from the Chinese Loess Plateau

    Mitsunaga, B.; Mering, J. A.; Eagle, R.; Bricker, H. L.; Davila, N.; Trewman, S.; Burford, S.; Li, G.; Tripati, A. K.


    The climate of the Chinese Loess Plateau is affected by the East Asian Monsoon, an important water source for over a billion people. We are examining how temperature and hydrology on the Loess Plateau have changed since the Last Glacial Maximum (18,000 - 23,000 years before the present) in response to insolation, deglaciation, and rising levels of greenhouse gases. Specifically, we are reconstructing temperature and meteoric δ18O through paired clumped and oxygen isotope analyses performed on carbonate minerals. Clumped isotope thermometry, the use of 13C–18O bond frequency in carbonates, is a novel geochemical proxy that provides constraints on mineral formation temperatures and can be combined with carbonate δ18O to quantify meteoric δ18O. We have measured a suite of nodular loess concretions and gastropod shells from the modern as well as the Last Glacial Maximum from 15 sites across the Chinese Loess Plateau. These observations constrain spatial variations in temperature and precipitation, which in turn will provide key constraints on models that simulate changes in regional climates and monsoon intensity over the last 20,000 years.

  8. New climatic targets against global warming: will the maximum 2 °C temperature rise affect estuarine benthic communities?

    Crespo, Daniel; Grilo, Tiago Fernandes; Baptista, Joana; Coelho, João Pedro; Lillebø, Ana Isabel; Cássio, Fernanda; Fernandes, Isabel; Pascoal, Cláudia; Pardal, Miguel Ângelo; Dolbeth, Marina


    The Paris Agreement signed by 195 countries in 2015 sets out a global action plan to avoid dangerous climate change by limiting global warming to remain below 2 °C. Under that premise, in situ experiments were run to test the effects of a 2 °C temperature increase on the benthic communities in a seagrass bed and adjacent bare sediment in a temperate European estuary. Temperature was artificially increased in situ, and diversity and ecosystem functioning components were measured after 10 and 30 days. Despite some warming effects on the analysed components, no significant impacts were found on macro- and microfauna structure, bioturbation or the fluxes of nutrients. The effect of site/habitat seemed more important than the effect of warming, with the seagrass habitat providing more homogenous results and being less impacted by warming than the adjacent bare sediment. The results reinforce that most ecological responses to global changes are context dependent and that ecosystem stability depends not only on biological diversity but also on the availability of different habitats and niches, highlighting the role of coastal wetlands. In the context of the Paris Agreement, it seems that estuarine benthic ecosystems will be able to cope if global warming remains below 2 °C.

  9. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the regional climate model COSMO-CLM over Africa

    Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)


    The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As the evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2 °C across arid areas, yet overestimated by around 2 °C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (below 70% captured). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across
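    The "fraction of the observed frequency distribution captured" can be computed as a distribution overlap (Perkins-type) skill score; the sketch below uses synthetic observed and simulated Tmax samples as placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)

# Placeholder daily Tmax samples (degC): "observed" vs. "simulated".
obs = rng.normal(30.0, 4.0, 3 * 365)
sim = rng.normal(31.5, 5.0, 3 * 365)    # warm- and variance-biased model

# Common bins, normalized frequencies, and the overlap (skill) score:
# S = sum over bins of min(f_sim, f_obs); S = 1 means identical distributions.
bins = np.arange(10.0, 55.0, 1.0)
f_obs, _ = np.histogram(obs, bins=bins)
f_sim, _ = np.histogram(sim, bins=bins)
f_obs = f_obs / f_obs.sum()
f_sim = f_sim / f_sim.sum()
score = np.minimum(f_obs, f_sim).sum()
print(f"fraction of observed distribution captured: {score:.2f}")
```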

  10. An approach for IC engine coolant energy recovery based on low-temperature organic Rankine cycle

    付建勤; 刘敬平; 徐政欣; 邓帮林; 刘琦


    To improve the fuel utilization efficiency of IC engines, an approach was proposed for IC engine coolant energy recovery based on a low-temperature organic Rankine cycle (ORC). The ORC system uses IC engine coolant as its heat source and is coupled to the IC engine cooling system. After various organic working media were compared, R124 was selected as the ORC working medium. According to the IC engine operating conditions and coolant energy characteristics, the major parameters of the ORC system were preliminarily designed. Then, the effects of various parameters on cycle performance and the recovery potential of coolant energy were analyzed via cycle process calculation. The results indicate that cycle efficiency is mainly influenced by the working pressure of the ORC, while the maximum working pressure is limited by the IC engine coolant temperature. At the same working pressure, cycle efficiency is hardly affected by either the mass flow rate or the temperature of the working medium. When the bottom-cycle working pressure reaches the maximum allowable value of 1.6 MPa, the fuel utilization efficiency of the IC engine can be improved by 12.1%. All this demonstrates that this low-temperature ORC is a useful energy-saving technology for IC engines.
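    A back-of-the-envelope cycle calculation in the spirit of the description, assuming the CoolProp library provides R124 properties; the state points and component efficiencies are illustrative, not the design values of the study.

```python
from CoolProp.CoolProp import PropsSI

fluid = "R124"
p_high = 1.6e6       # Pa, maximum allowable working pressure (illustrative)
p_low = 0.4e6        # Pa, condensing pressure (illustrative)
eta_turb, eta_pump = 0.75, 0.65

# State 1: saturated vapor leaving the evaporator (heated by engine coolant).
h1 = PropsSI("H", "P", p_high, "Q", 1, fluid)
s1 = PropsSI("S", "P", p_high, "Q", 1, fluid)

# State 2: expander outlet (isentropic enthalpy corrected by turbine efficiency).
h2s = PropsSI("H", "P", p_low, "S", s1, fluid)
h2 = h1 - eta_turb * (h1 - h2s)

# State 3: saturated liquid leaving the condenser.
h3 = PropsSI("H", "P", p_low, "Q", 0, fluid)
s3 = PropsSI("S", "P", p_low, "Q", 0, fluid)

# State 4: pump outlet.
h4s = PropsSI("H", "P", p_high, "S", s3, fluid)
h4 = h3 + (h4s - h3) / eta_pump

w_net = (h1 - h2) - (h4 - h3)
q_in = h1 - h4
print(f"cycle thermal efficiency ~ {w_net / q_in:.1%}")
```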

  11. Temperature Distribution in Solar Cells Calculated in Three Dimensional Approach

    Hamdy K. Elminir


    Field-testing is costly, time-consuming and depends heavily on prevailing weather conditions. Adequate security and weather protection must also be provided at the test site, and delays can be caused by bad weather and system failures. To overcome these problems, a Photovoltaic (PV) array simulation may be used. For system design purposes, the model must reflect the details of the physical processes occurring in the cell, to give a closer insight into device operation as well as the optimization of particular device parameters. PV cell temperature ratings have a great effect on overall cell performance. Hence the need arises for an exact technique to calculate the temperature distribution of a PV cell accurately and efficiently, from which safe and proper operation at maximum ratings can be ensured. The scope of this work is to describe the development of 3D thermal models, which are used to update the operating temperature, to give a closer insight into the response behavior and to estimate the overall performance.

  12. Intensification of the meridional temperature gradient in the Great Barrier Reef following the Last Glacial Maximum - Results from IODP Expedition 325

    Felis, Thomas; McGregor, Helen V.; Linsley, Braddock K.; Tudhope, Alexander W.; Gagan, Michael K.; Suzuki, Atsushi; Inoue, Mayuri; Thomas, Alexander L.; Esat, Tezer M.; Thompson, William G.; Tiwari, Manish; Potts, Donald C.; Mudelsee, Manfred; Yokoyama, Yusuke; Webster, Jody M.


    Tropical south-western Pacific temperatures are of vital importance to the Great Barrier Reef (GBR), but the role of sea surface temperatures (SSTs) in the growth of the GBR since the Last Glacial Maximum remains largely unknown. Here we present records of Sr/Ca and δ18O for Last Glacial Maximum and deglacial corals that were drilled by Integrated Ocean Drilling Program (IODP) Expedition 325 along the shelf edge seaward of the modern GBR. The Sr/Ca and δ18O records of the precisely U-Th dated fossil shallow-water corals show a considerably steeper meridional SST gradient than the present day in the central GBR. We find a 1-2 °C larger temperature decrease between 17°S and 20°S about 20,000 to 13,000 years ago. The result is best explained by the northward expansion of cooler subtropical waters due to a weakening of the South Pacific gyre and East Australian Current. Our findings indicate that the GBR experienced substantial and regionally differing temperature change during the last deglaciation, much larger temperature changes than previously recognized. Furthermore, our findings suggest a northward contraction of the Western Pacific Warm Pool during the LGM and last deglaciation, and serve to explain anomalous drying of northeastern Australia at that time. Overall, the GBR developed through significant SST change and, considering temperature alone, may be more resilient than previously thought. Webster, J. M., Yokoyama, Y. & Cotteril, C. & the Expedition 325 Scientists. Proceedings of the Integrated Ocean Drilling Program Vol. 325 (Integrated Ocean Drilling Program Management International Inc., 2011). Felis, T., McGregor, H. V., Linsley, B. K., Tudhope, A. W., Gagan, M. K., Suzuki, A., Inoue, M., Thomas, A. L., Esat, T. M., Thompson, W. G., Tiwari, M., Potts, D. C., Mudelsee, M., Yokoyama, Y., Webster, J. M. Intensification of the meridional temperature gradient in the Great Barrier Reef following the Last Glacial Maximum. Nature Communications 5, 4102

  13. Response of Terrestrial Vegetation to Variations in Temperature and Aridity Since the Last Glacial Maximum in Lake Chalco, Mexico

    Werne, J. P.; Halbur, J.; Rubesch, M.; Brown, E. T.; Ortega, B.; Caballero, M.; Correa-Metrio, A.; Lozano, S.


    The water balance of the Southwestern United States and most of Mexico is dependent on regional climate systems, including the Mexican (or North American) Monsoon. The Mexican Monsoon leads to significant summer rainfall across a broad swath of the continent, which constitutes the major source of annual precipitation over much of this region. The position of the ITCZ and the strength of the accompanying monsoon are affected by variability in insolation. Stronger northern hemisphere summer insolation shifts the ITCZ northward, bringing about a more intense monsoon. Here we discuss a new geochemical climate record from Lake Chalco, Mexico, which couples inorganic (X-ray fluorescence) and organic (biomarkers and stable isotopes) geochemical proxies to reconstruct temperature and aridity over the past 45,000 years, as well as the response of terrestrial vegetation to such climate changes. The Basin of Mexico is a high altitude closed lacustrine basin (20°N, 99°W; 2240 m.a.s.l.) in the Trans Mexican Volcanic Belt. The plain of Lake Chalco, located near Mexico City in the southern sub-basin, has an area of 120 km2 and a catchment of 1100 km2. Though the present-day lake has been reduced to a small marsh due to historic diversion of its waters, over longer timescales the lake has been a sensitive recorder of hydroclimatic variations. Low Ca concentrations indicate more arid periods during the late glacial (34 - 15 kybp) compared to the last interstadial or early Holocene. This observation is supported by the ratio of terrestrial to aquatic lipid biomarkers (long vs. short chain n-alkanes), which indicate greater relative inputs of aquatic biomarkers during wetter periods. The changes in aridity as shown in these geochemical proxies are compared with temperature as reflected in glycerol dialkyl glycerol tetraether (GDGT) based paleotemperature proxies to assess the extent to which insolation may have driven aridity variations, and with terrestrial and aquatic biomarker

  14. Trends in Alaska temperature data. Towards a more realistic approach

    Lopez-de-Lacalle, Javier [University of the Basque Country, Department of Applied Economics III (Econometrics and Statistics), Bilbao (Spain)


    Time series of seasonal temperatures recorded in Alaska during the past eighty years are analyzed. A common practice to measure changes in the long-term pattern of temperature series is to fit a deterministic linear trend. A deterministic trend is not a realistic approach and poses some pitfalls from the statistical point of view. A statistical model to fit a latent time-varying level independent of the Pacific climate shift is proposed. The empirical distribution of temperature conditional on the phase of the Pacific Decadal Oscillation is obtained. The results reveal that the switch between the negative and the positive phase leads to differences in temperatures up to 4 °C in a given location and season. Differences across seasons and locations are detected. The effect of the Pacific climate shift is stronger in winter. An overall increase of temperatures is observed in the long term. The estimated trends are not constant but exhibit different patterns that vary in sign and strength over the sample period. (orig.)
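
    The latent time-varying level described here is the kind of quantity commonly estimated with a structural (state-space) time-series model. Below is a minimal sketch using the local-level specification in the statsmodels library on a synthetic series; the data are invented stand-ins for the Alaska station records, and the conditioning on the Pacific Decadal Oscillation phase is omitted.

    ```python
    # Sketch: estimating a latent time-varying level with a local-level (random-walk)
    # state-space model, in the spirit of replacing a deterministic linear trend.
    # The synthetic data below stand in for the seasonal station records.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    years = np.arange(1930, 2010)
    # synthetic winter-mean temperatures: slowly drifting level plus noise
    level = np.cumsum(rng.normal(0.02, 0.1, years.size)) - 12.0
    temps = level + rng.normal(0.0, 1.5, years.size)

    model = sm.tsa.UnobservedComponents(temps, level="local level")
    result = model.fit(disp=False)
    smoothed_level = result.smoothed_state[0]   # estimated latent level, one value per year
    print(f"estimated level in last year: {smoothed_level[-1]:.2f} degC")
    ```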

  15. Location specific forecasting of maximum and minimum temperatures over India by using the statistical bias corrected output of global forecasting system

    V R Durai; Rashmi Bhardwaj


    The output from Global Forecasting System (GFS) T574L64 operational at India Meteorological Department (IMD), New Delhi is used for obtaining location specific quantitative forecast of maximum and minimum temperatures over India in the medium range time scale. In this study, a statistical bias correction algorithm has been introduced to reduce the systematic bias in the 24–120 hour GFS model location specific forecast of maximum and minimum temperatures for 98 selected synoptic stations, representing different geographical regions of India. The statistical bias correction algorithm used for minimizing the bias of the next forecast is Decaying Weighted Mean (DWM), as it is suitable for small samples. The main objective of this study is to evaluate the skill of Direct Model Output (DMO) and Bias Corrected (BC) GFS for location specific forecast of maximum and minimum temperatures over India. The performance skill of 24–120 hour DMO and BC forecast of GFS model is evaluated for all the 98 synoptic stations during summer (May–August 2012) and winter (November 2012–February 2013) seasons using different statistical evaluation skill measures. The magnitude of Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) for BC GFS forecast is lower than DMO during both summer and winter seasons. The BC GFS forecasts have higher skill score as compared to GFS DMO over most of the stations in all day-1 to day-5 forecasts during both summer and winter seasons. It is concluded from the study that the skill of GFS statistical BC forecast improves over the GFS DMO remarkably and hence can be used as an operational weather forecasting system for location specific forecast over India.
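
    The decaying-weighted-mean idea can be pictured as a running bias estimate that is updated each day from the latest forecast error and subtracted from the next direct model output. The sketch below is a generic implementation of that update; the weight value and the toy numbers are assumptions for illustration, not the exact DWM formulation used operationally at IMD.

    ```python
    # Sketch of a decaying-average bias correction: the running bias estimate is
    # updated each day with a small weight and removed from the next raw (DMO)
    # forecast. The weight w is an assumed value.
    def bias_corrected_forecasts(dmo, obs, w=0.1):
        """dmo, obs: equal-length daily sequences of forecast and observed Tmax/Tmin."""
        bias = 0.0
        corrected = []
        for f, o in zip(dmo, obs):
            corrected.append(f - bias)               # correct today's forecast with the current bias estimate
            bias = (1.0 - w) * bias + w * (f - o)    # decaying weighted mean of forecast errors
        return corrected

    # toy example: the model runs ~1.5 degC warm on average
    dmo = [34.0, 35.2, 33.8, 36.1, 35.0]
    obs = [32.4, 33.8, 32.5, 34.3, 33.6]
    print(bias_corrected_forecasts(dmo, obs))
    ```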

  16. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach

    Hsia, Wei-Shen


    A stochastic control model of the NASA/MSFC Ground Facility for Large Space Structures (LSS) control verification, using the Maximum Entropy (ME) principle adopted in Hyland's method, was presented. Using ORACLS, a computer program was implemented for this purpose. Four models were then tested and the results presented.

  17. Entropy generation minimization: A practical approach for performance evaluation of temperature cascaded co-generation plants

    Myat, Aung


    We present a practical tool that employs the entropy generation minimization (EGM) approach for an in-depth performance evaluation of a co-generation plant with a temperature-cascaded concept. The co-generation plant produces useful effects sequentially, i.e., (i) electricity from the micro-turbines, (ii) low-pressure steam at 250 °C or about 8-10 bars, (iii) a cooling capacity of 4 refrigeration tons (RTons) and (iv) dehumidification of outdoor air for air-conditioned space. The main objective is to identify the most efficient configuration for producing power and heat. We employed entropy generation minimization (EGM), which seeks to minimize the dissipative losses and maximize the cycle efficiency of the individual thermally activated systems. The minimization of dissipative losses, or EGM, is performed in two steps, namely (i) adjusting heat source temperatures for the heat-fired cycles and (ii) using a Genetic Algorithm (GA) to explore the sensitivity of heat transfer areas, flow rates of working fluids, inlet temperatures of heat sources and coolant, etc., over the anticipated range of operation to achieve maximum efficiency. With EGM equipped with GA, we verified that the local minimization of entropy generation at each of the heat-activated processes individually would lead to the maximum efficiency of the system. © 2012.

  18. Intelligent approach to maximum power point tracking control strategy for variable-speed wind turbine generation system

    Lin, Whei-Min; Hong, Chih-Ming [Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424 (China)


    To achieve maximum power point tracking (MPPT) for wind power generation systems, the rotational speed of wind turbines should be adjusted in real time according to wind speed. In this paper, a Wilcoxon radial basis function network (WRBFN) with a hill-climb searching (HCS) MPPT strategy is proposed for a permanent magnet synchronous generator (PMSG) with a variable-speed wind turbine. A high-performance online-trained WRBFN using a back-propagation learning algorithm with a modified particle swarm optimization (MPSO) regulating controller is designed for the PMSG. The MPSO is adopted in this study to adapt the learning rates in the back-propagation process of the WRBFN to improve the learning capability. The MPPT strategy locates the system operating points along the maximum power curves based on the dc-link voltage of the inverter, thus avoiding the need for generator speed detection. (author)
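
    The hill-climb searching component can be pictured as a perturb-and-observe loop on a control variable such as the dc-link voltage reference: perturb, measure power, keep the direction if power rose, reverse it otherwise. The sketch below is generic; the measure_power callback, the step size and the toy power curve are hypothetical placeholders, not the WRBFN/MPSO controller of the paper.

    ```python
    # Sketch of a hill-climb search (perturb-and-observe) MPPT loop: perturb an
    # abstract dc-link voltage reference, keep the perturbation direction if the
    # measured power rose, reverse it otherwise.
    def hill_climb_mppt(measure_power, v_ref=300.0, step=2.0, iterations=50):
        p_prev = measure_power(v_ref)
        direction = 1.0
        for _ in range(iterations):
            v_ref += direction * step
            p_now = measure_power(v_ref)
            if p_now < p_prev:          # power dropped: reverse the search direction
                direction = -direction
            p_prev = p_now
        return v_ref

    # toy power curve with a maximum near 345 V
    demo_curve = lambda v: -(v - 345.0) ** 2 + 5000.0
    print(hill_climb_mppt(demo_curve))
    ```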

  19. Global and Seasonal Extent of the Thermospheric Midnight Temperature Maximum as Seen in O(1D) Nightglow by WINDII and Simulated by C-IAM

    Shepherd, M. G.


    Manifestations of thermospheric dynamics are observed in the variations of upper-atmosphere neutral winds, temperature, density and F-region plasma over a wide time range. These fields are influenced by perturbations propagating vertically from the lower and middle atmosphere (e.g. tides) and from above through variations in solar and geomagnetic activity. The midnight temperature maximum (MTM) is a large-scale neutral temperature anomaly with widespread influence on the low-latitude thermosphere and ionosphere. Variations in the low-latitude nighttime neutral density, termed the midnight density maximum (MDM), have also been observed and modeled. Although there is a large body of work on the characteristics of the MTM (and MDM), a few questions remain to be answered concerning the global-scale distribution of the MTM (and MDM), their spatial extent and longitudinal variations, and their global seasonal occurrence pattern and amplitude. The Wind Imaging Interferometer (WINDII) flown on the Upper Atmosphere Research Satellite (UARS) provides, among other parameters, multiyear observations of O(1D) nightglow volume emission rates (VER), Doppler temperatures, and neutral winds over the altitude range from 150 to 300 km, with continuous latitude coverage from 42°N to 42°S and to 72° in one hemisphere every 36 days. These temporally and spatially correlative data are employed in the study of the global and seasonal extent of the MTM/MDM. The results are compared with simulations by the Canadian Ionosphere and Atmosphere Model (C-IAM). Reasonable agreement is obtained in terms of temporal, solar flux, and solar zenith angle variations.

  20. A coupled force-restore model of surface temperature and soil moisture using the maximum entropy production model of heat fluxes

    Huang, S.-Y.; Wang, J.


    A coupled force-restore model of surface soil temperature and moisture (FRMEP) is formulated by incorporating the maximum entropy production model of surface heat fluxes and including the gravitational drainage term. The FRMEP model, driven by surface net radiation and precipitation, is independent of near-surface atmospheric variables and has reduced sensitivity to the uncertainties of model inputs and parameters compared to the classical force-restore models (FRM). The FRMEP model was evaluated using observations from two field experiments with contrasting soil moisture conditions. The errors of the FRMEP-predicted surface temperature and soil moisture are lower than those of the classical FRMs forced by observed or bulk-formula-based surface heat fluxes (bias 1~2 °C versus ~4 °C, 0.02 m³ m⁻³ versus 0.05 m³ m⁻³). The diurnal variations of surface temperature, soil moisture, and surface heat fluxes are well captured by the FRMEP model, as measured by the high correlations between the model predictions and observations (r ≥ 0.84). Our analysis suggests that the drainage term cannot be neglected under wet soil conditions. A 1 year simulation indicates that the FRMEP model captures the seasonal variation of surface temperature and soil moisture with biases less than 2 °C and 0.01 m³ m⁻³ and correlation coefficients of 0.93 and 0.9 with observations, respectively.

  1. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Thompson, William L.; Lee, Danny C.


    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits per spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
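
    The Ricker-type stock-recruitment relationship referred to above has the form R = a·S·exp(-b·S), with a the recruits-per-spawner at low abundance. The sketch below fits that curve to synthetic spawner/recruit pairs by nonlinear least squares; the data and starting values are placeholders, not the Columbia River index-stock data, and the information-theoretic model-selection machinery of the paper is not reproduced.

    ```python
    # Sketch: fitting a Ricker stock-recruitment model R = a * S * exp(-b * S)
    # by nonlinear least squares. The spawner/recruit numbers are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def ricker(S, a, b):
        return a * S * np.exp(-b * S)

    spawners = np.array([200, 500, 900, 1500, 2500, 4000], dtype=float)
    recruits = np.array([480, 1000, 1450, 1800, 1900, 1500], dtype=float)

    (a_hat, b_hat), _ = curve_fit(ricker, spawners, recruits, p0=(2.0, 1e-4))
    print(f"a (recruits per spawner at low abundance) = {a_hat:.2f}, b = {b_hat:.2e}")
    ```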

  2. Modeling of high-temperature treatment of wood using the reaction engineering approach (REA).

    Putranto, Aditya; Chen, Xiao Dong; Xiao, Zongyuan; Webley, Paul A


    A simple and accurate model of the high-temperature treatment of wood can assist in process design and the evaluation of equipment performance. The high-temperature treatment of wood is essentially a drying process under a linearly increased gas temperature up to a final temperature of 220-230°C, which is challenging to model. This study aims to assess the applicability and accuracy of the reaction engineering approach (REA) for modeling the heat treatment of wood. In order to describe the process using the REA, the maximum activation energy (ΔE(v,b)) is evaluated according to the corresponding external conditions during the heat treatment. Results indicate that the REA coupled with the heat balance describes both the moisture content and temperature profiles during the heat treatment very well. Good agreement with the experimental data is obtained. It has also been shown that the current model is highly comparable in accuracy with the complex models.

  3. Glass precursor approach to high-temperature superconductors

    Bansal, Narottam P.


    The available studies on the synthesis of high-Tc superconductors (HTS) via the glass precursor approach were reviewed. Melts of the Bi-Sr-Ca-Cu-O system as well as those doped with oxides of some other elements (Pb, Al, V, Te, Nb, etc.) could be quenched into glasses which, on further heat treatments under appropriate conditions, crystallized into the superconducting phase(s). The nature of the HTS phase(s) formed depends on the annealing temperature, time, atmosphere, and the cooling rate and also on the glass composition. Long term annealing was needed to obtain a large fraction of the 110 K phase. The high-Tc phase did not crystallize out directly from the glass matrix, but was preceded by the precipitation of other phases. The 110 K HTS was produced at high temperatures by reaction between the phases formed at lower temperatures, resulting in multiphase material. The presence of a glass former such as B2O3 was necessary for the Y-Ba-Cu-O melt to form a glass on fast cooling. A discontinuous YBa2Cu3O7-δ HTS phase crystallized out on heat treatment of this glass. Attempts to prepare the Tl-Ba-Ca-Cu-O system in the glassy state were not successful.




    About 40% of reactors in the world are being operated beyond their design life or are approaching the end of their life cycle. During long-term operation, various degradation mechanisms occur. Fatigue caused by alternating operational stresses, in terms of temperature or pressure changes, is an important damage mechanism in the continued operation of nuclear power plants. To monitor the fatigue damage of components, Fatigue Monitoring Systems (FMS) have been installed. Most FMSs have used the Green's Function Approach (GFA) to calculate the thermal stresses rapidly. However, if temperature-dependent material properties are used in a detailed FEM, there is a maximum peak stress discrepancy between a conventional GFA and a detailed FEM, because constant material properties are used in the conventional method. Therefore, if a conventional method is used in the fatigue evaluation, thermal stresses for various operating cycles may be calculated incorrectly, which may lead to an unreliable estimation. In this paper, a modified GFA which can consider temperature-dependent material properties is therefore proposed, using an artificial neural network and a weight factor. To verify the proposed method, thermal stresses calculated by the new method are compared with those from FEM. Finally, the pros and cons of the new method as well as technical findings from the assessment are discussed.
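
    For context, the conventional Green's-function step that the paper modifies can be sketched as a Duhamel convolution of a precomputed unit-temperature-step stress response with the rate of change of the fluid temperature. Everything below (the assumed Green's function, the transient and the units) is illustrative; it is not the paper's ANN-based modification.

    ```python
    # Sketch of a conventional Green's-function (Duhamel) thermal-stress evaluation:
    # sigma(t) = integral of G(t - tau) * dT/dtau dtau, evaluated as a discrete
    # convolution. G(t) and the fluid temperature transient are assumed shapes.
    import numpy as np

    dt = 1.0                                            # time step, s
    t = np.arange(0.0, 600.0, dt)
    G = 1.5 * np.exp(-t / 120.0)                        # assumed unit-step stress response, MPa/degC
    T_fluid = 20.0 + 260.0 * (1 - np.exp(-t / 60.0))    # assumed thermal transient, degC

    dT = np.gradient(T_fluid, dt)                       # temperature rate of change, degC/s
    sigma = np.convolve(G, dT)[: t.size] * dt           # thermal stress history, MPa
    print(f"peak thermal stress ~ {sigma.max():.0f} MPa")
    ```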

  5. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    He, Yi; Scheraga, Harold A., E-mail: [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)


    Coarse-grained models are useful tools for investigating the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  6. Northern China maximum temperature in the summer of 1743: A historical event of burning summer in a relatively warm climate background

    ZHANG De'er; Demaree Gaston


    In the context of historical climate records of China and early meteorological measurements of Beijing discovered recently in Europe, a study is undertaken of the summer of 1743 in north China, the hottest of the last 700 years, covering Beijing, Tianjin, and the provinces of Hebei, Shanxi and Shandong, with the highest temperature reaching 44.4°C in July 1743 in Beijing, in excess of the maximum climate record of the 20th century. Results show that the related weather/climate features of the 1743 heat wave, e.g., flood/drought distribution and Meiyu activity, and the external forcings, such as solar activity and the equatorial Pacific SST condition, are the same as those of the 1942 and 1999 heat events. It is noted that the 1743 burning-summer event occurred against a relatively warm climate background prior to the Industrial Revolution, with a lower level of CO2 release.

  7. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India.

    Deshpande, Paritosh C; Tilwankar, Atit K; Asolekar, Shyam R


    The 180 ship recycling yards located on the Alang-Sosiya beach in the State of Gujarat on the west coast of India constitute the world's largest cluster engaged in ship dismantling. About 350 ships are dismantled yearly (avg. 10,000 ton steel/ship) with the involvement of about 60,000 workers. Cutting and scrapping of plates or scraping of painted metal surfaces is the most commonly performed operation during ship breaking. The pollutants released from a typical plate-cutting operation can potentially either affect workers directly by contaminating the breathing zone (air pollution) or add to the pollution load of the intertidal zone and contaminate sediments when pollutants emitted in the secondary working zone are subjected to tidal forces. There was a two-pronged purpose behind the mathematical modeling exercise performed in this study: first, to estimate the zone of influence up to which the effect of the plume would extend; second, to estimate the cumulative maximum concentration of heavy metals that can potentially occur in the ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration was predicted by the model to be between 113 μg/Nm³ and 428 μg/Nm³ (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could be placed between 8 and 30 μg/Nm³. These estimates are much higher than the Indian National Ambient Air Quality Standards (NAAQS) for Pb (0.5 μg/Nm³). This research has already provided critical science and technology inputs for the formulation of policies for eco-friendly dismantling of ships and of ideal procedures and corresponding health, safety, and environment provisions. The insights obtained from this research are also being used in developing appropriate technologies for minimizing exposure of workers and minimizing the possibility of heavy metal pollution in the intertidal zone of ship recycling yards in India.

  8. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.


    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.

  9. The Significance of Temperature Based Approach Over the Energy Based Approaches in the Buildings Thermal Assessment

    Albatayneh, Aiman; Alterman, Dariusz; Page, Adrian; Moghtaderi, Behdad


    The design of low energy buildings requires accurate thermal simulation software to assess the heating and cooling loads. Such designs should sustain thermal comfort for occupants and promote less energy usage over the lifetime of any building. One of the house energy rating tools used in Australia is AccuRate, a star-rating tool to assess and compare the thermal performance of various buildings, in which the heating and cooling loads are calculated based on fixed operational temperatures between 20 °C and 25 °C to sustain thermal comfort for the occupants. However, these fixed settings for time and temperature considerably increase the heating and cooling loads. The adaptive thermal comfort model, on the other hand, applies a broader range of weather conditions, interacts with the occupants and promotes low energy solutions to maintain thermal comfort. This can be achieved by natural ventilation (opening windows/doors), suitable clothing, shading and low energy heating/cooling solutions for the occupied spaces (rooms). These activities can save a significant amount of operating energy, which should be taken into account when predicting the energy consumption of a building. Most building thermal assessment tools depend on energy-based approaches to predict the thermal performance of a building, e.g. AccuRate in Australia. This approach encourages the use of energy to maintain thermal comfort. This paper describes the advantages of a temperature-based approach to assessing a building's thermal performance (using an adaptive thermal comfort model) over an energy-based approach (the AccuRate software used in Australia). The temperature-based approach was validated and compared with the energy-based approach using four full-scale housing test modules located in Newcastle, Australia (Cavity Brick (CB), Insulated Cavity Brick (InsCB), Insulated Brick Veneer (InsBV) and Insulated Reverse Brick Veneer (InsRBV)) subjected to a range of seasonal conditions in a moderate climate. The time required for

  10. DFPT approach to the temperature dependence of electronic band energies

    Boulanger, Paul; Cote, Michel; Gonze, Xavier


    The energy bands of semiconductors exhibit significant shifts and broadening with temperature at constant volume. This is an effect of the direct renormalization of band energies due to electron-phonon interactions. In search of an efficient linear-response DFT approach to this effect, beyond semi-empirical approximations or frozen-phonon DFT, we have implemented the formulas derived by Allen and Heine [J. Phys. C 9, 2305 (1976)] inside the ABINIT package. We have found that such formulas need a great number of bands, O(1000), to properly converge the thermal corrections of deep-potential-well atoms, i.e. elements of the first row. This leads to heavy computational costs even for simple systems like diamond. The DFPT formalism can be used to circumvent entirely the need for conduction bands by computing the first-order wave functions using the self-consistent Sternheimer equation. We compare the results of both formalisms, demonstrating that the DFPT approach reproduces the correct converged results of the formulas of Allen and Heine.

  11. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Hogden, J.


    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  12. Technical basis for the reduction of the maximum temperature TGA-MS analysis of oxide samples from the 3013 destructive examination program

    Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)]


    Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated, and a maximum temperature for the TGA-MS analysis that reduces the complications caused by the condensation of volatile materials is determined. Analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of total moisture discussed in this report can be made.

  13. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    Hsia, Wei-Shen


    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  14. Seasonal Assessment of Habitat Suitability of the Wild Goat (Capra aegagrus) in Mountainous Areas of Kolah-Qazi National Park using Maximum Entropy Approach

    N. Ranjbar


    Knowledge of species' habitat needs is considered one of the requirements of wildlife management. We studied the seasonal habitat suitability and habitat associations of the wild goat (Capra aegagrus) in Kolah-Qazi National Park, one of its typical habitats in central Asia, using the Maximum Entropy approach. The study area was confined to mountainous areas as the potential habitat of the wild goat. Elevation, distance to water sources, distance to human settlements, and distance to guard patrol roads were recognised as the most important variables determining habitat suitability for the species. The extent of suitable habitat was largest in spring (3882.25 ha) and smallest in summer (1362.5 ha). The AUC values of MaxEnt revealed acceptable to good efficiency (AUC ≥ 0.7). The obtained results may have implications for conservation of the wild goat in similar habitats across its distribution range.

  15. Perfusion CT in acute ischemic stroke: a qualitative and quantitative comparison of deconvolution and maximum slope approach.

    Abels, B; Klotz, E; Tomandl, B F; Kloska, S P; Lell, M M


    Perfusion CT (PCT) postprocessing commonly uses either the maximum slope (MS) or a variant of the deconvolution (DC) approach for modeling of voxel-based time-attenuation curves. There is an ongoing discussion about the respective merits and limitations of both methods, frequently on the basis of theoretic reasoning or simulated data. We performed a qualitative and quantitative comparison of DC and MS by using identical source datasets and preprocessing parameters. From the PCT data of 50 patients with acute ischemic stroke, color maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and various temporal parameters were calculated with software implementing both DC and MS algorithms. Color maps were qualitatively categorized. Quantitative region-of-interest-based measurements were made in nonischemic gray matter (GM) and white matter (WM), suspected penumbra, and suspected infarction core. Qualitative results, quantitative results, and PCT lesion sizes from DC and MS were statistically compared. CBF and CBV color maps based on DC and MS were of comparably high quality. Quantitative CBF and CBV values calculated by DC and MS were within the same range in nonischemic regions. In suspected penumbra regions, average CBF(DC) was lower than CBF(MS). In suspected infarction core regions, average CBV(DC) was similar to CBV(MS). Using adapted tissue-at-risk/nonviable-tissue thresholds, we found excellent correlation of DC and MS lesion sizes. DC and MS yielded comparable qualitative and quantitative results. Lesion sizes indicated by DC and MS showed excellent agreement when adapted thresholds were used. In all cases, the same therapy decision would have been made.

  16. Projected changes in precipitation and temperature over the Canadian Prairie Provinces using the Generalized Linear Model statistical downscaling approach

    Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.


    In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the 1962-2005 historical (conditioned on NCEP predictors), and future period (2006-2100) using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models corresponding to Representative Concentration Pathway (RCP): RCP2.6, RCP4.5, and RCP8.5 scenarios. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter while minimum temperature is expected to warm faster than the maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.
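
    The GLM building blocks behind this kind of downscaling can be sketched as a binomial (logit) model for daily precipitation occurrence and a Gaussian model for temperature, both conditioned on large-scale predictors. The sketch below uses random numbers in place of the NCEP covariates and station records, so it only illustrates the model structure, not the calibrated multisite framework of the paper.

    ```python
    # Sketch of two GLM components used in statistical downscaling: precipitation
    # occurrence as a logistic (binomial) GLM and daily Tmax as a Gaussian GLM,
    # both conditioned on large-scale predictors. The predictor matrix is random
    # noise standing in for reanalysis covariates.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    predictors = sm.add_constant(rng.normal(size=(n, 3)))      # e.g. SLP, humidity, teleconnection index
    wet_day = (rng.random(n) < 0.4).astype(int)                 # synthetic occurrence series
    tmax = 15 + predictors[:, 1] * 2.0 + rng.normal(0, 3, n)    # synthetic Tmax series

    occurrence_glm = sm.GLM(wet_day, predictors, family=sm.families.Binomial()).fit()
    tmax_glm = sm.GLM(tmax, predictors, family=sm.families.Gaussian()).fit()
    print(occurrence_glm.params, tmax_glm.params)
    ```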

  17. A systematic approach to selecting the best probability models for annual maximum rainfalls - A case study using data in Ontario (Canada)

    Nguyen, Truong-Huy; El Outayek, Sarah; Lim, Sun Hee; Nguyen, Van-Thanh-Van


    Many probability distributions have been developed to model annual maximum rainfall series (AMS). However, there is no general agreement as to which distribution should be used, due to the lack of a suitable evaluation method. This paper hence presents a general procedure for systematically assessing the performance of ten commonly used probability distributions in rainfall frequency analyses based on their descriptive as well as predictive abilities. This assessment procedure relies on an extensive set of graphical and numerical performance criteria to identify the most suitable models that could provide the most accurate and most robust extreme rainfall estimates. The proposed systematic assessment approach has been shown to be more efficient and more robust than the traditional model selection method based on only limited goodness-of-fit criteria. To test the feasibility of the proposed procedure, an illustrative application was carried out using 5-min, 1-h, and 24-h annual maximum rainfall data from a network of 21 raingages located in the Ontario region in Canada. Results indicate that the GEV, GNO, and PE3 models were the best models for describing the distribution of daily and sub-daily annual maximum rainfalls in this region. The GEV distribution, however, was preferred to the GNO and PE3 because it is based on a more solid theoretical basis for representing the distribution of extreme random variables.
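
    As an illustration of fitting one of the candidate distributions mentioned above, the sketch below fits a GEV distribution to a synthetic annual maximum series and computes a 100-year return level with scipy; the rainfall values are placeholders, not the Ontario station data, and the full descriptive/predictive assessment procedure of the paper is not reproduced.

    ```python
    # Sketch: fitting a GEV distribution to an annual maximum rainfall series and
    # estimating the 100-year return level. The values are synthetic placeholders.
    import numpy as np
    from scipy.stats import genextreme

    ams_mm = np.array([42.1, 55.3, 38.7, 61.0, 47.5, 72.8, 50.2, 44.9, 58.6, 66.3,
                       40.0, 53.7, 49.1, 69.4, 45.8, 57.2, 62.5, 51.4, 43.3, 75.1])

    shape, loc, scale = genextreme.fit(ams_mm)
    rl_100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
    print(f"100-year return level ~ {rl_100:.1f} mm")
    ```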

  18. Effects of Doubled CO2 on Tropical Sea-Surface Temperature (SSTs) for Onset of Deep Convection and Maximum SST-GCM Simulations Based Inferences

    Sud, Y. C.; Walker, G. K.; Zhou, Y. P.; Schmidt, Gavin A.; Lau, K. M.; Cahalan, R. F.


    A primary concern of CO2-induced warming is the associated rise of tropical (10°S-10°N) sea-surface temperatures (SSTs). GISS Model-E was used to produce two sets of simulations, one with present-day and one with doubled CO2 in the atmosphere. The intrinsic usefulness of model guidance in the tropics was confirmed when the model simulated realistic convective coupling between SSTs and atmospheric soundings, and the simulated correlations between SSTs and 300 hPa moist-static energies were found to be similar to the observed ones. The model-predicted SST limits, (i) one for the onset of deep convection and (ii) one for maximum SST, both increased in the doubled CO2 case. Changes in cloud heights, cloud frequencies, and cloud mass-fractions showed that convective-cloud changes increased the SSTs, while the warmer mixed layer of the doubled-CO2 case contained approximately 10% more water vapor; clearly that would be conducive to more intense storms and hurricanes.

  19. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during 1981-2010.

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos


    One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a monthly-scale correlation matrix for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET (National Spanish Meteorological Agency) archives. Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r² (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r² and distance was modelled according to the following equation (1): log(r²ij) = b · dij (1), where r²ij is the common variance between the target (i) and neighbouring (j) series, dij the distance between them, and b the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using the Ordinary Kriging with a
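
    Equation (1) above can be illustrated with a small calculation: compute the common variance r² between a target series and each neighbour, then fit the slope b of log(r²) against distance through the origin. The station series, the distances, and the choice of a 50% common-variance threshold below are synthetic illustrations, not the AEMET data or the exact MOTEDAS settings.

    ```python
    # Sketch of the CDD estimation step: common variance r^2 between a target
    # station and its neighbours, then a through-the-origin least-squares fit of
    # log(r^2_ij) = b * d_ij (eq. 1). Series and distances are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    n_months, n_neighbours = 360, 8
    target = rng.normal(size=n_months)
    distances_km = np.linspace(10, 120, n_neighbours)
    # neighbours decorrelate with distance in this toy setup
    neighbours = np.array([0.9 ** (d / 20) * target + rng.normal(0, 0.5, n_months)
                           for d in distances_km])

    r2 = np.array([np.corrcoef(target, nb)[0, 1] ** 2 for nb in neighbours])
    b = np.sum(distances_km * np.log(r2)) / np.sum(distances_km ** 2)   # slope through the origin
    d_half = np.log(0.5) / b      # illustrative threshold: distance at which r^2 drops to 0.5
    print(f"b = {b:.4f} per km, distance to r^2 = 0.5: {d_half:.0f} km")
    ```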

  20. An analytical approach for beam loading compensation and excitation of maximum cavity field gradient in a coupled cavity-waveguide system

    Kelisani, M. Dayyani; Doebert, S.; Aslaninejad, M.


    The critical process of beam loading compensation in high-intensity accelerators brings under control the undesired effect of the beam-induced fields on the accelerating structures. A new analytical approach for optimizing standing-wave accelerating structures is presented which is very fast and agrees very well with simulations. A perturbative analysis of cavity and waveguide excitation based on the Bethe theorem and normal mode expansion is developed to compensate the beam loading effect and excite the maximum field gradient in the cavity. The method provides the optimum values for the coupling factor and the cavity detuning. While the approach is very accurate and agrees well with simulation software, it shortens the calculation time massively compared with such software.

  1. An analytical approach for beam loading compensation and excitation of maximum cavity field gradient in a coupled cavity-waveguide system

    Kelisani, M. Dayyani, E-mail: [Institute for Research in Fundamental Sciences (IPM), School of Particles and Accelerators, P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); European Organization for Nuclear Research (CERN), BE Department, CH-1211 Geneva 23 (Switzerland); Doebert, S. [European Organization for Nuclear Research (CERN), BE Department, CH-1211 Geneva 23 (Switzerland); Aslaninejad, M. [Institute for Research in Fundamental Sciences (IPM), School of Particles and Accelerators, P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)


    The critical process of beam loading compensation in high-intensity accelerators brings under control the undesired effect of the beam-induced fields on the accelerating structures. A new analytical approach for optimizing standing-wave accelerating structures is presented which is very fast and agrees very well with simulations. A perturbative analysis of cavity and waveguide excitation based on the Bethe theorem and normal mode expansion is developed to compensate the beam loading effect and excite the maximum field gradient in the cavity. The method provides the optimum values for the coupling factor and the cavity detuning. While the approach is very accurate and agrees well with simulation software, it shortens the calculation time massively compared with such software.

  2. A non-perturbation approach in temperature Green function theory

    Zuo Wei; Wang Shun-Jin


    A set of integro-differential equations for many-body connected temperature Green's functions is established, which is non-perturbative in nature and provides a reasonable truncation scheme with respect to the order of many-body correlations. The method can be applied to nuclear systems at finite temperature.

  3. An informational approach about energy and temperature in atoms

    Flores-Gallegos, N.


    In this letter, we introduce new definitions of energy and temperature based on the information theory model, and we show that our definition of informational energy is related to the kinetic energy of the Thomas-Fermi model, while the proposed definition of informational temperature permits the identification of 'hot' and 'cold' zones of an atom; such zones are related to changes in the local electron energy where chemical and physical changes can occur. Informational temperature can also reproduce the shell structure of an atom.

  4. On the critical temperatures of superconductors: a quantum gravity approach

    Gregori, Andrea


    We consider superconductivity in the light of the quantum gravity theoretical framework introduced in [1]. In this framework, the degree of quantum delocalization depends on the geometry of the energy distribution along space. This results in a dependence of the critical temperature characterizing the transition to the superconducting phase on the complexity of the structure of a superconductor. We consider concrete examples, ranging from low to high temperature superconductors, and discuss how the critical temperature can be predicted once the quantum gravity effects are taken into account.

  5. Temperature prediction in domestic refrigerators: Deterministic and stochastic approaches

    Laguerre, O. (UMR Genie Industriel Alimentaire Cemagref-AgroParisTech-INRA by Refrigeration Process Engineering); Flick, D. [UMR Genie Industriel Alimentaire AgroParisTech-Cemagref-INRA, AgroParisTech-16 rue Claude Bernard, 75231 Paris Cedex 05 (France)]


    A simplified steady state heat transfer model was developed for a domestic refrigerator (without a fan). This model considers circular airflow, heat exchange by natural convection between the air and the cold/warm walls and between the air and the load. Radiation between cold/warm walls and load is also taken into account. The model considers the temperature variation related to the height of the refrigerator (top, bottom) and the position (near the cold wall, near the warm wall). Two random parameters were considered: the room and thermostat temperatures. These values were then introduced into the model enabling the calculation of the load and air temperatures. Analysis of the predicted temperatures was undertaken using comparison with survey data; good agreement was obtained for the mean value and the standard deviation. This model could prove to be useful in the development of a risk evaluation tool. (author)

  6. Estimates of the theoretical maximum daily intake of erythorbic acid, gallates, butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) in Italy: a stepwise approach.

    Leclercq, C; Arcella, D; Turrini, A


    The three recent EU directives which fixed maximum permitted levels (MPLs) for food additives in all member states also include the general obligation to establish national systems for monitoring the intake of these substances in order to evaluate the safety of their use. In this work, we considered additives with a primary antioxidant technological function for which an acceptable daily intake (ADI) was established by the Scientific Committee for Food (SCF): gallates, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and erythorbic acid. The potential intake of these additives in Italy was estimated by means of a hierarchical approach using, step by step, more refined methods. The likelihood of the current ADI being exceeded was very low for erythorbic acid, BHA and gallates. On the other hand, the theoretical maximum daily intake (TMDI) of BHT was above the current ADI. The three food categories found to be the main potential sources of BHT were "pastry, cake and biscuits", "chewing gums" and "vegetable oils and margarine"; together they contributed 74% of the TMDI. The actual use of BHT in these food categories is discussed, together with other aspects such as losses of this substance in the technological process and the percentage ingested in the case of chewing gums.
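
    A first-tier TMDI calculation of the kind described above multiplies each food category's maximum permitted level by its mean daily consumption and sums over categories, then compares the total with the ADI scaled to a reference body weight. All MPLs, consumption figures, the ADI value and the body weight in the sketch are invented placeholders, not the Italian survey values.

    ```python
    # Sketch of a theoretical maximum daily intake (TMDI) estimate: assume every
    # food category contains the additive at its maximum permitted level (MPL).
    # All numbers below are illustrative placeholders.
    mpl_mg_per_kg_food = {"pastry_cakes_biscuits": 200, "chewing_gum": 400, "oils_margarine": 100}
    intake_kg_per_day = {"pastry_cakes_biscuits": 0.08, "chewing_gum": 0.002, "oils_margarine": 0.03}

    tmdi_mg = sum(mpl_mg_per_kg_food[f] * intake_kg_per_day[f] for f in mpl_mg_per_kg_food)
    adi_mg_per_kg_bw, body_weight_kg = 0.25, 60.0     # assumed ADI and reference body weight
    print(f"TMDI = {tmdi_mg:.1f} mg/day vs ADI = {adi_mg_per_kg_bw * body_weight_kg:.1f} mg/day")
    ```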

  7. Use of the Maximum Cumulative Ratio As an Approach for Prioritizing Aquatic Coexposure to Plant Protection Products: A Case Study of a Large Surface Water Monitoring Database.

    Vallotton, Nathalie; Price, Paul S


    This paper uses the maximum cumulative ratio (MCR) as part of a tiered approach to evaluate and prioritize the risk of acute ecological effects from combined exposures to the plant protection products (PPPs) measured in 3,099 surface water samples taken from across the United States. Assessments of the reported mixtures performed with a substance-by-substance approach and with a Tier One cumulative assessment based on the lowest acute ecotoxicity benchmark gave the same findings for 92.3% of the mixtures. These mixtures either did not indicate a potential risk for acute effects or included one or more individual PPPs that had concentrations in excess of their benchmarks. A Tier Two assessment using a trophic-level approach was applied to evaluate the remaining 7.7% of the mixtures. This assessment reduced the number of mixtures of concern by eliminating the combination of endpoints from multiple trophic levels, identified invertebrates and nonvascular plants as the most susceptible nontarget organisms, and indicated that only a very limited number of PPPs drove the potential concerns. The combination of the measures of cumulative risk and the MCR enabled the identification of a small subset of mixtures where a potential risk would be missed in substance-by-substance assessments.
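
    The MCR itself is a simple ratio: the hazard index (sum of the substance-specific hazard quotients) divided by the largest single hazard quotient. The sketch below computes it for one hypothetical water sample; the concentrations and acute benchmarks are invented for illustration.

    ```python
    # Sketch of the maximum cumulative ratio for one water sample: hazard quotients
    # (concentration / acute benchmark) are summed into a hazard index, and the MCR
    # is the ratio of that index to the largest single hazard quotient.
    def mcr(concentrations, benchmarks):
        """concentrations, benchmarks: dicts keyed by substance (same keys), in ug/L."""
        hq = {s: concentrations[s] / benchmarks[s] for s in concentrations}   # hazard quotients
        hazard_index = sum(hq.values())
        return hazard_index / max(hq.values()), hazard_index

    sample_ug_per_l = {"atrazine": 1.2, "chlorpyrifos": 0.04, "metolachlor": 0.8}
    benchmark_ug_per_l = {"atrazine": 360.0, "chlorpyrifos": 0.05, "metolachlor": 390.0}

    ratio, hi = mcr(sample_ug_per_l, benchmark_ug_per_l)
    print(f"MCR = {ratio:.2f}, hazard index = {hi:.2f}")
    ```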

  8. On the critical temperatures of superconductors: a quantum gravity approach

    Gregori, Andrea


    We consider superconductivity in the light of the quantum gravity theoretical framework introduced in [1]. In this framework, the degree of quantum delocalization depends on the geometry of the energy distribution along space. This results in a dependence of the critical temperature characterizing the transition to the superconducting phase on the complexity of the structure of a superconductor. We consider concrete examples, ranging from low to high temperature superconductors, and discuss h...

  9. An artificial neural network approach for the forecast of ambient air temperature

    Philippopoulos, Kostas; Deligiorgi, Despina; Kouroupetroglou, Georgios


    Ambient air temperature forecasting is one of the most significant aspects of environmental and climate research. Accurate temperature forecasts are important in the energy and tourism industries, in agriculture for estimating potential hazards, and, within an urban context, in studies assessing the risk of adverse health effects in the general population. The scope of this study is to propose an Artificial Neural Network (ANN) approach for one-day-ahead maximum (Tmax) and minimum (Tmin) air temperature forecasting. ANNs are signal processing systems consisting of an assembly of simple interconnected processing elements (neurons), and in the geosciences they are mainly used in pattern recognition problems. In this study feed-forward ANN models are selected, which are theoretically capable of estimating a measurable input-output function to any desired degree of accuracy. The method is implemented at a single site (Souda Airport), located on the island of Crete in the southeastern Mediterranean, and employs the hourly, Tmax and Tmin temperature observations over a ten-year period (January 2000 to December 2009). Separate ANN models are trained and tested for the forecast of Tmax and Tmin, which are based on the previous day's 24 hourly temperature records. The first six years are used for training the ANNs, the subsequent two for validating the models and the last two (January 2008 to December 2009) for testing the ANNs' overall predictive accuracy. The model architecture consists of a single hidden layer, and multiple experiments with varying numbers of neurons are performed (from 1 to 80 neurons with hyperbolic tangent sigmoid transfer functions). The selection of the optimum number of neurons in the hidden layer is based on a trial and error procedure, and performance is measured using the mean absolute error (MAE) on the validation set. A comprehensive set of model output statistics is used for examining the ability of the models to estimate both Tmax and Tmin
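
    A minimal sketch of this forecasting setup is given below: a single-hidden-layer feed-forward network with hyperbolic tangent units mapping the previous day's 24 hourly temperatures to the next day's Tmax, with MAE evaluated on a held-out block. The data are synthetic placeholders and the hidden-layer size is just one of the values that the trial-and-error search would screen.

    ```python
    # Sketch of a one-day-ahead Tmax forecast with a single-hidden-layer
    # feed-forward network (tanh units). Data are synthetic stand-ins for the
    # hourly station observations.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(3)
    n_days = 2000
    hourly = 20 + 8 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 1.5, (n_days, 24))
    tmax_next = hourly.max(axis=1) + rng.normal(0, 0.8, n_days)   # stand-in target

    X_train, X_val = hourly[:1500], hourly[1500:]
    y_train, y_val = tmax_next[:1500], tmax_next[1500:]

    ann = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                       max_iter=2000, random_state=0)
    ann.fit(X_train, y_train)
    print(f"validation MAE = {mean_absolute_error(y_val, ann.predict(X_val)):.2f} degC")
    ```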

  10. Detection and Adjustment of Undocumented Discontinuities in Chinese Temperature Series Using a Composite Approach

    LI Qingxiang; DONG Wenjie


    Annually averaged daily maximum and minimum surface temperatures from southeastern China were evaluated for artificial discontinuities using three different tests for undocumented changepoints. Changepoints in the time series were identified by comparing each target series to a reference calculated from values observed at a number of nearby stations. Under the assumption that no trend was present in the sequence of target-reference temperature differences, a changepoint was assigned to the target series when at least two of the three tests rejected the null hypothesis of no changepoint at approximately the same position in the difference series. Each target series was then adjusted using a procedure that accounts for discontinuities in average temperature values from nearby stations that otherwise could bias estimates of the magnitude of the target series step change. A spatial comparison of linear temperature trends in the adjusted annual temperature series suggests that major relative discontinuities were removed in the homogenization process. A greater number of relative changepoints were detected in annual average minimum than in average maximum temperature series. Some evidence is presented which suggests that minimum surface temperature fields may be more sensitive to changes in measurement practice than maximum temperature fields. In addition, given previous evidence of urban heat island (i.e., local) trends in this region, the assumption of no slope in a target-reference difference series is likely to be violated more frequently in minimum than in maximum temperature series. Consequently, there may be greater potential to confound trend and step changes in minimum temperature series.
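
    As a rough illustration of the target-minus-reference testing idea, the sketch below uses a single maximum two-sample t statistic on the difference series as a stand-in for the paper's three undocumented-changepoint tests; the station data are synthetic.

      # Illustrative sketch only: a maximum t statistic on the target-minus-reference
      # difference series, standing in for the composite three-test procedure.
      import numpy as np

      def difference_series(target, neighbours):
          """Annual target series minus the mean of nearby reference stations."""
          return np.asarray(target) - np.mean(neighbours, axis=0)

      def most_likely_changepoint(d, min_seg=5):
          """Return (index, t statistic) of the split maximising the mean shift in d."""
          best = (None, 0.0)
          for k in range(min_seg, len(d) - min_seg):
              a, b = d[:k], d[k:]
              pooled = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
              t = abs(a.mean() - b.mean()) / pooled
              if t > best[1]:
                  best = (k, t)
          return best

      # Synthetic example: a 0.6 °C step introduced in year 30 of a 50-year series.
      rng = np.random.default_rng(1)
      ref = rng.normal(15.0, 0.3, size=(5, 50))            # five neighbour stations
      tgt = ref.mean(axis=0) + rng.normal(0.0, 0.2, 50)
      tgt[30:] += 0.6
      k, t = most_likely_changepoint(difference_series(tgt, ref))
      print(f"candidate changepoint after year index {k} (t = {t:.1f})")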

  11. Stream temperature prediction in ungauged basins: review of recent approaches and description of a new physically-based analytical model

    Gallice, A.; Schaefli, B.; Lehning, M.; Parlange, M. P.; Huwald, H.


    The development of stream temperature regression models at regional scales has regained some popularity over the past years. These models are used to predict stream temperature in ungauged catchments to assess the impact of human activities or climate change on riverine fauna over large spatial areas. A comprehensive literature review presented in this study shows that the temperature metrics predicted by the majority of models correspond to yearly aggregates, such as the popular annual maximum weekly mean temperature (MWMT). As a consequence, current models are often unable to predict the annual cycle of stream temperature, nor can the majority of them forecast the interannual variation of stream temperature. This study presents a new model to estimate the monthly mean stream temperature of ungauged rivers over multiple years in an Alpine country (Switzerland). Contrary to the models developed to date, which mostly rely upon statistical regression to express stream temperature as a function of physiographic and climatic variables, this one rests upon the analytical solution to a simplified version of the energy-balance equation over an entire stream network. This physically-based approach presents some advantages: (1) the functional form linking stream temperature to the predictor variables is directly obtained from first principles, (2) the spatial extent over which the predictor variables are averaged naturally arises during model development, and (3) the regression coefficients can be interpreted from a physical point of view - their values can therefore be constrained to remain within plausible bounds. The evaluation of the model over a new freely available data set shows that the monthly mean stream temperature curve can be reproduced with a root mean square error of ±1.3 °C, which is similar in precision to the predictions obtained with a multi-linear regression model. We illustrate through a simple example how the physical basis of the model can be used

  12. Constant delivery temperature solar water heater - an integrated approach

    Kumar, S. [C.A.S. Indian Institute of Technology, New Delhi (India)]; Kumar, N. [D.C.E. Muzaffarpur Institute of Technology, Bihar (India)]


    An integrated model of a constant delivery temperature solar water heater-cum-active regenerative distillation system has been developed. The water used for the regenerative effect in the distiller of the proposed system is subsequently fed to the basin-cum-storage tank of the still through the heat exchanger (connected to the collector). The model varies the water mass flow rate in order to maintain a constant outlet temperature. With minor modifications to the solar water heater, the extra energy stored in the water mass due to non-utilization of capacity and/or non-linear utilization of capacity can be efficiently utilized for distillation purposes. In this process, the latent heat of vaporization is used for preheating the inlet water supply to the heat exchanger. The effect of insulation on maintaining the hot water temperature and distillate output is also presented. (Author)

  13. Electromagnetic field at finite temperature: A first order approach

    Casana, R.; Pimentel, B. M.; Valverde, J. S.


    In this work we study the electromagnetic field at finite temperature via the massless DKP formalism. The constraint analysis is performed and the partition function for the theory is constructed and computed. When it is specialized to the spin 1 sector we obtain the well-known result for the thermodynamic equilibrium of the electromagnetic field.

  14. Evaluation of approaches for modeling temperature wave propagation in district heating pipelines

    Gabrielaitiene, I.; Bøhm, Benny; Sunden, B.


    The limitations of a pseudo-transient approach for modeling temperature wave propagation in district heating pipes were investigated by comparing numerical predictions with experimental data. The performance of two approaches, namely a pseudo-transient approach implemented in the finite element code ANSYS and a node method, was examined for a low turbulent Reynolds number regime and small velocity fluctuations. Both approaches are found to have limitations in predicting the temperature response time and the peak values of the temperature wave, which is further hampered by the fact ... Attention needs to be given to the detailed modeling of the turbulent flow characteristics.

  15. Functional Integral Approach to Transition Temperature of a Homogeneous Imperfect Bose Gas

    HU Guang-Xi; DAI Xian-Xi; DAI Ji-Xin; William E. Evenson


    A functional integral approach (FIA) is introduced to calculate the transition temperature of a uniform imperfect Bose gas. With this approach we find that the transition temperature is higher than that of the corresponding ideal gas. We obtain the expression for the transition temperature shift as $\Delta T_c/T_0 = 2.492\,(na^3)^{1/6}$, where n is the particle number density and a is the scattering length. This result has not previously been reported in the literature.
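
    A quick numerical reading of the quoted result, $\Delta T_c/T_0 = 2.492\,(na^3)^{1/6}$, for illustrative (not experimental) values of the density and scattering length:

      # Worked numerical example of the quoted shift; n and a are assumed values
      # chosen only to show the arithmetic, not to represent a specific system.
      n = 1.0e20          # particle number density, m^-3 (assumed for illustration)
      a = 5.0e-9          # s-wave scattering length, m (assumed for illustration)
      shift = 2.492 * (n * a**3) ** (1.0 / 6.0)
      print(f"relative transition-temperature shift Delta_Tc/T0 = {shift:.3f}")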

  16. Spatiotemporal Modeling of Ozone Levels in Quebec (Canada): A Comparison of Kriging, Land-Use Regression (LUR), and Combined Bayesian Maximum Entropy–LUR Approaches

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael


    Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road network information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R2 = 0.653), with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; PMID:24879650
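
    The leave-one-station-out cross-validation used to score the models can be sketched as below; the plain linear regression and synthetic predictors are placeholders standing in for the LUR/BME machinery, so the printed numbers carry no physical meaning.

      # Sketch of leave-one-station-out cross-validation with RMSE and R^2 scoring.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error, r2_score

      rng = np.random.default_rng(2)
      n_stations, n_days = 30, 90
      station_id = np.repeat(np.arange(n_stations), n_days)
      X = rng.normal(size=(n_stations * n_days, 4))    # e.g. temperature, wind, road density, latitude
      y = X @ np.array([4.0, -2.0, 1.5, 0.5]) + 35 + rng.normal(0, 5, size=len(X))   # O3 in ppb

      pred = np.empty_like(y)
      for s in range(n_stations):                      # hold out one station at a time
          test = station_id == s
          model = LinearRegression().fit(X[~test], y[~test])
          pred[test] = model.predict(X[test])

      rmse = np.sqrt(mean_squared_error(y, pred))
      print(f"LOSO RMSE = {rmse:.2f} ppb, R^2 = {r2_score(y, pred):.3f}")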

  17. A Dyson-Schwinger approach to finite temperature QCD

    Mueller, Jens Andreas


    The different phases of quantum chromodynamics at finite temperature are studied. To this end the nonperturbative quark propagator in Matsubara formalism is determined from its equation of motion, the Dyson-Schwinger equation. A novel truncation scheme is introduced including the nonperturbative, temperature dependent gluon propagator as extracted from lattice gauge theory. In the first part of the thesis a deconfinement order parameter, the dual condensate, and the critical temperature are determined from the dependence of the quark propagator on the temporal boundary conditions. The chiral transition is investigated by means of the quark condensate as order parameter. In addition differences in the chiral and deconfinement transition between gauge groups SU(2) and SU(3) are explored. In the following the quenched quark propagator is studied with respect to a possible spectral representation at finite temperature. In doing so, the quark propagator turns out to possess different analytic properties below and above the deconfinement transition. This result motivates the consideration of an alternative deconfinement order parameter signaling positivity violations of the spectral function. A criterion for positivity violations of the spectral function based on the curvature of the Schwinger function is derived. Using a variety of ansaetze for the spectral function, the possible quasi-particle spectrum is analyzed, in particular its quark mass and momentum dependence. The results motivate a more direct determination of the spectral function in the framework of Dyson-Schwinger equations. In the two subsequent chapters extensions of the truncation scheme are considered. The influence of dynamical quark degrees of freedom on the chiral and deconfinement transition is investigated. This serves as a first step towards a complete self-consistent consideration of dynamical quarks and the extension to finite chemical potential. The goodness of the truncation is verified first

  18. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China)]; Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China)]; Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China)]; Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States)]; Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)]


    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the "Principle of Maximum Conformality" (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the "sequential extended BLM" (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. We then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  19. Ultrasonic Approach to Nonivasive Temperature Monitoring During Microwave Thermotherapy

    J. Vrba


    Microwave thermotherapy (MT) is an oncological treatment. At present the invasive thermometer probes are clinically used for temperature measuring during an MT. Any invasive handling of tumors is of high risk. A new possible method of noninvasive monitoring of temperature distribution in tissue has been developed. An MT treatment of the experimentally induced pedicle-tumors of the rat was prepared. For 100 rat samples a strong correlation between the mean gray level in the ROIs in the ultrasound pictures and the invasively measured temperature in the range 37-44 °C was found. The correlation coefficient of the mean gray level and the invasively measured temperature is 0.96 ± 0.05. A system for representation of changes of spatial temperature distribution of the whole tumor during MT is presented.

  20. Temperature Gradient Approach for Rapidly Assessing Sensor Binding Kinetics and Thermodynamics.

    Wagner, Caleb E; Macedo, Lucyano J A; Opdahl, Aric


    We report a highly resolved approach for quantitatively measuring the temperature dependence of molecular binding in a sensor format. The method is based on surface plasmon resonance (SPR) imaging measurements made across a spatial temperature gradient. Simultaneous recording of sensor response over the range of temperatures spanned by the gradient avoids many of the complications that arise in the analysis of SPR measurements where temperature is varied. In addition to simplifying quantitative analysis of binding interactions, the method allows the temperature dependence of binding to be monitored as a function of time, and provides a straightforward route for calibrating how temperature varies across the gradient. Using DNA hybridization as an example, we show how the gradient approach can be used to measure the temperature dependence of binding kinetics and thermodynamics (e.g., melt/denaturation profile) in a single experiment.

  1. Quantum electron-vibrational dynamics at finite temperature: Thermo field dynamics approach

    Borrelli, Raffaele; Gelin, Maxim F.


    Quantum electron-vibrational dynamics in molecular systems at finite temperature is described using an approach based on the thermo field dynamics theory. This formulation treats temperature effects in the Hilbert space without introducing the Liouville space. A comparison with the theoretically equivalent density matrix formulation shows the key numerical advantages of the present approach. The solution of thermo field dynamics equations with a novel technique for the propagation of tensor trains (matrix product states) is discussed. Numerical applications to model spin-boson systems show that the present approach is a promising tool for the description of quantum dynamics of complex molecular systems at finite temperature.

  2. A combined diffusion and thermal modeling approach to determine peak temperatures of thermal metamorphism experienced by meteorites

    Schwinger, Sabrina; Dohmen, Ralf; Schertl, Hans-Peter


    around sealed cracks in type I chondrule olivine yields similar Γ values, indicating a formation of both zoning features during a common thermal history on the parent body. In addition, Γ values for type II chondrule olivine correlate with metamorphic grade. The application of this approach on Fe-Mg zoning in type II chondrule olivine of CO3 chondrites yields estimates of maximum metamorphic peak temperatures ranging from 653 to 849 K for different petrologic subtypes. The Fe-Mg zoning of type I chondrule olivine is not consistent with the peak temperature estimates from type II chondrule olivine, suggesting an additional contribution of solar nebular processes to type I chondrule olivine zoning prior to accretion into the parent body.

  3. Maximum-Entropy Inference with a Programmable Annealer

    Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A


    Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
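
    The decoding principle can be illustrated on a toy problem; this is not the authors' annealer experiment. For a small planted (Mattis-type) Ising chain, exact enumeration lets one compare the zero-temperature maximum-likelihood decoder (the ground state) with finite-temperature maximum-entropy decoding based on the sign of each spin's thermal average.

      # Toy illustration of the two decoders on a 12-spin random-field Ising chain.
      import itertools
      import numpy as np

      rng = np.random.default_rng(3)
      n, T = 12, 1.0
      s_true = rng.choice([-1, 1], size=n)                     # planted configuration
      h = s_true + rng.normal(0.0, 1.2, size=n)                # noisy local fields
      J = s_true[:-1] * s_true[1:]                             # Mattis-type planted couplings

      def energy(s):
          return -np.sum(J * s[:-1] * s[1:]) - np.sum(h * s)

      states = np.array(list(itertools.product([-1, 1], repeat=n)))
      E = np.array([energy(s) for s in states])

      ml_decode = states[np.argmin(E)]                          # zero-temperature / maximum likelihood
      w = np.exp(-(E - E.min()) / T)                            # Boltzmann weights at temperature T
      marginals = (w[:, None] * states).sum(axis=0) / w.sum()   # thermal averages <s_i>
      maxent_decode = np.where(marginals >= 0, 1, -1)           # sign of thermal average

      ber = lambda s: np.mean(s != s_true)
      print(f"bit-error rate: ML {ber(ml_decode):.3f}, finite-T MaxEnt {ber(maxent_decode):.3f}")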

  4. A new approach to measure the ocean temperature using Brillouin lidar

    Wei Gao; Zhiwei Lü; Yongkang Dong; Weiming He


    An approach to lidar measurement of ocean temperature through measuring the spectral linewidth of the backscattered Brillouin lines is presented. An empirical equation for the temperature as a function of Brillouin linewidth and salinity is derived. Theoretical results are in good agreement with the experimental data. The equation also reveals the dependence of the temperature on the salinity and Brillouin linewidth. It is shown that the uncertainty of the salinity has very little impact on the temperature measurement. The uncertainty of this temperature measurement methodology is approximately 0.02 °C.

  5. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A


    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.

  6. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    Matthew W Breece

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.

  7. Fixed-scale approach to finite-temperature lattice QCD with shifted boundaries

    Umeda, Takashi


    We study the thermodynamics of the SU(3) gauge theory using the fixed-scale approach with shifted boundary conditions. The fixed-scale approach can reduce the numerical cost of the zero-temperature part in the equation of state calculations, while the number of possible temperatures is limited by the integer $N_t$, which represents the temporal lattice extent. The shifted boundary conditions can overcome such a limitation while retaining the advantages of the fixed-scale approach. Therefore, our approach enables the investigation of not only the equation of state in detail, but also the calculation of the critical temperature with increased precision even with the fixed-scale approach. We also confirm numerically that the boundary conditions suppress the lattice artifact of the equation of state, which has been confirmed in the non-interacting limit.

  8. Low temperature specific heat of glasses: a non-extensive approach

    Razdan, Ashok


    Specific heat is calculated using Tsallis statistics. It is observed that it is possible to explain some low temperature specific heat properties of glasses using a non-extensive approach. A similarity between the temperature dependence of the non-extensive specific heat and the fractal specific heat is also discussed.

  9. Spatio-statistical analysis of temperature fluctuation using Mann-Kendall and Sen's slope approach

    Atta-ur-Rahman; Dawood, Muhammad


    This article deals with the spatio-statistical analysis of temperature trends using the Mann-Kendall trend model (MKTM) and Sen's slope estimator (SSE) in the eastern Hindu Kush, north Pakistan. Climate change has a strong relationship with the trend in temperature and the resultant changes in rainfall pattern and river discharge. In the present study, temperature is selected as the meteorological parameter for trend analysis and slope magnitude. To achieve the objectives of the study, temperature data were collected from the Pakistan Meteorological Department for all seven meteorological stations that fall in the eastern Hindu Kush region. The temperature data were analysed and simulated using the MKTM, whereas the SSE method was applied to determine the temperature trend and slope magnitude and to exhibit the type of fluctuation. The analysis reveals that a positive (increasing) trend in mean maximum temperature has been detected for the Chitral, Dir and Saidu Sharif met stations, whereas a negative (decreasing) trend in mean minimum temperature has been recorded for the Saidu Sharif and Timergara met stations. The analysis further reveals that the observed variation in temperature trend and slope magnitude is attributed to climate change in the region.
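
    For reference, the two statistics named above are straightforward to compute; the sketch below (synthetic annual series, no correction for tied values) shows the Mann-Kendall S and Z statistics and Sen's slope.

      # Minimal sketch of the Mann-Kendall trend test and Sen's slope estimator.
      import numpy as np

      def mann_kendall_z(x):
          """Mann-Kendall S and its standard normal score Z (no-ties variance)."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
          var_s = n * (n - 1) * (2 * n + 5) / 18.0
          z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
          return s, z

      def sens_slope(x):
          """Median of all pairwise slopes (units per time step)."""
          x = np.asarray(x, dtype=float)
          slopes = [(x[j] - x[i]) / (j - i)
                    for i in range(len(x) - 1) for j in range(i + 1, len(x))]
          return np.median(slopes)

      # Hypothetical annual mean maximum temperatures with a weak warming trend.
      years = np.arange(1980, 2015)
      temps = 28.0 + 0.03 * (years - years[0]) + np.random.default_rng(4).normal(0, 0.4, len(years))
      s, z = mann_kendall_z(temps)
      print(f"Mann-Kendall S = {s:.0f}, Z = {z:.2f}, Sen's slope = {sens_slope(temps):.3f} °C/yr")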

  10. Spatial distribution of temperature in the low-temperature geothermal Euganean field (NE Italy): a simulated annealing approach

    Fabbri, Paolo; Trevisani, Sebastiano [Dipartimento di Geologia, Paleontologia e Geofisica, Universita degli Studi di Padova, via Giotto 1, 35127 Padova (Italy)


    The spatial distribution of groundwater temperatures in the low-temperature (60-86 °C) geothermal Euganean field of northeastern Italy has been studied using a geostatistical approach. The data set consists of 186 temperatures measured in a fractured limestone reservoir, over an area of 8 km². Investigation of the spatial continuity by means of variographic analysis revealed the presence of anisotropies that are apparently related to the particular geologic structure of the area. After inference of variogram models, a simulated annealing procedure was used to perform conditional simulations of temperature in the domain being studied. These simulations honor the data values and reproduce the spatial continuity inferred from the data. Post-processing of the simulations permits an assessment of temperature uncertainties. Maps of estimated temperatures, interquartile range, and of the probability of exceeding a prescribed 80 °C threshold were also computed. The methodology described could prove useful when siting new wells in a geothermal area. (author)

  11. A variational approach to coarse-graining of equilibrium and non-equilibrium atomistic description at finite temperature

    Kulkarni, Y; Knap, J; Ortiz, M


    The aim of this paper is the development of equilibrium and non-equilibrium extensions of the quasicontinuum (QC) method. We first use variational mean-field theory and the maximum-entropy formalism for deriving approximate probability distribution and partition functions for the system. The resulting probability distribution depends locally on atomic temperatures defined for every atom and the corresponding thermodynamic potentials are explicit and local in nature. The method requires an interatomic potential as the sole empirical input. Numerical validation is performed by simulating thermal equilibrium properties of selected materials using the Lennard-Jones pair potential and the EAM potential and comparing with molecular dynamics results as well as experimental data. The max-ent variational approach is then taken as a basis for developing a three-dimensional non-equilibrium finite temperature extension of the quasicontinuum method. This extension is accomplished by coupling the local temperature-dependent free energy furnished by the max-ent approximation scheme to the heat equation in a joint thermo-mechanical variational setting. Results for finite-temperature nanoindentation tests demonstrate the ability of the method to capture non-equilibrium transport properties and differentiate between slow and fast indentation.

  12. Modeling precipitation δ18O variability in East Asia since the Last Glacial Maximum: temperature and amount effects across different timescales

    Wen, Xinyu; Liu, Zhengyu; Chen, Zhongxiao; Brady, Esther; Noone, David; Zhu, Qingzhao; Guan, Jian


    Water isotopes in precipitation have played a key role in the reconstruction of past climate on millennial timescales and longer. However, for midlatitude regions like East Asia with complex terrain, the reliability behind the basic assumptions of the temperature effect and amount effect is based on modern observational data and still remains unclear for past climate. In the present work, we reexamine the two basic effects on seasonal, interannual, and millennial timescales in a set of time slice experiments for the period 22-0 ka using an isotope-enabled atmospheric general circulation model (AGCM). Our study confirms the robustness of the temperature and amount effects on the seasonal cycle over China in the present climatic conditions, with the temperature effect dominating in northern China and the amount effect dominating in the far south of China but no distinct effect in the transition region of central China. However, our analysis shows that neither temperature nor amount effect is significantly dominant over China on millennial and interannual timescales, which is a challenge to those classic assumptions in past climate reconstruction. Our work helps shed light on the interpretation of the proxy record of δ18O from a modeling point of view.

  13. Modeling egg development of the pest Clavipalpus ursinus (Coleoptera: Melolonthidae) using a temperature-dependent approach

    Andrea Escobar; Rodrigo Gil; Carlos Ricardo Bojacá; Jaime Jiménez


    Predicting the population dynamics of insects in natural conditions is essential for their management or preservation, and temperature-dependent development models contribute to achieving this. In this research the effects of temperature and soil moisture content on egg development and hatching of Clavipalpus ursinus (Blanchard) were evaluated. The eggs were exposed to seven temperature treatments with averages of 7.2, 13.0, 15.5, 19.7, 20.6, 22.0 and 25.3 °C, in combination with three soil moisture contents of 40%, 60% and 80%. A linear and two non-linear (Lactin and Briere) models were evaluated in order to determine the thermal requirements of this developmental stage. Temperature significantly affected the time of development and egg hatching, while no significant effect was observed for moisture content. Thermal requirements were estimated as 7.2 °C for the lower developmental threshold, 20.6 °C for the optimum developmental threshold, 25.3 °C for the maximum temperature and 344.83 degree-days for the thermal constant. The linear model described egg development satisfactorily at intermediate temperatures; nevertheless, a slightly better fit to the observed data was obtained with the Lactin model. Egg development took place inside a narrow range of temperatures. Consequently, an increase in soil temperature could have a negative impact on the population size of this species or change its biological parameters.
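
    The reported thermal requirements imply a simple linear degree-day predictor, sketched below using the abstract's lower threshold (7.2 °C) and thermal constant (344.83 degree-days); it illustrates the model form only and is not the authors' fitted code.

      # Linear degree-day model: development rate r(T) = (T - T_low) / K above the
      # threshold, so predicted egg development time is K / (T - T_low) days.
      T_LOW = 7.2        # lower developmental threshold, °C (from the abstract)
      K = 344.83         # thermal constant, degree-days (from the abstract)

      def egg_development_days(mean_soil_temp_c):
          """Predicted days from oviposition to hatching at a constant temperature."""
          if mean_soil_temp_c <= T_LOW:
              return float("inf")                 # no development below the threshold
          return K / (mean_soil_temp_c - T_LOW)

      for t in (13.0, 15.5, 19.7, 22.0):          # treatment temperatures from the study
          print(f"{t:4.1f} °C -> {egg_development_days(t):5.1f} days")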

  14. Generalized Maximum Entropy

    Cheeseman, Peter; Stutz, John


    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
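
    A minimal sketch of the contrast being drawn, under stated assumptions: four discrete states with assumed "energies", and Monte Carlo propagation of a Gaussian-uncertain constraint in place of the paper's analytic posterior density.

      # Classic MaxEnt with a mean-"energy" constraint m gives a Boltzmann form
      # p_k ∝ exp(-beta * e_k) with beta chosen so that <e>_p = m exactly.
      # Treating m as uncertain and resampling it induces a distribution over
      # the MaxEnt probabilities instead of a single point estimate.
      import numpy as np
      from scipy.optimize import brentq

      e = np.array([0.0, 1.0, 2.0, 3.0])                # assumed state "energies"

      def maxent_probs(mean_constraint):
          def gap(beta):
              p = np.exp(-beta * e)
              p /= p.sum()
              return p @ e - mean_constraint
          beta = brentq(gap, -50.0, 50.0)               # solve <e>_p = mean_constraint
          p = np.exp(-beta * e)
          return p / p.sum()

      print("classic MaxEnt, m = 1.2 exactly:", np.round(maxent_probs(1.2), 3))

      # Generalized view: the constraint is only known as m ~ N(1.2, 0.1^2).
      rng = np.random.default_rng(5)
      samples = np.array([maxent_probs(m) for m in rng.normal(1.2, 0.1, 500)])
      print("mean of MaxEnt probabilities:", np.round(samples.mean(axis=0), 3))
      print("std of MaxEnt probabilities: ", np.round(samples.std(axis=0), 3))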

  15. Stream temperature prediction in ungauged basins: review of recent approaches and description of a new physics-derived statistical model

    Gallice, A.; Schaefli, B.; Lehning, M.; Parlange, M. B.; Huwald, H.


    The development of stream temperature regression models at regional scales has regained some popularity over the past years. These models are used to predict stream temperature in ungauged catchments to assess the impact of human activities or climate change on riverine fauna over large spatial areas. A comprehensive literature review presented in this study shows that the temperature metrics predicted by the majority of models correspond to yearly aggregates, such as the popular annual maximum weekly mean temperature (MWMT). As a consequence, current models are often unable to predict the annual cycle of stream temperature, nor can the majority of them forecast the inter-annual variation of stream temperature. This study presents a new statistical model to estimate the monthly mean stream temperature of ungauged rivers over multiple years in an Alpine country (Switzerland). Contrary to similar models developed to date, which are mostly based on standard regression approaches, this one attempts to incorporate physical aspects into its structure. It is based on the analytical solution to a simplified version of the energy-balance equation over an entire stream network. Some terms of this solution cannot be readily evaluated at the regional scale due to the lack of appropriate data, and are therefore approximated using classical statistical techniques. This physics-inspired approach presents some advantages: (1) the main model structure is directly obtained from first principles, (2) the spatial extent over which the predictor variables are averaged naturally arises during model development, and (3) most of the regression coefficients can be interpreted from a physical point of view - their values can therefore be constrained to remain within plausible bounds. The evaluation of the model over a new freely available data set shows that the monthly mean stream temperature curve can be reproduced with a root-mean-square error (RMSE) of ±1.3 °C, which is similar in

  16. A simple classical approach for the melting temperature of inert-gas nanoparticles

    Nanda, K. K.


    Like the metal and semiconductor nanoparticles, the melting temperature of free inert-gas nanoparticles decreases with decreasing size. The variation is linear with the inverse of the particle size for large nanoparticles and deviates from the linearity for small nanoparticles. The decrease in the melting temperature is slower for free nanoparticles with non-wetting surfaces, while the decrease is faster for nanoparticles with wetting surfaces. Though the depression of the melting temperature has been reported for inert-gas nanoparticles in porous glasses, superheating has also been observed when the nanoparticles are embedded in some matrices. By using a simple classical approach, the influence of size, geometry and the matrix on the melting temperature of nanoparticles is understood quantitatively and shown to be applicable for other materials. It is also shown that the classical approach can be applied to understand the size-dependent freezing temperature of nanoparticles.

  17. The pre-onset, transitional, and foot regions in resistance versus temperature behavior in high-Tc cuprates: Inferences regarding maximum Tc

    Vezzoli, G. C.; Burke, T.; Chen, M. F.; Craver, F.; Stanley, W.


    We have studied the pre-onset deviation-from-linearity region, the transitional regime, and the foot region in the resistance versus temperature behavior of high-Tc oxide superconductors, employing time-varying magnetic fields and carefully controlled precise temperatures. We have shown that the best value of Tc can be extrapolated from the magnetic-field-induced divergence of the resistance versus inverse absolute temperature data as derived from the transitional and/or foot regions. These data are in accord with results from previous Hall effect studies. The pre-onset region, however, shows a differing behavior (in R versus 1000/T as a function of B) which we believe links it to an incipient Cooper pairing that suffers a kinetic barrier opposing formation of a full supercurrent. This kinetic dependence is believed to be associated with the lifetime of the mediator particle. This particle is interpreted to be the virtual exciton formed from internal-field-induced charge-transfer excitations which transiently neutralize the multivalence cations and establish bound holes on the oxygens.

  18. Pre-onset, transitional, and foot regions in resistance versus temperature behavior in high-Tc cuprates: Inferences regarding maximum Tc. Final report

    Vezzoli, G.C.; Burke, T.; Chen, M.F.; Craver, F.; Stanley, W.


    We have studied the pre-onset deviation-from-linearity region, the transitional regime, and the foot region in the resistance versus temperature behavior of high-Tc oxide superconductors, employing time-varying magnetic fields and carefully controlled precise temperatures. We have shown that the best value of Tc can be extrapolated from the magnetic-field-induced divergence of the resistance versus inverse absolute temperature data as derived from the transitional and/or foot regions. These data are in accord with results from previous Hall effect studies. The pre-onset region, however, shows a differing behavior (in R versus 1000/T as a function of B) which we believe links it to an incipient Cooper pairing that suffers a kinetic barrier opposing formation of a full supercurrent. This kinetic dependence is believed to be associated with the lifetime of the mediator particle. This particle is interpreted to be the virtual exciton formed from internal-field-induced charge-transfer excitations which transiently neutralize the multivalence cations and establish bound holes on the oxygens.

  19. Predicted time from fertilization to maximum wet weight for steelhead alevins based on incubation temperature and egg size (Study site: Western Fishery Research Center, Seattle; Stock: Dworshak hatchery; Year class: 1996): Chapter 4

    Rubin, Stephen P.; Reisenbichler, Reginald R.; Slatton, Stacey L.; Rubin, Stephen P.; Reisenbichler, Reginald R.; Wetzel, Lisa A.; Hayes, Michael C.


    The accuracy of a model that predicts time between fertilization and maximum alevin wet weight (MAWW) from incubation temperature was tested for steelhead Oncorhynchus mykiss from Dworshak National Fish Hatchery on the Clearwater River, Idaho. MAWW corresponds to the button-up fry stage of development. Embryos were incubated at warm (mean=11.6°C) or cold (mean=7.3°C) temperatures and time between fertilization and MAWW was measured for each temperature. Model predictions of time to MAWW were within 1% of measured time to MAWW. Mean egg weight ranged from 0.101-0.136 g among females (mean = 0.116). Time to MAWW was positively related to egg size for each temperature, but the increase in time to MAWW with increasing egg size was greater for embryos reared at the warm than at the cold temperature. We developed equations accounting for the effect of egg size on time to MAWW for each temperature, and also for the mean of those temperatures (9.3°C).

  20. Short-Term Responses in Maximum Quantum Yield of PSII (Fv/Fm) to ex situ Temperature Treatment of Populations of Bryophytes Originating from Different Sites in Hokkaido, Northern Japan

    Annika K. Jägerbrand


    There is limited knowledge available on the thermal acclimation processes of bryophytes, especially when considering variation between populations or sites. This study investigated whether short-term ex situ thermal acclimation of different populations showed patterns of site dependency and whether the maximum quantum yield of PSII (Fv/Fm) could be used as an indicator of adaptation or temperature stress in two bryophyte species: Pleurozium schreberi (Willd. ex Brid.) Mitt. and Racomitrium lanuginosum (Hedw.) Brid. We sought to test the hypothesis that differences in the ability to acclimate to short-term temperature treatment would be revealed as differences in photosystem II maximum yield (Fv/Fm). Thermal treatments were applied to samples from 12 and 11 populations during 12 or 13 days in growth chambers and comprised: (1) 10/5 °C; (2) 20/10 °C; (3) 25/15 °C; (4) 30/20 °C (12-hour day/night temperatures). In Pleurozium schreberi, there were no significant site-dependent differences before or after the experiment, while site dependencies were clearly shown in Racomitrium lanuginosum throughout the study. Fv/Fm in Pleurozium schreberi decreased at the highest and lowest temperature treatments, which can be interpreted as a stress response, but no similar trends were shown by Racomitrium lanuginosum.

  1. Identifying the optimal supply temperature in district heating networks - A modelling approach

    Mohammadi, Soma; Bojesen, Carsten


    The aim of this study is to develop a model for thermo-hydraulic calculation of low temperature DH systems. The modelling is performed with emphasis on transient heat transfer in pipe networks. The pseudo-dynamic approach is adopted to model the District Heating Network [DHN] behaviour, which estimates the temperature dynamically while the flow and pressure are calculated on the basis of steady state conditions. The implicit finite element method is applied to simulate the transient temperature behaviour in the network. Pipe network heat losses, pressure drop in the network and return temperature to the plant are calculated in the developed model. The model will eventually serve as a basis for finding the optimal supply temperature in an existing DHN in later work. The modelling results are used as decision support for existing DHNs, proposing possible modifications to operate at the optimal supply temperature.

  2. Unified approach for determining the enthalpic fictive temperature of glasses with arbitrary thermal history

    Guo, Xiaoju; Potuzak, M.; Mauro, J. C.


    We propose a unified routine to determine the enthalpic fictive temperature of a glass with arbitrary thermal history under isobaric conditions. The technique is validated both experimentally and numerically using a novel approach for modeling glass relaxation behavior. The technique is applicable to glasses of any thermal history, as proved through a series of numerical simulations where the enthalpic fictive temperature is precisely known within the model. We also demonstrate that the enthalpic fictive temperature of a glass can be determined at any calorimetric scan rate in excellent...

  3. Extracting the near surface stoichiometry of BiFe0.5Mn0.5O3 thin films; a finite element maximum entropy approach

    Song, F.; Monsen, A.; Li, Z. S.; Choi, E. -M.; MacManus-Driscoll, J. L.; Xiong, J.; Jia, Q. X.; Wahlstrom, E.; Wells, J. W.


    The surface and near-surface chemical composition of BiFe0.5Mn0.5O3 has been studied using a combination of low photon energy synchrotron photoemission spectroscopy and a newly developed maximum entropy finite element model, from which it is possible to extract the depth-dependent chemical composition.

  4. Functional Integral Approach to the Transition Temperature of Attractive Interacting Bose Gas in Traps

    HU Guang-Xi; DAI Xian-Xi


    The functional integral approach (FIA) is introduced to study the transition temperature of an imperfect Bose gas in traps. An interacting model in quantum statistical mechanics is presented. With the model we study a Bose gas with attractive interaction trapped in an external potential. We obtain the result that the transition temperature of a trapped Bose gas shifts slightly upwards owing to the attractive interaction. Successful application of the FIA to Bose systems is demonstrated.

  5. Finite-temperature electromagnetic-field quantization in a medium: The thermofield approach

    Kheirandish, F.; Soltani, M.; Jafari, M. [Department of Physics, Faculty of Science, University of Isfahan, Hezar-Jarib Street, 81746-73441, Isfahan (Iran, Islamic Republic of)


    Starting from a Lagrangian, an electromagnetic field is quantized in the presence of a medium in thermal equilibrium and also in a medium with time-varying temperature. The vector potential for both equilibrium and nonequilibrium cases is obtained and vacuum fluctuations of the fields are calculated. As an illustrative example, the finite-temperature decay rate and level shift of an atom in a polarizable medium are calculated in this approach.

  6. Maximum margin Bayesian network classifiers.

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian


    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  7. Hamiltonian approach to QCD in Coulomb gauge at zero and finite temperature

    Reinhardt H.


    I report on recent results obtained within the Hamiltonian approach to QCD in Coulomb gauge. By relating the Gribov confinement scenario to the center vortex picture of confinement it is shown that the Coulomb string tension is tied to the spatial string tension. For the quark sector a vacuum wave functional is used which results in variational equations which are free of ultraviolet divergences. The variational approach is extended to finite temperatures by compactifying a spatial dimension. For the chiral and deconfinement phase transition pseudo-critical temperatures of 170 MeV and 198 MeV, respectively, are obtained.

  8. Hamiltonian approach to QCD in Coulomb gauge at zero and finite temperature

    Reinhardt, H; Campagnari, D; Ebadati, E; Heffner, J; Quandt, M; Vastag, P; Vogt, H


    I report on recent results obtained within the Hamiltonian approach to QCD in Coulomb gauge. By relating the Gribov confinement scenario to the center vortex picture of confinement it is shown that the Coulomb string tension is tied to the spatial string tension. For the quark sector a vacuum wave functional is used which results in variational equations which are free of ultraviolet divergences. The variational approach is extended to finite temperatures by compactifying a spatial dimension. For the chiral and deconfinement phase transition pseudo-critical temperatures of 170 MeV and 198 MeV, respectively, are obtained.

  9. Effects of longitudinal ventilation on maximum ceiling temperature and its position in tunnel fires

    朱伟; 周晓峰; 胡隆华; 刘帅


    An experimental study of temperature distribution along a tunnel ceiling with different longitudinal ventilation velocities and different typical fire sizes was conducted in a combustion wind tunnel with a length of 20 m. Both square and rectangular pool fires were used as fire sources, with longitudinal ventilation velocities ranging from 0 to 3 m/s. The smoke temperatures below the ceiling were recorded by K-type thermocouples in the near-fire region and by thermal resistors farther from the fire source. The results indicate that longitudinal ventilation velocity and fire size have a great influence on the temperature distribution. The variation of the ceiling temperature distribution with increasing longitudinal ventilation velocity was shown to be different for a small fire from that for a large fire. For small fires, the ceiling temperature decreased to a stable value as the longitudinal ventilation velocity was gradually increased. For large fires, however, the temperature increased first and then decreased. The position of the maximum temperature rise first moved downstream horizontally with increasing wind speed; when the wind speed reached a certain value, the maximum temperature rise point returned to an upstream position and then moved downstream again. This is due to the change in the dominant heating mechanism with increasing longitudinal ventilation velocity: when the longitudinal ventilation velocity is relatively small, both convection from the hot smoke and radiation from the flame contribute considerably, whereas when it is relatively large, radiation is dominant. The influence of the length of the windward side of the rectangular pool fire was also discussed. It was shown that when the longitudinal ventilation velocity is relatively small (0.5-1.5 m/s), the maximum temperature is higher when the longer side is parallel to the wind direction than when it is perpendicular. However

  10. The Maximum Density of Water.

    Greenslade, Thomas B., Jr.


    Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)

  11. Quantum Electrodynamics in Two-Dimensions at Finite Temperature. Thermofield Bosonization Approach

    Belvedere, L V; Rothe, K D; Rodrigues, A F


    The Schwinger model at finite temperature is analyzed using the Thermofield Dynamics formalism. The operator solution due to Lowenstein and Swieca is generalized to the case of finite temperature within the thermofield bosonization approach. The general properties of the statistical-mechanical ensemble averages of observables in the Hilbert subspace of gauge invariant thermal states are discussed. The bare charge and chirality of the Fermi thermofields are screened, giving rise to an infinite number of mutually orthogonal thermal ground states. One consequence of the bare charge and chirality selection rule at finite temperature is that there are innumerably many thermal vacuum states with the same total charge and chirality of the doubled system. The fermion charge and chirality selection rules at finite temperature turn out to imply the existence of a family of thermal theta vacua states parametrized with the same number of parameters as in zero temperature case. We compute the thermal theta-vacuum expectat...

  12. SALT spectroscopic classification of LSQ16acz (= PS16bby = SN 2016bew) as a type-Ia supernova approaching maximum light

    Jha, S. W.; Pan, Y.-C.; Foley, R. J.; Rest, A.; Scolnic, D.; Kotze, M.


    We obtained SALT (+RSS) spectroscopy of LSQ16acz (= PS16bby = SN 2016bew; Baltay et al. 2013, PASP, 125, 683) on 2016 Mar 14.9 UT, covering the wavelength range 340-920 nm. Cross-correlation of the spectrum with a template library using SNID (Blondin & Tonry 2007, ApJ, 666, 1024) shows LSQ16acz is a type-Ia supernova a few days before maximum light.

  13. Maximum Autocorrelation Factorial Kriging

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.


    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...

  14. Extending the Purple Crow Lidar Temperature Climatology Above 100 km Altitude Using an Inversion Approach

    Jalali, A.; Sica, R. J.; Argall, S.; McCullough, E. M.


    Temperature retrievals from Rayleigh-scattering lidar measurements have been performed using the algorithm given by Chanin and Hauchecorne (1980; henceforth CH) for the last 3 decades. Recently Khanna et al. have presented an inversion approach to retrieve atmospheric temperature profiles. This method uses a nonlinear inversion method with a Monte Carlo technique to determine the statistical uncertainties for the retrieved nightly average temperature profiles. Using this approach, Purple Crow Lidar temperature profiles can now be extended 10 km higher in altitude compared to those calculated with the CH method, with reduced systematic uncertainty. Argall and Sica (2007) used the CH method to produce a climatology of the Purple Crow Lidar measurements from 1994 to 2004 which was compared with the CIRA-86 model. The CH method integrates temperatures downward, and requires the assumption of a 'seed' pressure at the highest altitude, taken from a model. Geophysical variation here, in the lower thermosphere, is sufficiently large to cause temperature retrievals to be unreliable for the top 10 or more km; uncertainties due to this pressure assumption cause the top two scale heights of temperatures from each profile to be discarded until the retrieval is no longer sensitive to the seed pressure. Khanna et al. (2012) use an inversion approach which allows the corrected lidar photocount profile to be integrated upward, as opposed to downward as required by the CH method. Khanna et al. (2012) showed that seeding the retrieval at the lowest instead of top height allows a much smaller uncertainty in the contribution of the seed pressure to the temperature compared to integrating from the top of the profile. Two other benefits to seeding the retrieval at the lower altitudes (around 30 km) include reduced geophysical variability, and the availability of routine pressure measurements from radiosondes. This presentation will show an extension of the Khanna et al. (2012) comparison

  15. Temperature issues with white laser diodes, calculation and approach for new packages

    Lachmayer, Roland; Kloppenburg, Gerolf; Stephan, Serge


    Bright white light sources are of significant importance for automotive front lighting systems. Today's upper-class systems mainly use HID or LED light sources. As a further step, laser-diode-based systems offer high luminance and efficiency and allow the realization of new dynamic and adaptive light functions and styling concepts. The use of white laser diode systems in automotive applications is still limited to laboratories and prototypes, even though announcements of laser-based front lighting systems have been made. However, the environmental conditions for vehicles and other industry sectors differ from laboratory conditions. Therefore, a model of the system's thermal behavior is set up. The power loss of a laser diode is transported as thermal flux from the junction layer to the diode's case and on to the environment. Its optical power is therefore limited by the maximum junction temperature (for blue diodes typically 125-150 °C), the environment temperature and the diode's packaging with its thermal resistances. In a car's headlamp the environment temperature can reach up to 80 °C. As the difference between the allowed case temperature and the environment temperature becomes small or negative, the achievable heat flux also becomes small or negative. In the early stages of LED development, similar challenges had to be solved; adapting LED packages to the conditions in a vehicle environment led to today's efficient and bright headlights. In this paper, the need to transfer these results to laser diodes is shown by calculating the diodes' lifetimes based on the presented model.
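
    A back-of-envelope version of the thermal budget described above is sketched below; the thermal resistances and the wall-plug efficiency are assumed values for illustration only, while the 80 °C environment temperature and the roughly 150 °C junction limit follow the abstract.

```python
# Back-of-envelope junction-temperature budget for a blue laser diode in a
# headlamp environment.  The thermal resistances and the wall-plug efficiency
# are assumed values; the 80 deg C environment temperature and the ~150 deg C
# junction limit follow the abstract.

T_ambient = 80.0        # deg C, worst-case headlamp environment
T_junction_max = 150.0  # deg C, upper end of the quoted limit for blue diodes

R_jc = 6.0              # K/W, junction-to-case thermal resistance (assumed)
R_ca = 12.0             # K/W, case-to-ambient, including heat sink (assumed)

# Maximum power loss the thermal path can carry before the junction limit is hit:
P_loss_max = (T_junction_max - T_ambient) / (R_jc + R_ca)

# With e.g. 30% of the electrical power converted to light (assumed), 70% must
# be dissipated as heat, which caps the optical output accordingly:
wall_plug_efficiency = 0.30
P_optical_max = P_loss_max * wall_plug_efficiency / (1.0 - wall_plug_efficiency)

print(f"maximum dissipated power: {P_loss_max:.2f} W")
print(f"corresponding optical output: {P_optical_max:.2f} W")
```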

  16. Solar thermal collectors in polymeric materials: A novel approach towards higher operating temperatures

    Mendes, Joao Farinha; Horta, Pedro; Carvalho, Maria Joao [INETI - Inst. Nacional de Engenharia Tecnologia e Inovacao, IP, Lisboa (Portugal); Silva, Paulo [PLASDAN - Maquinas para Plasticos, Marinha Grande (Portugal)


    The increasing demand for low temperature solar thermal collectors, especially for hot water production in dwellings, swimming pools, hotels or industry, has led to the possibility of high-scale production, with leading manufacturers presenting yearly productions of hundreds of thousands of square meters. In such conditions, the use of polymeric materials in the manufacturing of solar collectors acquires particular interest, opening a full scope of opportunities for lower production costs, by means of cheaper materials or simpler manufacturing operations. Yet, the use of low cost materials limits the maximum operating (stagnation) temperatures estimated for the collectors to values around 120 °C, easily attainable by any simple glazed solar collector. Higher performance, leading to stagnation temperatures as high as those observed for regular metal-based solar thermal collectors, would require high temperature polymers, at a much higher cost. The present paper addresses the manufacturing of a high performance solar thermal collector based on polymeric materials and includes a base thermal study, highlighting the different possibilities to be followed in the production of a polymeric collector, as well as a description of different temperature control strategies. (orig.)
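
    The roughly 120 °C stagnation limit quoted above can be illustrated with a simple steady-state energy balance of a glazed collector (a standard flat-plate estimate, not the thermal study reported in the paper); the irradiance, optical efficiency and loss coefficient below are assumed values.

```python
# Stagnation temperature of a glazed flat-plate collector from a steady-state
# energy balance: with no heat removal, absorbed irradiance equals thermal
# losses, G * tau_alpha = U_L * (T_stag - T_ambient).  Parameter values are
# illustrative assumptions, not measurements from the paper.

G = 1000.0         # W/m^2, solar irradiance
tau_alpha = 0.80   # effective transmittance-absorptance product (assumed)
U_L = 8.0          # W/(m^2 K), overall loss coefficient of a simple glazed collector (assumed)
T_ambient = 25.0   # deg C

T_stagnation = T_ambient + G * tau_alpha / U_L
print(f"stagnation temperature ~ {T_stagnation:.0f} deg C")   # ~125 deg C, near the quoted ~120 C
```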

  17. Maximum Entropy Fundamentals

    F. Topsøe


    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategy in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
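
    A minimal numerical illustration of the Mean Energy Model mentioned above: maximizing entropy under a mean-"energy" constraint yields a Gibbs distribution, whose Lagrange multiplier can be found by root finding. The energy levels and the target mean below are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

# Mean Energy Model illustration: over a finite alphabet with "energies" E_i,
# the maximum-entropy distribution with a prescribed mean energy is the Gibbs
# distribution p_i ~ exp(-beta * E_i); beta follows from the constraint.
# The energy values and the target mean are hypothetical.

E = np.array([0.0, 1.0, 2.0, 4.0])
target_mean = 1.2

def mean_energy(beta):
    w = np.exp(-beta * E)
    p = w / w.sum()
    return p @ E

beta = brentq(lambda b: mean_energy(b) - target_mean, -10.0, 10.0)
w = np.exp(-beta * E)
p = w / w.sum()
entropy = -(p * np.log(p)).sum()
print("beta =", round(beta, 4), " p =", np.round(p, 4), " H =", round(entropy, 4))
```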

  18. Maximum Entropy in Drug Discovery

    Chih-Yuan Tseng


    Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  19. Temperature Approach Optimization in the Double Pipe Heat Exchanger with Groove

    Sunu Putu Wijaya


    Heat transfer in a double pipe heat exchanger with circumferential rectangular grooves has been investigated experimentally. The volume flow rates of cold and hot water were varied to determine their influence on the approach temperature at the outlet terminals. In this experimental design, the grooves were incised in the annular space on the outside surface of the inner pipe. The shell diameter is 38.1 mm and the tube diameter 19.4 mm, with a length of 1 m, made of aluminum. The flow pattern of the two fluids in the heat exchanger is parallel flow. The working fluid is water with volume flow rates of 27.1, 23.8 and 19.8 l/minute. The water temperatures at the inlet terminals are 50±1°C for the hot stream and 30±1°C for the cold stream. Temperature measurements were conducted at each inlet and outlet terminal of the heat exchanger. The results showed that the grooves reduced the approach temperature: the approach temperature with grooves was 37.9% lower than that without grooves. This indicates an improvement in the heat transfer process and in the performance of the heat exchanger. The grooves increase the heat transfer surface area of the inner pipe and enhance momentum transfer while, on the other hand, reducing the weight of the heat exchanger itself.
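
    For readers unfamiliar with the approach temperature, the sketch below estimates it for a parallel-flow double-pipe exchanger with an effectiveness-NTU calculation; the overall conductance UA is an assumed value, while the inlet temperatures and one of the flow rates are taken from the abstract.

```python
import math

# Effectiveness-NTU estimate of the approach temperature (hot outlet minus
# cold outlet) in a parallel-flow double-pipe exchanger.  UA is an assumed
# overall conductance; the inlet temperatures and one tested flow rate follow
# the abstract, and equal flow rates are taken on both sides.

rho = 0.988                    # kg/L, water near the operating temperatures
cp = 4180.0                    # J/(kg K)

V_dot = 27.1 / 60.0            # L/s
m_dot = rho * V_dot            # kg/s on each side
C_hot = C_cold = m_dot * cp
C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
Cr = C_min / C_max

UA = 900.0                     # W/K, assumed overall conductance

NTU = UA / C_min
eff = (1.0 - math.exp(-NTU * (1.0 + Cr))) / (1.0 + Cr)   # parallel-flow effectiveness

T_hot_in, T_cold_in = 50.0, 30.0
q = eff * C_min * (T_hot_in - T_cold_in)
T_hot_out = T_hot_in - q / C_hot
T_cold_out = T_cold_in + q / C_cold

print(f"approach temperature = {T_hot_out - T_cold_out:.2f} K")
```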

  20. Fractional calculus approach to study temperature distribution within a spinning satellite

    Jyotindra C. Prajapati


    This paper deals with the temperature distribution within spinning satellites; the problem is formulated in terms of a fractional differential equation. Applying a fractional calculus approach, the solution of this equation is obtained in terms of the Wright generalized hypergeometric function, a generalization of the exponential function.

  1. Unified approach for determining the enthalpic fictive temperature of glasses with arbitrary thermal history

    Guo, Xiaoju; Potuzak, M.; Mauro, J. C.;


    We propose a unified routine to determine the enthalpic fictive temperature of a glass with arbitrary thermal history under isobaric conditions. The technique is validated both experimentally and numerically using a novel approach for modeling of glass relaxation behavior. The technique is applic...

  2. Relationship between Altitude and Variation Characteristics of the Maximum Temperature, Minimum Temperature, and Diurnal Temperature Range in China

    董丹宏; 黄刚


    Based on daily maximum and minimum temperature data from 740 homogenized surface meteorological stations, the present study investigates the regional characteristics of the temperature trend and the dependence of maximum temperature, minimum temperature, and diurnal temperature range changes on altitude during the period 1963–2012. It is found that the magnitude of the minimum temperature increase is larger than that of the maximum temperature increase. The significantly warming areas are located at high altitude, and all of them increase remarkably in size during the study period. The maximum temperature, minimum temperature, and diurnal temperature range trends increase with altitude, except in spring. The correlation coefficients between the maximum temperature trend and altitude are the highest. At the same altitude, the amplitudes of maximum and minimum temperature change are inconsistent: they exhibit increasing trends in the 1990s, with significant change at low altitude; they change minimally in the 1980s; and at high altitudes (above 2000 m), the magnitudes of their changes are weak before the 1990s but stronger in the last 10 years of the study period. The seasonal variability of the diurnal temperature range is large above 2000 m, decreasing in summer but increasing in winter. Before the 1990s, there is no significant relationship between the maximum and minimum temperature trends and altitude. However, their trends almost all decrease and then increase with altitude in the last 20 years. Additionally, the climate response in highland areas is more sensitive than that in lowland areas.

  3. Temperature-dependent striped antiferromagnetism of LaFeAsO in a Green's function approach.

    Liu, Gui-Bin; Liu, Bang-Gui


    We use a Green's function method to study the temperature-dependent average moment and magnetic phase-transition temperature of the striped antiferromagnetism of LaFeAsO, and other similar compounds, as the parents of FeAs-based superconductors. We consider the nearest and the next-nearest couplings in the FeAs layer, and the nearest coupling for inter-layer spin interaction. The dependence of the transition temperature T(N) and the zero-temperature average spin on the interaction constants is investigated. We obtain an analytical expression for T(N) and determine our temperature-dependent average spin from zero temperature to T(N) in terms of unified self-consistent equations. For LaFeAsO, we obtain a reasonable estimation of the coupling interactions with the experimental transition temperature T(N) = 138 K. Our results also show that a non-zero antiferromagnetic (AFM) inter-layer coupling is essential for the existence of a non-zero T(N), and the many-body AFM fluctuations reduce substantially the low-temperature magnetic moment per Fe towards the experimental value. Our Green's function approach can be used for other FeAs-based parent compounds and these results should be useful to understand the physical properties of FeAs-based superconductors.

  4. Vibration-insensitive temperature sensing system based on fluorescence decay and using a digital processing approach

    Dong, H.; Zhao, W.; Sun, T.; Grattan, K. T. V.; Al-Shamma'a, A. I.; Wei, C.; Mulrooney, J.; Clifford, J.; Fitzpatrick, C.; Lewis, E.; Degner, M.; Ewald, H.; Lochmann, S. I.; Bramann, G.; Merlone Borla, E.; Faraldi, P.; Pidria, M.


    A fluorescence-based temperature sensor system using a digital signal processing approach has been developed and evaluated in operation on a working automotive engine. The signal processing approach, using the least-squares method, makes the system relatively insensitive to intensity variations in the probe and thus provides more precise measurements when compared to a previous system designed using analogue phase-locked detection. Experiments carried out to determine the emission temperatures of a running car engine have demonstrated the effectiveness of the sensor system in monitoring exhaust temperatures up to 250 °C, and potentially higher. This paper was presented at the 13th International Conference on Sensors and Their Applications, held in Chatham, Kent, on 6-7 September 2005.
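
    A minimal sketch of the kind of least-squares processing described above: a single-exponential decay is fitted to a digitized fluorescence signal and the fitted lifetime is mapped to temperature through a calibration curve. The waveform, noise level and calibration values are synthetic, not those of the reported sensor.

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares extraction of a fluorescence lifetime from a digitized decay,
# followed by a lookup on an assumed lifetime-versus-temperature calibration.
# The waveform, noise level and calibration values are synthetic.

def decay(t, amplitude, tau, offset):
    return amplitude * np.exp(-t / tau) + offset

t = np.linspace(0.0, 5e-3, 500)                          # s
rng = np.random.default_rng(0)
signal = decay(t, 1.0, 0.8e-3, 0.02) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(decay, t, signal, p0=(1.0, 1e-3, 0.0))
tau_fit = popt[1]

# Assumed monotonic calibration: lifetime (s) versus temperature (deg C).
cal_T = np.array([25.0, 100.0, 175.0, 250.0])
cal_tau = np.array([1.2e-3, 0.9e-3, 0.6e-3, 0.35e-3])
temperature = np.interp(tau_fit, cal_tau[::-1], cal_T[::-1])   # np.interp needs ascending x

print(f"fitted lifetime {tau_fit * 1e3:.2f} ms -> temperature ~ {temperature:.0f} deg C")
```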

  5. A Real-Time Temperature Data Transmission Approach for Intelligent Cooling Control of Mass Concrete

    Peng Lin


    The primary aim of the study presented in this paper is to propose a real-time temperature data transmission approach for intelligent cooling control of mass concrete. A mathematical description of a digital temperature control model is introduced in detail. Based on pipe-mounted and electrically linked temperature sensors, together with post-data-handling hardware and software, a stable, real-time, highly effective temperature data transmission solution is developed and utilized within the intelligent mass concrete cooling control system. Once the user has issued the relevant command, the proposed programmable logic controller (PLC) code performs all necessary steps without further interaction. The code controls the hardware, acquires and reads the data, performs the necessary calculations, and displays the results accurately. Hardening concrete is an aggregate of complex physicochemical processes, including the liberation of heat. Based on an application case study analysis, the proposed control system prevented unwanted structural change within the massive concrete blocks caused by these exothermic processes. In conclusion, the proposed temperature data transmission approach has proved very useful for the temperature monitoring of a high arch dam and is able to control thermal stresses in mass concrete for similar projects involving mass concrete.

  6. Reprint of : Connection between wave transport through disordered 1D waveguides and energy density inside the sample: A maximum-entropy approach

    Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.


    We study the average energy - or particle - density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.

  7. Finite temperature and the Polyakov loop in the covariant variational approach to Yang-Mills Theory

    Quandt, Markus; Reinhardt, Hugo


    We extend the covariant variational approach for Yang-Mills theory in Landau gauge to non-zero temperatures. Numerical solutions for the thermal propagators are presented and compared to high-precision lattice data. To study the deconfinement phase transition, we adapt the formalism to background gauge and compute the effective action of the Polyakov loop for the colour groups SU(2) and SU(3). Using the zero-temperature propagators as input, all parameters are fixed at T = 0 and we find a clear signal for a deconfinement phase transition at finite temperatures, which is second order for SU(2) and first order for SU(3). The critical temperatures obtained are in reasonable agreement with lattice data.

  8. A new approach for highly resolved air temperature measurements in urban areas

    M. Buttstädt


    In different fields of applied local climate investigation, highly resolved air temperature data are of great importance. As part of the research programme entitled City2020+, which deals with future climate conditions in agglomerations, this study focuses on increasing the quantity of urban air temperature data intended for the analysis of their spatial distribution. A new measurement approach using local transport buses as "riding thermometers" is presented. By this means, temperature data with a very high temporal and spatial resolution could be collected during scheduled bus rides. The data obtained provide the basis for the identification of thermally affected areas and for the investigation of urban structure factors which influence the thermal conditions. Initial results from the ongoing study, which show the temperature distribution along different traverses through the city of Aachen, are presented.

  9. Thermal modelling of the high temperature treatment of wood based on Luikov's approach

    Younsi, R.; Kocaefe, D.; Poncsak, S.; Kocaefe, Y. [University of Quebec, Chicoutimi (Canada). Dept. of Applied Sciences


    A 3D, unsteady-state mathematical model was used to simulate the behaviour of wood during high temperature treatment. The model is based on Luikov's approach and solves a set of coupled heat and mass transfer equations. Using the model, the temperature and moisture content profiles of wood were predicted as a function of time for different heating rates. Parallel to the modelling study, an experimental study was carried out using small birch samples. The samples were subjected to high temperature treatment in a thermogravimetric system under different operating conditions. The experimental results and the model predictions were found to be in good agreement. The results show that the distributions of temperature and moisture content are influenced appreciably by the heating rate and the initial moisture content. (author)

  10. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Tumelero, Fernanda, E-mail: [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail:, E-mail:, E-mail: [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica


    In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursors, and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. In this way, the time step size of the Polynomial Approach Method is varied and an analysis of precision and computational time is performed. Moreover, we compare the method for different orders (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
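
    The sketch below illustrates the power-series idea on a reduced problem: point kinetics with one delayed-neutron group and linear temperature feedback, advanced by a second-order Taylor expansion over each short interval. It is a simplified illustration in the spirit of the method, not the authors' implementation; all kinetic parameters and the feedback coefficient are assumed values.

```python
# Second-order power-series (Taylor) stepping of the point kinetics equations
# with one delayed-neutron group and linear temperature feedback.  This is a
# simplified illustration in the spirit of a polynomial approach, not the
# authors' implementation; all parameter values below are assumed.

beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const (1/s), generation time (s)
rho0, alpha_T, K = 0.003, -1.0e-5, 0.05    # step reactivity, feedback coeff (1/K), heating const (K/s)

def derivatives(n, C, T):
    rho = rho0 + alpha_T * T
    dn = (rho - beta) / Lambda * n + lam * C
    dC = beta / Lambda * n - lam * C
    dT = K * n
    # second derivatives, obtained by differentiating the equations once more
    d2n = (alpha_T * dT / Lambda) * n + (rho - beta) / Lambda * dn + lam * dC
    d2C = beta / Lambda * dn - lam * dC
    d2T = K * dn
    return dn, dC, dT, d2n, d2C, d2T

h, t_end = 1.0e-4, 1.0
n, C, T = 1.0, beta / (lam * Lambda), 0.0  # equilibrium precursors; T measured from T0
for _ in range(int(t_end / h)):
    dn, dC, dT, d2n, d2C, d2T = derivatives(n, C, T)
    n += dn * h + 0.5 * d2n * h * h
    C += dC * h + 0.5 * d2C * h * h
    T += dT * h + 0.5 * d2T * h * h

print(f"n(1 s) = {n:.3f}, temperature rise = {T:.3f} K")
```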

  11. Holocene temperature shifts around Greenland: Paleolimnological approaches to quantifying past warmth and documenting its consequences

    Axford, Y.; Lasher, G. E.; McFarlin, J. M.; Francis, D. R.; Kelly, M. A.; Langdon, P. G.; Levy, L.; Osburn, M. R.; Osterberg, E. C.


    Insolation-driven warmth across the Arctic during the early to middle Holocene (the Holocene Thermal Maximum, or HTM) represents a geologically accessible analog for future warming and its impacts. Improved constraints on the magnitude and seasonality of HTM warmth around Greenland's margins can advance the use of paleoclimate data to test and improve climate and ice sheet models. Here we present an overview of our recent efforts to reconstruct climate through the Holocene around the margins of the Greenland Ice Sheet using multiple proxies in lake sediments. We use insect (chironomid) assemblages to derive quantitative estimates of Holocene temperatures at sites with minimal soil and vegetation development near the eastern, northwestern and western margins of the ice sheet. Our chironomid-based temperature reconstructions consistently imply HTM July air temperatures 3 to 4.5 °C warmer than the pre-industrial late Holocene in these sectors of Greenland. The timing of reconstructed peak warmth differs between sites, with onset varying from ~10 ka to ~6.5 ka, but in good agreement with glacial geology and other evidence from each region. Our reconstructed temperature anomalies are larger than those typically inferred from annually-integrated indicators from the ice sheet itself, but comparable to the few other quantitative summer temperature estimates available from beyond the ice sheet on Greenland. Additional records are needed to confirm the magnitude of HTM warmth and to better define its seasonality and spatial pattern. To provide independent constraints on paleotemperatures and to elucidate additional aspects of Holocene paleoclimate, we are also employing oxygen isotopes of chironomid remains and other aquatic organic materials, and molecular organic proxies, in parallel (see Lasher et al. and McFarlin et al., this meeting). Combined with glacial geologic evidence, these multi-proxy records elucidate diverse aspects of HTM climate around Greenland - including

  12. A new approach to assess the dependency of extant half-saturation coefficients on maximum process rates and estimate intrinsic coefficients.

    Shaw, A; Takács, I; Pagilla, K R; Murthy, S


    The Monod equation is often used to describe biological treatment processes and is the foundation for many activated sludge models. The Monod equation includes a "half-saturation coefficient" to describe the effect of substrate limitations on the process rate, and it is customary to consider this parameter a constant for a given system. The purpose of this study was to develop a methodology and to use it to show that the half-saturation coefficient for denitrification is not constant but is in fact a function of the maximum denitrification rate. A 4-step procedure is developed to investigate the dependency of half-saturation coefficients on the maximum rate, and two different models are used to describe this dependency: (a) an empirical linear model and (b) a deterministic model based on Fick's law of diffusion. Both models prove better at describing denitrification kinetics at low nitrate concentrations than assuming a fixed K(NO3). The empirical model is more utilitarian, whereas the model based on Fick's law has a fundamental basis that enables the intrinsic K(NO3) to be estimated. In this study, data were analyzed from 56 denitrification rate tests, and it was found that the extant K(NO3) varied between 0.07 mgN/L and 1.47 mgN/L (5th and 95th percentile, respectively) with an average of 0.47 mgN/L. In contrast, the intrinsic K(NO3) estimated for the diffusion model was 0.01 mgN/L, which indicates that the extant K(NO3) is greatly influenced by, and mostly describes, diffusion limitations.
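
    The dependency discussed above can be written down compactly; in the sketch below the extant K(NO3) grows linearly with the maximum rate, using the 0.01 mgN/L intrinsic value quoted in the abstract and an assumed slope, and the resulting Monod rates at a low nitrate concentration are compared with those for a fixed coefficient.

```python
# Monod denitrification rate with (a) the customary fixed half-saturation
# coefficient and (b) an extant coefficient that increases linearly with the
# maximum rate, as in the paper's empirical model.  The slope is an assumed
# value; the intrinsic K uses the 0.01 mgN/L estimate quoted above.

def monod_rate(S, r_max, K):
    """Denitrification rate at nitrate concentration S (mgN/L)."""
    return r_max * S / (K + S)

K_intrinsic = 0.01      # mgN/L, diffusion-model estimate from the abstract
slope = 0.08            # (mgN/L) per unit of r_max, assumed for illustration

def extant_K(r_max):
    return K_intrinsic + slope * r_max

S = 0.5                 # mgN/L, a low nitrate concentration
for r_max in (2.0, 5.0, 10.0):                      # hypothetical maximum rates
    K_ext = extant_K(r_max)
    print(f"r_max={r_max:4.1f}  extant K={K_ext:.2f} mgN/L  "
          f"rate(K=0.47)={monod_rate(S, r_max, 0.47):.2f}  "
          f"rate(extant K)={monod_rate(S, r_max, K_ext):.2f}")
```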

  13. Constructal approach to bio-engineering: the ocular anterior chamber temperature

    Lucia, Umberto; Grisolia, Giulia; Dolcino, Daniela; Astori, Maria Rosa; Massa, Eugenio; Ponzetto, Antonio


    The aim of this work was to analyse the pressure inside the eye's anterior chamber, named intraocular pressure (IOP), in relation to the biomechanical properties of corneas. The approach used was based on the constructal law, recently introduced in vision analysis. Results were expressed as the relation between the temperature of the ocular anterior chamber and the biomechanical properties of the cornea. The IOP, the elastic properties of the cornea, and the related refractive properties of the eye were shown to depend on the temperature of the ocular anterior chamber. These results could lead to new perspectives for the experimental analysis of the IOP in relation to the properties of the cornea.

  14. Simultaneous approach for simulation of a high-temperature gas-cooled reactor

    Yang CHEN; Jiang-hong YOU; Zhi-jiang SHAO; Ke-xin WANG; Ji-xin QIAN


    The simulation of a high-temperature gas-cooled reactor pebble-bed module (HTR-PM) plant is discussed. The lumped parameter model has the form of a set of differential algebraic equations (DAEs) that include stiff equations modelling the point neutron kinetics. The nested approach is the most common method to solve DAEs, but it is expensive and time-consuming due to the inner iterations. This paper deals with an alternative approach in which a simultaneous solution method is used. The DAEs are discretized over a time horizon using collocation on finite elements, with Radau collocation points. The resulting nonlinear algebraic equations can be solved by existing solvers. The discrete algorithm is discussed in detail; both accuracy and stability issues are considered. Finally, the simulation results are presented to validate the efficiency and accuracy of the simultaneous approach, which takes much less time than the nested one.

  15. Assessing the Temperature Dependence of Narrow-Band Raman Water Vapor Lidar Measurements: A Practical Approach

    Whiteman, David N.; Venable, Demetrius D.; Walker, Monique; Cardirola, Martin; Sakai, Tetsu; Veselovskii, Igor


    Narrow-band detection of the Raman water vapor spectrum using the lidar technique introduces a concern over the temperature dependence of the Raman spectrum. Various groups have addressed this issue either by trying to minimize the temperature dependence to the point where it can be ignored or by correcting for whatever degree of temperature dependence exists. The traditional technique for performing either of these entails accurately measuring both the laser output wavelength and the water vapor spectral passband with combined uncertainty of approximately 0.01 nm. However, uncertainty in interference filter center wavelengths and laser output wavelengths can be this large or larger. These combined uncertainties translate into uncertainties in the magnitude of the temperature dependence of the Raman lidar water vapor measurement of 3% or more. We present here an alternate approach for accurately determining the temperature dependence of the Raman lidar water vapor measurement. This alternate approach entails acquiring sequential atmospheric profiles using the lidar while scanning the channel passband across portions of the Raman water vapor Q-branch. This scanning is accomplished either by tilt-tuning an interference filter or by scanning the output of a spectrometer. Through this process a peak in the transmitted intensity can be discerned in a manner that defines the spectral location of the channel passband with respect to the laser output wavelength to much higher accuracy than that achieved with standard laboratory techniques. Given the peak of the water vapor signal intensity curve, determined using the techniques described here, and an approximate knowledge of atmospheric temperature, the temperature dependence of a given Raman lidar profile can be determined with accuracy of 0.5% or better. A Mathematica notebook that demonstrates the calculations used here is available from the lead author.
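
    The peak-finding step described above can be illustrated with a short fit: given water-vapor signal intensity recorded at several filter tilt angles, a parabola fitted near the maximum locates the tilt at which the passband is centred on the Q-branch. The scan values below are synthetic.

```python
import numpy as np

# Locate the peak of the transmitted water-vapor intensity versus filter tilt
# angle by fitting a parabola near the maximum; the vertex gives the tilt at
# which the passband is centred on the Q-branch.  The scan values are synthetic;
# in practice each point comes from sequential atmospheric lidar profiles.

tilt_deg = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
intensity = np.array([0.82, 0.91, 0.97, 1.00, 0.98, 0.92, 0.83])   # normalized

a, b, c = np.polyfit(tilt_deg, intensity, 2)   # intensity ~ a*x**2 + b*x + c
tilt_peak = -b / (2.0 * a)

print(f"passband centred at ~{tilt_peak:.2f} deg tilt")
```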

  16. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    Jha, Sanjeev Kumar


    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that are spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool for statistical downscaling to obtain local-scale estimates of precipitation and temperature from General Circulation Models.

  17. Neoendemic ground beetles and private tree haplotypes: two independent proxies attest a moderate last glacial maximum summer temperature depression of 3-4 °C for the southern Tibetan Plateau

    Schmidt, Joachim; Opgenoorth, Lars; Martens, Jochen; Miehe, Georg


    Previous findings regarding the Last Glacial Maximum (LGM) summer temperature depression (maxΔT in July) on the Tibetan Plateau varied over a large range (between 0 and 9 °C). Geologic proxies usually provided higher values than palynological data. Because of this wide temperature range, it was hitherto impossible to reconstruct the glacial environment of the Tibetan Plateau. Here, we present for the first time data indicating that local neoendemics of modern species groups are promising proxies for assessing the LGM temperature depression in Tibet. We used biogeographical and phylogenetic data from small, wingless, edaphic ground beetles of the genus Trechus, and from private juniper tree haplotypes. The derived values of the maxΔT in July ranged between 3 and 4 °C. Our data support previous findings that were based on palynological data. At the same time, our data are spatially more specific as they are not bound to specific archives. Our study shows that the use of modern endemics enables a detailed mapping of local LGM conditions in High Asia. A prerequisite for this is an extensive biogeographical and phylogenetic exploration of the area and the inclusion of additional endemic taxa and evolutionary lines.

  18. Worldwide assessment of the Penman-Monteith temperature approach for the estimation of monthly reference evapotranspiration

    Almorox, Javier; Senatore, Alfonso; Quej, Victor H.; Mendicino, Giuseppe


    When not all the meteorological data needed for estimating reference evapotranspiration (ETo) are available, a Penman-Monteith temperature (PMT) equation can be adopted using only measured maximum and minimum air temperature data. The performance of the PMT method is evaluated and compared with the Hargreaves-Samani (HS) equation using the measured long-term monthly data of the FAO global climatic dataset New LocClim. The objective is to evaluate the quality of the PMT method for different climates, as represented by the Köppen classification, on a monthly time scale. Estimated PMT and HS values are compared with FAO-56 Penman-Monteith ETo values through several statistical performance indices. For the full dataset, the approximated PMT expressions using air temperature alone produce better results than the uncalibrated HS method, and the performance of the PMT method is improved further by adopting climate-class-dependent corrections for the estimation of solar radiation, especially in the tropical climate class.
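
    The Hargreaves-Samani baseline used for comparison can be written compactly, as sketched below with the standard HS coefficients; Ra must be supplied as extraterrestrial radiation in equivalent evaporation (mm/day), and the sample inputs are illustrative.

```python
import math

# Hargreaves-Samani reference evapotranspiration from air temperature only,
# the baseline against which the PMT method is compared above.  Ra is the
# extraterrestrial radiation expressed as equivalent evaporation (mm/day);
# the sample inputs are illustrative.

def hargreaves_samani_eto(t_max, t_min, ra_mm_day):
    """Mean daily ETo (mm/day) from maximum/minimum air temperature (deg C)."""
    t_mean = 0.5 * (t_max + t_min)
    return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra_mm_day

print(round(hargreaves_samani_eto(t_max=30.0, t_min=18.0, ra_mm_day=30.0), 2))
```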

  19. Flux-measuring approach of high temperature metal liquid based on BP neural networks

    胡燕瑜; 桂卫华; 李勇刚


    A soft-measuring approach is presented to measure the flux of liquid zinc, which has high temperature and causticity. By constructing a mathematical model based on neural networks and weighing the mass of liquid zinc, the flux of liquid zinc is acquired indirectly, and on-line measurement and flux control are realized. Simulation results and industrial practice demonstrate that the relative error between the estimated flux value and the practically measured flux value is lower than 1.5%, meeting the needs of the industrial process.
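
    A minimal soft-sensor sketch in the same spirit is given below: a small backpropagation-trained network maps weighing-derived features to flux. The training data are synthetic and the network size is arbitrary; the features, scaling and model structure of the paper are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal soft-sensor sketch: a small backpropagation-trained network maps
# weighing-derived features (rate of mass change, ladle level) to the flux of
# liquid zinc.  All data are synthetic and the "true" relation is arbitrary.

rng = np.random.default_rng(0)
n = 500
mass_rate = rng.uniform(0.5, 3.0, n)          # kg/s, rate of weighed mass change
level = rng.uniform(0.2, 1.0, n)              # normalized ladle level
X = np.column_stack([mass_rate, level])
flux_true = 6.0 * mass_rate + 0.5 * level     # arbitrary units
y = flux_true + rng.normal(0.0, 0.1, n)       # measurement noise

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
rel_err = np.abs(pred - flux_true[400:]) / flux_true[400:]
print(f"mean relative error on held-out samples: {rel_err.mean() * 100:.2f}%")
```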

  20. A statistical approach to the QCD phase transition --A mystery in the critical temperature

    Ishii, Noriyoshi; Suganuma, Hideo


    We study the QCD phase transition based on the statistical treatment with the bag-model picture of hadrons, and derive a phenomenological relation among the low-lying hadron masses, the hadron sizes and the critical temperature of the QCD phase transition. We apply this phenomenological relation to both full QCD and quenched QCD, and compare these results with the corresponding lattice QCD results. Whereas such a statistical approach works well in full QCD, it results in an extremely large es...

  1. A maximum entropy approach to the study of residue-specific backbone angle distributions in α-synuclein, an intrinsically disordered protein.

    Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad


    α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN–Hα and HN–HN NOEs, values for 3JHNHα, 1JHαCα, 2JCαN, and 1JCαN, as well as chemical shifts of 15N, 13Cα, and 13C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield best agreement to the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although at the individual residue level considerable deviations from the coil library distribution are seen, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20–30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20–40%) than seen in the database. A generally lower population of the αR region (10–20%) is found. Analysis of 1H–1H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein.

  2. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach, part 2

    Hsia, Wei Shen


    A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher order control plant was addressed. The computer program was improved for the maximum entropy principle adopted in Hyland's MEOP method. The program was tested against the testing problem. It resulted in a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed in order to obtain the required projection matrix, the program was successful up to the finding of the Linear Quadratic Gaussian solution but not beyond. Numerical results along with computer programs which employed ORACLS are presented.

  3. A fast approach for detection of erythemato-squamous diseases based on extreme learning machine with maximum relevance minimum redundancy feature selection

    Liu, Tong; Hu, Liang; Ma, Chao; Wang, Zhi-Yan; Chen, Hui-Ling


    In this paper, a novel hybrid method, which integrates an effective filter maximum relevance minimum redundancy (MRMR) and a fast classifier extreme learning machine (ELM), has been introduced for diagnosing erythemato-squamous (ES) diseases. In the proposed method, MRMR is employed as a feature selection tool for dimensionality reduction in order to further improve the diagnostic accuracy of the ELM classifier. The impact of the type of activation functions, the number of hidden neurons and the size of the feature subsets on the performance of ELM have been investigated in detail. The effectiveness of the proposed method has been rigorously evaluated against the ES disease dataset, a benchmark dataset, from UCI machine learning database in terms of classification accuracy. Experimental results have demonstrated that our method has achieved the best classification accuracy of 98.89% and an average accuracy of 98.55% via 10-fold cross-validation technique. The proposed method might serve as a new candidate of powerful methods for diagnosing ES diseases.
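
    The two components can be sketched compactly: a greedy MRMR ranking based on mutual information, followed by an ELM with a random hidden layer and least-squares output weights. The data below are synthetic stand-ins for the ES dataset, and the activation and neuron count are illustrative choices, not the tuned settings of the paper.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

# Greedy MRMR feature ranking followed by an extreme learning machine (ELM):
# random hidden layer, output weights by least squares.  Synthetic data stand
# in for the erythemato-squamous dataset; neuron count and the sigmoid
# activation are illustrative choices.

rng = np.random.default_rng(1)
n, d, n_classes = 300, 12, 3
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=n) > 0).astype(int) \
    + (X[:, 5] > 1.0).astype(int)                 # labels 0..2, driven by features 0, 3, 5

def mrmr(X, y, k):
    """Rank k features by relevance (MI with y) minus mean redundancy (MI with selected)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in remaining:
            redundancy = (np.mean([mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                                   for s in selected]) if selected else 0.0)
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
        remaining.remove(best)
    return selected

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden=40):
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H_tr = 1.0 / (1.0 + np.exp(-(X_tr @ W + b)))   # sigmoid hidden activations
    T = np.eye(n_classes)[y_tr]                    # one-hot targets
    beta = np.linalg.lstsq(H_tr, T, rcond=None)[0] # output weights, least squares
    H_te = 1.0 / (1.0 + np.exp(-(X_te @ W + b)))
    return np.argmax(H_te @ beta, axis=1)

feats = mrmr(X, y, k=4)
split = 200
pred = elm_fit_predict(X[:split, feats], y[:split], X[split:, feats])
print("selected features:", feats, " accuracy:", round(float(np.mean(pred == y[split:])), 3))
```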

  4. A maximum entropy approach to the study of residue-specific backbone angle distributions in α-synuclein, an intrinsically disordered protein

    Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad


    α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN–Hα and HN–HN NOEs, values for 3JHNHα, 1JHαCα, 2JCαN, and 1JCαN, as well as chemical shifts of 15N, 13Cα, and 13C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield best agreement to the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although at the individual residue level considerable deviations from the coil library distribution are seen, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20–30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20–40%) than seen in the database. A generally lower population of the αR region (10–20%) is found. Analysis of 1H–1H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein. PMID:24976112

  5. Systematic approach to determination of maximum achievable capture capacity via leaching and carbonation processes for alkaline steelmaking wastes in a rotating packed bed.

    Pan, Shu-Yuan; Chiang, Pen-Chi; Chen, Yi-Hung; Chen, Chun-Da; Lin, Hsun-Yu; Chang, E-E


    Accelerated carbonation of basic oxygen furnace slag (BOFS) coupled with cold-rolling wastewater (CRW) was performed in a rotating packed bed (RPB) as a promising process for both CO2 fixation and wastewater treatment. The maximum achievable capture capacity (MACC) via leaching and carbonation processes for BOFS in an RPB was systematically determined throughout this study. The leaching behavior of various metal ions from the BOFS into the CRW was investigated by a kinetic model. In addition, quantitative X-ray diffraction (QXRD) using the Rietveld method was carried out to determine the process chemistry of carbonation of BOFS with CRW in an RPB. According to the QXRD results, the major mineral phases reacting with CO2 in BOFS were Ca(OH)2, Ca2(HSiO4)(OH), CaSiO3, and Ca2Fe1.04Al0.986O5. Meanwhile, the carbonation product was identified as calcite according to the observations of SEM, XEDS, and mappings. Furthermore, the MACC of the lab-scale RPB process was determined by balancing the carbonation conversion and energy consumption. In that case, the overall energy consumption, including grinding, pumping, stirring, and rotating processes, was estimated to be 707 kWh/t-CO2. It was thus concluded that CO2 capture by accelerated carbonation of BOFS could be effectively and efficiently performed by co-utilizing it with CRW in an RPB.

  6. New approach of determinations of earthquake moment magnitude using near earthquake source duration and maximum displacement amplitude of high frequency energy radiation

    Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P. [ITB, Faculty of Earth Sciences and Tecnology (Indonesia); BMKG (Indonesia)


    A new approach to determine the magnitude using the displacement amplitude (A), the epicenter distance (Δ), and the duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or a P-wave moment magnitude based on teleseismic seismograms with periods in the range of 10-60 seconds. In this research, a new approach has been developed to determine the displacement amplitude and the duration of high-frequency radiation using near-earthquake records. The duration of the high-frequency radiation is determined using half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process in near earthquakes: the P-wave data mix with other waves (S waves) before the duration ends, so it is difficult to separate them or to determine the end of the P wave. Applying the method to 68 earthquakes recorded by the CISI station, Garut, West Java, the following relationship is obtained: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km, and t in seconds. The moment magnitude from this new approach is quite reliable and faster to process, so it is useful for early warning.
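
    The empirical relation quoted above translates directly into code (base-10 logarithms are assumed, as is conventional for magnitude relations); the A, Δ and t values in the example call are placeholders.

```python
import math

# Empirical moment-magnitude relation quoted above (A in metres, delta in km,
# t in seconds); base-10 logarithms are assumed.  The example values passed
# below are placeholders only.

def moment_magnitude(A_m, delta_km, t_s):
    return (0.78 * math.log10(A_m)
            + 0.83 * math.log10(delta_km)
            + 0.69 * math.log10(t_s)
            + 6.46)

print(round(moment_magnitude(A_m=2.0e-4, delta_km=120.0, t_s=6.0), 2))
```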

  7. Neurobehavioral approach for evaluation of office workers' productivity: The effects of room temperature

    Lan, Li; Lian, Zhiwei; Pan, Li [School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Ye, Qian [Shanghai Research Institute of Building Science, Shanghai 200041 (China)


    Indoor environment quality has a great influence on workers' productivity, and how to assess the effect of the indoor environment on productivity remains a major challenge. A neurobehavioral approach is proposed for the evaluation of office workers' productivity in this paper. The distinguishing characteristic of the neurobehavioral approach is its emphasis on the identification and measurement of behavioral changes, for the influence of the environment on brain functions manifests behaviorally. Therefore, workers' productivity can be comprehensively evaluated by testing neurobehavioral functions. Four neurobehavioral functions, including perception, learning and memory, thinking, and executive functions, were measured with nine representative psychometric tests. The effect of room temperature on the performance of the neurobehavioral tests was investigated in the laboratory. Four temperatures (19 °C, 24 °C, 27 °C, and 32 °C) were investigated, spanning thermal sensations from cold to hot. Signal detection theory was utilized to analyze response bias. It was found that motivated people could maintain high performance for a short time under adverse (hot or cold) environmental conditions. Room temperature affected task performance differentially, depending on the type of task. The proposed neurobehavioral approach can be used to quantitatively and systematically evaluate office workers' productivity. (author)

  8. Multi-model attribution of upper-ocean temperature changes using an isothermal approach.

    Weller, Evan; Min, Seung-Ki; Palmer, Matthew D; Lee, Donghyun; Yim, Bo Young; Yeh, Sang-Wook


    Both air-sea heat exchanges and changes in ocean advection have contributed to observed upper-ocean warming most evident in the late-twentieth century. However, it is predominantly via changes in air-sea heat fluxes that human-induced climate forcings, such as increasing greenhouse gases, and other natural factors such as volcanic aerosols, have influenced global ocean heat content. The present study builds on previous work using two different indicators of upper-ocean temperature changes for the detection of both anthropogenic and natural external climate forcings. Using simulations from phase 5 of the Coupled Model Intercomparison Project, we compare mean temperatures above a fixed isotherm with the more widely adopted approach of using a fixed depth. We present the first multi-model ensemble detection and attribution analysis using the fixed isotherm approach to robustly detect both anthropogenic and natural external influences on upper-ocean temperatures. Although contributions from multidecadal natural variability cannot be fully removed, both the large multi-model ensemble size and properties of the isotherm analysis reduce internal variability of the ocean, resulting in better observation-model comparison of temperature changes since the 1950s. We further show that the high temporal resolution afforded by the isotherm analysis is required to detect natural external influences such as volcanic cooling events in the upper-ocean because the radiative effect of volcanic forcings is short-lived.

  9. Multi-model attribution of upper-ocean temperature changes using an isothermal approach

    Weller, Evan; Min, Seung-Ki; Palmer, Matthew D.; Lee, Donghyun; Yim, Bo Young; Yeh, Sang-Wook


    Both air-sea heat exchanges and changes in ocean advection have contributed to observed upper-ocean warming most evident in the late-twentieth century. However, it is predominantly via changes in air-sea heat fluxes that human-induced climate forcings, such as increasing greenhouse gases, and other natural factors such as volcanic aerosols, have influenced global ocean heat content. The present study builds on previous work using two different indicators of upper-ocean temperature changes for the detection of both anthropogenic and natural external climate forcings. Using simulations from phase 5 of the Coupled Model Intercomparison Project, we compare mean temperatures above a fixed isotherm with the more widely adopted approach of using a fixed depth. We present the first multi-model ensemble detection and attribution analysis using the fixed isotherm approach to robustly detect both anthropogenic and natural external influences on upper-ocean temperatures. Although contributions from multidecadal natural variability cannot be fully removed, both the large multi-model ensemble size and properties of the isotherm analysis reduce internal variability of the ocean, resulting in better observation-model comparison of temperature changes since the 1950s. We further show that the high temporal resolution afforded by the isotherm analysis is required to detect natural external influences such as volcanic cooling events in the upper-ocean because the radiative effect of volcanic forcings is short-lived.

  10. Temperature, pressure, and isotope effects on the structure and properties of liquid water: a lattice approach.

    Hakem, Ilhem F; Boussaid, Abdelhak; Benchouk-Taleb, Hafida; Bockstaller, Michael R


    We present a lattice model to describe the effect of isotopic replacement, temperature, and pressure changes on the formation of hydrogen bonds in liquid water. The approach builds upon a previously established generalized lattice theory for hydrogen bonded liquids [B. A. Veytsman, J. Phys. Chem. 94, 8499 (1990)], accounts for the binding order of 1/2 in water-water association complexes, and introduces the pressure dependence of the degree of hydrogen bonding (that arises due to differences between the molar volumes of bonded and free water) by considering the number of effective binding sites to be a function of pressure. The predictions are validated using experimental data on the temperature and pressure dependence of the static dielectric constant of liquid water. The model is found to correctly reproduce the experimentally observed decrease of the dielectric constant with increasing temperature without any adjustable parameters and by assuming values for the enthalpy and entropy of hydrogen bond formation as they are determined from the respective experiments. The pressure dependence of the dielectric constant of water is quantitatively predicted up to pressures of 2 kbar and exhibits qualitative agreement at higher pressures. Furthermore, the model suggests a temperature-dependent decrease of hydrogen bond formation at high pressures. The sensitive dependence of the structure of water on temperature and pressure that is described by the model rationalizes the different solubilization characteristics that have been observed in aqueous systems upon change of temperature and pressure conditions. The simplicity of the presented lattice model might render the approach attractive for designing optimized processing conditions in water-based solutions or the simulation of more complex multicomponent systems.

  11. Modular High Temperature Gas-Cooled Reactor Safety Basis and Approach

    David Petti; Jim Kinsey; Dave Alberstein


    Various international efforts are underway to assess the safety of advanced nuclear reactor designs. For example, the International Atomic Energy Agency has recently held its first Consultancy Meeting on a new cooperative research program on high temperature gas-cooled reactor (HTGR) safety. Furthermore, the Generation IV International Forum Reactor Safety Working Group has recently developed a methodology, called the Integrated Safety Assessment Methodology, for use in Generation IV advanced reactor technology development, design, and design review. A risk and safety assessment white paper is under development with respect to the Very High Temperature Reactor to pilot the Integrated Safety Assessment Methodology and to demonstrate its validity and feasibility. To support such efforts, this information paper on the modular HTGR safety basis and approach has been prepared. The paper provides a summary level introduction to HTGR history, public safety objectives, inherent and passive safety features, radionuclide release barriers, functional safety approach, and risk-informed safety approach. The information in this paper is intended to further the understanding of the modular HTGR safety approach. The paper gives those involved in the assessment of advanced reactor designs an opportunity to assess an advanced design that has already received extensive review by regulatory authorities and to judge the utility of recently proposed new methods for advanced reactor safety assessment such as the Integrated Safety Assessment Methodology.

  12. Nontargeted LC–MS Metabolomics Approach for Metabolic Profiling of Plasma and Urine from Pigs Fed Branched Chain Amino Acids for Maximum Growth Performance

    Assadi Soumeh, Elham; Hedemann, Mette Skou; Poulsen, Hanne Damgaard


    The metabolic response in plasma and urine of pigs when feeding an optimum level of branched chain amino acids (BCAAs) for best growth performance is unknown. The objective of the current study was to identify the metabolic phenotype associated with the BCAAs intake level that could be linked... to the animal growth performance. Three dose–response studies were carried out to collect blood and urine samples from pigs fed increasing levels of Ile, Val, or Leu followed by a nontargeted LC–MS approach to characterize the metabolic profile of biofluids when dietary BCAAs are optimum for animal growth... Results showed that concentrations of plasma hypoxanthine and tyrosine (Tyr) were higher while concentrations of glycocholic acid, tauroursodeoxycholic acid, and taurocholic acid were lower when the dietary Ile was optimum. Plasma 3-methyl-2-oxovaleric acid and creatine were lower when dietary Leu...

  13. Maximum Autocorrelation Factorial Kriging

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete


    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
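
    A compact sketch of the MAF step on a gridded multivariate image is given below: the factors are generalized eigenvectors of the covariance of spatial differences with respect to the data covariance, ordered so that the first factor has maximum autocorrelation. The data are synthetic and regularly gridded, whereas the paper works with irregularly sampled, kriged geochemical data.

```python
import numpy as np
from scipy.linalg import eigh

# Maximum autocorrelation factor (MAF) sketch on a synthetic gridded image.
# The MAF directions solve the generalized eigenproblem  S_d w = lambda S w,
# where S is the data covariance and S_d the covariance of a unit spatial
# shift difference; a small eigenvalue means high autocorrelation, so factors
# are ordered by ascending eigenvalue.

rng = np.random.default_rng(2)
ny, nx, k = 60, 60, 4
field = rng.normal(size=(ny, nx, k))
for _ in range(10):                                   # crude smoothing gives spatial structure
    field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                    + np.roll(field, 1, 1) + np.roll(field, -1, 1))
data = field + 0.3 * rng.normal(size=(ny, nx, k))     # add uncorrelated noise

X = data.reshape(-1, k)
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)

diff = (data[1:, :, :] - data[:-1, :, :]).reshape(-1, k)   # unit shift in one direction
S_d = np.cov(diff - diff.mean(axis=0), rowvar=False)

eigvals, W = eigh(S_d, S)             # generalized eigenproblem, ascending eigenvalues
maf_scores = Xc @ W                   # first column: factor with maximum autocorrelation
autocorr = 1.0 - 0.5 * eigvals        # shift-one autocorrelation of each factor
print("approximate autocorrelation of the factors:", np.round(autocorr, 3))
```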

  14. Strength of Geopolymer Cement Curing at Ambient Temperature by Non-Oven Curing Approaches: An Overview

    Wattanachai, Pitiwat; Suwan, Teewara


    At the present day, the concept of environmentally friendly construction materials is being intensively studied to reduce the amount of greenhouse gases released. Geopolymer is one of the cementitious binders which can be produced by utilising pozzolanic wastes (e.g. fly ash or furnace slag) and is also receiving much more attention as a low-CO2-emission material. However, to achieve excellent mechanical properties, a heat curing process needs to be applied to geopolymer cement in a temperature range of around 40 to 90°C. To consume less oven-curing energy and to be more convenient in practical work, the curing of geopolymer at ambient temperature (around 20 to 25°C) is therefore widely investigated. In this paper, a core review of factors and approaches for non-oven-cured geopolymer has been summarised. The performance, in terms of strength, of each non-oven curing method is also presented and analysed. The main aim of this review paper is to gather the latest studies on ambient-temperature-cured geopolymer and to broaden the feasibility of non-oven-cured geopolymer development. Also, to extend the directions of research work, some approaches or techniques can be combined or applied to obtain specific properties for in-field applications and embankment stabilization using soil-cement columns.

  15. Role of Temperature in the Growth of Silver Nanoparticles Through a Synergetic Reduction Approach

    Jiang XC


    This study presents the role of reaction temperature in the formation and growth of silver nanoparticles through a synergetic reduction approach using two or three reducing agents simultaneously. By this approach, shape- and size-controlled silver nanoparticles (plates and spheres) can be generated under mild conditions. It was found that the reaction temperature plays a key role in particle growth and shape/size control, especially for silver nanoplates. These nanoplates exhibit an intense surface plasmon resonance in the wavelength range of 700–1,400 nm in the UV–vis spectrum, depending on their shapes and sizes, which makes them useful for optical applications such as optical probes, ionic sensing, and biochemical sensors. A detailed analysis conducted in this study clearly shows that the reaction temperature can greatly influence the reaction rate, and hence the particle characteristics. The findings would be useful for the optimization of experimental parameters for the shape-controlled synthesis of other metallic nanoparticles (e.g., Au, Cu, Pt, and Pd) with desirable functional properties.

  16. Orexinergic neurotransmission in temperature responses to methamphetamine and stress: mathematical modeling as a data assimilation approach.

    Abolhassan Behrouzvaziri

    Orexinergic neurotransmission is involved in mediating temperature responses to methamphetamine (Meth). In experiments in rats, SB-334867 (SB), an antagonist of orexin receptors (OX1R), at a dose of 10 mg/kg decreases late temperature responses (t > 60 min) to an intermediate dose of Meth (5 mg/kg). A higher dose of SB (30 mg/kg) attenuates temperature responses to a low dose (1 mg/kg) of Meth and to stress. In contrast, it significantly exaggerates early responses (t < 60 min) to intermediate and high doses (5 and 10 mg/kg) of Meth. As pretreatment with SB also inhibits the temperature response to the stress of injection, traditional statistical analysis of temperature responses is difficult. We have developed a mathematical model that explains the complexity of temperature responses to Meth as the interplay between excitatory and inhibitory nodes. We have extended the developed model to include the stress of manipulations and the effects of SB. Stress is synergistic with Meth in its action on the excitatory node. Orexin receptors mediate the activation of both excitatory and inhibitory nodes by low doses of Meth, but not of the node activated by high doses (HD). Exaggeration of early responses to high doses of Meth involves disinhibition: a low dose of SB decreases tonic inhibition of HD and lowers the activation threshold, while the higher dose suppresses the inhibitory component. Using a modeling approach to data assimilation appears efficient in separating individual components of a complex response, with a statistical analysis unachievable by traditional data processing methods.

  17. Orexinergic Neurotransmission in Temperature Responses to Methamphetamine and Stress: Mathematical Modeling as a Data Assimilation Approach

    Behrouzvaziri, Abolhassan; Fu, Daniel; Tan, Patrick; Yoo, Yeonjoo; Zaretskaia, Maria V.; Rusyniak, Daniel E.; Molkov, Yaroslav I.; Zaretsky, Dmitry V.


    Experimental Data: Orexinergic neurotransmission is involved in mediating temperature responses to methamphetamine (Meth). In experiments in rats, SB-334867 (SB), an antagonist of orexin receptors (OX1R), at a dose of 10 mg/kg decreases late temperature responses (t > 60 min) to an intermediate dose of Meth (5 mg/kg). A higher dose of SB (30 mg/kg) attenuates temperature responses to a low dose (1 mg/kg) of Meth and to stress. In contrast, it significantly exaggerates early responses (t < 60 min) to intermediate and high doses (5 and 10 mg/kg) of Meth. As pretreatment with SB also inhibits the temperature response to the stress of injection, traditional statistical analysis of temperature responses is difficult. Mathematical Modeling: We have developed a mathematical model that explains the complexity of temperature responses to Meth as the interplay between excitatory and inhibitory nodes. We have extended the developed model to include the stress of manipulations and the effects of SB. Stress is synergistic with Meth in its action on the excitatory node. Orexin receptors mediate the activation of both excitatory and inhibitory nodes by low doses of Meth, but not of the node activated by high doses (HD). Exaggeration of early responses to high doses of Meth involves disinhibition: a low dose of SB decreases tonic inhibition of HD and lowers the activation threshold, while the higher dose suppresses the inhibitory component. Using a modeling approach to data assimilation appears efficient in separating individual components of a complex response, with a statistical analysis unachievable by traditional data processing methods. PMID:25993564

  18. Validation of the modified Becker's split-window approach for retrieving land surface temperature from AVHRR

    Quan, Weijun; Chen, Hongbin; Han, Xiuzhen; Ma, Zhiqiang


    To further verify the modified Becker's split-window approach for retrieving land surface temperature (LST) from long-term Advanced Very High Resolution Radiometer (AVHRR) data, a cross-validation and a radiance-based (R-based) validation are performed and examined in this paper. In the cross-validation, 3481 LST data pairs are extracted from the AVHRR LST product retrieved with the modified Becker's approach and compared with the Moderate Resolution Imaging Spectroradiometer (MODIS) LST product (MYD11A1) for the period 2002-2008, relative to the positions of 548 weather stations in China. The results show that in most cases the AVHRR LST values are higher than the MYD11A1 values. When the AVHRR LSTs are adjusted with a linear regression, the values are close to the MYD11A1, showing a good linear relationship between the two datasets (R² = 0.91). In the R-based validation, a comparison is made between the AVHRR LST retrieved with the modified Becker's approach and the LST inverted from the Moderate Resolution Transmittance Model (MODTRAN) using observed temperature and humidity profiles at four radiosonde stations. The results show that the retrieved AVHRR LST deviates from the MODTRAN-inverted LST by -1.3 (-2.5) K when the total water vapor amount is less (larger) than 20 mm. This provides useful hints for further improvement of the accuracy and consistency of the LST retrieval algorithms.
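
    For orientation, split-window retrievals of this family generally take a Becker & Li-type form; the expression below is a generic illustration only (the actual coefficients of the modified approach validated here are not reproduced):

        \[ T_s = A_0 + P\,\frac{T_4 + T_5}{2} + M\,\frac{T_4 - T_5}{2}, \]

    where T_4 and T_5 are the brightness temperatures of AVHRR channels 4 and 5, and the coefficients P and M depend on the mean channel emissivity \varepsilon = (\varepsilon_4 + \varepsilon_5)/2 and the emissivity difference \Delta\varepsilon = \varepsilon_4 - \varepsilon_5.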

  19. Long-Memory and the Sea Level-Temperature Relationship: A Fractional Cointegration Approach

    Ventosa-Santaulària, Daniel; Heres, David R.; Martínez-Hernández, L. Catalina


    Through thermal expansion of oceans and melting of land-based ice, global warming is very likely contributing to the sea level rise observed during the 20th century. The amount by which further increases in global average temperature could affect sea level is only known with large uncertainties due to the limited capacity of physics-based models to predict sea levels from global surface temperatures. Semi-empirical approaches have been implemented to estimate the statistical relationship between these two variables, providing an alternative measure on which to base assessments of potentially disruptive impacts on coastal communities and ecosystems. However, only a few of these semi-empirical applications have addressed the spurious inference that is likely to be drawn when one nonstationary process is regressed on another. Furthermore, it has been shown that spurious effects are not eliminated by stationary processes when these possess strong long memory. Our results indicate that both global temperature and sea level indeed present the characteristics of long-memory processes. Nevertheless, we find that these variables are fractionally cointegrated when sea-ice extent is incorporated as an instrumental variable for temperature, which in our estimations has a statistically significant positive impact on global sea level. PMID:25426638

  20. A semi-nonlocal numerical approach for modeling of temperature-dependent crack-wave interaction

    Martowicz, Adam; Kijanka, Piotr; Staszewski, Wieslaw J.


    Numerical tools used to simulate complex phenomena for models of complicated shapes suffer from either long computation times or limited accuracy. Hence, new modeling and simulation tools, which could offer reliable results within reasonable time periods, are highly demanded. Among other approaches, nonlocal methods have appeared to fulfill these requirements quite efficiently and have opened new perspectives for accurate simulations based on crude meshes of the model's degrees of freedom. In the paper, preliminary results are shown for simulations of the phenomenon of temperature-dependent crack-wave interaction for elastic wave propagation in a model of an aluminum plate. Semi-nonlocal finite differences are considered to solve the problem of thermoelasticity, based on discretization schemes already proposed by the authors in previously published work. Numerical modeling is used to examine wave propagation primarily in the vicinity of a notch. Both displacement and temperature fields are sought in the investigated case study.

  1. A systematic approach for synthesizing a low-temperature distillation system

    Yiqing Luo; Liang Kong; Xigang Yuan


    In this paper, by combining a stochastic optimization method with a refrigeration shaft work targeting method, an approach for the synthesis of a heat integrated complex distillation system in a low-temperature process is presented. The synthesis problem is formulated as a mixed-integer nonlinear programming (MINLP) problem, which is solved by a simulated annealing algorithm under a random procedure to explore the optimal operating parameters and the distillation sequence structure. The shaft work targeting method is used to evaluate the minimum energy cost of the corresponding separation system during the optimization without any need for a detailed design of the heat exchanger network (HEN) and the refrigeration system (RS). The method presented in the paper can dramatically reduce the scale and complexity of the problem. A case study of ethylene cold-end separation is used to illustrate the application of the approach. Compared with the original industrial scheme, the result is encouraging.
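
    As a rough illustration of the stochastic search described above, the skeleton below shows a generic simulated annealing loop over a candidate design; the objective function and the neighbourhood move are placeholders, not the paper's shaft-work targeting model or its MINLP encoding of the distillation sequence.

        # Generic simulated annealing skeleton of the kind used to search a mixed
        # discrete/continuous design space. `energy_cost` and `neighbour` are
        # placeholders for the real objective and perturbation operators.
        import math
        import random

        def simulated_annealing(x0, energy_cost, neighbour, t0=1.0, t_min=1e-3, alpha=0.95, iters=50):
            x, f = x0, energy_cost(x0)
            best_x, best_f = x, f
            t = t0
            while t > t_min:
                for _ in range(iters):
                    x_new = neighbour(x)                 # perturb sequence structure / parameters
                    f_new = energy_cost(x_new)
                    if f_new < f or random.random() < math.exp(-(f_new - f) / t):
                        x, f = x_new, f_new              # accept downhill, or uphill with Metropolis probability
                        if f < best_f:
                            best_x, best_f = x, f
                t *= alpha                               # geometric cooling schedule
            return best_x, best_f

        # Toy usage: minimise a quadratic in one continuous variable.
        sol, cost = simulated_annealing(
            x0=5.0,
            energy_cost=lambda x: (x - 2.0) ** 2,
            neighbour=lambda x: x + random.uniform(-0.5, 0.5),
        )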

  2. A novel approach to quench detection for high temperature superconducting coils

    Song, W.J. [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China); China Electric Power Research Institute, Beijing (China)]; Fang, X.Y. [Department of Electrical and Computer Engineering, University of Victoria, PO Box 1700, STN CSC, Victoria, BC V8W 2Y2 (Canada)]; Fang, J. [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China)]; Wei, B.; Hou, J.Z. [China Electric Power Research Institute, Beijing (China)]; Liu, L.F. [Guangzhou Metro Design & Research Institute Co., Ltd, Guangdong (China)]; Lu, K.K. [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China)]; Li, Shuo [College of Information Science and Engineering, Northeastern University, Shenyang (China)]


    Highlights: • We propose a novel quench detection method mainly based on phase for HTS coils. • We show the theoretical model and a numerical simulation system in LabVIEW. • Experimental results are shown and analyzed. • A small quench voltage causes an obvious change in phase. • The approach can accurately detect the quench resistance voltage in real time. - Abstract: A novel approach to quench detection for high temperature superconducting (HTS) coils is proposed, which is mainly based on the phase angle between the voltage and current of two coils to detect the quench resistance voltage. The approach is analyzed theoretically and verified experimentally and analytically with MATLAB Simulink and LabVIEW. An analog quench circuit is built in Simulink and a quench alarm system program is written in LabVIEW. A quench detection experiment is further conducted. Sinusoidal AC currents ranging from 19.9 A to 96 A are transported through the HTS coils, whose critical current is 90 A at 77 K. The results of the analog simulation and the experiment are analyzed and show good consistency. It is shown that with the increase of current, the phase undergoes apparent growth, and it reaches 60° and 15° when the current reaches the critical value experimentally and analytically, respectively. It is concluded that the approach proposed in this paper can meet the need for precision and that the quench resistance voltage can be detected in time.

  3. Effects of Temperature on Maximum Metabolic Rate and Metabolic Scope of Juvenile Manchurian Trout, Brachymystax lenok (Pallas)

    徐革锋; 尹家胜; 韩英; 刘洋; 牟振波


    This study examined the effects of water temperature on the metabolic characteristics and aerobic exercise capacity of juvenile Manchurian trout, Brachymystax lenok (Pallas). The resting metabolic rate (RMR), maximum metabolic rate (MMR), metabolic scope (MS) and critical swimming speed (UCrit) of juveniles were measured at different temperatures (4, 8, 12, 16, 20℃). The results showed that both the RMR and the MMR increased significantly with increasing water temperature (P<0.05). Compared with the test group at 4℃, the RMR at 8℃, 12℃, 16℃ and 20℃ increased by 62%, 165%, 390% and 411%, respectively, and the MMR increased by 3%, 34%, 111% and 115%, respectively. However, the MS decreased with increasing water temperature, with the highest MS occurring at 4℃. UCrit was significantly affected by water temperature (P<0.05), but the variations of UCrit did not follow a clear pattern with temperature. In the aerobic exercise test, the MMR at each temperature level occurred at a swimming speed of 70% UCrit, probably due to the onset of anaerobic metabolism, which caused excess creatine in the body and consequently hindered aerobic metabolism.

  4. Maximum likely scale estimation

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo


    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  5. Comparison of different statistical modelling approaches for deriving spatial air temperature patterns in an urban environment

    Straub, Annette; Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Geruschkat, Uta; Jacobeit, Jucundus; Kühlbach, Benjamin; Kusch, Thomas; Richter, Katja; Schneider, Alexandra; Umminger, Robin; Wolf, Kathrin


    Spatial variations of air temperature of considerable magnitude frequently occur within urban areas. They correspond to varying land use/land cover characteristics and vary with season, time of day and synoptic conditions. These temperature differences have an impact on human health and comfort, directly by inducing thermal stress as well as indirectly by affecting air quality. Therefore, knowledge of the spatial patterns of air temperature in cities and the factors causing them is of great importance, e.g. for urban planners. A multitude of studies have shown statistical modelling to be a suitable tool for generating spatial air temperature patterns. This contribution presents a comparison of different statistical modelling approaches for deriving spatial air temperature patterns in the urban environment of Augsburg, Southern Germany. In Augsburg there exists a measurement network for air temperature and humidity currently comprising 48 stations in the city and its rural surroundings (jointly operated by the Institute of Epidemiology II, Helmholtz Zentrum München, German Research Center for Environmental Health, and the Institute of Geography, University of Augsburg). Using different datasets for land surface characteristics (Open Street Map, Urban Atlas), area percentages of different types of land cover were calculated for quadratic buffer zones of different sizes (25, 50, 100, 250, 500 m) around the stations as well as for source regions of advective air flow, and were used as predictors together with additional variables such as sky view factor, ground level and distance from the city centre. Multiple Linear Regression and Random Forest models for different situations, taking into account season, time of day and weather condition, were applied utilizing selected subsets of these predictors in order to model spatial distributions of mean hourly and daily air temperature deviations from a rural reference station. Furthermore, the different model setups were...
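
    A minimal sketch of the two model families named above, applied to a synthetic stand-in for the station data (all predictor names, the 48-row data frame and the coefficients are hypothetical, not the Augsburg network data):

        # Multiple linear regression vs. random forest for air-temperature deviations
        # from a rural reference, on synthetic placeholder data.
        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        predictors = ["built_frac_100m", "green_frac_100m", "sky_view_factor",
                      "elevation", "dist_city_centre"]
        rng = np.random.default_rng(1)
        df = pd.DataFrame(rng.uniform(size=(48, len(predictors))), columns=predictors)
        df["dT_vs_rural_ref"] = (2.0 * df["built_frac_100m"] - 1.5 * df["green_frac_100m"]
                                 + rng.normal(0.0, 0.2, len(df)))

        X, y = df[predictors], df["dT_vs_rural_ref"]
        for model in (LinearRegression(),
                      RandomForestRegressor(n_estimators=500, random_state=0)):
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(type(model).__name__, round(float(r2), 2))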

  6. Bivariate ensemble model output statistics approach for joint forecasting of wind speed and temperature

    Baran, Sándor; Möller, Annette


    Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, so they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models that incorporate dependencies between weather quantities, such as, for example, a bivariate distribution for wind vectors, or even a more general setting allowing any types of weather variables to be combined. In line with a recently proposed approach to model temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to the performance of the bivariate BMA model and a multivariate Gaussian copula approach that post-processes the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
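
    To fix ideas, the sketch below fits a plain univariate Gaussian EMOS model (mean a + b·ensemble mean, variance c + d·ensemble variance) to synthetic temperature forecasts by maximum likelihood; the paper's model generalises this to a bivariate truncated normal for temperature and wind speed jointly, and the data and starting values here are illustrative only.

        # Univariate Gaussian EMOS fitted by maximum likelihood on synthetic data.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n, m = 400, 8                                          # forecast cases, ensemble members
        truth = rng.normal(15, 5, n)
        ens = truth[:, None] + rng.normal(1.0, 2.0, (n, m))    # biased, dispersive ensemble
        ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1, ddof=1)

        def neg_log_lik(theta):
            a, b, c, d = theta
            mu = a + b * ens_mean
            sigma = np.sqrt(np.clip(c + d * ens_var, 1e-6, None))  # keep variance positive
            return -norm.logpdf(truth, mu, sigma).sum()

        res = minimize(neg_log_lik, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
        a, b, c, d = res.x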

  7. Computational Intelligence Approach for Estimating Superconducting Transition Temperature of Disordered MgB2 Superconductors Using Room Temperature Resistivity

    Taoreed O. Owolabi


    Doping and fabrication conditions bring about disorder in MgB2 superconductors and further influence their room temperature resistivity as well as their superconducting transition temperature (TC). The existence of a model that directly estimates the TC of any doped MgB2 superconductor from the room temperature resistivity would have immense significance, since room temperature resistivity is easily measured using a conventional resistivity measuring instrument, whereas the experimental measurement of TC consumes valuable resources and is confined to the low temperature regime. This work develops a model, the superconducting transition temperature estimator (STTE), that directly estimates the TC of disordered MgB2 superconductors using room temperature resistivity as input to the model. STTE was developed by training and testing support vector regression (SVR) with ten experimental values of room temperature resistivity and their corresponding TC, using the best performance parameters obtained through a test-set cross-validation optimization technique. The developed STTE was used to estimate the TC of different disordered MgB2 superconductors and the obtained results show excellent agreement with the reported experimental data. STTE can therefore be incorporated into resistivity measuring instruments for quick and direct estimation of the TC of disordered MgB2 superconductors with a high degree of accuracy.
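
    The modelling step can be sketched in a few lines with scikit-learn's SVR; the ten (resistivity, TC) pairs below are synthetic placeholders standing in for the experimental values used in the paper, and the hyperparameter grid is illustrative.

        # SVR mapping room-temperature resistivity to Tc, with cross-validated
        # hyperparameter selection (cf. the paper's test-set CV optimization).
        import numpy as np
        from sklearn.model_selection import GridSearchCV, LeaveOneOut
        from sklearn.svm import SVR

        rho_300K = np.array([[5.0], [8.0], [12.0], [15.0], [20.0],
                             [28.0], [35.0], [50.0], [70.0], [90.0]])   # micro-ohm cm (placeholder)
        Tc = np.array([39.0, 38.6, 38.1, 37.7, 37.0, 36.1, 35.3, 33.8, 31.9, 30.2])  # K (placeholder)

        grid = GridSearchCV(SVR(kernel="rbf"),
                            {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1],
                             "epsilon": [0.1, 0.5]},
                            cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
        grid.fit(rho_300K, Tc)
        print(grid.best_params_, grid.predict([[25.0]]))   # Tc estimate for a new sample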

  8. A new approach to increase the Curie temperature of Fe-Mo double perovskites

    Rubi, D. [Institut de Ciencia de Materials de Barcelona, Campus UAB, E-08193, Bellaterra (Spain)]; Frontera, C. [Institut de Ciencia de Materials de Barcelona, Campus UAB, E-08193, Bellaterra (Spain)]; Roig, A. [Institut de Ciencia de Materials de Barcelona, Campus UAB, E-08193, Bellaterra (Spain)]; Nogues, J. [Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Catalunya (Spain); Institut Catala de Recerca i Estudis Avancats (ICREA), 08193 Bellaterra, Catalunya (Spain)]; Munoz, J.S. [Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Catalunya (Spain)]; Fontcuberta, J. [Institut de Ciencia de Materials de Barcelona, Campus UAB, E-08193, Bellaterra (Spain)]


    Sr2FeMoO6 and related double perovskites are nowadays intensely investigated due to their potential in the field of spintronics. It has been previously shown that the Curie temperature (TC) of double perovskites can be increased by injecting carriers into the conduction band. We report here on an alternative approach to reinforce the magnetic interaction, and thus raise TC. It can be suspected that the introduction of an Fe excess in the Fe-Mo sub-lattice, which would lead to the appearance of nearest-neighbour Fe-O-Fe antiferromagnetic spin coupling, could reinforce the next-nearest-neighbour Fe-O-Fe-O-Fe ferromagnetic ordering and thus raise the Curie temperature. The plausibility of this mechanism was checked, in the first place, by means of Monte Carlo simulations. Afterwards, the Nd2xCa2-2xFe1+xMo1-xO6 series was prepared and fully characterized, and it was found that the Curie temperature rises by as much as ΔTC ≈ 75 K when the Fe content is increased. We argue that this is a genuine magnetic exchange effect, related neither to steric distortions nor to band filling.

  9. A statistical approach to the QCD phase transition --A mystery in the critical temperature

    Ishii, N; Ishii, Noriyoshi; Suganuma, Hideo


    We study the QCD phase transition based on a statistical treatment with the bag-model picture of hadrons, and derive a phenomenological relation among the low-lying hadron masses, the hadron sizes and the critical temperature of the QCD phase transition. We apply this phenomenological relation to both full QCD and quenched QCD, and compare these results with the corresponding lattice QCD results. Whereas such a statistical approach works well in full QCD, it results in an extremely large estimate of the critical temperature in quenched QCD, which indicates a serious problem in the understanding of the QCD phase transition. This large discrepancy traces back to the fact that a sufficient number of glueballs are not yet thermally excited at the critical temperature T_c \simeq 280 MeV in quenched QCD due to the extremely small statistical factor exp(-m_G/T_c) \simeq 0.00207. This fact itself has a quite general nature independent of the particular choice of the effective model framework. We thus arrive at a myste...
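
    As a quick check of the quoted statistical factor, take the lightest (scalar) glueball mass from quenched lattice QCD to be m_G ≈ 1.73 GeV (an assumed value; the abstract does not state the mass it uses). Then

        \[ \exp\!\left(-\frac{m_G}{T_c}\right) \approx \exp\!\left(-\frac{1730\ \mathrm{MeV}}{280\ \mathrm{MeV}}\right) = e^{-6.18} \approx 2.1 \times 10^{-3}, \]

    which is consistent with the statistical factor \simeq 0.00207 quoted above.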

  10. Generalised maximum entropy and heterogeneous technologies

    Oude Lansink, A.G.J.M.


    Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.

  11. Predicting critical temperatures of iron(II) spin crossover materials: density functional theory plus U approach.

    Zhang, Yachao


    A first-principles study of the critical temperatures (T(c)) of spin crossover (SCO) materials requires an accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed the density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treating the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T(c) of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE(HL) and the molecular vibrational frequencies in both spin states, then obtain the temperature-dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T(c) by exploiting the ΔH/T - T and ΔS - T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bondings involving the ligand atoms and by the specific crystal packing style. We find that this effect is largely responsible for the difference in T(c) of the two phases. This study shows the applicability of the DFT+U approach for predicting the T(c) of SCO materials, and provides a clear insight into the subtle influence of crystal packing effects on SCO behavior.
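
    The last step of the workflow can be illustrated with a small root-finding sketch: once ΔH(T) and ΔS(T) are in hand, T(c) is the temperature at which the high-spin/low-spin free-energy difference vanishes. The ΔH and ΔS functions below are crude placeholders (only the electronic entropy term R ln 5 for iron(II) is a standard value), not the paper's computed quantities.

        # Find Tc from Delta-G(T) = Delta-H(T) - T * Delta-S(T) = 0.
        import numpy as np
        from scipy.optimize import brentq

        dE_HL = 6.0e3        # adiabatic HS-LS electronic splitting, J/mol (placeholder)
        dS_el = 13.4         # electronic entropy change, R*ln(5) for iron(II), J/(mol K)

        def dH(T):           # placeholder: electronic term plus a mild vibrational contribution
            return dE_HL + 2.0 * T

        def dS(T):           # placeholder: electronic plus vibrational entropy difference
            return dS_el + 8.0 * np.log(1.0 + T / 300.0)

        def dG(T):
            return dH(T) - T * dS(T)

        Tc = brentq(dG, 50.0, 800.0)    # requires dG to change sign on the bracket
        print(f"T_1/2 ≈ {Tc:.0f} K")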


    王跃; 翦知湣; 赵平


    teleconnections, especially under glacial conditions (Last Glacial Maximum, LGM). However, there are still large uncertainties in the reconstruction of LGM tropical SST, an important boundary condition for paleoclimate modeling. The new global LGM SST reconstruction from MARGO (Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface project) provides us a chance to validate the sensitivity of tropical rainfall to SST variations under different climate backgrounds. Moreover, the SST output from the LGM simulations of the Atmosphere-Ocean (AO) coupled model CCSM 3 can be used to assess the reliability of the MARGO SST reconstruction and the two kinds of relationship derived from modern observations. We used CAM 3, the fifth-generation atmospheric general circulation model of the National Center for Atmospheric Research, to study the sensitivity of precipitation in tropical convergence zone (ITCZ)/summer monsoon regions to local SST anomalies during the LGM. At the T42 resolution of CAM 3 (2.8° × 2.8° horizontally and 26 levels vertically), we first designed a modern control experiment prescribed with present boundary conditions (greenhouse gases (GHG), orbital parameters, ice sheet, sea level, climatological SST and sea ice), then an LGM control case (CLIMAPLGM) with glacial conditions (GHG and orbital parameters of 21 ka B.P., ICE-5G ice sheet, -120 m sea level, CLIMAP SST and sea ice). The other two LGM sensitivity cases were conducted by replacing the SST and sea ice in CLIMAPLGM (the MARGOLGM case with tropical SST from the MARGO reconstruction, and the b30.104wLGM case with global SST and sea ice from CCSM 3's LGM simulation). By comparing the SST/precipitation differences (CLIMAPLGM minus the modern case, and the LGM sensitivity cases minus CLIMAPLGM), we suggested that: (1) under glacial conditions, local SST anomalies could largely control the tropical rainfall responses; when SST anomalies extended to the whole tropics, the drastic ITCZ/summer monsoon...

  13. Understanding uncertainty in temperature effects on vector-borne disease: a Bayesian approach

    Johnson, Leah R.; Ben-Horin, Tal; Lafferty, Kevin D.; McNally, Amy; Mordecai, Erin A.; Paaijmans, Krijn P.; Pawar, Samraat; Ryan, Sadie J.


    Extrinsic environmental factors influence the distribution and population dynamics of many organisms, including insects that are of concern for human health and agriculture. This is particularly true for vector-borne infectious diseases like malaria, which is a major source of morbidity and mortality in humans. Understanding the mechanistic links between environment and population processes for these diseases is key to predicting the consequences of climate change on transmission and for developing effective interventions. An important measure of the intensity of disease transmission is the reproductive number R0. However, understanding the mechanisms linking R0 and temperature, an environmental factor driving disease risk, can be challenging because the data available for parameterization are often poor. To address this, we show how a Bayesian approach can help identify critical uncertainties in components of R0 and how this uncertainty is propagated into the estimate of R0. Most notably, we find that different parameters dominate the uncertainty at different temperature regimes: bite rate from 15°C to 25°C; fecundity across all temperatures, but especially ~25–32°C; mortality from 20°C to 30°C; parasite development rate at ~15–16°C and again at ~33–35°C. Focusing empirical studies on these parameters and corresponding temperature ranges would be the most efficient way to improve estimates of R0. While we focus on malaria, our methods apply to improving process-based models more generally, including epidemiological, physiological niche, and species distribution models.
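
    The propagation step described above can be sketched by Monte Carlo: draw the temperature-dependent traits entering R0 from assumed posterior-like distributions and push each draw through a schematic R0 formula to get an uncertainty envelope over temperature. The Briere curve, the trait list and all distribution parameters below are illustrative stand-ins, not the paper's fitted thermal responses.

        # Monte Carlo propagation of trait uncertainty into a schematic R0(T).
        import numpy as np

        T = np.linspace(15, 35, 81)          # temperature grid, deg C
        rng = np.random.default_rng(42)
        n_draws = 2000

        def briere(T, c, T0, Tm):
            """Briere thermal response, zero outside (T0, Tm)."""
            out = c * T * (T - T0) * np.sqrt(np.clip(Tm - T, 0, None))
            return np.where((T > T0) & (T < Tm), out, 0.0)

        R0_draws = np.empty((n_draws, T.size))
        for i in range(n_draws):
            a  = briere(T, rng.normal(2e-4, 2e-5), 13.0, 40.0)                      # bite rate
            mu = np.clip(rng.normal(0.12, 0.02) - 4e-3 * (T - 25), 1e-3, None)      # mosquito mortality
            b  = briere(T, rng.normal(8e-4, 1e-4), 12.0, 36.0)                      # transmission-related trait
            R0_draws[i] = a**2 * b / mu                                             # schematic R0 ~ a^2 b / mu

        lo, med, hi = np.percentile(R0_draws, [2.5, 50, 97.5], axis=0)  # uncertainty envelope over T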

  14. Maximum entropy approach to fuzzy control

    Ramer, Arthur; Kreinovich, Vladik YA.


    For the same expert knowledge, if one uses different AND (∧) and OR (∨) operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to low-quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized, and it is shown that it leads to the choice of min(a + b, 1) for ∨ and min(a, b) for ∧. This choice was tried on a NASA Shuttle simulator; it leads to a maximally stable control.

  15. On a closed form approach to the fractional neutron point kinetics equation with temperature feedback

    Schramm, Marcelo; Bodmann, Bardo E.J.; Vilhena, Marco T.M.B. [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Departamento de Engenharia Mecanica]; Petersen, Claudio Z. [Universidade Federal de Pelotas (UFPel), RS (Brazil). Departamento de Matematica]; Alvim, Antonio C.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa em Engenharia]


    Following the quest to find analytical solutions, we extend the methodology applied successfully to the time-fractional neutron point kinetics (FNPK) equations by adding the effects of temperature. The FNPK equations with temperature feedback correspond to a nonlinear and “stiff” system for the neutron density and the concentrations of delayed neutron precursors. These variables determine the behavior of the power of a nuclear reactor with time and are influenced by the position of the control rods, for example. The solutions of the kinetics equations provide time information about the dynamics in a nuclear reactor in operation and are useful, for example, for understanding the power fluctuations with time that occur during startup or shutdown of the reactor due to adjustments of the control rods. The inclusion of temperature feedback in the model introduces an estimate of the transient behavior of the power and other variables, which are strongly coupled. Normally, a single value of reactivity is used across the energy spectrum. Especially in the case of a power change, the neutron energy spectrum changes, as do physical parameters such as the average cross sections. However, even knowing the importance of temperature effects on the control of the reactor power, the character of the set of nonlinear equations governing this system makes it difficult to obtain a purely analytical solution. Studies have been published in this sense using numerical approaches. Here the idea is to consider temperature effects to make the model more realistic and thus solve it in a semi-analytical way. Therefore, the main objective of this paper is to obtain an analytical representation of the fractional neutron point kinetics equations with temperature feedback, without having to resort to the approximations inherent in numerical methods. To this end, we will use the decomposition method, which has been successfully used by the authors to solve neutron point kinetics problems. The results obtained will...

  16. A novel approach to quench detection for high temperature superconducting coils

    Song, W. J.; Fang, X. Y.; Fang, J.; Wei, B.; Hou, J. Z.; Liu, L. F.; Lu, K. K.; Li, Shuo


    A novel approach to quench detection for high temperature superconducting (HTS) coils is proposed, which is mainly based on the phase angle between the voltage and current of two coils to detect the quench resistance voltage. The approach is analyzed theoretically and verified experimentally and analytically with MATLAB Simulink and LabVIEW. An analog quench circuit is built in Simulink and a quench alarm system program is written in LabVIEW. A quench detection experiment is further conducted. Sinusoidal AC currents ranging from 19.9 A to 96 A are transported through the HTS coils, whose critical current is 90 A at 77 K. The results of the analog simulation and the experiment are analyzed and show good consistency. It is shown that with the increase of current, the phase undergoes apparent growth, and it reaches 60° and 15° when the current reaches the critical value experimentally and analytically, respectively. It is concluded that the approach proposed in this paper can meet the need for precision and that the quench resistance voltage can be detected in time.
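
    The quantity being monitored can be illustrated with a short sketch that estimates the phase angle between two sinusoidal signals (coil voltage and current) from the FFT bin at the supply frequency; the signal parameters and the 25° offset standing in for a developing quench resistance are illustrative, not values from the experiment.

        # Estimate the voltage-current phase angle from the FFT bin at the supply frequency.
        import numpy as np

        fs, f0, n = 10_000, 50.0, 2_000          # sampling rate (Hz), supply frequency (Hz), samples
        t = np.arange(n) / fs
        current = 80.0 * np.sin(2 * np.pi * f0 * t)
        voltage = 1.5 * np.sin(2 * np.pi * f0 * t + np.deg2rad(25.0))   # 25 deg lead (illustrative quench resistance)

        def phase_at(signal, fs, f0):
            spec = np.fft.rfft(signal * np.hanning(len(signal)))
            freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
            return np.angle(spec[np.argmin(np.abs(freqs - f0))])

        phase_deg = np.degrees(phase_at(voltage, fs, f0) - phase_at(current, fs, f0))
        print(f"estimated phase difference: {phase_deg:.1f} deg")   # grows as a quench develops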

  17. Maximum information photoelectron metrology

    Hockett, P; Wollenhaupt, M; Baumert, T


    Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

  18. The Testability of Maximum Magnitude

    Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.


    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
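
    The extreme-value construction mentioned above can be sketched under simple, purely illustrative assumptions: earthquakes above a completeness magnitude arrive as a Poisson process and magnitudes follow a doubly truncated Gutenberg-Richter distribution with assumed upper bound M, so the maximum magnitude in a future interval of length T has CDF exp(-λT(1 - G(m))). All parameter values below are placeholders, not estimates for any real fault or zone.

        # Distribution of the maximum magnitude in a future interval under
        # Poisson occurrence and a doubly truncated Gutenberg-Richter law.
        import numpy as np

        b, m_min, M = 1.0, 4.0, 8.5          # GR b-value, completeness magnitude, assumed upper bound
        lam, T_years = 5.0, 50.0             # events/yr above m_min, forecast horizon

        def G(m):
            """Doubly truncated Gutenberg-Richter CDF on [m_min, M]."""
            beta = b * np.log(10.0)
            return (1 - np.exp(-beta * (m - m_min))) / (1 - np.exp(-beta * (M - m_min)))

        m = np.linspace(m_min, M, 200)
        P_max_below = np.exp(-lam * T_years * (1.0 - G(m)))   # CDF of the future maximum magnitude
        # e.g. probability that no event exceeds magnitude 7 in the next 50 years:
        print(float(np.exp(-lam * T_years * (1.0 - G(7.0)))))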

  19. Finite-Temperature Fidelity-Metric Approach to the Lipkin-Meshkov-Glick Model

    Scherer, D D; Kästner, M


    The fidelity metric has recently been proposed as a useful and elegant approach to identify and characterize both quantum and classical phase transitions. We study this metric on the manifold of thermal states for the Lipkin-Meshkov-Glick (LMG) model. For the isotropic LMG model, we find that the metric reduces to a Fisher-Rao metric, reflecting an underlying classical probability distribution. Furthermore, this metric can be expressed in terms of derivatives of the free energy, indicating a relation to Ruppeiner geometry. This allows us to obtain exact expressions for the (suitably rescaled) metric in the thermodynamic limit. The phase transition of the isotropic LMG model is signalled by a degeneracy of this (improper) metric in the paramagnetic phase. Due to the integrability of the isotropic LMG model, ground state level crossings occur, leading to an ill-defined fidelity metric at zero temperature.

  20. A Density Functional Approach to Para-hydrogen at Zero Temperature

    Ancilotto, Francesco; Barranco, Manuel; Navarro, Jesús; Pi, Marti


    We have developed a density functional (DF) built so as to reproduce either the metastable liquid or the solid equation of state of bulk para-hydrogen, as derived from quantum Monte Carlo zero temperature calculations. As an application, we have used it to study the structure and energetics of small para-hydrogen clusters made of up to N=40 molecules. We compare our results for liquid clusters with diffusion Monte Carlo (DMC) calculations and find a fair agreement between them. In particular, the transition found within DMC between hollow-core structures for small N values and center-filled structures at higher N values is reproduced. The present DF approach yields results for (pH_2)_N clusters indicating that for small N values a liquid-like character of the clusters prevails, while solid-like clusters are instead energetically favored for N ≥ 15.

  1. Temperature-sensitive PSII: a novel approach for sustained photosynthetic hydrogen production.

    Bayro-Kaiser, Vinzenz; Nelson, Nathan


    The need for energy and the associated burden are ever growing. It is crucial to develop new technologies for generating clean and efficient energy for society to avoid upcoming energetic and environmental crises. Sunlight is the most abundant source of energy on the planet. Consequently, it has captured our interest. Certain microalgae possess the ability to capture solar energy and transfer it to the energy carrier, H2. H2 is a valuable fuel, because its combustion produces only one by-product: water. However, the establishment of an efficient biophotolytic H2 production system is hindered by three main obstacles: (1) the hydrogen-evolving enzyme, [FeFe]-hydrogenase, is highly sensitive to oxygen; (2) energy conversion efficiencies are not economically viable; and (3) hydrogen-producing organisms are sensitive to stressful conditions in large-scale production systems. This study aimed to circumvent the oxygen sensitivity of this process with a cyclic hydrogen production system. This approach required a mutant that responded to high temperatures by reducing oxygen evolution. To that end, we randomly mutagenized the green microalgae, Chlamydomonas reinhardtii, to generate mutants that exhibited temperature-sensitive photoautotrophic growth. The selected mutants were further characterized by their ability to evolve oxygen and hydrogen at 25 and 37 °C. We identified four candidate mutants for this project. We characterized these mutants with PSII fluorescence, P700 absorbance, and immunoblotting analyses. Finally, we demonstrated that these mutants could function in a prototype hydrogen-producing bioreactor. These mutant microalgae represent a novel approach for sustained hydrogen production.

  2. Approaches to experimental validation of high-temperature gas-cooled reactor components

    Belov, S.E.; Borovkov, M.N.; Golovko, V.F.; Dmitrieva, I.V.; Drumov, I.V.; Znamensky, D.S.; Kodochigov, N.G. [Joint Stock Company 'Afrikantov OKB Mechanical Engineering', Burnakovsky Proezd, 15, Nizhny Novgorod 603074 (Russian Federation)]; Baxi, C.B.; Shenoy, A.; Telengator, A.; Razvi, J. [General Atomics, 3550 General Atomics Court, CA (United States)]


    Highlights: • Computational and experimental investigations of thermal and hydrodynamic characteristics of the equipment. • Vibroacoustic investigations. • Studies of the electromagnetic suspension system on GT-MHR turbomachine rotor models. • Experimental investigations of the catcher bearing design. - Abstract: The special feature of high-temperature gas-cooled reactors (HTGRs) is the stressed operating conditions of the equipment due to the high temperature of the primary circuit helium, up to 950 °C, as well as acoustic and hydrodynamic loads upon the gas path elements. Therefore, great significance is given to reproduction of real operating conditions in tests. Experimental investigation of full-size nuclear power plant (NPP) primary circuit components is not practically feasible, because costly test facilities would have to be developed for powers of up to hundreds of megawatts. Under such conditions, the only possible way to validate designs under development is representative testing of smaller-scale models and fragmentary models. At the same time, in order to take the effect of various physical factors into account in a validated way, it is necessary to ensure reproduction of both individual processes and integrated tests incorporating the needed integrated investigations. Presented are approaches to experimental validation of thermohydraulic and vibroacoustic characteristics for main equipment components and primary circuit path elements under standard loading conditions, which take account of their operation in the HTGR. Within the framework of the modular helium reactor project, which includes a turbomachine in the primary circuit, a new and difficult problem is the creation of a multiple-bearing flexible vertical rotor. Presented are approaches to analytical and experimental validation of the rotor electromagnetic bearings, catcher bearings, flexible rotor...

  3. Hybrid Vibration Control under Broadband Excitation and Variable Temperature Using Viscoelastic Neutralizer and Adaptive Feedforward Approach

    João C. O. Marra


    Vibratory phenomena have always surrounded human life. The need for more knowledge of and command over such phenomena increases more and more, especially in modern society, where human-machine integration becomes closer day after day. In that context, this work deals with the development and practical implementation of a hybrid (passive-active/adaptive) vibration control system over a metallic beam excited by a broadband signal and under variable temperature, between 5 and 35°C. Since temperature variations directly and considerably affect the performance of the passive control system, composed of a viscoelastic dynamic vibration neutralizer (also called a viscoelastic dynamic vibration absorber), the associative strategy of using an active-adaptive vibration control system (based on a feedforward approach with the use of the FXLMS algorithm) working together with the passive one has shown to be a good option to compensate for the neutralizer's loss of performance and generally maintain the extended overall level of vibration control. As an additional gain, the association of both vibration control systems (passive and active-adaptive) has improved the attenuation of vibration levels. Some key steps matured over years of research on this experimental setup are presented in this paper.
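
    A minimal FXLMS (filtered-x LMS) sketch of the adaptive feedforward part of such a controller is given below; the primary and secondary paths are toy FIR filters and the step size is illustrative, whereas on a real rig the secondary-path model would come from system identification of the actuator-sensor path.

        # Filtered-x LMS adaptive feedforward control on synthetic broadband excitation.
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_taps, mu = 20_000, 32, 5e-4
        x = rng.normal(size=n_samples)                     # broadband reference (excitation) signal
        primary = rng.normal(size=64) * 0.1                # toy primary path (beam response)
        secondary = np.array([0.0, 0.6, 0.3, 0.1])         # toy secondary path (actuator-to-sensor)
        d = np.convolve(x, primary)[:n_samples]            # disturbance at the error sensor

        w = np.zeros(n_taps)                               # adaptive FIR controller
        x_buf = np.zeros(n_taps)                           # reference history
        xf_buf = np.zeros(n_taps)                          # filtered-x history
        sec_buf = np.zeros(len(secondary))
        y_buf = np.zeros(len(secondary))
        e = np.zeros(n_samples)

        for n in range(n_samples):
            x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
            y = w @ x_buf                                  # control signal
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            e[n] = d[n] - secondary @ y_buf                # residual vibration at the sensor
            sec_buf = np.roll(sec_buf, 1); sec_buf[0] = x[n]
            xf = secondary @ sec_buf                       # reference filtered by the secondary-path model
            xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
            w += mu * e[n] * xf_buf                        # FXLMS weight update

        print("residual power ratio:", e[-2000:].var() / d[-2000:].var())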

  4. Low-Temperature Wet Conformal Nickel Silicide Deposition for Transistor Technology through an Organometallic Approach.

    Lin, Tsung-Han; Margossian, Tigran; De Marchi, Michele; Thammasack, Maxime; Zemlyanov, Dmitry; Kumar, Sudhir; Jagielski, Jakub; Zheng, Li-Qing; Shih, Chih-Jen; Zenobi, Renato; De Micheli, Giovanni; Baudouin, David; Gaillardon, Pierre-Emmanuel; Copéret, Christophe


    The race for performance of integrated circuits is nowadays facing a downscaling limit. To overcome this nanoscale limit, modern transistors with complex geometries have flourished, allowing higher performance and energy efficiency. Accompanying this breakthrough, challenges toward high-performance devices have emerged at each significant step, such as the inhomogeneous coverage and thermally induced short-circuit issues of metal silicide formation. In this respect, we developed a two-step organometallic approach for nickel silicide formation at near-ambient temperature. Transmission electron and atomic force microscopy show the formation of a homogeneous and conformal layer of NiSix on a pristine silicon surface. Post-treatment decreases the carbon content to a level similar to that found for the original wafer (∼6%). X-ray photoelectron spectroscopy also reveals an increasing ratio of Si content in the layer after annealing, which is shown to be NiSi2 according to an X-ray absorption spectroscopy investigation on a Si nanoparticle model. I-V characteristic fitting reveals that this NiSi2 layer exhibits a competitive Schottky barrier height of 0.41 eV and a series resistance of 8.5 Ω, thus opening an alternative low-temperature route for metal silicide formation on advanced devices.

  5. Thermodynamic properties of rhodium at high temperature and pressure by using mean field potential approach

    Kumar, Priyank; Bhatt, Nisarg K.; Vyas, Pulastya R.; Gohel, Vinod B.


    The thermophysical properties of rhodium are studied up to the melting temperature by incorporating anharmonic effects due to lattice ions and thermally excited electrons. In order to account for anharmonic effects due to lattice vibrations, we have employed the mean field potential (MFP) approach and, for thermally excited electrons, the Mermin functional. The local form of the pseudopotential with only one effective adjustable parameter rc is used to construct the MFP and hence the vibrational free energy due to ions, Fion. We have studied the equation of state at 300 K and further, to assess the applicability of the present conjunction scheme, we have also estimated the shock Hugoniot and the temperature along the principal Hugoniot. We have carried out the study of the temperature variation of several thermophysical properties like thermal expansion (βP), enthalpy (EH), specific heats at constant pressure and volume (CP and CV), specific heats due to lattice ions and thermally excited electrons, isothermal and adiabatic bulk moduli (BT and Bs) and the thermodynamic Gruneisen parameter (γth) in order to examine the inclusion of anharmonic effects in the present study. The computed results are compared with available experimental results measured using different methods and with previously obtained theoretical results based on different theoretical philosophies. Our computed results are in good agreement with experimental findings and for some physical quantities better than or comparable with other theoretical results. We conclude that the local form of the pseudopotential used accounts for s-p-d hybridization properly and is found to be transferable at extreme environments without changing the values of the parameter. Thus, even the behavior of transition metals having complexity in their electronic structure can be well understood with a local pseudopotential without any modification of the potential at extreme environments. Looking to the success of the present scheme (MFP + pseudopotential) we would like to extend it further for the...

  6. Maximum Likelihood Associative Memories

    Gripon, Vincent; Rabbat, Michael


    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  7. Maximum likely scale estimation

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo


    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.

  8. The new single-channel approaches for retrieving land surface temperature and the preliminary results

    Chen, Feng; Yang, Song; Liu, Lin; Zhao, Xiaofeng


    Two satellites named HJ-1A and HJ-1B were launched on 6 September 2008, intended for environment and disaster monitoring and forecasting. The infrared scanner (IRS) onboard HJ-1B has one thermal infrared band. Currently, for sensors with one thermal band (e.g. Landsat TM/ETM+ and HJ-1B), several empirical algorithms have been developed to estimate land surface temperature (LST). However, surface emissivity and atmospheric parameters, which are not readily accessible to general users, are required for these empirical methods. To resolve this problem, particularly for HJ-1B, a new retrieval methodology is desired. Under proper assumptions, two approaches were proposed: the single-channel method based on temporal and spatial information (MTSC) and the image-based single-channel method (IBSC). The newly developed methods are mainly for estimating LST accurately from one thermal band, even without any accurate information on the atmospheric parameters and land surface emissivity. In this paper, we introduce and give preliminary assessments of the new approaches. The assessments generally show good agreement between the HJ-1B retrieved results and the MODIS references. In particular, over sea and water areas the biases were less than 1 K while the root mean square errors were about 1 K for both the MTSC and IBSC methods. As expected, the MTSC method performed better than the IBSC method, owing to the spatiotemporal information incorporated into it, although more experiments and comparisons should be conducted.

  9. Maximum permissible voltage of YBCO coated conductors

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China)]; Hong, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China)]; Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)]


    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is changed and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results of the samples, the whole length of the CCs used in the design of an SFCL can be determined.

  10. Regularized maximum correntropy machine

    Wang, Jim Jing-Yan


    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
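
    The correntropy objective at the heart of this idea can be sketched in a few lines: a Gaussian kernel of the prediction error is summed over samples and maximized by gradient ascent, with an L2 penalty on the weights. This is a simplified stand-in (linear predictor, fixed kernel width, plain gradient ascent), not the alternating algorithm of the paper; the data are synthetic with deliberately flipped labels.

        # Maximize sum_i exp(-(y_i - x_i.w)^2 / (2*sigma^2)) - lambda*||w||^2 by gradient ascent.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p, sigma, lam, lr = 400, 5, 1.0, 0.01, 0.05
        X = rng.normal(size=(n, p))
        w_true = rng.normal(size=p)
        y = np.sign(X @ w_true)
        flip = rng.random(n) < 0.15
        y[flip] *= -1                                     # label noise / outliers

        w = np.zeros(p)
        for _ in range(300):
            err = y - X @ w
            g = np.exp(-err**2 / (2 * sigma**2))          # per-sample correntropy weights
            grad = (X.T @ (g * err)) / sigma**2 - 2 * lam * w   # gradient of the regularized objective
            w += lr * grad / n

        acc = np.mean(np.sign(X @ w) == np.sign(X @ w_true))
        print(f"agreement with clean labels: {acc:.2f}")

    The Gaussian kernel automatically down-weights samples with large errors, which is why the learned predictor is less sensitive to the flipped labels than a squared-loss fit would be.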

  11. Economics and Maximum Entropy Production

    Lorenz, R. D.


    Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

  12. Maximum-entropy for the laser fusion problem

    Madkour, M.A. [Mansoura Univ. (Egypt). Dept. of Phys.]


    The problem of heat flux at the critical surfaces and the surfaces of a pellet of deuterium and tritium (conduction zone) heated by a laser has been considered. Only ion-electron collisions are allowed for; i.e., the linear transport equation is used to describe the problem with boundary conditions. The maximum-entropy approach is used to calculate the electron density and temperature across the conduction zone as well as the heat flux. Numerical results are given and compared with those of Rouse and Williams and of El-Wakil et al. (orig.)

  13. Revisiting carbonate chemistry controls on planktic foraminifera Mg / Ca: implications for sea surface temperature and hydrology shifts over the Paleocene–Eocene Thermal Maximum and Eocene–Oligocene Transition

    D. Evans


    Much of our knowledge of past ocean temperatures comes from the foraminifera Mg / Ca palaeothermometer. Several non-thermal controls on foraminifera Mg incorporation have been identified, of which vital effects, salinity and secular variation in seawater Mg / Ca are the most commonly considered. Ocean carbonate chemistry is also known to influence Mg / Ca, yet this is rarely considered as a source of uncertainty, either because (1) precise pH and [CO32−] reconstructions are sparse, or (2) it is not clear from existing culture studies how a correction should be applied. We present new culture data on the relationship between carbonate chemistry and Mg / Ca for the surface-dwelling planktic species Globigerinoides ruber, and compare our results to data compiled from existing studies. We find a coherent relationship between Mg / Ca and the carbonate system and argue that pH rather than [CO32−] is likely to be the dominant control. Applying these new calibrations to datasets for the Paleocene–Eocene Thermal Maximum (PETM) and Eocene–Oligocene Transition (EOT) enables us to produce a more accurate picture of surface hydrology change for the former, and a reassessment of the amount of subtropical precursor cooling for the latter. We show that properly corrected Mg / Ca and δ18O datasets for the PETM imply no salinity change, and that the amount of precursor cooling over the EOT has been previously underestimated by ∼ 2 °C based on Mg / Ca. Finally, we present new laser-ablation data of EOT-age Turborotalia ampliapertura from St Stephens Quarry (Alabama), for which a solution ICPMS Mg / Ca record is available (Wade et al., 2012). We show that the two datasets are in excellent agreement, demonstrating that fossil solution and laser-ablation data may be directly comparable. Together with an advancing understanding of the effect of Mg / Casw, the coherent picture of the relationship between Mg / Ca and pH that we outline here represents a step towards producing...

  14. Revisiting carbonate chemistry controls on planktic foraminifera Mg / Ca: implications for sea surface temperature and hydrology shifts over the Paleocene-Eocene Thermal Maximum and Eocene-Oligocene transition

    Evans, David; Wade, Bridget S.; Henehan, Michael; Erez, Jonathan; Müller, Wolfgang


    Much of our knowledge of past ocean temperatures comes from the foraminifera Mg / Ca palaeothermometer. Several nonthermal controls on foraminifera Mg incorporation have been identified, of which vital effects, salinity, and secular variation in seawater Mg / Ca are the most commonly considered. Ocean carbonate chemistry is also known to influence Mg / Ca, yet this is rarely examined as a source of uncertainty, either because (1) precise pH and [CO32-] reconstructions are sparse or (2) it is not clear from existing culture studies how a correction should be applied. We present new culture data of the relationship between carbonate chemistry and Mg / Ca for the surface-dwelling planktic species Globigerinoides ruber and compare our results to data compiled from existing studies. We find a coherent relationship between Mg / Ca and the carbonate system and argue that pH rather than [CO32-] is likely to be the dominant control. Applying these new calibrations to data sets for the Paleocene-Eocene Thermal Maximum (PETM) and Eocene-Oligocene transition (EOT) enables us to produce a more accurate picture of surface hydrology change for the former and a reassessment of the amount of subtropical precursor cooling for the latter. We show that pH-adjusted Mg / Ca and δ18O data sets for the PETM are within error of no salinity change and that the amount of precursor cooling over the EOT has been previously underestimated by ˜ 2 °C based on Mg / Ca. Finally, we present new laser-ablation data of EOT-age Turborotalia ampliapertura from St. Stephens Quarry (Alabama), for which a solution inductively coupled plasma mass spectrometry (ICPMS) Mg / Ca record is available (Wade et al., 2012). We show that the two data sets are in excellent agreement, demonstrating that fossil solution and laser-ablation data may be directly comparable. Together with an advancing understanding of the effect of Mg / Casw, the coherent picture of the relationship between Mg / Ca and pH that we outline
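
    As an illustration of how such calibrations are typically applied in practice (not the specific equations derived in this study), the sketch below converts foraminiferal Mg / Ca to temperature with a generic exponential calibration of the form Mg/Ca = B·exp(A·T) and then applies a hypothetical multiplicative pH adjustment; the constants A, B and the pH sensitivity are placeholder assumptions.

```python
import numpy as np

# Generic exponential Mg/Ca palaeothermometer: Mg/Ca = B * exp(A * T)
# A, B and the pH sensitivity below are illustrative placeholders,
# NOT the calibration derived in this study.
A = 0.09          # 1/degC, assumed exponential sensitivity
B = 0.38          # mmol/mol, assumed pre-exponential constant
PH_SENS = -0.7    # assumed fractional change in Mg/Ca per pH unit

def mgca_to_temperature(mgca, ph=None, ph_ref=8.1):
    """Invert the exponential calibration, optionally correcting Mg/Ca for pH first."""
    mgca = np.asarray(mgca, dtype=float)
    if ph is not None:
        # Remove the (assumed linear) pH effect before inverting for temperature
        mgca = mgca / (1.0 + PH_SENS * (np.asarray(ph) - ph_ref))
    return np.log(mgca / B) / A

print(mgca_to_temperature([3.5, 4.2]))                  # uncorrected temperatures
print(mgca_to_temperature([3.5, 4.2], ph=[7.8, 8.0]))   # with the assumed pH correction
```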

  15. Equalized near maximum likelihood detector


    This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.

  16. Maximum mutual information regularized classification

    Wang, Jim Jing-Yan


    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
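
    A minimal sketch of the kind of objective described above (not the authors' exact formulation): a linear classifier is trained to minimize squared error plus an L2 complexity penalty while maximizing a histogram-based estimate of the mutual information between its responses and the labels. The derivative-free optimizer, the binned entropy estimator and the toy data are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy binary labels

def mutual_information(responses, labels, bins=10):
    """Histogram estimate of I(response; label) in nats."""
    edges = np.histogram_bin_edges(responses, bins=bins)
    joint = np.zeros((bins, 2))
    idx = np.clip(np.digitize(responses, edges[1:-1]), 0, bins - 1)
    for i, l in zip(idx, labels):
        joint[i, l] += 1
    joint /= joint.sum()
    pr, pl = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pr @ pl)[nz])))

def objective(w, lam_complexity=0.1, lam_mi=1.0):
    r = X @ w                                   # classification responses
    loss = np.mean((r - (2 * y - 1)) ** 2)      # squared error against +/-1 targets
    return loss + lam_complexity * w @ w - lam_mi * mutual_information(r, y)

w0 = np.zeros(X.shape[1])
res = minimize(objective, w0, method="Nelder-Mead")  # derivative-free: the MI term is non-smooth
print("training accuracy:", np.mean((X @ res.x > 0).astype(int) == y))
```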

  17. State-space approach for the analysis of soil water content and temperature in a sugarcane crop

    Dourado-Neto Durval


    The state-space approach is used to describe surface soil water content and temperature behaviour in a field experiment in which sugarcane is submitted to different management practices. The treatments consisted of harvest trash mulching, bare soil, and burned trash, all three in a ratoon crop after the first cane harvest. One transect of 84 points was sampled, meter by meter, covering all treatments and borders. The state-space approach is described in detail, and the results show that soil water contents measured along the transect could successfully be estimated from water content and temperature observations made at the first-neighbour locations.

  18. High-Temperature Phase Equilibria of Duplex Stainless Steels Assessed with a Novel In-Situ Neutron Scattering Approach

    Pettersson, Niklas; Wessman, Sten; Hertzman, Staffan; Studer, Andrew


    Duplex stainless steels are designed to solidify with ferrite as the parent phase, with subsequent austenite formation occurring in the solid state, implying that, thermodynamically, a fully ferritic range should exist at high temperatures. However, computational thermodynamic tools appear currently to overestimate the austenite stability of these systems, and contradictory data exist in the literature. In the present work, the high-temperature phase equilibria of four commercial duplex stainless steel grades, denoted 2304, 2101, 2507, and 3207, with varying alloying levels were assessed by measurements of the austenite-to-ferrite transformation at temperatures approaching 1673 K (1400 °C) using a novel in-situ neutron scattering approach. All grades became fully ferritic at some point during progressive heating. Higher austenite dissolution temperatures were measured for the higher alloyed grades, and for 3207, the temperature range for a single-phase ferritic structure approached zero. The influence of temperatures in the region of austenite dissolution was further evaluated by microstructural characterization using electron backscattered diffraction of isothermally heat-treated and quenched samples. The new experimental data are compared to thermodynamic calculations, and the precision of databases is discussed.

  19. Maximum entropy production in daisyworld

    Maunu, Haley A.; Knuth, Kevin H.


    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

  20. A combined modulated feedback and temperature compensation approach to improve bias drift of a closed-loop MEMS capacitive accelerometer

    Ming-jun MA; Zhong-he JIN; Hui-jie ZHU


    The bias drift of a micro-electro-mechanical systems (MEMS) accelerometer suffers from the 1/f noise and the temperature effect. For massive applications, the bias drift urgently needs to be improved. Conventional methods often cannot address the 1/f noise and temperature effect in one architecture. In this paper, a combined approach based on closed-loop architecture modification is proposed to minimize the bias drift. The modulated feedback approach is used to isolate the 1/f noise that exists in the conventional direct feedback approach. Then a common mode signal is created and added into the closed loop on the basis of the modulated feedback architecture, to compensate for the temperature drift. With the combined approach, the bias instability is improved to less than 13 µg, and the drift of the Allan variance result is reduced to 17 µg at 100 s of integration time. The temperature coefficient is reduced from 4.68 to 0.1 mg/°C. The combined approach could be useful for many other closed-loop accelerometers.

  1. Mapping air temperature using time series analysis of LST: the SINTESI approach

    Alfieri, S.M.; De Lorenzi, F.; Menenti, M.


    This paper presents a new procedure to map time series of air temperature (Ta) at fine spatial resolution using time series analysis of satellite-derived land surface temperature (LST) observations. The method assumes that air temperature is known at a single (reference) location such as in gridded

  2. Reconstruction of summer temperature variation from maximum density of alpine pine during 1917-2002 for west Sichuan Plateau, China

    吴普; 王丽丽; 邵雪梅


    Having analyzed the tree-ring width and maximum latewood density of Pinus densata from west Sichuan, we obtained different climate information from the tree-ring width and maximum latewood density chronologies. Tree-ring width responded principally to precipitation in current May, which might be influenced by the activity of the southwest monsoon, whereas the maximum latewood density reflected summer temperature (June-September). Based on this correlation, a transfer function was used to reconstruct summer temperature for the study area. The explained variance of the reconstruction is 51% (F = 52.099, p < 0.0001). In the reconstructed series, the climate was relatively cold before the 1930s and relatively warm from 1930 to 1960; this trend is in accordance with the cold-warm periods of the last 100 years in west Sichuan. Compared with Chengdu, the warming break point in west Sichuan occurs 3 years earlier, indicating that the Tibetan Plateau was more sensitive to temperature change. There was an evident summer warming signal after 1983. Although the 100-year running average of summer temperature in the 1990s was the maximum, the running average of the early 1990s was below the average line and summers were cold, while summer drought occurred in the late 1990s.

  3. Maximum-Entropy Inference with a Programmable Annealer.

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A


    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then minimising the energy maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
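
    A toy illustration of the distinction drawn above, under assumptions that differ from the hardware experiment: for a small random Ising chain in a field, exhaustive enumeration compares maximum-likelihood decoding (the sign of the ground-state spins) with finite-temperature maximum-entropy decoding (the sign of the thermal magnetisation of each spin). The couplings, fields and inverse temperature are invented.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 10
J = rng.normal(size=n - 1)   # random nearest-neighbour couplings
h = rng.normal(size=n)       # random local fields (the "received" information)

def energy(s):
    return -np.sum(J * s[:-1] * s[1:]) - np.sum(h * s)

states = np.array(list(itertools.product([-1, 1], repeat=n)))
E = np.array([energy(s) for s in states])

# Maximum-likelihood decoding: the single minimum-energy configuration
ml_bits = states[np.argmin(E)]

# Maximum-entropy (finite-temperature) decoding: sign of Boltzmann-averaged spins
beta = 1.0                                 # inverse temperature (assumed)
w = np.exp(-beta * (E - E.min()))
w /= w.sum()
magnetisation = w @ states
maxent_bits = np.sign(magnetisation)

print("ML decoding:     ", ml_bits)
print("Max-ent decoding:", maxent_bits.astype(int))
```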

  4. Low quasiparticle coherence temperature in the one-band Hubbard model: A slave-boson approach

    Mezio, Alejandro; McKenzie, Ross H.


    We use the Kotliar-Ruckenstein slave-boson formalism to study the temperature dependence of paramagnetic phases of the one-band Hubbard model for a variety of band structures. We calculate the Fermi liquid quasiparticle spectral weight Z and identify the temperature at which it decreases significantly to a crossover to a bad metal region. Near the Mott metal-insulator transition, this coherence temperature Tcoh is much lower than the Fermi temperature of the uncorrelated Fermi gas, as is observed in a broad range of strongly correlated electron materials. After a proper rescaling of temperature and interaction, we find a universal behavior that is independent of the band structure of the system. We obtain the temperature-interaction phase diagram as function of doping, and we compare the temperature dependence of the double occupancy, entropy, and charge compressibility with previous results obtained with dynamical mean-field theory. We analyze the stability of the method by calculating the charge compressibility.

  5. Biochemical, physiological and molecular responses of Ricinus communis seeds and seedlings to different temperatures: a multi-omics approach

    Ribeiro de Jesus, P.R.


    Biochemical, physiological and molecular responses of Ricinus communis seeds and seedlings to different temperatures: a multi-omics approach by Paulo Roberto Ribeiro de Jesus The main objective of this thesis was to provide a detailed analysis of physiological, bioc

  6. Biochemical, physiological and molecular responses of Ricinus communis seeds and seedlings to different temperatures: a multi-omics approach

    Ribeiro de Jesus, P.R.


    Biochemical, physiological and molecular responses of Ricinus communis seeds and seedlings to different temperatures: a multi-omics approach by Paulo Roberto Ribeiro de Jesus The main objective of this thesis was to provide a detailed analysis of physiological,

  7. A Design of Experiments (DOE) approach to optimise temperature measurement accuracy in Solid Oxide Fuel Cell (SOFC)

    Barari, F.; Morgan, R.; Barnard, P.


    In SOFCs, accurately measuring the hot-gas temperature is challenging due to low gas velocity, high wall temperature, complex flow geometries and relatively small pipe diameters. Improper use of low-cost thermometry systems such as standard Type K thermocouples (TCs) may introduce large measurement errors. The error could have a negative effect on the thermal management of SOFC systems and a consequential reduction in efficiency. In order to study the factors affecting the accuracy of the temperature measurement system, a mathematical model of a TC inside a pipe was defined and numerically solved. The model calculated the difference between the actual and the measured gas temperature inside the pipe. A statistical Design of Experiments (DOE) approach was applied to the modelling data to compute the interaction effects between variables and investigate the significance of each variable on the measurement errors. In this study a full factorial DOE design at two levels was carried out with the variables wall temperature, gas temperature, TC length, TC diameter and TC emissivity. Four scenarios, combining two sets of TC length (6-10.5 mm and 17-22 mm) with two temperature ranges (550-650 °C and 750-850 °C), were proposed. DOE analysis was done for each scenario and the results were compared to identify the key parameters affecting the accuracy of a particular temperature reading.
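
    A minimal sketch of this kind of two-level full factorial analysis, using the factor names from the abstract but a made-up response function in place of the paper's thermocouple model; it generates the coded design matrix and estimates main effects as the difference between mean responses at the high and low level of each factor.

```python
import itertools
import numpy as np

factors = ["wall_temp", "gas_temp", "tc_length", "tc_diameter", "tc_emissivity"]
levels = [-1, +1]  # coded low/high levels

# Full factorial design: every combination of factor levels
design = np.array(list(itertools.product(levels, repeat=len(factors))))

def measurement_error(run):
    """Placeholder response standing in for the thermocouple error model."""
    wall, gas, length, diam, emis = run
    return 5.0 + 2.0 * wall - 1.5 * length + 0.8 * emis + 0.5 * wall * emis

response = np.array([measurement_error(run) for run in design])

# Main effect of each factor: mean(high level) - mean(low level)
for j, name in enumerate(factors):
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"{name:>14s}: main effect = {effect:+.2f}")
```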

  8. Spatio-temporal interaction between absorbing aerosols and temperature: Correlation and causality based approach

    Dave, P.; Bhushan, M.; Venkataraman, C.


    The Indian subcontinent, in particular the Indo-Gangetic Plain (IGP), has witnessed large temperature anomalies (Ratnam et al., 2016) along with high emissions of absorbing aerosols (AA) (Gazala et al., 2005). The anomalously high temperature observed over this region may bear a relationship with high AA emissions. Different studies have been conducted to understand AA-temperature relationships (Turco et al., 1983; Hansen et al., 1997, 2005; Seinfeld 2008; Ramanathan et al. 2010b; Ban-Weiss et al., 2012). It was found that when AA are injected in the lower to mid troposphere the surface air temperature increases, while injection of AA in the upper troposphere-lower stratosphere decreases surface temperature. These studies used simulation-based results to establish the link between AA and temperature (Hansen et al., 1997, 2005; Ban-Weiss et al., 2012). The current work focuses on identifying the causal influence of AA on temperature over the Indian subcontinent using observational and re-analysis data, applying cross-correlations (CCs) and Granger causality (GC) (Granger, 1969). Aerosol index (AI) from TOMS-OMI was used as an index for AA, while ERA-Interim reanalysis data were used for temperature at varying altitudes. The period of study was March-April-May-June (MAMJ) for the years 1979-2015. CCs were calculated for all atmospheric layers. In each layer, nearby and distant pixels (> 500 km) with high CCs were identified using a clustering technique. It was found that AI and temperature show statistically significant cross-correlations for co-located and distant pixels, most prominently over the IGP. The CCs fade away at higher altitudes. The CC analysis was followed by GC analysis to identify the lag over which AI can influence temperature. GC also supported the findings of the CC analysis. This is an early attempt to link persisting large temperature anomalies with absorbing aerosols and may help in identifying the role of absorbing aerosols in causing heat waves.
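
    A small sketch of the two diagnostics named above, applied to synthetic series rather than the AI and reanalysis temperature data: lagged cross-correlation via numpy, and a Granger-causality test via statsmodels. The lag structure and noise levels are invented for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 300
aerosol = rng.normal(size=n)
# Synthetic temperature that lags the aerosol series by 2 steps plus noise
temperature = 0.6 * np.roll(aerosol, 2) + rng.normal(scale=0.5, size=n)

def lagged_crosscorr(x, y, max_lag=5):
    """Pearson correlation of x(t) with y(t + lag) for each positive lag."""
    return {lag: np.corrcoef(x[:-lag], y[lag:])[0, 1] for lag in range(1, max_lag + 1)}

print("cross-correlations:", lagged_crosscorr(aerosol, temperature))

# Granger causality: does the aerosol series help predict temperature?
# Column order is [effect, cause] for grangercausalitytests.
data = np.column_stack([temperature, aerosol])
gc_results = grangercausalitytests(data, maxlag=4)
```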

  9. Augmented Nonlinear Controller for Maximum Power-Point Tracking with Artificial Neural Network in Grid-Connected Photovoltaic Systems


    Photovoltaic (PV) systems have non-linear characteristics that generate maximum power at one particular operating point. Environmental factors such as irradiance and temperature variations greatly affect the maximum power point (MPP). Diverse offline and online techniques have been introduced for tracking the MPP. Here, to track the MPP, an augmented-state feedback linearized (AFL) non-linear controller combined with an artificial neural network (ANN) is proposed. This approach linearizes the...

  10. Retrieval of sea surface air temperature from satellite data over Indian Ocean: An empirical approach

    Sathe, P.V.; Muraleedharan, P.M.

    the surface air temperature and surface humidity is analysed by fitting a polynomial between the two for different regions of the Indian Ocean in different seasons. Taking into account the variation in surface air temperatures, the Indian Ocean is split in 14...

  11. Temperature impact on yeast metabolism: Insights from experimental and modeling approaches

    Braga da Cruz, A.L.


    Temperature is an environmental parameter that greatly affects the growth of microorganisms, due to its impact on the activity of all enzymes in the network. This is particularly relevant in habitats where there are large temperature changes, either daily or seasonal. Understanding how organisms

  12. Innovative approach to retrieve land surface emissivity and land surface temperature in areas of highly dynamic emissivity changes by using thermal infrared data

    Heinemann, Sascha; Muro, Javier; Burkart, Andreas; Schultz, Johannes; Thonfeld, Frank; Menz, Gunter


    Land surface temperature (LST) is an extremely significant parameter for understanding the processes of energetic interaction between the Earth's surface and the atmosphere. This knowledge is important for various environmental research questions, particularly with regard to climate change. The current challenge is to reduce the larger daytime deviations, which reach a maximum of 5.7 Kelvin for bare areas; these temperature differences depend on time and vegetation cover. This study presents an innovative approach to retrieve land surface emissivity (LSE) and LST by using thermal infrared (TIR) data from satellite sensors such as SEVIRI and AATSR. So far there are no methods to derive LSE/LST specifically in areas of highly dynamic emissivity changes. Therefore, different methods were investigated to identify the most appropriate one, especially for regions with a large diurnal surface-temperature amplitude, such as bare and uneven soil surfaces, but also for regions with seasonal changes in vegetation cover comprising surfaces such as grassland, mixed forest or agricultural land. The LSE is retrieved using the day/night Temperature-Independent Spectral Indices (TISI) method, while the Generalised Split-Window (GSW) method is used to retrieve the LST. Nevertheless, different GSW algorithms show that equal LSEs can lead to large LST differences; for bare surfaces during daytime the difference is about 6 Kelvin. Additionally, LSE is also estimated using an NDVI-based threshold method (NDVITHM) to distinguish between soil, dense vegetation cover, and pixels composed of soil and vegetation; the data used for this analysis were derived from MODIS TIR. The analysis is implemented in IDL and an intercomparison is performed to determine the most effective methods. To compensate for temperature differences between derived and ground-truth data, appropriate correction terms are obtained by comparing derived LSE/LST data with ground-based measurements
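
    The NDVI-threshold idea mentioned above is commonly implemented along the lines of the sketch below (a generic version, not this study's coefficients): pixels below a soil NDVI threshold get a soil emissivity, pixels above a vegetation threshold get a vegetation emissivity, and mixed pixels are blended through the fractional vegetation cover. All constants are assumptions.

```python
import numpy as np

# Illustrative constants; the thresholds and end-member emissivities are
# assumptions, not values taken from the study.
NDVI_SOIL, NDVI_VEG = 0.2, 0.5
EPS_SOIL, EPS_VEG = 0.96, 0.99

def ndvi_threshold_emissivity(ndvi):
    ndvi = np.asarray(ndvi, dtype=float)
    # Fractional vegetation cover for mixed pixels
    fvc = np.clip(((ndvi - NDVI_SOIL) / (NDVI_VEG - NDVI_SOIL)) ** 2, 0.0, 1.0)
    eps = EPS_VEG * fvc + EPS_SOIL * (1.0 - fvc)
    # Pure soil and fully vegetated pixels keep their end-member emissivities
    eps = np.where(ndvi < NDVI_SOIL, EPS_SOIL, eps)
    eps = np.where(ndvi > NDVI_VEG, EPS_VEG, eps)
    return eps

print(ndvi_threshold_emissivity([0.1, 0.35, 0.8]))
```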

  13. Refractive index determination as a tool for temperature measurement and process control: a new approach

    Schaller, Johannes K.; Wassenberg, S.; Fiedler, Detlev K.; Stojanoff, Christo G.


    Recently a new method for temperature measurement of droplets was presented. This method determines the index of refraction of a spherical scatterer with high accuracy and utilizes the dependence of the index of refraction on the temperature to finally determine the temperature. In this paper we show that the method is likewise applicable to cylindrical scatterers with a homogeneous refractive index distribution, like liquid jets. The method can be used to optically determine the temperature of a liquid jet, or to measure other properties of the liquid that influence the index of refraction of that liquid. One such property is the concentration of one liquid in another, like that of glycerol in an aqueous solution, which was studied experimentally for assessing some properties of the proposed method. An estimation of the sensitivity of the method was gained by detecting temperature changes of a cylindrical water jet.

  14. Thermodynamic approach to the synthesis of silicon carbide using tetramethylsilane as the precursor at high temperature

    Jeong, Seong-Min; Kim, Kyung-Hun; Yoon, Young Joon; Lee, Myung-Hyun; Seo, Won-Seon


    Tetramethylsilane (TMS) is commonly used as a precursor in the production of SiC(β) films at relatively low temperatures. However, because TMS contains much more C than Si, it is difficult to produce solid phase SiC at high temperatures. In an attempt to develop a more efficient TMS-based SiC(α) process, computational thermodynamic simulations were performed under various temperatures, working pressures and TMS/H2 ratios. The findings indicate that each solid phase has a different dependency on the H2 concentration. Consequently, a high H2 concentration results in the formation of a single, solid phase SiC region at high temperatures. Finally, TMS appears to be useful as a precursor for the high temperature production of SiC(α).

  15. A response surface methodology and desirability approach for predictive modeling and optimization of cutting temperature in machining hardened steel

    Ashok Kumar Sahoo


    This paper presents an experimental investigation of cutting temperature during hard turning of EN 24 steel (50 HRC) using a TiN-coated carbide insert under a dry environment. The prediction model is developed using response surface methodology and optimization of the process parameters is performed by the desirability approach. A steep rise in cutting temperature is noticed when feed and cutting speed are elevated. The effect of depth of cut on cutting temperature is not as significant as that of cutting speed and feed, as observed from the main effects plot. The second-order response surface model presented a high correlation coefficient (R2 = 0.992), explaining 99.2% of the variability in the cutting temperature, which indicates the goodness of fit of the model to the actual data and the high statistical significance of the model. The experimental and predicted values are very close to each other. The calculated error for cutting temperature lies between 1.88 and 3.19% in the confirmation trial. Therefore, the developed second-order model correlates the cutting temperature with the process parameters to a good degree of approximation. The optimal combination of process parameters is a depth of cut of 0.2 mm, a feed of 0.1597 mm/rev and a cutting speed of 70 m/min. For this combination, the predicted cutting temperature is 302.95 °C with a desirability of one.
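
    In general form, a second-order response surface model is a quadratic polynomial in the process parameters fitted by least squares; the sketch below fits such a surface to synthetic (depth of cut, feed, speed, temperature) data, with made-up coefficients standing in for the experimental results.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(3)
# Synthetic process parameters: depth of cut [mm], feed [mm/rev], speed [m/min]
X = np.column_stack([
    rng.uniform(0.2, 0.6, 40),
    rng.uniform(0.05, 0.20, 40),
    rng.uniform(70, 150, 40),
])
# Made-up "measured" cutting temperature with a quadratic dependence plus noise
T = 150 + 80 * X[:, 0] + 400 * X[:, 1] + 1.2 * X[:, 2] + 300 * X[:, 1] ** 2 + rng.normal(0, 3, 40)

def quadratic_design_matrix(X):
    """Columns: 1, x_i, and all second-order terms x_i * x_j (i <= j)."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, T, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((T - pred) ** 2) / np.sum((T - T.mean()) ** 2)
print(f"fitted R^2 = {r2:.3f}")
```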

  16. Maximum Likelihood Estimation of Search Costs

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)


    In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  17. A GM(1,1) Markov Chain-Based Aeroengine Performance Degradation Forecast Approach Using Exhaust Gas Temperature

    Ning-bo Zhao


    Performance degradation forecast technology for quantitatively assessing the degradation state of an aeroengine using exhaust gas temperature is an important part of aeroengine health management. In this paper, a GM(1,1) Markov chain-based approach is introduced to forecast exhaust gas temperature by combining the advantages of the GM(1,1) model for short time series with the advantages of the Markov chain model in dealing with highly nonlinear and stochastic data caused by uncertain factors. In this approach, the GM(1,1) model is first used to forecast the trend from limited data samples. Then, the Markov chain model is integrated into the GM(1,1) model in order to enhance forecast performance, which mitigates the influence of randomly fluctuating data on forecasting accuracy and achieves an accurate estimate of the nonlinear forecast. As an example, historical monitoring data of exhaust gas temperature from a CFM56 aeroengine of China Southern are used to verify the forecast performance of the GM(1,1) Markov chain model. The results show that the GM(1,1) Markov chain model is able to forecast exhaust gas temperature accurately and can effectively reflect the random fluctuation characteristics of exhaust gas temperature changes over time.
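
    For readers unfamiliar with grey models, the sketch below implements a plain GM(1,1) forecast (without the Markov-chain residual correction described in the paper) on a short synthetic exhaust-gas-temperature series; the data values are invented for illustration.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Plain GM(1,1) grey forecast of a short positive time series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # develop coefficient, grey input
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # fitted accumulated series
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])         # inverse accumulation
    x0_hat[0] = x0[0]
    return x0_hat

# Invented exhaust gas temperature margin values (degC) for illustration
egt = [62.0, 61.2, 60.5, 59.9, 59.0, 58.4]
print(gm11_forecast(egt, n_ahead=3))
```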

  18. The effects of temperature on service employees' customer orientation: an experimental approach.

    Kolb, Peter; Gockel, Christine; Werth, Lioba


    Numerous studies have demonstrated how temperature can affect perceptual, cognitive and psychomotor performance (e.g. Hancock, P.A., Ross, J., and Szalma, J., 2007. A meta-analysis of performance response under thermal stressors. Human Factors: The Journal of the Human Factors and Ergonomics Society, 49 (5), 851-877). We extend this research to interpersonal aspects of performance, namely service employees' and salespeople's customer orientation. We combine ergonomics with recent research on social cognition linking physical with interpersonal warmth/coldness. In Experiment 1, a scenario study in the lab, we demonstrate that student participants in rooms with a low temperature showed more customer-oriented behaviour and gave higher customer discounts than participants in rooms with a high temperature - even in zones of thermal comfort. In Experiment 2, we show the existence of alternative possibilities to evoke positive temperature effects on customer orientation in a sample of 126 service and sales employees using a semantic priming procedure. Overall, our results confirm the existence of temperature effects on customer orientation. Furthermore, important implications for services, retail and other settings of interpersonal interactions are discussed. Practitioner Summary: Temperature effects on performance have emerged as a vital research topic. Owing to services' increasing economic importance, we transferred this research to the construct of customer orientation, focusing on performance in service and retail settings. The demonstrated temperature effects are transferable to services, retail and other settings of interpersonal interactions.

  19. Analytical approach for temperature of the evaporating droplets on solid substrates

    Dunin, Stanislav Z.; Nagornov, Oleg V.; Starostin, Nikolay V.; Trifonenkov, Vladimir P.


    Non-isothermal evaporation of sessile liquid drops is analyzed. Analytical formulae for the temperature and concentration distributions are derived as functions of the system and ambient parameters. A non-uniform temperature distribution on the drop surface results in a Marangoni force. Extremes of surface temperature cause the Marangoni force to change its direction at stagnation points. Critical values of the system parameters at which stagnation points appear are found. The position of these points is derived as a function of the thermal properties of substrate, liquid and gas, and of the contact angle.

  20. Improving demand response potential of a supermarket refrigeration system: A food temperature estimation approach

    Pedersen, Rasmus; Schwensen, John; Biegel, Benjamin


    … a method for estimating food temperature based on measurements of evaporator expansion valve opening degree. This method requires no additional hardware or system modeling. We demonstrate the estimation method on a real supermarket display case, and the applicability of knowing the food temperature is shown through tests on a full-scale supermarket refrigeration system made available by Danfoss A/S. The conducted application test shows that feedback based on food temperature can increase the demand flexibility during a step by approx. 60% over the first 70 minutes and up to 100% over the first 150 minutes, thereby strengthening the demand response potential of supermarket refrigeration systems.

  1. Large $N_{c}$, chiral approach to $M_{\eta'}$ at finite temperature

    Escribano, R; Tytgat, M H G


    We study the temperature dependence of the eta and eta' meson masses within the framework of U(3)_L x U(3)_R chiral perturbation theory, up to next-to-leading order in a simultaneous expansion in momenta, quark masses and number of colours. We find that both masses decrease at low temperatures, but only very slightly. We analyze higher order corrections and argue that large N_c suggests a discontinuous drop of M_eta' at the critical temperature of deconfinement T_c, consistent with a first order transition to a phase with approximate U(1)_A symmetry.

  2. Physiological and biochemical responses of Ricinus communis seedlings to different temperatures: a metabolomics approach.

    Ribeiro, Paulo Roberto; Fernandez, Luzimar Gonzaga; de Castro, Renato Delmondez; Ligterink, Wilco; Hilhorst, Henk W M


    Compared with major crops, growth and development of Ricinus communis is still poorly understood. A better understanding of the biochemical and physiological aspects of germination and seedling growth is crucial for the breeding of high yielding varieties adapted to various growing environments. In this context, we analysed the effect of temperature on growth of young R. communis seedlings and we measured primary and secondary metabolites in roots and cotyledons. Three genotypes, recommended to small family farms as cash crop, were used in this study. Seedling biomass was strongly affected by the temperature, with the lowest total biomass observed at 20°C. The response in terms of biomass production for the genotype MPA11 was clearly different from the other two genotypes: genotype MPA11 produced heavier seedlings at all temperatures but the root biomass of this genotype decreased with increasing temperature, reaching the lowest value at 35°C. In contrast, root biomass of genotypes MPB01 and IAC80 was not affected by temperature, suggesting that the roots of these genotypes are less sensitive to changes in temperature. In addition, an increasing temperature decreased the root to shoot ratio, which suggests that biomass allocation between below- and above ground parts of the plants was strongly affected by the temperature. Carbohydrate contents were reduced in response to increasing temperature in both roots and cotyledons, whereas amino acids accumulated to higher contents. Our results show that a specific balance between amino acids, carbohydrates and organic acids in the cotyledons and roots seems to be an important trait for faster and more efficient growth of genotype MPA11. An increase in temperature triggers the mobilization of carbohydrates to support the preferred growth of the aerial parts, at the expense of the roots. A shift in the carbon-nitrogen metabolism towards the accumulation of nitrogen-containing compounds seems to be the main biochemical

  3. The non-linear link between electricity consumption and temperature in Europe: A threshold panel approach

    Bessec, Marie [CGEMP, Universite Paris-Dauphine, Place du Marechal de Lattre de Tassigny Paris (France); Fouquau, Julien [LEO, Universite d' Orleans, Faculte de Droit, d' Economie et de Gestion, Rue de Blois, BP 6739, 45067 Orleans Cedex 2 (France)


    This paper investigates the relationship between electricity demand and temperature in the European Union. We address this issue by means of a panel threshold regression model on 15 European countries over the last two decades. Our results confirm the non-linearity of the link between electricity consumption and temperature found in more limited geographical areas in previous studies. By distinguishing between North and South countries, we also find that this non-linear pattern is more pronounced in the warm countries. Finally, rolling regressions show that the sensitivity of electricity consumption to temperature in summer has increased in the recent period. (author)

  4. A robust approach to correct for pronounced errors in temperature measurements by controlling radiation damping feedback fields in solution NMR.

    Wolahan, Stephanie M; Li, Zhao; Hsu, Chao-Hsiung; Huang, Shing-Jong; Clubb, Robert; Hwang, Lian-Pin; Lin, Yung-Ya


    Accurate temperature measurement is a requisite for obtaining reliable thermodynamic and kinetic information in all NMR experiments. A widely used method to calibrate sample temperature depends on a secondary standard with temperature-dependent chemical shifts to report the true sample temperature, such as the hydroxyl proton in neat methanol or neat ethylene glycol. The temperature-dependent chemical shift of the hydroxyl protons arises from the sensitivity of the hydrogen-bond network to small changes in temperature. The frequency separation between the alkyl and the hydroxyl protons is then converted to sample temperature. Temperature measurements by this method, however, have been reported to be inconsistent and incorrect in modern NMR, particularly for spectrometers equipped with cryogenically cooled probes. Such errors make it difficult or even impossible to study chemical exchange and molecular dynamics or to compare data acquired on different instruments, as is frequently done in biomolecular NMR. In this work, we identify the physical origin of such errors to be unequal dynamical frequency shifts of the alkyl and the hydroxyl protons induced by strong radiation damping (RD) feedback fields. Common methods used to circumvent RD may not suppress such errors. A simple, easy-to-implement solution is demonstrated that neutralizes the RD effect on the frequency separation by a "selective crushing recovery" pulse sequence that equalizes the transverse magnetization of both spin species. Experiments using cryoprobes at 500 MHz and 800 MHz demonstrated that this approach can effectively reduce the errors in temperature measurements from about ±4.0 K to within ±0.4 K in general.
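
    As context for why the alkyl-hydroxyl frequency separation matters, the sketch below converts a measured separation Δδ (ppm) to temperature using the commonly quoted quadratic calibration for neat methanol; the coefficients are literature values recalled from standard calibration charts, not taken from this paper, and should be verified against the calibration recommended for your spectrometer.

```python
def methanol_temperature(delta_ppm):
    """Sample temperature (K) from the OH-CH3 shift separation of neat methanol.

    Uses the commonly quoted quadratic calibration
        T = 409.0 - 36.54*d - 21.85*d**2   (d = separation in ppm);
    the coefficients are literature values, not from this paper -- verify
    against the calibration applicable to your instrument.
    """
    d = float(delta_ppm)
    return 409.0 - 36.54 * d - 21.85 * d ** 2

for d in (1.0, 1.5, 2.0):
    print(f"delta = {d:.2f} ppm  ->  T = {methanol_temperature(d):.1f} K")
```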

  5. New Approach of True Temperature Restoration in Optical Diagnostics Using IR-Camera

    Zhirnov, I.; Protasov, C.; Kotoban, D.; Gusarov, A. V.; Tarasova, T.


    Laser treatment processes are characterized by laser-matter interaction instabilities. Modern additive manufacturing technologies such as selective laser melting provide layer-by-layer part growth with continuous operation for hours and days, but at present without adequate control systems. In this paper, a method for determining the temperature in the laser action zone during the process is proposed, based on a study of the microscopic structure and on phase and element analyses of the processed material. A fixed point corresponding to the melting temperature was acquired, and the corresponding emissivity coefficient was calculated under the assumption of its wavelength and temperature independence. The experimental data were corroborated, with good agreement, by mathematical calculations. The obtained results reveal the impact of scanning speed and of laser emission power on the temperature in the molten zone, which is of interest for the optimization of laser-processing technologies, and more specifically of selective laser melting process parameters.

  6. Room Temperature Oxide Deposition Approach to Fully Transparent, All-Oxide Thin-Film Transistors.

    Rembert, Thomas; Battaglia, Corsin; Anders, André; Javey, Ali


    A room temperature cathodic arc deposition technique is used to produce high-mobility ZnO thin films for low voltage thin-film transistors (TFTs) and digital logic inverters. All-oxide, fully transparent devices are fabricated on alkali-free glass and flexible polyimide foil, exhibiting high performance. This provides a practical materials platform for the low-temperature fabrication of all-oxide TFTs on virtually any substrate.

  7. High Temperature Components of Magma-Related Geothermal Systems: An Experimental and Theoretical Approach

    Philip A. Candela; Philip M. Piccoli


    This summarizes select components of a multi-faceted study of high temperature magmatic fluid behavior in shallow, silicic, volcano-plutonic geothermal systems. This work built on a foundation provided by DOE-supported advances made in our lab in understanding the physics and chemistry of the addition of HCl and other chlorides into the high temperature regions of geothermal systems. The emphasis of this project was to produce a model of the volatile contributions from felsic magmatic systems to geothermal systems.

  9. Maximum stellar iron core mass

    F W Giacobbe


    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.

  10. A Pedestrian Approach to Indoor Temperature Distribution Prediction of a Passive Solar Energy Efficient House

    Golden Makaka


    With the increase in energy consumption by buildings to keep the indoor environment within comfort levels, and the ever-increasing price of energy, there is a need to design buildings that require minimal energy to keep the indoor environment comfortable, and hence a need to predict the indoor temperature during the design stage. In this paper a statistical indoor temperature prediction model was developed. A passive solar house was constructed and its thermal behaviour was simulated using the ECOTECT and DOE computer software. The thermal behaviour of the house was monitored for a year. The indoor temperature was observed to be within the comfort level for 85% of the total time monitored. The simulation results were compared with the measured results and with those from the prediction model. The statistical prediction model was found to agree (95%) with the measured results, and the simulation results were observed to agree (96%) with the statistical prediction model. Modeled indoor temperature was most sensitive to outdoor temperature variations. The daily mean peaks were found to be more pronounced in summer (5%) than in winter (4%). The developed model can be used to predict the instantaneous indoor temperature for a specific house design.

  11. Modeling stream temperature in the Anthropocene: An earth system modeling approach

    Li, Hong-Yi; Ruby Leung, L.; Tesfa, Teklu; Voisin, Nathalie; Hejazi, Mohamad; Liu, Lu; Liu, Ying; Rice, Jennie; Wu, Huan; Yang, Xiaofan


    A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART) that represents river routing and a water management model (WM) that represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the contiguous United States driven by observed meteorological forcing. Including water management in the models improves the agreement between the simulated and observed streamflow at a large number of stream gauge stations. It is then shown that the model is capable of reproducing the spatiotemporal variation of stream temperature satisfactorily, by comparison against observed data from over 320 USGS stations. Both climate and water management are found to have important influences on the spatiotemporal patterns of stream temperature. Furthermore, it is quantitatively estimated that reservoir operation could cool down stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C due to enhanced low-flow conditions, which has important implications for aquatic ecosystems. Sensitivity of the simulated stream temperature to input data and reservoir operation rules used in the WM model motivates future directions to address some limitations in the current modeling framework.

  12. Who is more vulnerable to death from extremely cold temperatures? A case-only approach in Hong Kong with a temperate climate

    Qiu, Hong; Tian, Linwei; Ho, Kin-fai; Yu, Ignatius T. S.; Thach, Thuan-Quoc; Wong, Chit-Ming


    The short-term effects of ambient cold temperature on mortality have been well documented in the literature worldwide. However, less is known about which subpopulations are more vulnerable to death related to extreme cold. We aimed to examine the personal characteristics and underlying causes of death that modified the association between extreme cold and mortality using a case-only approach. Individual information on 197,680 deaths from natural causes, daily temperature, and air pollution concentrations in the cool season (November-April) during 2002-2011 in Hong Kong was collected. Extreme cold was defined as days preceded by a week with daily maximum temperatures at or below the 1st percentile of the temperature distribution. Logistic regression models were used to estimate the effect modification, further controlling for age, seasonal pattern, and air pollution. Sensitivity analyses were conducted using the 5th percentile as the cutoff point to define extreme cold. Subjects aged 85 and older were more vulnerable to extreme cold, with an odds ratio (OR) of 1.33 (95% confidence interval (CI), 1.22-1.45). A greater risk of extreme cold-related mortality was observed for total cardiorespiratory diseases and for several specific causes including hypertensive diseases, stroke, congestive heart failure, chronic obstructive pulmonary disease (COPD), and pneumonia. Hypertensive diseases exhibited the greatest vulnerability to extreme cold exposure, with an OR of 1.37 (95% CI, 1.13-1.65). Sensitivity analyses showed the robustness of these effect modifications. This evidence on which subpopulations are vulnerable to the adverse effects of extreme cold is important to inform public health measures to minimize those effects.
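
    A minimal sketch of the case-only design described above, on synthetic data: restricted to deaths (cases), a personal characteristic (here, age ≥ 85) is regressed on the extreme-cold indicator with logistic regression, and the exponentiated coefficient estimates the multiplicative effect modification (OR). Variable names, prevalences and the built-in OR are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000                                   # synthetic deaths (cases only)
extreme_cold = rng.binomial(1, 0.05, n)    # death occurred on an extreme-cold day
# Older cases are made slightly more likely on extreme-cold days (built-in OR ~ 1.3)
p_old = np.where(extreme_cold == 1, 0.45, 0.38)
aged_85_plus = rng.binomial(1, p_old)

# Case-only analysis: regress the modifier on the exposure among cases
X = sm.add_constant(extreme_cold.astype(float))
fit = sm.Logit(aged_85_plus, X).fit(disp=False)
or_mod = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"effect-modification OR = {or_mod:.2f}  (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```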

  13. A facile approach to derive binder protective film on high voltage spinel cathode materials against high temperature degradation

    Chou, Wei-Yu; Jin, Yi-Chun; Duh, Jenq-Gong; Lu, Cheng-Zhang; Liao, Shih-Chieh


    The electrochemical performance of spinel LiNi0.5Mn1.5O4 cathodes combined with different binders at elevated temperature is investigated for the first time. Water-soluble binders, such as sodium carboxymethyl cellulose (CMC) and sodium alginate (SA), are compared with the polyvinylidene difluoride (PVdF) binder used in the non-aqueous process. The aqueous process can meet the needs of the Li-ion battery industry because it is environmentally friendly and cost effective, replacing toxic organic solvents such as N-methyl-pyrrolidone (NMP). In this study, significantly improved high-temperature cycling performance is obtained compared to the traditional PVdF binder. The aqueous binder can serve as a protective film which inhibits the serious Ni and Mn dissolution, especially at elevated temperature. Our result demonstrates a facile approach to solving the problem of capacity fading for high voltage spinel cathodes.

  14. Comparison of Stream Temperature Modeling Approaches: The Case of a High Alpine Watershed in the Context of Climate Change

    Gallice, A.


    Stream temperature controls important aspects of the riverine habitat, such as the rate of spawning or death of many fish species, or the concentration of numerous dissolved substances. In the current context of accelerating climate change, the future evolution of stream temperature is regarded as uncertain, particularly in the Alps. This uncertainty has fostered the development of many prediction models, which are usually classified in two categories: mechanistic models and statistical models. Based on the numerical resolution of physical conservation laws, mechanistic models are generally considered to provide more reliable long-term estimates than regression models. However, despite their physical basis, these models differ quite significantly in some aspects of their implementation, notably (1) the routing of water in the river channel and (2) the estimation of the temperature of groundwater discharging into the stream. For each of these two aspects, we considered several of the standard modeling approaches reported in the literature and implemented them in a new modular framework. The latter is based on the spatially distributed snow model Alpine3D, which is essentially used in the framework to compute the amount of water infiltrating into the upper soil layer. Starting from there, different methods can be selected for the computation of the water and energy fluxes in the hillslopes and in the river network. We relied on this framework to compare the various methodologies for river channel routing and groundwater temperature modeling. We notably assessed the impact of each of these approaches on the long-term stream temperature predictions of the model under a typical climate change scenario. The case study was conducted over a high Alpine catchment in Switzerland, whose hydrological and thermal regimes are expected to be markedly affected by climate change. The results show that the various modeling approaches lead to significant differences in the

  15. Examination of the Feynman-Hibbs Approach in the Study of NeN-Coronene Clusters at Low Temperatures.

    Rodríguez-Cantano, Rocío; Pérez de Tudela, Ricardo; Bartolomei, Massimiliano; Hernández, Marta I; Campos-Martínez, José; González-Lezana, Tomás; Villarreal, Pablo; Hernández-Rojas, Javier; Bretón, José


    Feynman-Hibbs (FH) effective potentials constitute an appealing approach for investigations of many-body systems at thermal equilibrium, since they allow us to easily include quantum corrections within standard classical simulations. In this work we apply the FH formulation to the study of NeN-coronene clusters (N = 1-4, 14) in the 2-14 K temperature range. Quadratic (FH2) and quartic (FH4) contributions to the effective potentials are built upon Ne-Ne and Ne-coronene analytical potentials. In particular, a new corrected expression for the FH4 effective potential is reported. FH2 and FH4 cluster energies and structures, obtained from energy optimization through a basin-hopping algorithm as well as from classical Monte Carlo simulations, are reported and compared with reference path integral Monte Carlo calculations. For temperatures T > 4 K, both FH2 and FH4 potentials are able to correct the purely classical calculations in a consistent way. However, the FH approach fails at lower temperatures, especially the quartic correction. It is thus crucial to assess the range of applicability of this formulation and, in particular, to apply the FH4 potentials with great caution. A simple model of N isotropic harmonic oscillators allows us to propose a means of estimating the cutoff temperature for the validity of the method, which is found to increase with the number of atoms adsorbed on the coronene molecule.
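
    For orientation, the quadratic Feynman-Hibbs correction mentioned above is usually written as the following effective pair potential (the standard textbook form, quoted here as background rather than taken from this paper), where $\mu$ is the reduced mass of the pair and $\beta = 1/k_B T$:

```latex
V_{\mathrm{FH2}}(r) \;=\; V(r) \;+\; \frac{\beta\hbar^{2}}{24\,\mu}
\left[\, V''(r) + \frac{2}{r}\,V'(r) \,\right]
```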

  16. The Wiener maximum quadratic assignment problem

    Cela, Eranda; Woeginger, Gerhard J


    We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.

  17. Neural Network Approach to Predict Melt Temperature in Injection Molding Processes


    Among the processing conditions of injection molding, the temperature of the melt entering the mold plays a significant role in determining the quality of molded parts. In our previous research, a neural network was developed to predict the melt temperature in the barrel during the plastication phase. In this paper, a neural network is proposed to predict the melt temperature at the nozzle exit during the injection phase. A typical two-layer neural network with back propagation learning rules is used to model the relationship between input and output in the injection phase. The preliminary results show that the network works well and may be used for on-line optimization and control of injection molding processes.
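
    A compact sketch of the kind of small backpropagation-trained network described above, fitted to synthetic process data; the input features (screw speed, barrel temperature, injection pressure), the target relationship and the use of scikit-learn are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 500
# Invented process inputs: screw speed, barrel temperature, injection pressure
X = np.column_stack([
    rng.uniform(50, 200, n),
    rng.uniform(180, 260, n),
    rng.uniform(40, 120, n),
])
# Made-up melt temperature at the nozzle exit
y = 0.8 * X[:, 1] + 0.1 * X[:, 0] + 0.05 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```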

  18. A Calculation Approach to Elastic Constants of Crystallines at High Pressure and Finite Temperature

    向士凯; 蔡灵仓; 张林; 经福谦


    Elastic constants of Na and Li metals are calculated successfully for temperatures up to 350 K and pressures up to 30 GPa using a scheme without any adjustable parameters. The elastic constants are assumed to depend only on an effective pair potential that is determined solely by the average interatomic distance; temperature affects the elastic constants by changing the equilibrium. The elastic constants are obtained by fitting the relationship between total energy and strain tensor, using the new set of lattice parameters obtained by calculating the displacement of atoms at the finite temperature and at a fixed pressure. The relationship between the effective pair potential and the interatomic distance is fitted using a series of cohesive energies corresponding to different lattice parameters.
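
    In its simplest form, the step of fitting total energy against strain amounts to a quadratic fit of E(ε); the sketch below recovers an elastic modulus from synthetic energy-strain points via C = (1/V0) d²E/dε², with invented numbers standing in for first-principles total energies.

```python
import numpy as np

EV_PER_ANG3_TO_GPA = 160.21766  # conversion: 1 eV/Ang^3 in GPa

V0 = 37.7                       # equilibrium cell volume (Ang^3), invented
strains = np.linspace(-0.01, 0.01, 9)
# Invented total energies (eV): quadratic in strain plus tiny noise
C_true_gpa = 15.0
energies = 0.5 * (C_true_gpa / EV_PER_ANG3_TO_GPA) * V0 * strains ** 2
energies += np.random.default_rng(6).normal(0, 1e-6, strains.size)

# Fit E(strain) with a quadratic; the curvature gives the elastic constant
a2 = np.polyfit(strains, energies, 2)[0]          # coefficient of strain^2
C_gpa = (2.0 * a2 / V0) * EV_PER_ANG3_TO_GPA      # C = (1/V0) d2E/de2
print(f"recovered elastic constant: {C_gpa:.2f} GPa")
```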

  19. Support Vector Regression Algorithms in the Forecasting of Daily Maximums of Tropospheric Ozone Concentration in Madrid

    Ortiz-García, E. G.; Salcedo-Sanz, S.; Pérez-Bellido, A. M.; Gascón-Moreno, J.; Portilla-Figueras, A.

    In this paper we present the application of a support vector regression algorithm to a real problem of maximum daily tropospheric ozone forecasting. The proposed support vector regression approach is hybridized with a heuristic for optimal selection of hyper-parameters. The prediction of maximum daily ozone is carried out at all the stations of the air quality monitoring network of Madrid. In the paper we analyze how the ozone prediction depends on meteorological variables such as solar radiation and temperature, and we also perform a comparison against the results obtained using a multi-layer perceptron neural network on the same prediction problem.
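
    A small sketch in the spirit of this hybrid approach, with a plain grid search standing in for the paper's heuristic hyper-parameter selection: support vector regression predicting daily maximum ozone from synthetic solar-radiation and temperature features. The data and parameter grid are invented.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(7)
n = 400
solar = rng.uniform(0, 1000, n)        # W/m^2, synthetic
temperature = rng.uniform(10, 38, n)   # degC, synthetic
X = np.column_stack([solar, temperature])
# Made-up daily maximum ozone (ug/m^3) driven by both predictors
ozone_max = 30 + 0.05 * solar + 2.0 * temperature + rng.normal(0, 8, n)

# Grid search stands in for the heuristic hyper-parameter optimisation
pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0], "svr__epsilon": [0.5, 2.0]}
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X, ozone_max)
print("best parameters:", search.best_params_)
print("cross-validated R^2:", round(search.best_score_, 3))
```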

  20. Nuclear Many-Body Problem at Finite Temperature A TFD Approach

    Kosov, D S; Wambach, J


    Based on the formalism of thermo-field dynamics, a new approach for studying collective excitations in hot finite Fermi systems is presented. Two approximations going beyond the thermal RPA, namely the renormalized thermal RPA and the thermal second RPA, are formulated.

  1. Metal glass vacuum tube solar collectors are approaching lower-medium temperature heat application.

    Jiang, Xinian


    Solar thermal collectors are widely used worldwide, mainly for hot water preparation at low temperature (less than 80 degrees C). Applications such as many industrial processes and central air conditioning with absorption chillers instead require lower-medium temperature heat (between 90 degrees C and 150 degrees C) when driven by solar thermal energy. Metal absorber glass vacuum tube collectors (MGVT) have been developed for this type of application. The current state of the art and possible future technology development of MGVT are presented.

  2. The wind chill temperature effect on a large-scale PV plant—an exergy approach

    Xydis, George


    … disregarded atmospheric variables in planning new PV plants do, in fact, play a significant role in the plant's overall exergetic efficiency, such as the wind chill temperature. The solar potential around a windy coastal hilly area was studied and presented on the basis of field measurements and simulations … (PV) system. Atmospheric variables such as air temperature, humidity and wind speed and their effects, shadow effects, tracking losses, and low-radiation losses of the PV power output were investigated, aiming at identifying the real and net solar energy output. It was shown that some usually …

  3. Temperature and pH conditions for maximum activity of bromelain extracted from pineapple (Ananas comosus L. Merril) - doi: 10.4025/actascitechnol.v33i2.7928

    Moacyr Jorge Elias


    Bromelain, the enzyme found in pineapple, hydrolyzes peptide bonds of proteins and has several applications in the food industry and in medicine. Temperature and pH conditions for the highest activity with casein as substrate were investigated for bromelain recovered from the residues of industrialized pineapple ('pérola' variety) fruit. The extract was obtained by grinding the fruit rind and its internal stem. Activity was expressed in mmol tyrosine L-1 min-1 from the absorbance at 280 nm of the aromatic amino acids produced in casein hydrolysis. Assays were undertaken in duplicate for two enzyme/substrate ratios: 1/25 and 1/125 (by mass). An eight-point experimental design was used with central values of pH 7.0 and temperature 35°C. Experimental results were processed with the MINITAB 15 (Minitab Inc.) software, which provided the model equations and response surfaces. The equations were processed by differential calculus, producing graphs which exhibit the best activities as a function of temperature. Loss of enzyme activity with increasing temperature was reported at the 1/25 ratio, due to a higher amount of free enzyme compared to the 1/125 ratio.

  4. Simulated Annealing Approach to the Temperature-Emissivity Separation Problem in Thermal Remote Sensing Part One: Mathematical Background

    Morgan, John A


    The method of simulated annealing is adapted to the temperature-emissivity separation (TES) problem. A patch of surface at the bottom of the atmosphere is assumed to be a greybody emitter with spectral emissivity $\epsilon(k)$ describable by a mixture of spectral endmembers. We prove that a simulated annealing search conducted according to a suitable schedule converges to a solution maximizing the a posteriori probability that spectral radiance detected at the top of the atmosphere originates from a patch with stipulated $T$ and $\epsilon(k)$. Any such solution will be nonunique. The average of a large number of simulated annealing solutions, however, converges almost surely to a unique maximum a posteriori solution for $T$ and $\epsilon(k)$. The limitation to a stipulated set of endmember emissivities may be relaxed by allowing the number of endmembers to grow without bound, and to be generic continuous functions of wavenumber with bounded first derivatives with respect to wavenumber.
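
    The following toy sketch illustrates the annealing idea on a stripped-down version of the problem: a stipulated pair of emissivity endmembers, no atmosphere, a least-squares misfit in place of the full a posteriori probability, and a geometric cooling schedule. All numbers and endmember spectra are assumptions for illustration.

```python
# Toy simulated-annealing temperature-emissivity separation (TES).
# The atmosphere is ignored and the misfit is a plain least-squares cost,
# a simplification of the paper's a-posteriori formulation.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck_wavenumber(k_cm, T):
    """Planck spectral radiance vs. wavenumber k (cm^-1)."""
    k = k_cm * 100.0  # convert cm^-1 -> m^-1
    return 2 * H * C**2 * k**3 / np.expm1(H * C * k / (KB * T))

rng = np.random.default_rng(1)
k_grid = np.linspace(800, 1200, 60)                       # thermal IR window, cm^-1
endmembers = np.array([0.96 + 0.0 * k_grid,               # assumed endmember spectra
                       0.90 + 0.05 * np.sin(k_grid / 40)])
true_T, true_w = 300.0, 0.3
true_eps = true_w * endmembers[0] + (1 - true_w) * endmembers[1]
obs = true_eps * planck_wavenumber(k_grid, true_T)
obs *= 1 + 0.002 * rng.standard_normal(obs.size)          # measurement noise

def cost(T, w):
    eps = w * endmembers[0] + (1 - w) * endmembers[1]
    return np.sum((eps * planck_wavenumber(k_grid, T) - obs) ** 2)

# Metropolis-style annealing over (T, w) with a geometric cooling schedule.
T_est, w_est = 280.0, 0.5
c_cur = cost(T_est, w_est)
anneal = 1.0
for step in range(20000):
    anneal *= 0.9997                                      # cooling schedule
    T_new = T_est + rng.normal(0, 1.0)
    w_new = np.clip(w_est + rng.normal(0, 0.02), 0, 1)
    c_new = cost(T_new, w_new)
    if c_new < c_cur or rng.random() < np.exp(-(c_new - c_cur) / (anneal * c_cur + 1e-30)):
        T_est, w_est, c_cur = T_new, w_new, c_new
print(f"estimated T = {T_est:.1f} K, endmember weight = {w_est:.2f}")
```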

  5. A simplified approach for predicting temperature profile in steel members with locally damaged fire protection

    Dwaikat, M.M.S.; Kodur, V.K.R.


    Steel structures in buildings are to be provided with external insulation to delay temperature rise and the associated strength degradation when exposed to fire. However, due to the delicateness and fragility of some insulation systems, damage might occur in these insulation systems during their service

  6. Extreme precipitation and temperature responses to circulation patterns in current climate: statistical approaches

    Photiadou, C.


    Climate change is likely to influence the frequency of extremes - temperature, precipitation and hydrological extremes - which implies increasing risks of flood and drought events in Europe. In the current climate, European countries were often not sufficiently prepared to deal with the great so

  7. A simplified approach for predicting temperature profile in steel members with locally damaged fire protection

    Dwaikat, M.M.S.; Kodur, V.K.R.


    Steel structures in buildings are to be provided with external insulation to delay temperature rise and the associated strength degradation when exposed to fire. However, due to the delicateness and fragility of some insulation systems, damage might occur in these insulation systems during their service life

  8. Breeding approaches and genomics technologies to increase crop yield under low-temperature stress.

    Jha, Uday Chand; Bohra, Abhishek; Jha, Rintu


    Improved knowledge about plant cold stress tolerance offered by modern omics technologies will greatly inform future crop improvement strategies that aim to breed cultivars with substantially higher yields under low-temperature conditions. Alarming rises in temperature extremes present a substantial impediment to the projected target of 70% more food production by 2050. Low-temperature (LT) stress severely constrains crop production worldwide, thereby demanding an urgent yet sustainable solution. Considerable research progress has been achieved on this front. Here, we review the crucial cellular and metabolic alterations in plants that follow LT stress, along with the signal transduction and the regulatory network describing plant cold tolerance. The significance of plant genetic resources for expanding the genetic base of breeding programmes with regard to cold tolerance is highlighted. Also, the genetic architecture of the cold tolerance trait as elucidated by conventional QTL mapping and genome-wide association mapping is described. Further, global expression profiling techniques including RNA-Seq, along with diverse omics platforms, are briefly discussed to better understand the underlying mechanism and prioritize the candidate gene(s) for downstream applications. These latest additions to the breeders' toolbox hold immense potential to support plant breeding schemes that seek the development of LT-tolerant cultivars. High-yielding cultivars endowed with greater cold tolerance are urgently required to sustain crop yields under conditions severely challenged by low temperatures.

  9. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.


    Identification and quantification of the possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed by adopting a non-parametric information theory technique, the transfer entropy and its normalized variant. It distinctly quantifies the actual information exchanged along with the directional flow of information between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information, etc. Measurements of greenhouse gases (CO2, CH4 and N2O), volcanic aerosols, solar activity (UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR)), the El Niño Southern Oscillation (ENSO) and the Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA, while UV (~9%) and ENSO (~12%) act as secondary drivers and the remaining factors play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
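
    As an illustration of the core quantity, the sketch below estimates a lag-1 transfer entropy between a synthetic driver and response series with a simple histogram estimator; the paper's normalized variant and significance testing are not reproduced.

```python
# Histogram-based transfer entropy TE(X -> Y) at lag 1, a minimal sketch of the
# kind of estimator used to rank drivers of a temperature series.
import numpy as np

def transfer_entropy(x, y, bins=8):
    """TE from x to y (in bits) using a simple 3D histogram estimator."""
    x_t, y_t, y_next = x[:-1], y[:-1], y[1:]
    joint, _ = np.histogramdd(np.column_stack([y_next, y_t, x_t]), bins=bins)
    p_xyz = joint / joint.sum()
    p_yz = p_xyz.sum(axis=2)              # p(y_next, y_t)
    p_z_x = p_xyz.sum(axis=0)             # p(y_t, x_t)
    p_z = p_xyz.sum(axis=(0, 2))          # p(y_t)
    te = 0.0
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                p = p_xyz[i, j, k]
                if p > 0 and p_yz[i, j] > 0 and p_z_x[j, k] > 0 and p_z[j] > 0:
                    te += p * np.log2((p / p_z_x[j, k]) / (p_yz[i, j] / p_z[j]))
    return te

rng = np.random.default_rng(2)
n = 5000
driver = rng.standard_normal(n)
response = np.empty(n)
response[0] = 0.0
for t in range(1, n):                     # response depends on the past driver
    response[t] = 0.6 * response[t - 1] + 0.8 * driver[t - 1] + 0.3 * rng.standard_normal()

print("TE(driver -> response):", transfer_entropy(driver, response))
print("TE(response -> driver):", transfer_entropy(response, driver))
```

    The estimate is directional by construction: the driver-to-response value should come out clearly larger than the reverse direction, which is what makes it useful for ranking candidate drivers.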

  10. Genetic Dissection of High Temperature Tolerance Traits in Maize - a QTL Mapping Approach

    High temperature (HT) stress severely limits plant productivity and causes extensive economic loss to US agriculture. Understanding HT adaptation mechanisms in crop plants is crucial to the success of developing HT tolerant varieties. Maize inbred lines vary greatly in HT tolerance based on field ...

  11. Thermodynamic hardness and the maximum hardness principle

    Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto


    An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to $T^{-1}(I - A)$, where $I$ is the first ionization potential, $A$ is the electron affinity, and $T$ is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.

  12. Reflection and refraction of a transient temperature field at a plane interface using Cagniard-de Hoop approach.

    Shendeleva, M L


    An instantaneous line heat source located in a medium consisting of two half-spaces with different thermal properties is considered. Green's functions for the temperature field are derived using the Laplace and Fourier transforms in time and space, which are then inverted by the Cagniard-de Hoop technique known from elastodynamics. The characteristic feature of the proposed approach is the application of the Cagniard-de Hoop method to the transient heat conduction problem. The idea is suggested by the fact that the Laplace transform in time reduces the heat conduction equation to a Helmholtz equation, as for wave propagation. The derived solutions exhibit some wave properties. First, the temperature field is decomposed into the source field and the reflected field in one half-space and the transmitted field in the other. Second, the laws of reflection and refraction can be deduced for the rays of the temperature field. In this connection the ray concept is briefly discussed. It is shown that the rays, introduced in such a way that they are consistent with Snell's law, do not represent the directions of heat flux in the medium. Numerical computations of the temperature field as well as diagrams of rays and streamlines of the temperature field are presented.

  13. Feasibility of a simple laboratory approach for determining temperature influence on SPMD-air partition coefficients of selected compounds

    Cicenaite, A.; Huckins, J.N.; Alvarez, D.A.; Cranor, W.L.; Gale, R.W.; Kauneliene, V.; Bergqvist, P.-A.


    Semipermeable membrane devices (SPMDs) are a widely used passive sampling methodology for both waterborne and airborne hydrophobic organic contaminants. The exchange kinetics and partition coefficients of an analyte in a SPMD are mediated by its physicochemical properties and certain environmental conditions. Controlled laboratory experiments are used for determining the SPMD-air (Ksa's) partition coefficients and the exchange kinetics of organic vapors. This study focused on determining a simple approach for measuring equilibrium Ksa's for naphthalene (Naph), o-chlorophenol (o-CPh) and p-dichlorobenzene (p-DCB) over a wide range of temperatures. SPMDs were exposed to test chemical vapors in small, gas-tight chambers at four different temperatures (−16, −4, 22 and 40 °C). The exposure times ranged from 6 h to 28 d depending on test temperature. Ksa's or non-equilibrium concentrations in SPMDs were determined for all compounds, temperatures and exposure periods with the exception of Naph, which could not be quantified in SPMDs until 4 weeks at the −16 °C temperature. To perform this study the assumption of constant and saturated atmospheric concentrations in test chambers was made. It could influence the results, which suggest that flow through experimental system and performance reference compounds should be used for SPMD calibration. © 2006 Elsevier Ltd. All rights reserved.

  14. Feasibility of a simple laboratory approach for determining temperature influence on SPMD–air partition coefficients of selected compounds

    Cicenaite, Aurelija; Huckins, James N.; Alvarez, David A.; Cranor, Walter L.; Gale, Robert W.; Kauneliene, Violeta; Bergqvist, Per-Anders


    Semipermeable membrane devices (SPMDs) are a widely used passive sampling methodology for both waterborne and airborne hydrophobic organic contaminants. The exchange kinetics and partition coefficients of an analyte in a SPMD are mediated by its physicochemical properties and certain environmental conditions. Controlled laboratory experiments are used for determining the SPMD–air (Ksa's) partition coefficients and the exchange kinetics of organic vapors. This study focused on determining a simple approach for measuring equilibrium Ksa's for naphthalene (Naph), o-chlorophenol (o-CPh) and p-dichlorobenzene (p-DCB) over a wide range of temperatures. SPMDs were exposed to test chemical vapors in small, gas-tight chambers at four different temperatures (−16, −4, 22 and 40 °C). The exposure times ranged from 6 h to 28 d depending on test temperature. Ksa's or non-equilibrium concentrations in SPMDs were determined for all compounds, temperatures and exposure periods with the exception of Naph, which could not be quantified in SPMDs until 4 weeks at the −16 °C temperature. To perform this study the assumption of constant and saturated atmospheric concentrations in test chambers was made. It could influence the results, which suggest that flow through experimental system and performance reference compounds should be used for SPMD calibration.

  15. Estimated Outlet Temperatures in Shell-and-Tube Heat Exchanger Using Artificial Neural Network Approach Based on Practical Data

    Hisham Hassan Jasim


    The objective of this study is to apply an Artificial Neural Network to the heat transfer analysis of shell-and-tube heat exchangers, which are widely used in power plants and refineries. Practical data were obtained from an industrial heat exchanger operating in the power generation department of the Dura refinery. The commonly used Back Propagation (BP) algorithm was used to train and test the networks, with the data divided into three samples (training, validation and testing) to better reflect the actual case. Inputs of the neural network include inlet water temperature, inlet air temperature and mass flow rate of air. Two outputs (exit water temperature to the cooling tower and exit air temperature to the second stage of the air compressor) were taken in the ANN. 150 sets of data were generated on different days with the reference heat exchanger model to train the network. Regression between the desired target and the predicted ANN output for the training, validation, testing and complete samples shows values close to one (R = 1). 50 sets of data were generated to test the network, and the comparison between the desired and predicted exit temperatures (water and air) shows good agreement.
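
    A rough sketch of this setup, with synthetic data standing in for the refinery measurements and scikit-learn's MLPRegressor in place of the study's back-propagation network, might look as follows.

```python
# Sketch of the ANN setup described above: three inputs (inlet water temperature,
# inlet air temperature, air mass flow rate) and two outputs (exit water and exit
# air temperatures). Synthetic data stand in for the Dura refinery measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 200
t_water_in = rng.uniform(60, 90, n)     # degC
t_air_in = rng.uniform(20, 45, n)       # degC
m_air = rng.uniform(5, 15, n)           # kg/s
X = np.column_stack([t_water_in, t_air_in, m_air])
# Crude energy-balance-flavoured synthetic targets (illustration only).
t_water_out = t_water_in - 0.8 * m_air + 0.1 * t_air_in + rng.normal(0, 0.5, n)
t_air_out = t_air_in + 0.5 * (t_water_in - t_air_in) - 0.3 * m_air + rng.normal(0, 0.5, n)
Y = np.column_stack([t_water_out, t_air_out])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
net.fit(X_tr, Y_tr)
pred = net.predict(X_te)
print("mean abs error per output (degC):", np.abs(pred - Y_te).mean(axis=0))
```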

  16. Vestige: Maximum likelihood phylogenetic footprinting

    Maxwell Peter


    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  17. Calibration approach and plan for the sea and land surface temperature radiometer

    Smith, David L.; Nightingale, Tim J.; Mortimer, Hugh; Middleton, Kevin; Edeson, Ruben; Cox, Caroline V.; Mutlow, Chris T.; Maddison, Brian J.; Coppo, Peter


    The sea and land surface temperature radiometer (SLSTR) to be flown on the European Space Agency's (ESA) Sentinel-3 mission is a multichannel scanning radiometer that will continue the 21 year dataset of the along-track scanning radiometer (ATSR) series. As its name implies, measurements from SLSTR will be used to retrieve global sea surface temperatures to an uncertainty of SLSTR instrument, the infrared calibration sources, and the alignment equipment. The calibration rig has been commissioned and results of these tests will be presented. Finally, the authors will present the planning for the on-orbit monitoring and calibration activities to ensure that the calibration is maintained. These activities include vicarious calibration techniques that have been developed through previous missions and the deployment of ship-borne radiometers.

  18. Nondestructive approach for measuring temperature-dependent dielectric properties of epoxy resins.

    Akhtar, M Jaleel; Feher, Lambert E; Thumm, Manfred


    A practical method for measuring the complex relative permittivity of epoxy resins and other viscous liquids over a wide temperature range in S-band is presented. The method involves inserting a hot glass tube, filled with the liquid-under-test (LUT), into a length of WR-340 rectangular waveguide connected between two ports of a Vector Network Analyzer, which measures the reflection and transmission coefficients at 2.45 GHz. The heating arrangement consists of a temperature-controlled glycol bath, where the LUT-filled glass tube is placed. The dielectric properties are determined using an optimization routine, which minimizes the error between the theoretical and measured scattering coefficient data. The theoretical values of the scattering coefficient data are computed with the help of a numerical 3-D electromagnetic field simulator, the CST Microwave Studio. The dielectric properties of the empty glass tube (required by the simulation code) are also measured using the above methodology.

  19. Population transitions and temperature change in Minas Gerais, Brazil: a multidimensional approach

    Alisson F. Barbieri


    Full Text Available Climate change will exacerbate the vulnerability of places and people around the world in the next decades, especially in less developed regions. In this paper, we investigate future scenarios of population vulnerability to climate change for the next 30 years in 66 regions of the state of Minas Gerais, Brazil. Based upon the Alkire & Foster Index, we integrate simulated and projected dimensions of population vulnerability into a Multidimensional Index, showing how scenarios of temperature change would affect each region's relative vulnerability in the future. Results suggest that economic and health dimensions are the highest contributors to increases in temperature-related vulnerability, with the poorest and agribusiness regions being the most impacted in decades to come.

  20. Hawking temperature: an elementary approach based on Newtonian mechanics and quantum theory

    Pinochet, Jorge


    In 1974, the British physicist Stephen Hawking discovered that black holes have a characteristic temperature and are therefore capable of emitting radiation. Given the scientific importance of this discovery, there is a profuse literature on the subject. Nevertheless, the available literature ends up being either too simple, which does not convey the true physical significance of the issue, or too technical, which excludes an ample segment of the audience interested in science, such as physics teachers and their students. The present article seeks to remedy this shortcoming. It develops a simple and plausible argument that provides insight into the fundamental aspects of Hawking’s discovery, which leads to an approximate equation for the so-called Hawking temperature. The exposition is mainly intended for physics teachers and their students, and it only requires elementary algebra, as well as basic notions of Newtonian mechanics and quantum theory.

  1. A new ab initio approach to the development of high temperature superconducting materials

    Turner, Philip


    We review recent theoretical developments, which suggest that a set of shared principles underpin macroscopic quantum phenomena observed in high temperature superconducting materials, room temperature coherence in photosynthetic processes and the emergence of long range order in biological structures. These systems are driven by dissipative systems, which lead to fractal assembly and a fractal network of charges (with associated quantum potentials) at the molecular scale. At critical levels of charge density and fractal dimension, individual quantum potentials merge to form a charge-induced macroscopic quantum potential, which acts as a structuring force dictating long range order. Whilst the system is only partially coherent (i.e. only the bosonic fields are coherent), within these processes many of the phenomena associated with standard quantum theory are recovered, with macroscopic quantum potentials and associated forces having their equivalence in standard quantum mechanics. We establish a testable hypo...

  2. Comparison of abdominal skin temperature between fertile and infertile women by infrared thermography: A diagnostic approach.

    Jo, Junyoung; Kim, Hyunho


    This retrospective study aimed to evaluate the differences in abdominal temperature (AT) between fertile (n=206; age) and infertile (n=250) women between the ages of 30 and 39 years. We evaluated the differences in two distinctive skin temperatures by thermography: ΔT1 (CV8 index) - difference in temperature between the mid-abdomen (CV8 acupuncture area) and ventral upper arm (VUA) and ΔT2 (CV4 index) - difference in temperature between the lower abdomen (CV4 acupuncture area) and VUA. The results indicated that the ΔT1 and ΔT2 of infertile women were significantly lower (by 1.05°C and 0.79°C, respectively; p<0.001, both) compared to those of fertile women. Additionally, the area under the curve of ΔT1 (0.78) was greater compared to that of ΔT2 (0.736), and its threshold was set at 0.675°C, by which, the sensitivity and specificity of ΔT1 for determination of fertility were found to be 80.8% and 68.4%, respectively. In conclusion, infertility is associated with lower AT. The decrease in AT in infertile women might be due to poor blood perfusion to the core muscles and tissues of the body. These findings provide a basis for further research for evaluation of clinical feasibility of thermography for analysis of infertility in women. Further evaluation of the influence of AT on fertility outcomes is required to determine the causal relationship between AT and infertility.

  3. A Multivariate Regression Approach to Adjust AATSR Sea Surface Temperature to In Situ Measurements

    TANDEO, Pierre; Autret, Emmanuelle; Piolle, Jean-francois; Tournadre, Jean; Ailliot, Pierre


    The Advanced Along-Track Scanning Radiometer (AATSR) onboard Envisat is designed to provide very accurate measurements of sea surface temperature (SST). Using colocated in situ drifting buoys, a dynamical matchup database (MDB) is used to assess the AATSR-derived SST products more precisely. SST biases are then computed. Currently, Medspiration AATSR SST biases are discrete values and can introduce artificial discontinuities in AATSR level-2 SST fields. The new AATSR SST biases presented in t...

  4. A renormalization-group approach to finite-temperature mass corrections

    Marini, A; Marini, A; Burgess, C P


    We illustrate how the reorganization of perturbation theory at finite temperature can be economically cast in terms of the Wilson-Polchinski renormalization methods. We take as an example the old saw of the induced thermal mass of a hot scalar field with a quartic coupling, which we compute to second order in the coupling constant. We show that the form of the result can be largely determined by renormalization-group arguments without the explicit evaluation of Feynman graphs.

  5. A geometry-based approach to determining time-temperature superposition shifts in aging experiments

    Maiti, Amitesh


    A powerful way to expand the time and frequency range of material properties is through a method called time-temperature superposition (TTS). Traditionally, TTS has been applied to the dynamical mechanical and flow properties of thermo-rheologically simple materials, where a well-defined master curve can be objectively and accurately obtained by appropriate shifts of curves at different temperatures. However, TTS analysis can also be useful in many other situations where there is scatter in the data and where the principle holds only approximately. In such cases, shifting curves can become a subjective exercise and can often lead to significant errors in the long-term prediction. This mandates the need for an objective method of determining TTS shifts. Here, we adopt a method based on minimizing the “arc length” of the master curve, which is designed to work in situations where there is overlapping data at successive temperatures. We examine the accuracy of the method as a function of increasing noise in the data, and explore the effectiveness of data smoothing prior to TTS shifting. We validate the method using existing experimental data on the creep strain of an aramid fiber and the powder coarsening of an energetic material.
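
    A minimal version of the arc-length criterion, assuming just two synthetic isotherms and a single horizontal shift, can be sketched as follows; the paper's data smoothing and multi-temperature bookkeeping are omitted.

```python
# Minimal arc-length criterion for a time-temperature superposition (TTS) shift:
# slide the higher-temperature curve along log(time) and pick the shift that
# minimizes the arc length of the combined master curve. Synthetic creep-like data.
import numpy as np
from scipy.optimize import minimize_scalar

def master(logt_ref, y_ref, logt_hi, y_hi, shift):
    logt = np.concatenate([logt_ref, logt_hi + shift])
    y = np.concatenate([y_ref, y_hi])
    order = np.argsort(logt)
    return logt[order], y[order]

def arc_length(shift, logt_ref, y_ref, logt_hi, y_hi):
    lt, y = master(logt_ref, y_ref, logt_hi, y_hi, shift)
    return np.sum(np.hypot(np.diff(lt), np.diff(y)))

rng = np.random.default_rng(4)
underlying = lambda lt: 1.0 / (1.0 + np.exp(-(lt - 3.0)))   # "true" master curve
logt_ref = np.linspace(0, 4, 40)                            # reference temperature data
logt_hi = np.linspace(0, 4, 40)                             # higher temperature data
true_shift = 2.0                                            # log10(a_T)
y_ref = underlying(logt_ref) + 0.01 * rng.standard_normal(40)
y_hi = underlying(logt_hi + true_shift) + 0.01 * rng.standard_normal(40)

res = minimize_scalar(arc_length, bounds=(0.0, 4.0), method="bounded",
                      args=(logt_ref, y_ref, logt_hi, y_hi))
print(f"recovered shift log10(a_T) = {res.x:.2f} (true value {true_shift})")
```

    Misaligned shifts make the merged curve zig-zag between the two data sets in their overlap region, which inflates the arc length; the minimum is therefore reached near the correct shift.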

  6. A chemical approach toward low temperature alloying of immiscible iron and molybdenum metals

    Nazir, Rabia [Department of Chemistry, Quaid-i-Azam University, Islamabad 45320 (Pakistan); Applied Chemistry Research Centre, Pakistan Council of Scientific and Industrial Research Laboratories Complex, Lahore 54600 (Pakistan); Ahmed, Sohail [Department of Chemistry, Quaid-i-Azam University, Islamabad 45320 (Pakistan); Mazhar, Muhammad, E-mail: [Department of Chemistry, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia); Akhtar, Muhammad Javed; Siddique, Muhammad [Physics Division, PINSTECH, P.O. Nilore, Islamabad (Pakistan); Khan, Nawazish Ali [Material Science Laboratory, Department of Physics, Quaid-i-Azam University, Islamabad 45320 (Pakistan); Shah, Muhammad Raza [HEJ Research Institute of Chemistry, University of Karachi, Karachi 75270 (Pakistan); Nadeem, Muhammad [Physics Division, PINSTECH, P.O. Nilore, Islamabad (Pakistan)


    Graphical abstract: - Highlights: • Low temperature pyrolysis of [Fe(bipy)3]Cl2 and [Mo(bipy)Cl4] homogeneous powder. • Easy low temperature alloying of immiscible metals like Fe and Mo. • Uniform sized Fe–Mo nanoalloy with particle size of 48–68 nm. • Characterization by EDXRF, AFM, XRPD, magnetometry, 57Fe Mössbauer and impedance. • Alloy behaves as almost superparamagnetic, obeying a simple –R(CPE)– circuit. - Abstract: The present research is based on a feasible, low temperature method for the synthesis of a nanoalloy of the immiscible metals iron and molybdenum for technological applications. The nanoalloy has been synthesized by pyrolysis of a homogeneous powder precipitated, from a common solvent, of the two complexes, trisbipyridineiron(II) chloride, [Fe(bipy)3]Cl2, and bipyridinemolybdenum(IV) chloride, [Mo(bipy)Cl4], followed by heating at 500 °C in an inert atmosphere of flowing argon gas. The resulting nanoalloy has been characterized by using EDXRF, AFM, XRD, magnetometry, 57Fe Mössbauer and impedance spectroscopies. These results showed that under the provided experimental conditions iron and molybdenum metals, with a known miscibility barrier, alloy together to give a (1:1) single phase material having a particle size in the range of 48–66 nm. The magnetism of iron is considerably reduced after alloy formation and shows a trend toward superparamagnetism. The designed chemical synthetic procedure is equally feasible for the fabrication of other immiscible metals.

  7. Extension of the Ginzburg–Landau approach for ultracold Fermi gases below a critical temperature

    Klimin, S.N., E-mail: [Theorie van Kwantumsystemen en Complexe Systemen (TQC), Universiteit Antwerpen, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium); Tempere, J., E-mail: [Theorie van Kwantumsystemen en Complexe Systemen (TQC), Universiteit Antwerpen, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium); Lyman Laboratory of Physics, Harvard University, Cambridge, MA 02138 (United States); Devreese, J.T. [Theorie van Kwantumsystemen en Complexe Systemen (TQC), Universiteit Antwerpen, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium)


    Highlights: • Ginzburg–Landau formalism is extended below the critical temperature. • Two different healing lengths in two-band superfluids are captured. • The developed method is focused on strong-coupling superfluid Fermi gases. - Abstract: In the context of superfluid Fermi gases, the Ginzburg–Landau (GL) formalism for the macroscopic wave function has been successfully extended to the whole temperature range where the superfluid state exists. After reviewing the formalism, we first investigate the temperature-dependent correction to the standard GL expansion (which is valid close to Tc). Deviations from the standard GL formalism are particularly important for the kinetic energy contribution to the GL energy functional, which in turn influences the healing length of the macroscopic wave function. We apply the formalism to variationally describe vortices in a strong-coupling Fermi gas in the BEC–BCS crossover regime, in a two-band system. The healing lengths, derived as variational parameters in the vortex wave function, are shown to exhibit hidden criticality well below Tc.

  8. Characteristics and effects of the extreme maximum air temperature in the summer of 2015 in Xinjiang under global warming

    毛炜峄; 陈鹏翔; 沈永平


    Monitoring data from 105 meteorological stations in Xinjiang were used to analyze the extreme characteristics of the high-temperature event of summer 2015. The event began in the latter part of early July, when daily maximum air temperatures ≥35°C first appeared in southeastern southern Xinjiang and in eastern Xinjiang; during mid-July the affected area spread rapidly westward and northward, reaching its maximum extent at the beginning of late July, with high temperatures occurring in both northern and southern Xinjiang. The event was strongest in mid-to-late July: 84.8% of the stations (89 stations) recorded high temperatures; at 52.4% of the stations (55 stations) the duration of the high-temperature spell ranked first in the historical record; and at 21.9% of the stations (23 stations) the extreme maximum air temperature ranked first, with the highest value, 47.7°C, observed at Dongkan in Turpan. The event made the summer mean temperature at 8 stations the highest on record for the season; the July mean temperature in southern Xinjiang and the Tianshan mountains ranked first for the same period in the historical record, and 54.3% of the stations (57 stations) broke their historical July mean temperature records. At the Daxigou station in the Tianshan mountains (3544 m a.s.l.), the daily maximum temperature in July repeatedly broke historical records, reaching 20.7°C on 22 July. During the event, the July 0°C-level height over Xinjiang was the highest for the period since 1991, ranking first among all years since 1991 on each day from 19 to 23 July. In the Kaidu River basin of the Tianshan mountains, the daily 0°C-level height remained above the 1991-2015 average for 33 consecutive days. From early to late July, at 500 hPa, the eastward movement of the Iranian high and its control over Xinjiang were the direct cause of the event, while at 100 hPa the shape, central position and intensity changes of the South Asian high were closely related to its evolution. The high temperatures caused rapid melting of snow and ice in the high mountains of Xinjiang, triggering snowmelt (ice-melt) floods in the Tarim River basin.

  9. Materials selection for low temperature processed high Q resonators using ashby approach

    Kazmi, S.N.R.; Salm, Cora; Schmitz, J.


    MicroElectroMechanical Systems (MEMS) is an emerging class of microfabrication technology that can truly be anticipated as an enabling technology for future radio frequency (RF) communications. This work focuses on the material selection using the Ashby approach for the high-Q resonators that need t

  10. Comparison of data-driven and model-driven approaches to brightness temperature diurnal cycle interpolation

    Van den Bergh, F


    RKHS model for the first experiment: MSE = (0.5363, 0.7331). The motivation for this approach was that the amount of computation per cycle would be reduced significantly. The specific example in Figure 4 shows the RKHS model—initially fitted to cycle...
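
    For readers unfamiliar with the data-driven side of this comparison, the sketch below interpolates a synthetic brightness-temperature diurnal cycle with a hand-rolled kernel ridge (RKHS) model; the kernel, its parameters and the data are assumptions, not the authors' configuration.

```python
# Hand-rolled kernel ridge (RKHS) interpolation of a brightness-temperature
# diurnal cycle from sparse samples, a generic stand-in for the data-driven
# RKHS model referred to above.
import numpy as np

def periodic_kernel(t1, t2, length=3.0, period=24.0):
    """Periodic (ExpSineSquared-style) kernel on hours of day."""
    d = np.abs(t1[:, None] - t2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length ** 2)

rng = np.random.default_rng(5)
true_cycle = lambda t: 290 + 12 * np.sin(2 * np.pi * (t - 14) / 24)  # peak near 14:00

t_obs = np.sort(rng.uniform(0, 24, 12))          # sparse overpass times (h)
y_obs = true_cycle(t_obs) + rng.normal(0, 0.5, t_obs.size)

lam = 1e-2                                        # ridge regularisation
K = periodic_kernel(t_obs, t_obs)
alpha = np.linalg.solve(K + lam * np.eye(t_obs.size), y_obs)

t_grid = np.linspace(0, 24, 97)                   # 15-minute interpolation grid
y_interp = periodic_kernel(t_grid, t_obs) @ alpha
print("max interpolation error (K):", np.max(np.abs(y_interp - true_cycle(t_grid))))
```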

  11. Examination of the Feynman-Hibbs Approach in the Study of Ne$_N$-Coronene Clusters at Low Temperatures

    Rodríguez-Cantano, R; Bartolomei, M; Hernández, M I; Campos-Martínez, J; González-Lezana, T; Villarreal, P; Hernández-Rojas, J; Bretón, J


    Feynman-Hibbs (FH) effective potentials constitute an appealing approach for investigations of many-body systems at thermal equilibrium since they allow us to easily include quantum corrections within standard classical simulations. In this work we apply the FH formulation to the study of Ne$_N$-coronene clusters ($N=$ 1-4, 14) in the 2-14 K temperature range. Quadratic (FH2) and quartic (FH4) contributions to the effective potentials are built upon Ne-Ne and Ne-coronene analytical potentials. In particular, a new corrected expression for the FH4 effective potential is reported. FH2 and FH4 cluster energies and structures, obtained from energy optimization through a basin-hopping algorithm as well as classical Monte Carlo simulations, are reported and compared with reference path integral Monte Carlo calculations. For temperatures $T \gtrsim 4$ K, both FH2 and FH4 potentials are able to correct the purely classical calculations in a consistent way. However, the FH approach fails at lower temperatures, especia...

  12. Association between completed suicide and environmental temperature in a Mexican population, using the Knowledge Discovery in Database approach.

    Fernández-Arteaga, Verónica; Tovilla-Zárate, Carlos Alfonso; Fresán, Ana; González-Castro, Thelma Beatriz; Juárez-Rojop, Isela E; López-Narváez, Lilia; Hernández-Díaz, Yazmín


    Suicide is a worldwide health problem and climatological characteristics have been associated with suicide behavior. However, approaches such as the Knowledge Discovery in Database are not frequently used to search for an association between climatological characteristics and suicide. The aim of the present study was to assess the association between weather data and suicide in a Mexican population. We used the information of 1357 patients who completed suicide from 2005 to 2012. Alternatively, weather data were provided by the National Water Commission. We used the Knowledge Discovery in Database approach with an Apriori algorithm and the data analyses were performed with the Waikato Environment for Knowledge Analysis software. One hundred rules of association were generated with a confidence of 0.86 and support of 1. We found an association between environmental temperature and suicide: days with no rain and temperatures between 30 °C and 40 °C (86-104 °F) were related to males completing suicide by hanging. In the prevention of suicidal behavior, the Knowledge Discovery in Database could be used to establish climatological characteristics and their association with suicide. This approach must be considered in future prevention strategies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
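
    The kind of rule reported above can be illustrated with a tiny hand-rolled support/confidence scan over Boolean attributes; the attribute names and synthetic records below are hypothetical, and the study itself used Weka's Apriori implementation.

```python
# Tiny support/confidence scan over Boolean attributes, illustrating the kind of
# association rule (e.g. {no_rain, temp_30_40C} -> {hanging}) reported above.
from itertools import combinations
import random

random.seed(6)
ATTRS = ["male", "no_rain", "temp_30_40C", "hanging"]
# Synthetic records: each is the set of attributes that hold for one case.
records = []
for _ in range(1000):
    rec = set()
    if random.random() < 0.8: rec.add("male")
    if random.random() < 0.6: rec.add("no_rain")
    if random.random() < 0.5: rec.add("temp_30_40C")
    # make the outcome more likely under hot, dry conditions (toy assumption)
    p_outcome = 0.7 if {"no_rain", "temp_30_40C"} <= rec else 0.3
    if random.random() < p_outcome: rec.add("hanging")
    records.append(rec)

def support(itemset):
    return sum(1 for r in records if itemset <= r) / len(records)

MIN_SUPPORT, MIN_CONFIDENCE = 0.1, 0.6
for size in (1, 2, 3):
    for antecedent in combinations(ATTRS[:-1], size):
        a = set(antecedent)
        rule = a | {"hanging"}
        if support(rule) >= MIN_SUPPORT and support(a) > 0:
            conf = support(rule) / support(a)
            if conf >= MIN_CONFIDENCE:
                print(f"{sorted(a)} -> ['hanging']  support={support(rule):.2f}  confidence={conf:.2f}")
```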

  13. Low-temperature approach to high-yield and reproducible syntheses of high-quality small-sized PbSe colloidal nanocrystals for photovoltaic applications.

    Ouyang, Jianying; Schuurmans, Carl; Zhang, Yanguang; Nagelkerke, Robbert; Wu, Xiaohua; Kingston, David; Wang, Zhi Yuan; Wilkinson, Diana; Li, Chunsheng; Leek, Donald M; Tao, Ye; Yu, Kui


    Small-sized PbSe nanocrystals (NCs) were synthesized at low temperature such as 50-80 °C with high reaction yield (up to 100%), high quality, and high synthetic reproducibility, via a noninjection-based one-pot approach. These small-sized PbSe NCs with their first excitonic absorption in wavelength shorter than 1200 nm (corresponding to size colloidal PbSe NCs, also called quantum dots, are high-quality, in terms of narrow size distribution with a typical standard deviation of ∼7-9%, excellent optical properties with high quantum yield of ∼50-90% and small full width at half-maximum of ∼130-150 nm of their band-gap photoemission peaks, and high storage stability. Our synthetic design aimed at promotion of the formation of PbSe monomers for fast and sizable nucleation with the presence of a large number of nuclei at low temperature. For formation of the PbSe monomer, our low-temperature approach suggests the existence of two pathways of Pb-Se (route a) and Pb-P (route b) complexes. Either pathway may dominate, depending on the method used and its experimental conditions. Experimentally, a reducing/nucleation agent, diphenylphosphine, was added to enhance route b. The present study addresses two challenging issues in the NC community, the monomer formation mechanism and the reproducible syntheses of small-sized NCs with high yield and high quality and large-scale capability, bringing insight to the fundamental understanding of optimization of the NC yield and quality via control of the precursor complex reactivity and thus nucleation/growth. Such advances in colloidal science should, in turn, promote the development of next-generation low-cost and high-efficiency solar cells. Schottky-type solar cells using our PbSe NCs as the active material have achieved the highest power conversion efficiency of 2.82%, in comparison with the same type of solar cells using other PbSe NCs, under Air Mass 1.5 global (AM 1.5G) irradiation of 100 mW/cm(2).

  14. Long-term room temperature preservation of corpse soft tissue: an approach for tissue sample storage

    Caputo Mariela


    Background: Disaster victim identification (DVI) represents one of the most difficult challenges in forensic sciences, and subsequent DNA typing is essential. Collected samples for DNA-based human identification are usually stored at low temperature to halt the degradation processes of human remains. We have developed a simple and reliable procedure for soft tissue storage and preservation for DNA extraction. It ensures high quality DNA suitable for PCR-based DNA typing after at least 1 year of room temperature storage. Methods: Fragments of human psoas muscle were exposed to three different environmental conditions for diverse time periods at room temperature. Storage conditions included: (a) a preserving medium consisting of solid sodium chloride (salt), (b) no additional substances and (c) garden soil. DNA was extracted with proteinase K/SDS followed by organic solvent treatment and concentration by centrifugal filter devices. Quantification was carried out by real-time PCR using commercial kits. Short tandem repeat (STR) typing profiles were analysed with 'expert software'. Results: DNA quantities recovered from samples stored in salt were similar up to the complete storage time and underscored the effectiveness of the preservation method. It was possible to reliably and accurately type different genetic systems including autosomal STRs and mitochondrial and Y-chromosome haplogroups. Autosomal STR typing quality was evaluated by expert software, denoting high quality profiles from DNA samples obtained from corpse tissue stored in salt for up to 365 days. Conclusions: The procedure proposed herein is a cost efficient alternative for storage of human remains in challenging environmental areas, such as mass disaster locations, mass graves and exhumations. This technique should be considered as an additional method for sample storage when preservation of DNA integrity is required for PCR-based DNA typing.

  15. OECD Maximum Residue Limit Calculator

    With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.

  16. Homogeneous Carbon Nanotube/Carbon Composites Prepared by Catalyzed Carbonization Approach at Low Temperature

    Hongjiang Li


    We synthesize a carbon nanotube (CNT)/carbon composite using catalyzed carbonization of a CNT/epoxy resin composite at a fairly low temperature of about 400 °C. The microstructure of the composite is characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray diffraction (XRD). The results indicate that CNTs and pyrolytic carbon blend well with each other. Pyrolytic carbon mainly stays in an amorphous state, with some of it forming crystalline structures. The catalyst has the effect of eliminating the interstices in the composites. Remarkable increases in thermal and electrical conductivity are also reported.

  17. Facile Hydrothermal Approach to ZnO Nanorods at Mild Temperature

    Yang Jiao


    In this work, ZnO nanorods are obtained through a facile hydrothermal route. The structure and morphology of the resultant products are characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The experimental results indicate that the as-synthesized ZnO nanorods have an average diameter of approximately 100 nm. A possible growth mechanism for the ZnO nanorods is proposed based on the experimental results, and it was found that Zn powder plays a critical role in the morphology of the products. The room temperature photoluminescence of the ZnO nanorods shows an ultraviolet emission peak at 390 nm.

  18. Pulsed laser deposition of gadolinia doped ceria layers at moderate temperature – a seeding approach

    Rodrigo, Katarzyna Agnieszka; Heiroth, Sebastian; Pryds, Nini

    ), to the growth of dense, gas impermeable 10 mol% gadolinia-doped ceria (CGO10) solid electrolyte can be overcome by the seeding process. In order to evaluate the seed layer preparation, the effects of different thermal annealing treatments on the morphology, microstructure and surface roughness of ultrathin CGO...... the preparation of ultrathin seed layers in the first stage of the deposition process is often envisaged to control the growth and physical properties of the subsequent coating. This work suggests that the limitations of conventional pulsed laser deposition (PLD), performed at moderate temperature (400°C...

  19. Temperature initiated passive cooling system

    Forsberg, Charles W.


    A passive cooling system for cooling an enclosure only when the enclosure temperature exceeds a maximum standby temperature comprises a passive heat transfer loop containing heat transfer fluid having a particular thermodynamic critical point temperature just above the maximum standby temperature. An upper portion of the heat transfer loop is insulated to prevent two phase operation below the maximum standby temperature.

  20. The devitrification of artificial fibers: a multimethodic approach to quantify the temperature-time onset of cancerogenic crystalline phases.

    Comodi, Paola; Cera, Fabio; Gatta, Giacomo Diego; Rotiroti, Nicola; Garofani, Patrizia


    A variety of artificial fibers extensively employed as lining in high-temperature apparatus may undergo a devitrification process that leads to significant changes in the chemical-physical properties of the materials. Among them, the crystallization of carcinogenic minerals, such as cristobalite, has already been documented for alumino-silicate ceramic fibers. Five fibrous samples with different compositions were treated over a wide range of temperatures (20-1500°C) and times (24-336 h) to investigate the rate and the crystalline phases that are formed, as well as their onset temperatures. The new phases were characterized by using a multimethodic approach: phase transformations were monitored together with thermal analysis, and the new phases were investigated by X-ray powder diffraction analysis. The crystalline:amorphous ratio was monitored by Rietveld refinement of the X-ray diffraction data. Scanning electron microscopy was used to study the effect of heat treatments on the morphology of the fibers, and the nanostructures were investigated by transmission electron microscopy (TEM). The results show that the main crystalline phases are cristobalite, diopside, mullite, and zirconia. The onset of cristobalite was observed at a temperature lower than that thermodynamically expected. The TEM analysis showed that protostructures were present in the material vitrified from sol-gel-derived products, which can act as crystallization nuclei. The study shows that devitrification leads to a health hazard due to the formation of inhalable powder of carcinogenic crystalline phases.

  1. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  2. A GIS Approach to Wind,SST(Sea Surface Temperature) and CHL(Chlorophyll) variations in the Caspian Sea

    Mirkhalili, Seyedhamzeh


    Chlorophyll is an extremely important bio-molecule, critical in photosynthesis, which allows plants to absorb energy from light. At the base of the ocean food web are single-celled algae and other plant-like organisms known as phytoplankton. Like plants on land, phytoplankton use chlorophyll and other light-harvesting pigments to carry out photosynthesis. Where phytoplankton grow depends on available sunlight, temperature, and nutrient levels. In this research a GIS approach using ARCGIS software and QuikSCAT satellite data was applied to visualize wind, SST (sea surface temperature) and CHL (chlorophyll) variations in the Caspian Sea. Results indicate that the increase in chlorophyll concentration in coastal areas is primarily driven by terrestrial nutrients and does not imply that warmer SST will lead to an increase in chlorophyll concentration and consequently phytoplankton abundance.

  3. Low temperature synthesis of ordered mesoporous stable anatase nanocrystals: the phosphorus dendrimer approach.

    Brahmi, Younes; Katir, Nadia; Ianchuk, Mykhailo; Collière, Vincent; Essassi, El Mokhtar; Ouali, Armelle; Caminade, Anne-Marie; Bousmina, Mosto; Majoral, Jean Pierre; El Kadib, Abdelkrim


    The scarcity of low temperature syntheses of anatase nanocrystals prompted us to explore the use of surface-reactive fourth generation phosphorus dendrimers as molds to control the nucleation and growth of titanium-oxo species during the sol-gel mineralization process. Unexpectedly, the dendritic medium provides, at low temperature, discrete anatase nanocrystals (4.8 to 5.2 nm in size), in marked contrast to the routinely obtained amorphous titanium dioxide phase under standard conditions. Upon thermal treatment, heteroatom migration from the branches to the nanoparticle surface and the ring opening polymerization of the cyclophosphazene core provide stable, interpenetrating mesoporous polyphosphazene-anatase hybrid materials (-P=N-)n-TiO2. The steric hindrance of the dendritic skeleton, the passivation of the anatase surface by heteroatoms and the ring opening of the core limit the crystal growth of anatase to 7.4 nm and prevent, up to 800 °C, the commonly observed anatase-to-rutile phase transformation. Performing this mineralization in the presence of similar surface-reactive but non-dendritic skeletons (referred to as branch-mimicking dendrimers) failed to generate crystalline anatase and to efficiently limit the crystal growth, thus bringing clear evidence of the virtues of phosphorus dendrimers in the design of novel nanostructured materials.

  4. A reaction kinetic approach to the temperature-time history of sedimentary basins

    Sajgó, Cs.; Lefler, J.

    Three biological marker reactions have been studied in order to determine the temperature — time history of a sedimentary sequence. Two of these reactions are configurational isomerization reactions, at C-20 in a C29-sterane and at C-22 in C31 and C32 hopane hydrocarbons. In the third reaction two C29 C-ring monoaromatic steroid hydrocarbons convert to a C28 triaromatic one. The progress of these reactions is different because of their different rate constants. Based on temperature and age data obtained from field measurements and on concentration measurements of reactants and products in core samples of a Pannonian borehole, we calculated the rate parameters: pre-exponential factors, enthalpies and entropies of activation. It is obvious, that at least two different reactions are necessary to characterize the maturity of any system. The aromatization seems to be a rather complicated reaction, and we believe its use to be premature. Fortunately, two isomerizations work well and are suitable for elucidation of thermal history in different basins if the rate constants are universally valid.

  5. Low-temperature electrodeposition approach leading to robust mesoscopic anatase TiO2 films.

    Patra, Snehangshu; Andriamiadamanana, Christian; Tulodziecki, Michal; Davoisne, Carine; Taberna, Pierre-Louis; Sauvage, Frédéric


    Anatase TiO2, a wide bandgap semiconductor, likely the most worldwide studied inorganic material for many practical applications, offers unequal characteristics for applications in photocatalysis and sun energy conversion. However, the lack of controllable, cost-effective methods for scalable fabrication of homogeneous thin films of anatase TiO2 at low temperatures (ie. < 100 °C) renders up-to-date deposition processes unsuited to flexible plastic supports or to smart textile fibres, thus limiting these wearable and easy-to-integrate emerging technologies. Here, we present a very versatile template-free method for producing robust mesoporous films of nanocrystalline anatase TiO2 at temperatures of/or below 80 °C. The individual assembly of the mesoscopic particles forming ever-demonstrated high optical quality beads of TiO2 affords, with this simple methodology, efficient light capture and confinement into the photo-anode, which in flexible dye-sensitized solar cell technology translates into a remarkable power conversion efficiency of 7.2% under A.M.1.5G conditions.

  6. Automatic maximum entropy spectral reconstruction in NMR.

    Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C


    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.

  7. Pushing the upper limit of Rayleigh-scatter Temperatures Retrievals into the Lower Thermosphere Using an Inversion Approach

    Bandoro, J.; Sica, R. J.; Argall, S.


    An important aspect of solar terrestrial relations is the coupling between the lower and upper atmosphere-ionosphere system. The coupling is evident in the general circulation of the atmosphere, where waves generated in the lower atmosphere play an important role in the dynamics of the upper atmosphere, which feeds back on the lower atmosphere's circulation. Addressing coupling problems requires measurements over the broadest range of heights possible. A recently developed retrieval method for temperature profiles from Rayleigh-scatter lidar measurements using an inversion approach allows the upward extension of the altitude range of temperature by 10 to 15 km over the conventional method, thus producing the equivalent of increasing the system's power-aperture product by 4 times [1]. The method requires no changes to the lidar's hardware and can thus be applied to the body of existing measurements. In addition, since the uncertainties of the retrieved temperature profile are found by a Monte Carlo error analysis, it is possible to isolate systematic and random uncertainties and to model the effect of each one on the final uncertainty product for the temperature profile. This unambiguous separation of uncertainties was not previously possible, as only the propagation of the statistical uncertainties is typically reported. For the Purple Crow Lidar, corrections for saturation (e.g. non-linearity) in the photocount returns, ozone extinction and background removal all contribute to the overall systematic uncertainty. Results of individually varying each systematic correction, and its effect on the final temperature uncertainty through Monte Carlo realizations, are presented to determine the importance of each one. For example, it was found that treatment of the background correction as a systematic versus statistical uncertainty gave results in agreement with each other. This new method is then applied to measurements obtained by the Purple Crow Lidar's Rayleigh
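
    The Monte Carlo part of such an uncertainty budget can be sketched generically: perturb the photocounts, re-run a retrieval, and separate the scatter (random) from a deliberate one-sided change (systematic). The sketch below uses a traditional top-down hydrostatic integration and a synthetic isothermal atmosphere, not the paper's optimal-estimation inversion; the count level, gravity value and seed temperature are assumptions.

```python
# Monte Carlo propagation of photocount noise through a traditional
# Rayleigh-lidar temperature retrieval (top-down hydrostatic integration).
import numpy as np

G, R_AIR = 9.5, 287.0                       # gravity (m/s^2), specific gas constant (J/kg/K)

def retrieve_temperature(counts, z, t_seed):
    """Temperature profile from relative density via hydrostatic + ideal-gas relations."""
    rho = counts * z**2                     # relative density (range-corrected counts)
    p = np.empty_like(rho)
    p[-1] = rho[-1] * R_AIR * t_seed        # seed pressure at the top of the profile
    for i in range(len(z) - 2, -1, -1):     # integrate dp = rho * g * dz downward
        dz = z[i + 1] - z[i]
        p[i] = p[i + 1] + 0.5 * (rho[i] + rho[i + 1]) * G * dz
    return p / (rho * R_AIR)

rng = np.random.default_rng(7)
z = np.arange(35e3, 90e3, 1e3)              # altitude grid (m)
t_true = 240.0
h_scale = R_AIR * t_true / G                # consistent isothermal scale height (~7.3 km)
expected = 1e17 * np.exp(-z / h_scale) / z**2   # expected photocounts per bin

# Random (statistical) part: Poisson noise on the counts, propagated by repetition.
runs = np.array([retrieve_temperature(rng.poisson(expected).astype(float), z, t_true)
                 for _ in range(200)])
stat_sigma = runs.std(axis=0)

# One systematic term: an under-subtracted constant background of 0.5 counts per bin.
sys_shift = (retrieve_temperature(expected + 0.5, z, t_true)
             - retrieve_temperature(expected, z, t_true))

for km, i in [(40, 5), (60, 25), (80, 45)]:
    print(f"{km} km: statistical sigma = {stat_sigma[i]:.2f} K, "
          f"background shift = {sys_shift[i]:.3f} K")
```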

  8. Versatile Approach to Access the Low Temperature Thermodynamics of Lattice Polymers and Proteins

    Wüst, Thomas; Landau, David P.


    We show that Wang-Landau sampling, combined with suitable Monte Carlo trial moves, provides a powerful method for both the ground state search and the determination of the density of states for the hydrophobic-polar (HP) protein model and the interacting self-avoiding walk (ISAW) model for homopolymers. We obtain accurate estimates of thermodynamic quantities for HP sequences with >100 monomers and for ISAWs up to >500 monomers. Our procedure possesses an intrinsic simplicity and overcomes the limitations inherent in more tailored approaches making it interesting for a broad range of protein and polymer models.
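
    A compact illustration of Wang-Landau sampling is given below for a small 2D Ising lattice, used here as a generic stand-in for the HP and ISAW models (which additionally need specialised trial moves such as pull moves); the flatness criterion and run lengths are arbitrary choices.

```python
# Wang-Landau estimate of the density of states g(E) for a small 2D Ising lattice,
# then low-temperature thermodynamics computed directly from g(E).
import numpy as np

rng = np.random.default_rng(8)
L = 6
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # nearest-neighbour Ising energy with periodic boundaries, each bond counted once
    return -int(np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))))

def delta_energy(s, i, j):
    nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return 2 * s[i, j] * nb

# Energies are multiples of 4 in [-2*L*L, 2*L*L]; index levels by (E + 2*L*L) // 4.
n_levels = L * L + 1
log_g = np.zeros(n_levels)                     # ln g(E), up to an additive constant
hist = np.zeros(n_levels)
idx = lambda E: (E + 2 * L * L) // 4

E = total_energy(spins)
ln_f = 1.0                                     # modification factor
while ln_f > 1e-3:
    for _ in range(20000):                     # batch of single-spin-flip attempts
        i, j = rng.integers(L), rng.integers(L)
        dE = delta_energy(spins, i, j)
        # accept with probability min(1, g(E_old)/g(E_new))
        if log_g[idx(E)] >= log_g[idx(E + dE)] or \
           rng.random() < np.exp(log_g[idx(E)] - log_g[idx(E + dE)]):
            spins[i, j] *= -1
            E += dE
        log_g[idx(E)] += ln_f
        hist[idx(E)] += 1
    visited = hist > 0
    if hist[visited].min() > 0.8 * hist[visited].mean():   # "flat enough" histogram
        hist[:] = 0
        ln_f /= 2.0

# Mean energy per spin at T = 1, straight from the estimated density of states.
T = 1.0
E_vals = np.arange(-2 * L * L, 2 * L * L + 1, 4)
vis = log_g > 0
w = log_g[vis] - E_vals[vis] / T
w -= w.max()
mean_E = np.sum(E_vals[vis] * np.exp(w)) / np.sum(np.exp(w))
print("mean energy per spin at T=1:", mean_E / (L * L))
```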

  9. Weak scale from the maximum entropy principle

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu


    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\ \mathrm{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2 / (M_{pl} y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
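
    A back-of-the-envelope check of the quoted scaling, with T_BBN taken as roughly 1 MeV, the full Planck mass, and the electron Yukawa treated as a fixed input coupling (all assumptions for illustration), indeed lands at a few hundred GeV:

```python
# Rough numerical check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# Input choices (T_BBN ~ 1 MeV, full Planck mass) are assumptions for illustration.
m_e = 0.511e-3          # electron mass, GeV
v = 246.0               # observed Higgs vev, GeV (used only to form y_e)
y_e = 2**0.5 * m_e / v  # electron Yukawa coupling, ~2.9e-6
T_BBN = 1e-3            # ~1 MeV, in GeV
M_pl = 1.22e19          # Planck mass, GeV

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")   # comes out at a few hundred GeV, i.e. O(300 GeV)
```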

  10. A temperature dependent formation time approach for $\Upsilon$ suppression at LHC

    Ganesh, S


    The present work is a further development of our recent paper [Phys. Rev. C 88, 044908 (2013)], in which we described bottomonium suppression in Pb+Pb collisions at the Large Hadron Collider (LHC) at $\sqrt{s_{NN}}=2.76$ TeV by using the quasi-particle model (QPM) equation of state (EOS) for the Quark-Gluon Plasma (QGP) expanding under Bjorken's hydrodynamical expansion. The current model includes the modification of the formation time based on the temperature of the QGP, and cold nuclear matter (CNM) effects, in addition to the color screening during bottomonium production, gluon-induced dissociation and collisional damping modeled previously. The final suppression of the bottomonium states is calculated as a function of centrality. The results compare closely with the CMS data at the LHC in the mid-rapidity region for various centrality bins.

  11. An integrated approach to remove and mitigate carbonate scale in a low temperature sandstone reservoir

    Al-Saiari, H.A.; Nasr-El-Din, H.A.


    Calcium carbonate and iron sulfide scales were detected in several wells in a low temperature sandstone reservoir. These scales were detected downhole, covering perforations and the intakes of submersible pumps. The presence of scale has adversely affected well performance. The paper presents the results of detailed studies conducted to design and field test an acid treatment to remove the scale and a new scale squeeze treatment to mitigate scale formation. The treatment has been successfully applied to more than 35 wells. Some of these wells were descaled before the squeeze, while other wells were squeezed before scale detection. Field data indicated that the acid treatment restored well productivity. The scale squeeze treatment, which utilized a newly developed inhibitor, was successfully applied in the field and has a lifetime that exceeded two years in most of the treated wells. (Author)

  12. Time-Dependent Hartree-Fock Approach to Nuclear Pasta at Finite Temperature

    Schuetrumpf, Bastian; Iida, Kei; Maruhn, Joachim; Mecke, Klaus; Reinhard, Paul-Gerhard


    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of $\alpha$ particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi-distributed plane waves, the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature.

  13. Short-term preservation of porcine oocytes in ambient temperature: novel approaches.

    Cai-Rong Yang

    Full Text Available The objective of this study was to evaluate the feasibility of preserving porcine oocytes without freezing. To optimize preservation conditions, porcine cumulus-oocyte complexes (COCs) were preserved in TCM-199, porcine follicular fluid (pFF) and FCS at different temperatures (4°C, 20°C, 25°C, 27.5°C, 30°C and 38.5°C) for 1 day, 2 days or 3 days. After preservation, oocyte morphology, germinal vesicle (GV) rate, actin cytoskeleton organization, cortical granule distribution, mitochondrial translocation and intracellular glutathione level were evaluated. Oocyte maturation was indicated by first polar body emission and spindle morphology after in vitro culture. Strikingly, when COCs were stored at 27.5°C for 3 days in pFF or FCS, more than 60% of oocytes were still arrested at the GV stage and more than 50% of oocytes matured to the MII stage after culture. Almost 80% of oocytes showed normal actin organization and cortical granule relocation to the cortex, and approximately 50% of oocytes showed diffuse mitochondrial distribution patterns and normal spindle configurations. When stored in TCM-199, all these criteria decreased significantly. The glutathione (GSH) level in the pFF or FCS groups was higher than in the TCM-199 group, but lower than in the non-preserved control group. The preserved oocytes could be fertilized and developed to blastocysts (about 10%) with normal cell numbers, clear evidence that they retain developmental potential after 3 days of preservation. Thus, we have developed a simple method for preserving immature pig oocytes at ambient temperature for several days without evident damage to the cytoplasm and without loss of oocyte developmental competence.

  14. The Influence of Temperature on Time-Dependent Deformation and Failure in Granite: A Mesoscale Modeling Approach

    Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick


    An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. We anticipate that the

  15. temperature overspecification

    Mehdi Dehghan


    Full Text Available Two different finite difference schemes for solving the two-dimensional parabolic inverse problem with temperature overspecification are considered. These schemes are developed for identifying the control parameter which produces, at any given time, a desired temperature distribution at a given point in the spatial domain. The numerical methods discussed are based on the (3,3) alternating direction implicit (ADI) finite difference scheme and the (3,9) alternating direction implicit formula. These schemes are unconditionally stable. The basis of the analysis of the finite difference equations considered here is the modified equivalent partial differential equation approach, developed from the 1974 work of Warming and Hyett [17]. This allows direct and simple comparison of the errors associated with the equations as well as providing a means to develop more accurate finite difference schemes. These schemes use less central processor (CPU) time than the fully implicit schemes for two-dimensional diffusion with temperature overspecification. The alternating direction implicit schemes developed in this report use more CPU time than the fully explicit finite difference schemes, but their unconditional stability is significant. The results of numerical experiments are presented, and the accuracy and the CPU times needed for each of the methods are discussed. We also give error estimates in the maximum norm for each of these methods.
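
    For orientation, the sketch below shows one standard Peaceman-Rachford ADI time step for the forward two-dimensional diffusion equation with homogeneous Dirichlet boundaries; it illustrates only the unconditionally stable ADI splitting, not the (3,3)/(3,9) schemes or the inverse control-parameter identification treated in the paper.

      # One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on the unit square, u = 0 on the boundary.
      import numpy as np
      from scipy.linalg import solve_banded

      n = 49                       # interior grid points per direction
      h = 1.0 / (n + 1)
      dt = 1e-3
      r = dt / (2.0 * h * h)       # half-step diffusion number

      # Banded form of the tridiagonal matrix (I - (dt/2) d^2/dx^2) used in each half step.
      band = np.zeros((3, n))
      band[0, 1:] = -r             # super-diagonal
      band[1, :] = 1.0 + 2.0 * r   # main diagonal
      band[2, :-1] = -r            # sub-diagonal

      def lap1d(u, axis):
          """Second difference along one axis with zero Dirichlet boundaries, divided by h^2."""
          up = np.zeros((u.shape[0] + 2, u.shape[1] + 2))
          up[1:-1, 1:-1] = u
          if axis == 0:
              return (up[2:, 1:-1] - 2.0 * up[1:-1, 1:-1] + up[:-2, 1:-1]) / (h * h)
          return (up[1:-1, 2:] - 2.0 * up[1:-1, 1:-1] + up[1:-1, :-2]) / (h * h)

      def adi_step(u):
          # Half step 1: implicit in x, explicit in y.
          rhs = u + 0.5 * dt * lap1d(u, axis=1)
          u_half = np.empty_like(u)
          for j in range(n):
              u_half[:, j] = solve_banded((1, 1), band, rhs[:, j])
          # Half step 2: implicit in y, explicit in x.
          rhs = u_half + 0.5 * dt * lap1d(u_half, axis=0)
          u_new = np.empty_like(u)
          for i in range(n):
              u_new[i, :] = solve_banded((1, 1), band, rhs[i, :])
          return u_new

      # Check against the exact decay of the (1,1) Fourier mode, exp(-2*pi^2*t).
      x = np.linspace(h, 1.0 - h, n)
      u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
      for _ in range(100):
          u = adi_step(u)
      print("ADI peak after t = 0.1:", u.max(), " exact:", np.exp(-2.0 * np.pi**2 * 0.1))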

  16. A Maximum Radius for Habitable Planets.

    Alibert, Yann


    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.

  17. Growth of uniform nanoparticles of platinum by an economical approach at relatively low temperature

    Shah, M.A.


    Current chemical methods of synthesis have shown limited success in the fabrication of nanomaterials and involve environmentally harmful chemicals. Environmentally friendly synthesis requires alternative solvents, and it is expected that the use of soft, green approaches may overcome these obstacles. Water, which is regarded as a benign solvent, has been used in the present work for the preparation of platinum nanoparticles. The average particle diameter is in the range of ∼13±5 nm and the particles are largely agglomerated. The advantages of preparing nanoparticles with this method include ease, flexibility and cost effectiveness. The prospects of the process are bright, and the technique could be extended to prepare many other important metal and metal oxide nanostructures. © 2012 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved.

  18. An approach for protein to be completely reversible to thermal denaturation even at autoclave temperatures.

    Iwakura, M; Nakamura, D; Takenawa, T; Mitsuishi, Y


    Reversibility of protein denaturation is a prerequisite for all applications that depend on reliable enzyme catalysis, particularly, for using steam to sterilize enzyme reactors or enzyme sensor tips, and for developing protein-based devices that perform on-off switching of the protein function such as enzymatic activity, ligand binding and so on. In this study, we have successfully constructed an immobilized protein that retains full enzymatic activity even after thermal treatments as high as 120 degrees C. The key for the complete reversibility was the development of a new reaction that allowed a protein to be covalently attached to a surface through its C-terminus and the protein engineering approach that was used to make the protein compatible with the new attachment chemistry.

  19. [Aquatic ecosystem modelling approach: temperature and water quality models applied to Oualidia and Nador lagoons].

    Idrissi, J Lakhdar; Orbi, A; Hilmi, K; Zidane, F; Moncef, M


    The objective of this work is to develop an aquatic ecosystem model and apply it to Moroccan lagoon systems. The model keeps track of the year-round evolution of the main parameters that characterize these ecosystems while integrating all the data acquired so far. Within this framework, a simulation model of the thermal regime and a model of water quality have been elaborated. These models, which have been simulated for the lagoon of Oualidia (north of Morocco) and validated for the lagoon of Nador (north-west Mediterranean), make it possible to predict the cycles of surface temperature and the water quality parameters (dissolved oxygen and phytoplankton biomass) using meteorological information, site-specific features and in situ measurements at the studied sites. The elaborated model, called zero-dimensional, simulates the average behaviour of the site over time for state variables that are representative of the studied ecosystem. This model provides answers about the studied phenomena and is a working tool well suited for its numerical simplicity.

  20. Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.


    We present simulations of neutron-rich matter at sub-nuclear densities, like supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV, employing a full Skyrme interaction in a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi-distributed plane waves, the calculations reflect a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Furthermore, we observe gyroid-like structures, discussed e.g. in [2], which form spontaneously for a certain value of the simulation box length. The ρ-T map of pasta shapes is basically consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell-density distribution as a measure to distinguish pasta matter from uniform matter.

  1. Comparison of different Geostatistical Approaches to map Sea Surface Temperature (SST) of Southern South China Sea

    Ali, Azizi; Mohd Muslim, Aidy; Lokman Husain, Mohd; Fadzil Akhir, Mohd


    Sea surface temperature (SST) variation provides vital information for weather and ocean forecasting, especially when studying climate change. Conventional methods of collecting ocean parameters such as SST remain expensive and labor intensive due to the large area coverage and complex analytical procedures required. Therefore, studies of the spatial and temporal distribution of ocean parameters rely on interpolation. This study examines geostatistical methods for interpolating SST values and their impact on accuracy. Two spatial techniques, kriging and inverse distance weighting (IDW), were applied to create variability distribution maps of SST for the Southern South China Sea (SCS). Data from 72 sampling stations were collected in July 2012, covering an area of 270 km x 100 km and extending 263 km from shore. These data provide the basis for the interpolation and accuracy analysis. After normalization, variograms were computed to fit the data sets, producing models with the least RSS value. The accuracy was later evaluated based on the root mean squared error (RMSE) and the root mean kriging variance (RMKV). Results show that kriging with an exponential model produced the most accurate estimates, reducing error by 17.3% compared with inverse distance weighting.
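
    As a small illustration of the simpler of the two interpolators, the sketch below applies inverse distance weighting to synthetic station data (not the cruise data of the study) and scores it with a leave-one-out RMSE, the kind of accuracy measure used in the comparison with kriging.

      # IDW interpolation of scattered SST samples with a leave-one-out RMSE; synthetic data only.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 270, 72)                                    # station coordinates [km]
      y = rng.uniform(0, 100, 72)
      sst = 29.0 + 0.004 * x - 0.006 * y + rng.normal(0, 0.1, 72)    # synthetic SST field [deg C]

      def idw(xq, yq, xs, ys, vs, power=2.0, eps=1e-12):
          """IDW estimate at (xq, yq) from samples (xs, ys, vs)."""
          d = np.hypot(xs - xq, ys - yq)
          if d.min() < eps:                   # query point coincides with a sample
              return vs[d.argmin()]
          w = 1.0 / d**power
          return np.sum(w * vs) / np.sum(w)

      # Leave-one-out cross validation: predict each station from the remaining 71.
      errors = []
      for i in range(len(sst)):
          mask = np.arange(len(sst)) != i
          pred = idw(x[i], y[i], x[mask], y[mask], sst[mask])
          errors.append(pred - sst[i])
      rmse = np.sqrt(np.mean(np.square(errors)))
      print(f"leave-one-out RMSE of IDW: {rmse:.3f} deg C")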

  2. New approach of gravity wave detection in mesopause temperatures operating an array of airglow spectrometers

    Wachter, Paul; Schmidt, Carsten; Wüst, Sabine; Bittner, Michael


    GRIPS (Ground based Infrared P-branch Spectrometer) airglow measurements allow the derivation of the kinetic temperature in the mesopause region, averaged over a field of view of some 10 km x 10 km. In 2011, three identical GRIPS instruments were set up at Oberpfaffenhofen (11.28°E, 48.09°N), Germany, in such a way that their fields of view form an equilateral triangle in the mesopause region with a horizontal dimension of approximately 70 km. Using this setup, GRIPS time series can be analyzed not only with respect to gravity wave periods, but also to derive spatial wave parameters. Based on the results of the harmonic analysis, the horizontal wavelength, phase speed and direction of propagation were determined for gravity wave events from February to July 2011. We present distinct relationships between periods, amplitudes, phase speeds and wavelengths identified in this dataset. Further analysis of the derived wave parameters shows preferred directions of propagation and suggests seasonal variations of the wave characteristics. The presentation will conclude with the introduction of a measurement setup relying on one GRIPS instrument equipped with a variably adjustable mirror optic. The capability to scan multiple fields of view during nightly measurements will permit longer-term investigations of mesopause gravity waves.

  3. Temperature evaluation by simultaneous emission and saturated fluorescence measurements: A critical theoretical and experimental appraisal of the approach

    Shelby, Daniel E. [Department of Chemistry, University of Florida, Gainesville, FL 32611 (United States); Merk, Sven [BAM, Federal Institute for Materials Research and Testing, Berlin (Germany); Smith, Benjamin W. [Department of Chemistry, University of Florida, Gainesville, FL 32611 (United States); Gornushkin, Igor B. [BAM, Federal Institute for Materials Research and Testing, Berlin (Germany); Panne, Ulrich [BAM, Federal Institute for Materials Research and Testing, Berlin (Germany); Department of Chemistry, Humboldt Universität, Berlin (Germany); Omenetto, Nicoló [Department of Chemistry, University of Florida, Gainesville, FL 32611 (United States)]


    Temperature is one of the most important physical parameters of plasmas induced by a focused laser beam on solid targets, and its experimental evaluation has received considerable attention. An intriguing approach, first proposed by Kunze (H.-J. Kunze, Experimental check of local thermodynamic equilibrium in discharges, Appl. Opt., 25 (1986) 13–13.) as a check of the existence of local thermodynamic equilibrium, is based upon the simultaneous measurement of the thermal emission and the optically saturated fluorescence of the same selected atomic transition. The approach, whose appealing feature is that neither the calibration of the set-up nor the spontaneous radiative probability of the transitions is needed, has not yet been applied, to our knowledge, to analytical flames and plasmas. A critical discussion of the basic requirements for the application of the method, its advantages, and its experimental limitations is therefore presented here. For our study, Ba⁺ transitions in a plasma formed by focusing a pulsed Nd:YAG laser (1064 nm) on a glass sample containing BaO are selected. At various delay times from the plasma initiation, a pulsed, excimer-pumped dye laser tuned to the center of two Ba transitions (6s ²S₁/₂ → 6p ²P°₃/₂ at 455.403 nm and 6p ²P°₁/₂ → 6d ²S₁/₂ at 452.493 nm) is used to enhance the populations of the excited levels (6p ²P°₃/₂ and 6d ²S₁/₂) above their thermal values. The measured ratio of the emission and direct-line fluorescence signals observed at 614.171 nm (6p ²P°₃/₂ → 5d ²D₅/₂) and 489.997 nm (6d ²S₁/₂ → 6p ²P°₃/₂) is then related to the excitation temperature of the plasma. Our conclusion is that the approach, despite being indeed attractive and clever, does not seem to be easily applicable to flames and plasmas, in particular to transient and inhomogeneous plasmas such as those induced by lasers on

  4. A systems biology approach for the analysis of carbohydrate dynamics during acclimation to low temperature in Arabidopsis thaliana.

    Nägele, Thomas; Kandel, Benjamin A; Frana, Sabine; Meissner, Meike; Heyer, Arnd G


    Low temperature is an important environmental factor affecting the performance and distribution of plants. During the so-called process of cold acclimation, many plants are able to develop low-temperature tolerance, associated with the reprogramming of a large part of their metabolism. In this study, we present a systems biology approach based on mathematical modelling to determine interactions between the reprogramming of central carbohydrate metabolism and the development of freezing tolerance in two accessions of Arabidopsis thaliana. Different regulation strategies were observed for (a) photosynthesis, (b) soluble carbohydrate metabolism and (c) enzyme activities of central metabolite interconversions. Metabolism of the storage compound starch was found to be independent of accession-specific reprogramming of soluble sugar metabolism in the cold. Mathematical modelling and simulation of cold-induced metabolic reprogramming indicated major differences in the rates of interconversion between the pools of hexoses and sucrose, as well as the rate of assimilate export to sink organs. A comprehensive overview of interconversion rates is presented, from which accession-specific regulation strategies during exposure to low temperature can be derived. We propose this concept as a tool for predicting metabolic engineering strategies to optimize plant freezing tolerance. We confirm that a significant improvement in freezing tolerance in plants involves multiple regulatory instances in sucrose metabolism, and provide evidence for a pivotal role of sucrose-hexose interconversion in increasing the cold acclimation output. © 2010 The Authors Journal compilation © 2010 FEBS.

  5. Estimation of maximum and minimum air temperatures for the "Circuito das Frutas" region (São Paulo State, Brazil)

    Ludmila Bardin


    Full Text Available Models for estimating air temperature from geographic factors were developed for the region comprising the municipalities of the "Pólo Turístico do Circuito das Frutas" of São Paulo State, in order to estimate mean monthly and annual maximum and minimum temperatures. Multiple regression equations were obtained as a function of altitude, latitude and longitude, and simple regressions as a function of altitude alone, with coefficients of determination ranging from 0.91 to 0.96 for the maximum and from 0.71 to 0.94 for the minimum temperatures. The spatial variability of the mean monthly and annual maximum and minimum temperatures is presented for the study region in the form of maps.
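
    The kind of geographic regression used in such studies is straightforward to reproduce; the sketch below fits maximum temperature as a linear function of altitude, latitude and longitude by ordinary least squares on synthetic station data (placeholders, not the paper's observations).

      # Ordinary least squares fit of Tmax against altitude, latitude and longitude; synthetic data.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 40
      alt = rng.uniform(600, 1100, n)          # altitude [m]
      lat = rng.uniform(-23.4, -22.8, n)       # latitude [deg]
      lon = rng.uniform(-47.2, -46.5, n)       # longitude [deg]
      # Synthetic "observed" mean maximum temperature with a lapse-rate-like altitude term.
      tmax = 34.0 - 0.006 * alt + 0.8 * (lat + 23.0) + 0.3 * (lon + 47.0) + rng.normal(0, 0.3, n)

      X = np.column_stack([np.ones(n), alt, lat, lon])      # design matrix with intercept
      coef, *_ = np.linalg.lstsq(X, tmax, rcond=None)

      pred = X @ coef
      r2 = 1.0 - np.sum((tmax - pred) ** 2) / np.sum((tmax - tmax.mean()) ** 2)
      print("coefficients [intercept, altitude, latitude, longitude]:", np.round(coef, 4))
      print(f"coefficient of determination R^2 = {r2:.2f}")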

  6. The sun and heliosphere at solar maximum.

    Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M


    Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.

  7. Abolishing the maximum tension principle

    Dabrowski, Mariusz P


    We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.

  8. Abolishing the maximum tension principle

    Mariusz P. Da̧browski


    Full Text Available We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.

  9. Model Selection Through Sparse Maximum Likelihood Estimation

    Banerjee, Onureena; D'Aspremont, Alexandre


    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
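
    The penalized problem itself is easy to experiment with today; the sketch below solves the same l1-penalized Gaussian maximum likelihood problem on synthetic data using scikit-learn's GraphicalLasso solver rather than the block coordinate descent or Nesterov-based algorithms of the paper.

      # Sparse inverse-covariance estimation via l1-penalized maximum likelihood on synthetic data.
      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(0)

      # Build a sparse precision (inverse covariance) matrix: a chain graph on 8 variables.
      p = 8
      prec = np.eye(p)
      for i in range(p - 1):
          prec[i, i + 1] = prec[i + 1, i] = 0.4
      cov = np.linalg.inv(prec)

      # Draw samples and fit the l1-penalized maximum likelihood estimator.
      X = rng.multivariate_normal(np.zeros(p), cov, size=2000)
      model = GraphicalLasso(alpha=0.05).fit(X)

      est = model.precision_
      print("estimated nonzero pattern (|entry| > 0.05):")
      print((np.abs(est) > 0.05).astype(int))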

  10. Pareto versus lognormal: a maximum entropy test.

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano


    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
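
    The sketch below is not the authors' maximum entropy test; it is only a baseline likelihood comparison of Pareto and lognormal descriptions of an upper tail, included to illustrate the kind of discrimination problem the test addresses (the data and threshold are arbitrary).

      # Compare Pareto and lognormal descriptions of the top 5% of a synthetic sample by
      # their tail log-likelihoods; illustrative only, not the paper's maximum entropy test.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      data = rng.lognormal(mean=1.0, sigma=1.0, size=20000)      # synthetic "firm sizes"

      threshold = np.quantile(data, 0.95)                        # study the top 5% only
      tail = data[data >= threshold]

      # Fit a Pareto to the tail (scale fixed at the threshold) and a lognormal to all data.
      b, loc, scale = stats.pareto.fit(tail, floc=0, fscale=threshold)
      s, loc_ln, scale_ln = stats.lognorm.fit(data, floc=0)

      # Tail log-likelihoods; the lognormal density must be renormalized to the tail region.
      ll_pareto = stats.pareto.logpdf(tail, b, loc=0, scale=threshold).sum()
      ll_lognorm = (stats.lognorm.logpdf(tail, s, loc=0, scale=scale_ln)
                    - np.log(stats.lognorm.sf(threshold, s, loc=0, scale=scale_ln))).sum()
      print(f"tail log-likelihood  Pareto: {ll_pareto:.1f}   lognormal: {ll_lognorm:.1f}")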

  11. Nonparametric Maximum Entropy Estimation on Information Diagrams

    Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn


    Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...

  12. Dependence of maximum concentration from chemical accidents on release duration

    Hanna, Steven; Chang, Joseph


    Chemical accidents often involve releases of a total mass, Q, of stored material in a tank over a time duration, td, of less than a few minutes. The value of td is usually uncertain because of lack of knowledge of key information, such as the size and location of the hole and the pressure and temperature of the chemical. In addition, it is rare that eyewitnesses or video cameras are present at the time of the accident. For inhalation hazards, serious health effects (such as damage to the respiratory system) are determined by short term averages (pressurized liquefied chlorine releases from tanks are given, focusing on scenarios from the Jack Rabbit I (JR I) field experiment. The analytical calculations and the predictions of the SLAB dense gas dispersion model agree that the ratio of maximum C for two different td's is greatest (as much as a factor of ten) near the source. At large distances (beyond a few km for the JR I scenarios), where tt exceeds both td's, the ratio of maximum C approaches unity.

  13. Maximum Spectral Luminous Efficacy of White Light

    Murphy, T W


    As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250–370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
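
    The underlying calculation is a simple ratio of integrals; the sketch below evaluates it for a flat spectrum confined to an assumed 400-700 nm bandpass, with a Gaussian centred at 555 nm standing in for the CIE photopic curve V(lambda), so the resulting number is indicative only.

      # Spectral luminous efficacy K = 683 lm/W * integral(V*P) / integral(P) over the emitted band.
      import numpy as np

      wl = np.linspace(400e-9, 700e-9, 2001)             # assumed bandpass of the source [m]
      V = np.exp(-0.5 * ((wl - 555e-9) / 42e-9) ** 2)    # crude Gaussian stand-in for photopic V(lambda)
      P = np.ones_like(wl)                               # flat spectral power within the band

      # Uniform grid, so sums stand in for the integrals.
      efficacy = 683.0 * np.sum(V * P) / np.sum(P)       # lm per W of emitted (in-band) power
      print(f"luminous efficacy of a flat 400-700 nm spectrum: {efficacy:.0f} lm/W")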

  14. Computation of the properties of liquid neon, methane, and gas helium at low temperature by the Feynman-Hibbs approach.

    Tchouar, N; Ould-Kaddour, F; Levesque, D


    The properties of liquid methane, liquid neon, and gaseous helium are calculated at low temperatures over a large range of pressure from classical molecular-dynamics simulations. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach. The equations of state and the diffusion and shear viscosity coefficients are determined for neon at 45 K, helium at 80 K, and methane at 110 K. A comparison is made with the existing experimental data and, for the thermodynamical quantities, with results computed from quantum numerical simulations when they are available. The theoretical variation of the viscosity coefficient with pressure is in good agreement with the experimental data when the quantum corrections are taken into account, considerably reducing the 60% discrepancy between simulations and experiments found in the absence of these corrections.
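
    The quadratic Feynman-Hibbs correction referred to here replaces the bare pair potential U(r) by U(r) + (hbar^2/(24 mu k_B T)) (U''(r) + 2U'(r)/r), with mu the reduced mass of the pair; the sketch below evaluates it for a Lennard-Jones potential with commonly quoted neon parameters (assumed, not taken from the paper).

      # Quadratic Feynman-Hibbs effective Lennard-Jones potential; assumed neon LJ parameters.
      import numpy as np

      HBAR = 1.054571817e-34   # J s
      KB = 1.380649e-23        # J/K
      AMU = 1.66053906660e-27  # kg

      eps = 36.8 * KB          # LJ well depth for neon [J] (assumed)
      sigma = 2.79e-10         # LJ size parameter for neon [m] (assumed)
      mu = 20.18 * AMU / 2.0   # reduced mass of a neon pair

      def lj(r):
          sr6 = (sigma / r) ** 6
          return 4.0 * eps * (sr6**2 - sr6)

      def lj_d1(r):   # dU/dr
          return 4.0 * eps * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)

      def lj_d2(r):   # d2U/dr2
          return 4.0 * eps * (156.0 * sigma**12 / r**14 - 42.0 * sigma**6 / r**8)

      def feynman_hibbs(r, T):
          """Bare LJ potential plus the quadratic Feynman-Hibbs quantum correction."""
          return lj(r) + HBAR**2 / (24.0 * mu * KB * T) * (lj_d2(r) + 2.0 * lj_d1(r) / r)

      r = np.linspace(0.9 * sigma, 2.5 * sigma, 500)
      for T in (25.0, 45.0):
          shift = (feynman_hibbs(r, T).min() - lj(r).min()) / KB
          print(f"T = {T:4.0f} K: quantum correction raises the well minimum by {shift:+.2f} K")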

  15. On the integral-balance approach to the transient heat conduction with linearly temperature-dependent thermal diffusivity

    Fabre, Antoine; Hristov, Jordan


    Closed-form approximate solutions to nonlinear transient heat conduction with linearly temperature-dependent thermal diffusivity have been developed by the integral-balance method under transient conditions. The solutions use improved direct approaches of the integral method and avoid the commonly used linearization by the Kirchhoff transformation. The main steps in the new solutions are improvements in the double-integration technique and the optimization of the exponent of the approximate parabolic profile with unspecified exponent. Solutions to Dirichlet and Neumann boundary condition problems have been developed as examples by the classical heat-balance integral method (HBIM) and the double-integration method (DIM). Additional examples with HBIM and DIM solutions for cases where the Kirchhoff transform is applied first have been developed.

  16. Maximum Genus of Strong Embeddings

    Er-ling Wei; Yan-pei Liu; Han Ren


    The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for a 3-regular graph the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.

  17. D(Maximum)=P(Argmaximum)

    Remizov, Ivan D


    In this note, we represent the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.

  18. Physics-Based Correction of Inhomogeneities in Temperature Series: Model Transferability Testing and Comparison to Statistical Approaches

    Auchmann, Renate; Brönnimann, Stefan; Croci-Maspoli, Mischa


    For the correction of inhomogeneities in sub-daily temperature series, Auchmann and Brönnimann (2012) developed a physics-based model for one specific type of break, i.e. the transition from a Wild screen to a Stevenson screen, at one specific station in Basel, Switzerland. The model is based solely on physical considerations; no relationships between the covariates and the differences between the parallel measurements were investigated. The physics-based model requires detailed information on the screen geometry and the location, and includes a variety of covariates. The model is mainly based on correcting the radiation error, including a modification by ambient wind. In this study we test the application of the model to another station, Zurich, which experienced the same type of transition. Furthermore, we compare the performance of the physics-based correction to purely statistical correction approaches (a constant correction, and correcting for the annual cycle using a spline). In Zurich the Wild screen was replaced in 1954 by the Stevenson screen; from 1954 to 1960 parallel temperature measurements were taken in both screens, and these will be used to assess the performance of the applied corrections. For Zurich the required model input is available (i.e. three times daily observations of wind, cloud cover, pressure and humidity, and local times of sunset and sunrise). However, a large number of stations do not measure the additional input data required by the model, which hampers its transferability and applicability to other stations. Hence, we test possible simplifications and generalizations of the model to make it more easily applicable to stations with the same type of inhomogeneity. In a final step we test whether other types of transitions (e.g., from a Stevenson screen to an automated weather station) can be corrected using the principle of a physics-based approach.

  19. Alternative Multiview Maximum Entropy Discrimination.

    Chao, Guoqing; Sun, Shiliang


    Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.

  20. Maximum entropy signal restoration with linear programming

    Mastin, G.A.; Hanson, R.J.


    Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
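
    The piecewise-linear trick is easy to demonstrate on a toy problem. In the sketch below, the concave objective sum(-x ln x) is approximated by linear segments whose decreasing slopes guarantee they are filled in order, and the resulting LP (solved here with scipy's linprog rather than a revised simplex code) recovers an approximate maximum entropy distribution under a mean constraint instead of a signal restoration.

      # Maximum entropy by linear programming with a piecewise-linear approximation of -x*ln x.
      import numpy as np
      from scipy.optimize import linprog

      n_bins, n_seg = 16, 40
      seg_width = 1.0 / n_seg                          # each x_i is split into n_seg segment variables

      def neg_xlogx(x):
          return 0.0 if x <= 0.0 else -x * np.log(x)

      # Average slopes of -x*ln x on each segment [k*w, (k+1)*w]; identical for every bin,
      # and strictly decreasing because the function is concave.
      slopes = np.array([(neg_xlogx((k + 1) * seg_width) - neg_xlogx(k * seg_width)) / seg_width
                         for k in range(n_seg)])

      # Variables s[i, k] with x_i = sum_k s[i, k]; maximize sum(slopes*s), i.e. minimize the negative.
      c = -np.tile(slopes, n_bins)

      # Equality constraints: probabilities sum to 1, and the mean bin index equals 5.
      bins = np.repeat(np.arange(n_bins, dtype=float), n_seg)
      A_eq = np.vstack([np.ones(n_bins * n_seg), bins])
      b_eq = np.array([1.0, 5.0])

      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, seg_width), method="highs")
      x = res.x.reshape(n_bins, n_seg).sum(axis=1)
      print("approximate max-entropy distribution with mean 5:", np.round(x, 4))
      # The exact answer is an exponential (Gibbs) distribution over the bin index.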

  1. Maximum entropy PDF projection: A review

    Baggenstoss, Paul M.


    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  2. Effects of scattering and dust grain size on the temperature structure of protoplanetary discs: A three-layer approach

    Inoue, Akio K; Nakamoto, Taishi


    The temperature in the optically thick interior of protoplanetary discs is essential for the interpretation of millimeter observations of the discs, for the vertical structure of the discs, for models of the disc evolution and the planet formation, and for the chemistry in the discs. Since large icy grains have a large albedo even in the infrared, the effect of scattering of the diffuse radiation in the discs on the interior temperature should be examined. We have performed a series of numerical radiation transfer simulations including isotropic scattering by grains with various typical sizes for the diffuse radiation as well as for the incident stellar radiation. We also have developed an analytic model including isotropic scattering to understand the physics concealed in the numerical results. With the analytic model, we have shown that the standard two-layer approach is valid only for grey opacity (i.e. grain size ≳ 10 μm) even without scattering. A three-layer interpretation is required for grain ...

  3. A gel-free quantitative proteomics approach to investigate temperature adaptation of the food-borne pathogen Cronobacter turicensis 3032.

    Carranza, Paula; Grunau, Alexander; Schneider, Thomas; Hartmann, Isabel; Lehner, Angelika; Stephan, Roger; Gehrig, Peter; Grossmann, Jonas; Groebel, Katrin; Hoelzle, Ludwig E; Eberl, Leo; Riedel, Kathrin


    The opportunistic food-borne pathogen Cronobacter sp. causes rare but significant illness in neonates and is capable of growing over a remarkably wide range of temperatures, from 5.5 to 47 degrees C. A gel-free quantitative proteomics approach was employed to investigate the molecular basis of the adaptation of Cronobacter sp. to heat and cold stress. To this end, the model strain Cronobacter turicensis 3032 was grown at 25, 37, 44, and 47 degrees C, and whole-cell and secreted proteins were iTRAQ-labelled and identified/quantified by 2-D-LC-MALDI-TOF/TOF-MS. While 44 degrees C caused only minor changes in the C. turicensis growth rate and protein profile, 47 degrees C affected the expression of about 20% of all 891 identified proteins, resulted in a reduced growth rate and rendered the strain non-motile and filamentous. Among the heat-induced proteins were heat shock factors and transcriptional and translational proteins, whereas proteins affecting cellular morphology and proteins involved in motility, central metabolism and energy production were down-regulated. Notably, numerous potential virulence factors were found to be up-regulated at higher temperatures, suggesting an elevated pathogenic potential of Cronobacter sp. under these growth conditions. Significant alterations in the protein expression profile and growth rate of C. turicensis exposed to 25 degrees C indicate that at this temperature the organism is cold-stressed. Up-regulated gene products comprised cold-shock, DNA-binding and ribosomal proteins, factors that support protein folding, and proteins opposing the cold-induced decrease in membrane fluidity, whereas down-regulated proteins were mainly involved in central metabolism.

  4. Minimal Length, Friedmann Equations and Maximum Density

    Awad, Adel


    Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...

  5. Midnight Temperature Maximum (MTM) in Whole Atmosphere Model (WAM) Simulations


    The midnight temperature maximum (MTM), a feature with typical magnitudes of about 50–100 K around midnight [e.g., Faivre et al., 2006], has been regularly observed by satellite and ground-based instruments in the tropical upper thermosphere. Recent observations of a nightglow brightness wave ... different UT-longitude sectors. A prominent MTM with a magnitude between about 50 K and well over 100 K clearly stands out at both locations and

  6. Cacti with maximum Kirchhoff index

    Wang, Wen-Rui; Pan, Xiang-Feng


    The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...

  7. Generic maximum likely scale selection

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo


    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus ... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.

  8. Geographic coordinates in the ten-day maximum and mean air temperature estimation in the State of Rio Grande do Sul, Brazil

    Alberto Cargnelutti Filho


    Full Text Available The objective of this research was to estimate the ten-day maximum (Tx) and mean (Tm) air temperature from altitude and the geographic coordinates latitude and longitude for the State of Rio Grande do Sul, Brazil. Normal ten-day maximum and mean air temperatures of 41 counties in the State of Rio Grande do Sul, from 1945 to 1974, were used. Correlation analysis and parameter estimation of multiple linear regression equations were performed for each of the 36 ten-day periods of the year, using Tx and Tm as dependent variables and altitude, latitude and longitude as independent variables. For validation, Pearson's linear correlation coefficients between estimated and observed Tx and Tm were calculated for ten counties of the State, using an independent data set of meteorological observations from 1975 to 2004. The ten-day maximum and mean air temperatures can be estimated from altitude and the geographic coordinates latitude and longitude at any location and for any ten-day period in the State of Rio Grande do Sul.

  9. A unique approach to demonstrating that apical bud temperature specifically determines leaf initiation rate in the dicot Cucumis sativus

    Savvides, Andreas; Dieleman, Anja; Ieperen, van Wim; Marcelis, Leo F.M.


    Main conclusion: Leaf initiation rate is largely determined by the apical bud temperature, even when the apical bud temperature deviates substantially from the temperature of other plant organs. We have long known that the rate of leaf initiation (LIR) is highly sensitive to temperature, but previous studies


    张同文; 刘禹; 袁玉江; 魏文寿; 喻树龙; 陈峰


    for de-trending. After all these processes, we obtained three kinds of chronologies (STD, RES and ARS) of tree-ring width data and gray values. Based on the tree-ring data analysis, the mean maximum temperature from May to August in the Gongnaisi region from 1777 to 2008 A.D. has been reconstructed from the tree-ring average gray values. For the calibration period (1958–2008 A.D.), the predictor variable accounts for 39% of the variance of the mean maximum temperature data. The mean maximum temperature reconstruction shows that there are 34 warm years and 38 cold years. The warm events (lasting for more than three years) were 1861–1864 A.D., 1873–1876 A.D. and 1917–1919 A.D.; the cold events were 1816–1818 A.D., 1948–1950 A.D. and 1957–1959 A.D. Furthermore, these years and events correspond well with historical documents. Applying an 11-year moving average to our reconstruction (1777–2008 A.D.), the only period with above-average reconstructed mean maximum temperature comprises 1845–1925 A.D.; the two periods below average consist of 1788–1844 A.D. and 1926–2001 A.D. The reconstructed mean maximum temperature has increased since the 1990s and agrees well with instrumental measurements in northwestern China over the recent 50 years. Power spectrum analysis shows 154-, 77-, 2.7- and 2.3-year cycles in our reconstruction, which may be associated with solar activity and the quasi-biennial oscillation (QBO). A moving t-test indicates significant abrupt changes around 1842 A.D., 1880 A.D. and 1923 A.D. The significant correlations between our reconstruction and the gridded dataset of the Northern Hemisphere and three indices (SOI, APOI, and AOI) may imply that the mean maximum temperature of the Gongnaisi region is possibly influenced not only by local but also by multiple large-scale climate changes to some extent.

  11. Effect of water levels, soil covers and environmental conditions on maximum soil temperature in strawberry crops grown in the open field and under protected cultivation

    Regina C. de M. Pires


    Full Text Available Soil temperature is an important parameter in strawberry cultivation, since it affects vegetative development, plant health and yield. The objective of this work was to evaluate the effect of different water levels and bed covers, in the open field and under protected cultivation, on the maximum soil temperature in a strawberry crop. Two experiments were carried out, one under protected cultivation and the other in the open field, at Atibaia, São Paulo State, Brazil, in a 2 x 3 factorial scheme (soil covers and irrigation levels) in randomized blocks with five replications. The soil covers were black and clear polyethylene films. Drip irrigation was applied whenever the soil water potential, monitored by tensiometers installed at a depth of 10 cm, reached -0.010 (N1), -0.035 (N2) or -0.070 (N3) MPa. Soil temperature was recorded by thermographs with sensors installed at a depth of 5 cm. The growing environment, the soil cover and the irrigation levels all influenced the maximum soil temperature. The soil temperature under the different covers depended not only on the physical characteristics of the plastic but also on how it was installed on the bed. The maximum soil temperature increased as the soil water potential at the time of irrigation decreased.

  12. Objects of maximum electromagnetic chirality

    Fernandez-Corbaton, Ivan


    We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.

  13. The strong maximum principle revisited

    Pucci, Patrizia; Serrin, James

    In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.


    邓自刚; 王家素; 郑珺; 刘伟; 林群煦; 马光同; 王为; 王素玉; 张娅


    The paper compares the maximum levitation force of bulk high temperature superconductors in the zero-field-cooling (ZFC) and field-cooling (FC) cases through levitation force measurements of 15 bulks interacting with a permanent magnet guideway. The experimental results show that the maximum forces in the two cooling cases do not correspond to each other: a bulk with a large levitation force in the ZFC case will not always yield a large one in the FC case, and vice versa. Therefore, the levitation force data measured in the FC case are recommended as the reference for practical FC applications.

  15. A practical approach for predicting retention time shifts due to pressure and temperature gradients in ultra-high-pressure liquid chromatography.

    Åsberg, Dennis; Chutkowski, Marcin; Leśko, Marek; Samuelsson, Jörgen; Kaczmarski, Krzysztof; Fornstedt, Torgny


    Large pressure gradients are generated in ultra-high-pressure liquid chromatography (UHPLC) using sub-2 μm particles, causing significant temperature gradients over the column due to viscous heating. These pressure and temperature gradients affect retention and ultimately result in important selectivity shifts. In this study, we developed an approach for predicting the retention time shifts due to these gradients. The approach is presented as a step-by-step procedure and is based on empirical linear relationships describing how retention varies as a function of temperature and pressure and how the average column temperature increases with the flow rate. It requires only four experiments on standard equipment, is based on straightforward calculations, and is therefore easy to use in method development. The approach was rigorously validated against experimental data obtained with a quality control method for the active pharmaceutical ingredient omeprazole. The accuracy of retention time predictions was very good, with relative errors always less than 1% and in many cases around 0.5% (n=32). Selectivity shifts observed between omeprazole and the related impurities when changing the flow rate could also be accurately predicted, resulting in good estimates of the resolution between critical peak pairs. The approximations on which the presented approach is based were all justified. The retention factor as a function of pressure and temperature was studied in an experimental design, while the temperature distribution in the column was obtained by solving the fundamental heat and mass balance equations for the different experimental conditions. We strongly believe that this approach is sufficiently accurate and experimentally feasible for this separation to be a valuable tool when developing a UHPLC method. After further validation with other separation systems, it could become a useful approach in UHPLC method development, especially in the pharmaceutical industry where
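
    A minimal sketch of the type of empirical prediction described here (retention factor linear in average column temperature and pressure, with the average temperature rising with flow rate because of viscous heating) is given below; every coefficient is an invented placeholder, not a calibrated value from the study.

      # Predict retention time shifts from assumed linear temperature and pressure sensitivities.
      import numpy as np

      # Hypothetical calibration from four experiments (assumed, illustrative values only).
      ln_k0 = np.log(2.0)      # retention factor at the reference condition (0.2 mL/min)
      a = -0.011               # 1/K   : d(ln k)/dT
      b = 8.0e-4               # 1/bar : d(ln k)/dP
      dT_per_flow = 6.0        # K per (mL/min) of flow, from viscous heating (assumed)
      dP_per_flow = 450.0      # bar per (mL/min) of flow (assumed)

      def retention_time(flow, dead_time_ref=0.60):
          """Predicted retention time [min] at a given flow rate [mL/min]; reference is 0.2 mL/min."""
          t0 = dead_time_ref * 0.2 / flow                  # dead time scales inversely with flow
          dT = dT_per_flow * (flow - 0.2)                  # change in average column temperature
          dP = dP_per_flow * (flow - 0.2)                  # change in average column pressure
          k = np.exp(ln_k0 + a * dT + b * dP)
          return t0 * (1.0 + k)

      for flow in (0.2, 0.5, 0.8):
          print(f"flow {flow:.1f} mL/min -> predicted t_R = {retention_time(flow):.2f} min")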

  16. A search for concentric rings with unusual variance in the 7-year WMAP temperature maps using a fast convolution approach

    Bielewicz, P; Banday, A J


    We present a method for the computation of the variance of cosmic microwave background (CMB) temperature maps on azimuthally symmetric patches using a fast convolution approach. As an example of the application of the method, we show results for the search for concentric rings with unusual variance in the 7-year WMAP data. We re-analyse claims concerning the unusual variance profile of rings centred at two locations on the sky that have recently drawn special attention in the context of the conformal cyclic cosmology scenario proposed by Penrose (2009). We extend this analysis to rings with larger radii and centred on other points of the sky. Using the fast convolution technique enables us to perform this search with higher resolution and a wider range of radii than in previous studies. We show that for one of the two special points rings with radii larger than 10 degrees have systematically lower variance in comparison to the concordance LambdaCDM model predictions. However, we show that this deviation is ca...
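    A naive (non-fast-convolution) way to compute the variance in concentric rings around a chosen centre on a HEALPix temperature map can be sketched as follows, assuming the healpy package is available; the ring radii, resolution, and centre below are illustrative only.

        import numpy as np
        import healpy as hp

        def ring_variance(temp_map, lon_deg, lat_deg, r_inner_deg, r_outer_deg):
            """Variance of the map values inside an annulus centred at (lon, lat)."""
            nside = hp.get_nside(temp_map)
            centre = hp.ang2vec(lon_deg, lat_deg, lonlat=True)
            outer = hp.query_disc(nside, centre, np.radians(r_outer_deg))
            inner = hp.query_disc(nside, centre, np.radians(r_inner_deg))
            ring_pix = np.setdiff1d(outer, inner, assume_unique=True)
            return float(np.var(temp_map[ring_pix]))

        # Illustrative use on a Gaussian random map standing in for the WMAP data.
        nside = 256
        cmb = np.random.default_rng(0).normal(0.0, 70e-6, hp.nside2npix(nside))  # K
        for r in range(2, 20, 2):
            v = ring_variance(cmb, 105.0, 37.0, r, r + 1.0)   # centre chosen arbitrarily
            print(f"ring {r:2d}-{r + 1:2d} deg: variance = {v:.3e} K^2")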

  17. Conformational temperature-dependent behavior of a histone H2AX: a coarse-grained Monte Carlo approach via knowledge-based interaction potentials.

    Fritsche, Miriam; Pandey, Ras B; Farmer, Barry L; Heermann, Dieter W


    Histone proteins are not only important due to their vital role in cellular processes such as DNA compaction, replication and repair but also show intriguing structural properties that might be exploited for bioengineering purposes such as the development of nano-materials. Based on their biological and technological implications, it is interesting to investigate the structural properties of proteins as a function of temperature. In this work, we study the spatial response dynamics of the histone H2AX, consisting of 143 residues, by a coarse-grained bond fluctuating model for a broad range of normalized temperatures. A knowledge-based interaction matrix is used as input for the residue-residue Lennard-Jones potential. We find a variety of equilibrium structures including global globular configurations at low normalized temperature (T* = 0.014), combination of segmental globules and elongated chains (T* = 0.016, 0.017), predominantly elongated chains (T* = 0.019, 0.020), as well as universal SAW conformations at high normalized temperature (T* ≥ 0.023). The radius of gyration of the protein exhibits a non-monotonic temperature dependence with a maximum at a characteristic temperature (T(c)* = 0.019) where a crossover occurs from a positive (stretching at T* ≤ T(c)*) to negative (contraction at T* ≥ T(c)*) thermal response on increasing T*.
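    The observable tracked here, the radius of gyration of a coarse-grained residue chain, together with a Metropolis acceptance step at reduced temperature T*, can be sketched as follows; the off-lattice move set and the uniform Lennard-Jones parameters are placeholders rather than the bond-fluctuation model and knowledge-based interaction matrix used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 143                                              # residues in H2AX
        chain = np.cumsum(rng.normal(size=(N, 3)), axis=0)   # initial random-walk chain

        def radius_of_gyration(r):
            c = r.mean(axis=0)
            return float(np.sqrt(((r - c) ** 2).sum(axis=1).mean()))

        def energy(r, eps=1.0, sigma=1.0, cutoff=2.5):
            """Placeholder uniform Lennard-Jones energy between non-bonded residues."""
            d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
            iu = np.triu_indices(len(r), k=2)                # skip directly bonded pairs
            d = np.clip(d[iu], 0.8 * sigma, cutoff)
            return float(np.sum(4.0 * eps * ((sigma / d) ** 12 - (sigma / d) ** 6)))

        def metropolis_step(r, T_star):
            i = rng.integers(len(r))
            trial = r.copy()
            trial[i] += rng.normal(scale=0.2, size=3)        # local displacement move
            dE = energy(trial) - energy(r)
            if dE <= 0.0 or rng.random() < np.exp(-dE / T_star):
                return trial
            return r

        for T_star in (0.014, 0.019, 0.023):                 # reduced temperatures as in the abstract
            r = chain.copy()
            for _ in range(200):                             # tiny run, illustration only
                r = metropolis_step(r, T_star)
            print(f"T* = {T_star}: Rg = {radius_of_gyration(r):.2f}")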

  18. Conformational temperature-dependent behavior of a histone H2AX: a coarse-grained Monte Carlo approach via knowledge-based interaction potentials.

    Miriam Fritsche

    Full Text Available Histone proteins are not only important due to their vital role in cellular processes such as DNA compaction, replication and repair but also show intriguing structural properties that might be exploited for bioengineering purposes such as the development of nano-materials. Based on their biological and technological implications, it is interesting to investigate the structural properties of proteins as a function of temperature. In this work, we study the spatial response dynamics of the histone H2AX, consisting of 143 residues, by a coarse-grained bond fluctuating model for a broad range of normalized temperatures. A knowledge-based interaction matrix is used as input for the residue-residue Lennard-Jones potential. We find a variety of equilibrium structures including global globular configurations at low normalized temperature (T* = 0.014), combination of segmental globules and elongated chains (T* = 0.016, 0.017), predominantly elongated chains (T* = 0.019, 0.020), as well as universal SAW conformations at high normalized temperature (T* ≥ 0.023). The radius of gyration of the protein exhibits a non-monotonic temperature dependence with a maximum at a characteristic temperature (T(c)* = 0.019) where a crossover occurs from a positive (stretching at T* ≤ T(c)*) to negative (contraction at T* ≥ T(c)*) thermal response on increasing T*.

  19. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    Gao, Junling; Chen, Min


    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give...... different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
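    For a module with an approximately constant internal resistance R held at a fixed temperature difference, the open-/short-circuit estimate referred to here is the matched-load result

        \[
        P_{\max} \;\approx\; \frac{V_{oc}^{2}}{4R} \;=\; \frac{V_{oc}\, I_{sc}}{4},
        \qquad I_{sc} \;=\; \frac{V_{oc}}{R},
        \]

    and the roughly 10% discrepancy between the two switch modes is what the nonlinear numerical model in the paper is used to explain.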

  20. Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power

    Yang, Yongheng; Wang, Huai; Blaabjerg, Frede


    . The CPG control strategy is activated only when the DC input power from PV panels exceeds a specific power limit. It enables to limit the maximum feed-in power to the electric grids and also to improve the utilization of PV inverters. As a further study, this paper investigates the reliability performance...... of the power devices (e.g. IGBTs) used in PV inverters with the CPG control under different feed-in power limits. A long-term mission profile (i.e. solar irradiance and ambient temperature) based stress analysis approach is extended and applied to obtain the yearly electrical and thermal stresses of the power...

  1. Genetic algorithms optimized fuzzy logic control for the maximum power point tracking in photovoltaic system

    Larbes, C.; Ait Cheikh, S.M.; Obeidi, T.; Zerguerras, A. [Laboratoire des Dispositifs de Communication et de Conversion Photovoltaique, Departement d' Electronique, Ecole Nationale Polytechnique, 10, Avenue Hassen Badi, El Harrach, Alger 16200 (Algeria)


    This paper presents an intelligent control method for the maximum power point tracking (MPPT) of a photovoltaic system under variable temperature and irradiance conditions. First, for the purpose of comparison and because of its proven and good performances, the perturbation and observation (P and O) technique is briefly introduced. A fuzzy logic controller based MPPT (FLC) is then proposed which has shown better performances compared to the P and O MPPT based approach. The proposed FLC has been also improved using genetic algorithms (GA) for optimisation. Different development stages are presented and the optimized fuzzy logic MPPT controller (OFLC) is then simulated and evaluated, which has shown better performances. (author)
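    For reference, the baseline perturbation and observation (P and O) technique mentioned above can be sketched as follows; this is the generic textbook loop, not the paper's fuzzy-logic or GA-optimized controller, and the PV curve and step size are made up for illustration.

        import math

        def perturb_and_observe(measure_v_i, v_ref=30.0, step=0.5, iters=100):
            """Generic perturb-and-observe MPPT loop.
            measure_v_i(v) -> (v, i): voltage and current at the commanded operating point."""
            v, i = measure_v_i(v_ref)
            p_prev = v * i
            direction = +1
            for _ in range(iters):
                v_ref += direction * step
                v, i = measure_v_i(v_ref)
                p = v * i
                if p < p_prev:                 # power dropped: reverse the next perturbation
                    direction = -direction
                p_prev = p
            return v_ref

        def fake_pv(v, v_oc=40.0, i_sc=8.0):
            """Illustrative PV I-V curve (made-up single-diode-like shape)."""
            i = i_sc * (1.0 - math.exp((v - v_oc) / 2.0)) if v < v_oc else 0.0
            return v, max(i, 0.0)

        print("estimated MPP voltage:", round(perturb_and_observe(fake_pv), 2))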

  2. Maximum Matchings via Glauber Dynamics

    Jindal, Anant; Pal, Manjish


    In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
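    A toy version of such a Markov chain on matchings (single-edge heat-bath updates with fugacity λ, keeping the largest matching seen) can be sketched as follows; this only illustrates the chain itself and makes no claim about the paper's O(m log² n) running-time analysis.

        import random

        def glauber_matching(edges, n, fugacity=4.0, steps=20000, seed=1):
            """Sample matchings by single-edge heat-bath updates with the given fugacity
            and return the largest matching encountered. `edges` is a list of (u, v) pairs."""
            rng = random.Random(seed)
            partner = [None] * n                 # partner[v] = matched neighbour of v, or None
            current, best = set(), set()
            for _ in range(steps):
                u, v = edges[rng.randrange(len(edges))]
                if (u, v) in current:            # edge present: drop it with prob 1/(1+lambda)
                    if rng.random() < 1.0 / (1.0 + fugacity):
                        current.remove((u, v))
                        partner[u] = partner[v] = None
                elif partner[u] is None and partner[v] is None:
                    if rng.random() < fugacity / (1.0 + fugacity):   # addable: add with prob lambda/(1+lambda)
                        current.add((u, v))
                        partner[u], partner[v] = v, u
                if len(current) > len(best):
                    best = set(current)
            return best

        # Small illustrative graph: a 6-cycle, whose maximum matching has size 3.
        cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
        print(len(glauber_matching(cycle, 6)))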

  3. 76 FR 1504 - Pipeline Safety: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure...


    ...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...

  4. Maximum Likelihood Analysis in the PEN Experiment

    Lehman, Martin


    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
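    The event-by-event likelihood described here has the generic shape of a mixture model. A minimal sketch with two processes and made-up Gaussian probability density functions is shown below; the real analysis uses five processes and Monte Carlo-verified PDFs of several observables.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(2)

        # Toy observable (a single "energy") with made-up Gaussian PDFs per process.
        def pdf_signal(e):        # stand-in for the pi -> e nu peak
            return np.exp(-0.5 * ((e - 70.0) / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))

        def pdf_background(e):    # stand-in for the pi -> mu nu chain
            return np.exp(-0.5 * ((e - 52.0) / 5.0) ** 2) / (5.0 * np.sqrt(2.0 * np.pi))

        # Simulated sample: a small signal fraction mixed into background.
        true_f, n = 1.2e-2, 50000
        is_sig = rng.random(n) < true_f
        energy = np.where(is_sig, rng.normal(70.0, 3.0, n), rng.normal(52.0, 5.0, n))

        def neg_log_likelihood(f):
            like = f * pdf_signal(energy) + (1.0 - f) * pdf_background(energy)
            return -np.sum(np.log(like))

        fit = minimize_scalar(neg_log_likelihood, bounds=(1e-4, 0.1), method="bounded")
        print(f"fitted signal fraction: {fit.x:.4f} (true value {true_f})")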

  5. Maximum likelihood estimates of pairwise rearrangement distances.

    Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R


    Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Filtering Additive Measurement Noise with Maximum Entropy in the Mean

    Gzyl, Henryk


    The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: that of estimation of the parameter in an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.

  7. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation


    We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...

  8. Communication: Maximum caliber is a general variational principle for nonequilibrium statistical mechanics.

    Hazoglou, Michael J; Walther, Valentin; Dixit, Purushottam D; Dill, Ken A


    There has been interest in finding a general variational principle for non-equilibrium statistical mechanics. We give evidence that Maximum Caliber (Max Cal) is such a principle. Max Cal, a variant of maximum entropy, predicts dynamical distribution functions by maximizing a path entropy subject to dynamical constraints, such as average fluxes. We first show that Max Cal leads to standard near-equilibrium results—including the Green-Kubo relations, Onsager's reciprocal relations of coupled flows, and Prigogine's principle of minimum entropy production—in a way that is particularly simple. We develop some generalizations of the Onsager and Prigogine results that apply arbitrarily far from equilibrium. Because Max Cal does not require any notion of "local equilibrium," or any notion of entropy dissipation, or temperature, or even any restriction to material physics, it is more general than many traditional approaches. It is also applicable to flows and traffic on networks, for example.
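    The generic form of the Max Cal distribution over paths Γ, obtained by maximizing the path entropy subject to constraints on path-dependent quantities A_i (such as average fluxes), is

        \[
        P(\Gamma) \;=\; \frac{1}{Z}\,\exp\!\Big(\sum_i \lambda_i\, A_i(\Gamma)\Big),
        \qquad
        Z \;=\; \sum_{\Gamma} \exp\!\Big(\sum_i \lambda_i\, A_i(\Gamma)\Big),
        \]

    where the Lagrange multipliers λ_i are fixed by the constrained averages ⟨A_i⟩.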

  9. The Stampacchia maximum principle for stochastic partial differential equations and applications

    Chekroun, Mickaël D.; Park, Eunhee; Temam, Roger


    Stochastic partial differential equations (SPDEs) are considered, linear and nonlinear, for which we establish comparison theorems for the solutions, or positivity results a.e., and a.s., for suitable data. Comparison theorems for SPDEs are available in the literature. The originality of our approach is that it is based on the use of truncations, following the Stampacchia approach to maximum principle. We believe that our method, which does not rely too much on probability considerations, is simpler than the existing approaches and to a certain extent, more directly applicable to concrete situations. Among the applications, boundedness results and positivity results are respectively proved for the solutions of a stochastic Boussinesq temperature equation, and of reaction-diffusion equations perturbed by a non-Lipschitz nonlinear noise. Stabilization results to a Chafee-Infante equation perturbed by a nonlinear noise are also derived.
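    In the deterministic setting, the truncation device underlying the Stampacchia approach can be illustrated by a simple positivity argument (a heuristic analogue only, not the stochastic argument of the paper): for ∂_t u − Δu = f with f ≥ 0, u(0) ≥ 0 and homogeneous Dirichlet data, multiplying the equation by −u⁻, where u⁻ := max(−u, 0), and integrating over the domain gives

        \[
        \frac{1}{2}\,\frac{d}{dt}\,\|u^{-}\|_{L^{2}}^{2} \;+\; \|\nabla u^{-}\|_{L^{2}}^{2}
        \;=\; -\int_{\Omega} f\, u^{-}\,dx \;\le\; 0,
        \]

    so that ‖u⁻(t)‖ ≤ ‖u⁻(0)‖ = 0 and hence u ≥ 0 almost everywhere.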

  10. Application of Maximum Probability Approach to the Fault Diagnosis of a Servo System

    马东升; 胡佑德; 戴凤智


    In an actual control system, it is often difficult to locate faults from the externally observed fault phenomena alone, which are frequently all that can be acquired from a faulty system. Fault diagnosis based on these outside phenomena is therefore considered. Building on the theory of fuzzy recognition and fault diagnosis, the method relies only on experience and statistical data to set up a fuzzy query relationship between the outside phenomena (fault characters) and the fault sources (fault patterns). From this relationship the most probable fault sources can be obtained, achieving the goal of quick diagnosis. Based on the above approach, the standard fuzzy relationship matrix is stored in the computer as a system database, and experimental data are given to show the fault diagnosis results. The important parameters can be sampled and analyzed on-line; when faults occur, they can be located, an alarm is given, and the controller output is regulated.
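    The query step described here amounts to composing an observed symptom vector with the stored fuzzy relation matrix and picking the largest resulting membership grade. A minimal sketch using the common max-min composition with made-up numbers (not the paper's actual matrix) is:

        import numpy as np

        # Rows: observed fault symptoms; columns: candidate fault sources.
        # All membership grades below are made up for illustration.
        R = np.array([
            [0.9, 0.2, 0.1],   # symptom 1, e.g. excessive position error
            [0.3, 0.8, 0.2],   # symptom 2, e.g. abnormal motor current
            [0.1, 0.4, 0.7],   # symptom 3, e.g. sensor signal dropout
        ])
        symptoms = np.array([0.8, 0.1, 0.6])   # observed symptom membership vector

        # Max-min composition: score_j = max_i min(symptoms_i, R_ij).
        scores = np.max(np.minimum(symptoms[:, None], R), axis=0)
        print("fault-source scores:", scores)
        print("most probable fault source index:", int(np.argmax(scores)))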

  11. MLDS: Maximum Likelihood Difference Scaling in R

    Kenneth Knoblauch


    Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.

  12. Maximum Segment Sum, Monadically (distilled tutorial)

    Jeremy Gibbons


    Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
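    The elegant linear-time solution alluded to, written in its plain imperative form rather than the monadic, datatype-generic derivation developed in the tutorial, is the classic Kadane scan:

        def maximum_segment_sum(xs):
            """Largest sum over contiguous segments of xs (the empty segment counts as 0)."""
            best = ending_here = 0
            for x in xs:
                ending_here = max(0, ending_here + x)   # best segment ending at this element
                best = max(best, ending_here)           # best segment seen so far
            return best

        print(maximum_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))   # prints 187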

  13. Maximum Information and Quantum Prediction Algorithms

    McElwaine, J N


    This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.

  14. Video segmentation using Maximum Entropy Model

    QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei


    Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, even though the objects of interest may be either moving or stationary. In this paper, we propose layers segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.

  15. The Sherpa Maximum Likelihood Estimator

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.


    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
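    The background-only versus background-plus-source comparison can be illustrated with a toy Poisson likelihood ratio for a single region and no PSF; this is purely schematic relative to the Sherpa-based MLE tool described above, and the counts are made up.

        from scipy.optimize import minimize_scalar
        from scipy.stats import poisson

        counts = 9            # photons observed in the candidate source region (made up)
        b_expected = 2.5      # background expectation from the fitted background model (made up)

        # Background-only hypothesis.
        logL_bkg = poisson.logpmf(counts, b_expected)

        # Background-plus-source hypothesis: profile the non-negative source amplitude s.
        fit = minimize_scalar(lambda s: -poisson.logpmf(counts, b_expected + s),
                              bounds=(0.0, 50.0), method="bounded")
        logL_src = -fit.fun

        # Likelihood-ratio test statistic; larger values favour a real source.
        TS = 2.0 * (logL_src - logL_bkg)
        print(f"best-fit source counts = {fit.x:.2f}, TS = {TS:.2f}")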

  16. Determining the Mechanical Constitutive Properties of Metals as a Function of Strain Rate and Temperature: A Combined Experimental and Modeling Approach

    Ian Robertson


    Development and validation of constitutive models for polycrystalline materials subjected to high strain-rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions. Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be integrated fully with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experiment is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain-rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models. One aspect of the program involves the direct observation of specific mechanisms of micro-plasticity, as these indicate the boundary value problem that should be addressed. This focus on the pre-yield region in the quasi-static effort (the elasto-plastic transition) is also a tractable one from an

  17. A Dynamic Approach to Addressing Observation-Minus-Forecast Mean Differences in a Land Surface Skin Temperature Data Assimilation System

    Draper, Clara; Reichle, Rolf; De Lannoy, Gabrielle; Scarino, Benjamin


    In land data assimilation, bias in the observation-minus-forecast (O-F) residuals is typically removed from the observations prior to assimilation by rescaling the observations to have the same long-term mean (and higher-order moments) as the corresponding model forecasts. Such observation rescaling approaches require a long record of observed and forecast estimates, and an assumption that the O-F mean differences are stationary. A two-stage observation bias and state estimation filter is presented, as an alternative to observation rescaling that does not require a long data record or assume stationary O-F mean differences. The two-stage filter removes dynamic (nonstationary) estimates of the seasonal scale O-F mean difference from the assimilated observations, allowing the assimilation to correct the model for synoptic-scale errors without adverse effects from observation biases. The two-stage filter is demonstrated by assimilating geostationary skin temperature (Tsk) observations into the Catchment land surface model. Global maps of the O-F mean differences are presented, and the two-stage filter is evaluated for one year over the Americas. The two-stage filter effectively removed the Tsk O-F mean differences, for example the GOES-West O-F mean difference at 21:00 UTC was reduced from 5.1 K for a bias-blind assimilation to 0.3 K. Compared to independent in situ and remotely sensed Tsk observations, the two-stage assimilation reduced the unbiased Root Mean Square Difference (ubRMSD) of the modeled Tsk by 10 of the open-loop values.
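    The dynamic bias estimate can be sketched as a low-pass filter on the O-F residuals whose output is removed from the observation before the state update; the gains, error levels, and synthetic truth below are illustrative and are not the configuration of the actual two-stage filter.

        import numpy as np

        rng = np.random.default_rng(4)
        truth = 290.0 + 5.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, 300))   # synthetic Tsk truth (K)
        bias_true = 5.0                                                    # time-mean observation bias (K)

        bias_est, gamma, K = 0.0, 0.05, 0.4
        analyses = []
        for x_truth in truth:
            x_f = x_truth + rng.normal(0.0, 1.0)                      # forecast: unbiased but noisy
            y = x_truth + bias_true + rng.normal(0.0, 1.0)            # biased observation
            o_minus_f = y - x_f
            bias_est = (1.0 - gamma) * bias_est + gamma * o_minus_f   # slowly evolving bias estimate
            x_a = x_f + K * (y - bias_est - x_f)                      # assimilate the bias-corrected observation
            analyses.append(x_a)

        print(f"estimated bias = {bias_est:.2f} K (true {bias_true} K)")
        print(f"time-mean analysis error = {np.mean(np.array(analyses) - truth):.2f} K")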

  18. Alloy by design: A materials genome approach to advanced high strength stainless steels for low and high temperature applications

    Lu, Q.; Xu, W.; Van der Zwaag, S.


    We report a computational 'alloy by design' approach which can significantly accelerate the design process and substantially reduce development costs. This approach allows simultaneous optimization of alloy composition and heat treatment parameters based on the integration of thermodynamic, th

  19. Quantum-dot Carnot engine at maximum power.

    Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian


    We evaluate the efficiency at maximum power of a quantum-dot Carnot heat engine. The universal values of the coefficients at the linear and quadratic order in the temperature gradient are reproduced. Curzon-Ahlborn efficiency is recovered in the limit of weak dissipation.
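    The Curzon-Ahlborn efficiency recovered in this limit, together with the universal linear and quadratic coefficients in the Carnot efficiency η_C = 1 − T_c/T_h referred to above, reads

        \[
        \eta_{CA} \;=\; 1 - \sqrt{\tfrac{T_c}{T_h}} \;=\; 1 - \sqrt{1-\eta_C}
        \;=\; \frac{\eta_C}{2} + \frac{\eta_C^{2}}{8} + O(\eta_C^{3}).
        \]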

  20. Microcanonical origin of the maximum entropy principle for open systems.

    Lee, Julian; Pressé, Steve


    There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
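    The maximum entropy route referred to here is the standard one: maximizing S = −∑_i p_i ln p_i subject to normalization and a prescribed mean energy ⟨E⟩ yields

        \[
        p_i \;=\; \frac{e^{-\beta E_i}}{Z},
        \qquad Z \;=\; \sum_i e^{-\beta E_i},
        \qquad \langle E \rangle \;=\; -\,\frac{\partial \ln Z}{\partial \beta},
        \]

    with β the Lagrange multiplier conjugate to the energy constraint.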