Well-log based prediction of temperature models in the exploration of sedimentary settings
DEFF Research Database (Denmark)
Fuchs, Sven; Förster, Andrea; Wonik, Thomas
porosity. TC vs. depth profiles corrected for in situ (p, T) conditions finally were used in conjunction with a published site-specific heat-flow value to model a temperature profile. The methodology is shown on the example of a 4-km deep borehole at Hannover in the North German Basin. This borehole...... these measurements are not available or only measured to a certain depth so that a temperature model needs to developed. A prerequisite for such a model is the knowledge of the regional heat flow and the geological conditions translated into lithology and thermal rock properties. For the determination of continuous...... borehole temperature profiles we propose a two-step procedure: (1) the use of standard petrophysical well logs and (2) the inversion of predicted TC to temperature gradients by applying Fourier’s law of heat conduction. The prediction of TC is solved by using set of equations (Fuchs & Förster, 2014...
Energy Technology Data Exchange (ETDEWEB)
Mukhopadhyay, S.; Tsang, Y.; Finsterle, S.
2009-01-15
A simple conceptual model has been recently developed for analyzing pressure and temperature data from flowing fluid temperature logging (FFTL) in unsaturated fractured rock. Using this conceptual model, we developed an analytical solution for FFTL pressure response, and a semianalytical solution for FFTL temperature response. We also proposed a method for estimating fracture permeability from FFTL temperature data. The conceptual model was based on some simplifying assumptions, particularly that a single-phase airflow model was used. In this paper, we develop a more comprehensive numerical model of multiphase flow and heat transfer associated with FFTL. Using this numerical model, we perform a number of forward simulations to determine the parameters that have the strongest influence on the pressure and temperature response from FFTL. We then use the iTOUGH2 optimization code to estimate these most sensitive parameters through inverse modeling and to quantify the uncertainties associated with these estimated parameters. We conclude that FFTL can be utilized to determine permeability, porosity, and thermal conductivity of the fracture rock. Two other parameters, which are not properties of the fractured rock, have strong influence on FFTL response. These are pressure and temperature in the borehole that were at equilibrium with the fractured rock formation at the beginning of FFTL. We illustrate how these parameters can also be estimated from FFTL data.
Directory of Open Access Journals (Sweden)
Vasilis Panagiotis Valdramidis
2005-01-01
Full Text Available A mathematical approach incorporating the shoulder effect during the quantification of microbial heat inactivation is being developed based on »the number of log cycles of reduction « concept. Hereto, the heat resistance of Escherichia coli K12 in BHI broth has been quantitatively determined in a generic and accurate way by defining the time t for x log reductions in the microbial population, i.e. txD, as a function of the treatment temperature T. Survival data of the examined microorganism are collected in a range of temperatures between 52–60.6 °C. Shoulder length Sl and specific inactivation rate kmax are derived from a mathematical expression that describes a non-log-linear behaviour. The temperature dependencies of Sl and kmax are used for structuring the txD(T function. Estimation of the txD(T parameters through a global identification procedure permits reliable predictions of the time to achieve a pre-decided microbial reduction. One of the parameters of the txD(T function is proposed as »the reference minimum temperature for inactivation«. For the case study considered, a value of 51.80 °C (with a standard error, SE, of 3.47 was identified. Finally, the time to achieve commercial sterilization and pasteurization for the product at hand, i.e. BHI broth, was found to be 11.70 s (SE=5.22, and 5.10 min (SE=1.22, respectively. Accounting for the uncertainty (based on the 90 % confidence intervals, CI a fail-safe treatment of these two processes takes 20.36 s and 7.12 min, respectively.
Minimal Model Theory for Log Surfaces
Fujino, Osamu
2012-01-01
We discuss the log minimal model theory for log surfaces. We show that the log minimal model program, the finite generation of log canonical rings, and the log abundance theorem for log surfaces hold true under assumptions weaker than the usual framework of the log minimal model theory.
Nuclear logging and geothermal log interpretation: formation temperature sonde evaluation
Energy Technology Data Exchange (ETDEWEB)
Ross, E.W.; Vagelatos, N.; Dickerson, J.M.; Nguyen, V.
1982-01-01
The theory and methodology of the neutron-based technique for the determination of the formation temperature in geothermal fields are discussed. The feasibility of the method was demonstrated before start of the present development phase. The present work is intended to evaluate the response of the temperature probe in a simulated fracture porosity granite matrix at temperatures likely to be encountered in known geothermal reservoirs. An above ground borehole model has been designed and constructed. The effect of high ambient temperatures on the response of the neutron detectors in the probe mockup used in the measurements was investigated and used to correct the detector counts. An improved data analysis method has been developed to account properly for the effects of low porosity and high temperatures. Measurements, using the above ground borehole model, have shown that a linear correlation between the ratio of thermal counts from a Gd-filtered detector to counts from a bare detector and formation temperature is good at temperatures as high as 380/sup 0/F. The present results are consistent with earlier data obtained in high-porosity laboratory models at lower temperatures (T < 167/sup 0/F). Further measurements at high temperature at various porosities and formation neutron absorption cross sections would be necessary for a more extensive comparison.
Technology development for high temperature logging tools
Energy Technology Data Exchange (ETDEWEB)
Veneruso, A.F.; Coquat, J.A.
1979-01-01
A set of prototype, high temperature logging tools (temperature, pressure and flow) were tested successfully to temperatures up to 275/sup 0/C in a Union geothermal well during November 1978 as part of the Geothermal Logging Instrumentation Development Program. This program is being conducted by Sandia Laboratories for the Department of Energy's Division of Geothermal Energy. The progress and plans of this industry based program to develop and apply the high temperature instrumentation technology needed to make reliable geothermal borehole measurements are described. Specifically, this program is upgrading existing sondes for improved high temperature performance, as well as applying new materials (elastomers, polymers, metals and ceramics) and developing component technology such as high temperature cables, cableheads and electronics to make borehole measurements such as formation temperature, flow rate, high resolution pressure and fracture mapping. In order to satisfy critical existing needs, the near term goal is for operation up to 275/sup 0/C and 7000 psi by the end of FY80. The long term goal is for operation up to 350/sup 0/C and 20,000 psi by the end of FY84.
Precision pressure/temperature logging tool
Energy Technology Data Exchange (ETDEWEB)
Henfling, J.A.; Normann, R.A.
1998-01-01
Past memory logging tools have provided excellent pressure/temperature data when used in a geothermal environment, and they are easier to maintain and deploy than tools requiring an electric wireline connection to the surface. However, they are deficient since the tool operator is unaware of downhole conditions that could require changes in the logging program. Tools that make ``decisions`` based on preprogrammed scenarios can partially overcome this difficulty, and a suite of such memory tools has been developed at Sandia National Laboratories. The first tool, which forms the basis for future instruments, measures pressure and temperature. Design considerations include a minimization of cost while insuring quality data, size compatibility with diamond-cored holes, operation in holes to 425 C (800 F), transportability by ordinary passenger air service, and ease of operation. This report documents the development and construction of the pressure/temperature tool. It includes: (1) description of the major components; (2) calibration; (3) typical logging scenario; (4) tool data examples; and (5) conclusions. The mechanical and electrical drawings, along with the tool`s software, will be furnished upon request.
Decomposable log-linear models
DEFF Research Database (Denmark)
Eriksen, Poul Svante
can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models.It covers a wide range of non...... The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed form expressions of the maksimum likelihood estimate and its distribution. The model class includes what is known as decomposable graphicalmodels, which......-hierarchical models, models with structural zeroes, models described by quasi independence and models for level merging. Also, they have a very natural interpretation as they may be formulated by a structured set of conditional independencies between two events given some other event. In relation to contingency...
Czech Academy of Sciences Publication Activity Database
Majorowicz, J.; Šafanda, Jan; Przybylak, R.
2014-01-01
Roč. 103, č. 4 (2014), s. 1163-1173 ISSN 1437-3254 Institutional support: RVO:67985530 Keywords : surface processes * borehole temperature s * climatic warming * Little Ice Age * solar irradiation Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 2.093, year: 2014
Log-binomial models: exploring failed convergence.
Williamson, Tyler; Eliasziw, Misha; Fick, Gordon Hilton
2013-12-13
Relative risk is a summary metric that is commonly used in epidemiological investigations. Increasingly, epidemiologists are using log-binomial models to study the impact of a set of predictor variables on a single binary outcome, as they naturally offer relative risks. However, standard statistical software may report failed convergence when attempting to fit log-binomial models in certain settings. The methods that have been proposed in the literature for dealing with failed convergence use approximate solutions to avoid the issue. This research looks directly at the log-likelihood function for the simplest log-binomial model where failed convergence has been observed, a model with a single linear predictor with three levels. The possible causes of failed convergence are explored and potential solutions are presented for some cases. Among the principal causes is a failure of the fitting algorithm to converge despite the log-likelihood function having a single finite maximum. Despite these limitations, log-binomial models are a viable option for epidemiologists wishing to describe the relationship between a set of predictors and a binary outcome where relative risk is the desired summary measure. Epidemiologists are encouraged to continue to use log-binomial models and advocate for improvements to the fitting algorithms to promote the widespread use of log-binomial models.
Model wells for nuclear well logging
International Nuclear Information System (INIS)
Tittle, C.W.
1989-01-01
Considerations needed in the design and construction of model wells for nuclear log calibration are covered, with special attention to neutron porosity logging and total γ-ray logging. Pulsed neutron decay-time and spectral γ-ray logging are discussed briefly. The American Petroleum Institute calibration facility for nuclear logs is a good starting point for similar or expanded facilities. A few of its shortcomings are mentioned; they are minor. The problem of fluid saturation is emphasized. Attention is given to models made of consolidated rock and those containing unconsolidated material such as Ottawa sand. Needed precautions are listed. A similarity method is presented for estimating the porosity index of formations that are not fully saturated. (author)
Modeling Precipitation Extremes using Log-Histospline
Huang, W. K.; Nychka, D. W.; Zhang, H.
2017-12-01
One of the commonly used approaches to modeling univariate extremes is the peaks-overthreshold (POT) method. The POT method models exceedances over a (sufficiently high/low) threshold as a generalized Pareto distribution (GPD). To apply this method, a threshold has to be chosen and the estimates might be sensitive to the chosen threshold. Here we propose an alternative, the "Log-Histospline", to explore modeling the tail behavior and the remainder of the density in one step using the full range of the data. Log-Histospline applies a smoothing spline model on a finely binned histogram of the log transformed data to estimate its log density. By construction, we are able to preserve the polynomial upper tail behavior, a feature commonly observed in geophysical observations. The Log-Histospline can be extended to the spatial setting by treating the marginal (log) density at each location as spatially indexed functional data, and perform a dimension reduction and spatial smoothing. We illustrate the proposed method by analyzing precipitation data from regional climate model output (North American Regional Climate Change and Assessment Program (NARCCAP)).
Shan Gao; Xiping Wang; Lihai Wang; R. Bruce. Allison
2012-01-01
The goals of this study were to investigate the effect of environment temperature on acoustic velocity of standing trees and green logs and to develop workable models for compensating temperature differences as acoustic measurements are performed in different climates and seasons. The objective of Part 1 was to investigate interactive effects of temperature and...
CS model coil experimental log book
International Nuclear Information System (INIS)
Nishijima, Gen; Sugimoto, Makoto; Nunoya, Yoshihiko; Wakabayashi, Hiroshi; Tsuji, Hiroshi
2001-02-01
Charging test of the ITER CS Model Coil which is the world's largest superconducting pulse coil and the CS Insert Coil had started at April 11, 2000 and had completed at August 18, 2000. In the campaign, total shot numbers were 356 and the size of the data file in the DAS (Data Acquisition System) was over 20 GB. This report is a database that consists of the log list and the log sheets of every shot. One can access the database, make a search, and browse results via Internet (http://1ogwww.naka.jaeri.go.jp). The database will be useful to quick search to choose necessary shots. (author)
Modelling tropical forests response to logging
Cazzolla Gatti, Roberto; Di Paola, Arianna; Valentini, Riccardo; Paparella, Francesco
2013-04-01
Tropical rainforests are among the most threatened ecosystems by large-scale fragmentation due to human activity such as heavy logging and agricultural clearance. Although, they provide crucial ecosystem goods and services, such as sequestering carbon from the atmosphere, protecting watersheds and conserving biodiversity. In several countries forest resource extraction has experienced a shift from clearcutting to selective logging to maintain a significant forest cover and understock of living biomass. However the knowledge on the short and long-term effects of removing selected species in tropical rainforest are scarce and need to be further investigated. One of the main effects of selective logging on forest dynamics seems to be the local disturbance which involve the invasion of open space by weed, vines and climbers at the expense of the late-successional state cenosis. We present a simple deterministic model that describes the dynamics of tropical rainforest subject to selective logging to understand how and why weeds displace native species. We argue that the selective removal of tallest tropical trees carries out gaps of light that allow weeds, vines and climbers to prevail on native species, inhibiting the possibility of recovery of the original vegetation. Our results show that different regime shifts may occur depending on the type of forest management adopted. This hypothesis is supported by a dataset of trees height and weed/vines cover that we collected from 9 plots located in Central and West Africa both in untouched and managed areas.
Acoustic sorting models for improved log segregation
Xiping Wang; Steve Verrill; Eini Lowell; Robert J. Ross; Vicki L. Herian
2013-01-01
In this study, we examined three individual log measures (acoustic velocity, log diameter, and log vertical position in a tree) for their ability to predict average modulus of elasticity (MOE) and grade yield of structural lumber obtained from Douglas-fir (Pseudotsuga menziesii [Mirb. Franco]) logs. We found that log acoustic velocity only had a...
Effect of temperature on Acoustic Evaluation of standing trees and logs: Part 2: Field Investigation
Shan Gao; Xiping Wang; Lihai Wang; R. Bruce Allison
2013-01-01
The objectives of this study were to investigate the effect of seasonal temperature changes on acoustic velocity measured on standing trees and green logs and to develop models for compensating temperature differences because acoustic measurements are performed in different climates and seasons. Field testing was conducted on 20 red pine (Pinus resinosa...
CS model coil experimental log book
Energy Technology Data Exchange (ETDEWEB)
Nishijima, Gen; Sugimoto, Makoto; Nunoya, Yoshihiko; Wakabayashi, Hiroshi; Tsuji, Hiroshi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment
2001-02-01
Charging test of the ITER CS Model Coil which is the world's largest superconducting pulse coil and the CS Insert Coil had started at April 11, 2000 and had completed at August 18, 2000. In the campaign, total shot numbers were 356 and the size of the data file in the DAS (Data Acquisition System) was over 20 GB. This report is a database that consists of the log list and the log sheets of every shot. One can access the database, make a search, and browse results via Internet (http://1ogwww.naka.jaeri.go.jp). The database will be useful to quick search to choose necessary shots. (author)
Analysis of RIA standard curve by log-logistic and cubic log-logit models
International Nuclear Information System (INIS)
Yamada, Hideo; Kuroda, Akira; Yatabe, Tami; Inaba, Taeko; Chiba, Kazuo
1981-01-01
In order to improve goodness-of-fit in RIA standard analysis, programs for computing log-logistic and cubic log-logit were written in BASIC using personal computer P-6060 (Olivetti). Iterative least square method of Taylor series was applied for non-linear estimation of logistic and log-logistic. Hear ''log-logistic'' represents Y = (a - d)/(1 + (log(X)/c)sup(b)) + d As weights either 1, 1/var(Y) or 1/σ 2 were used in logistic or log-logistic and either Y 2 (1 - Y) 2 , Y 2 (1 - Y) 2 /var(Y), or Y 2 (1 - Y) 2 /σ 2 were used in quadratic or cubic log-logit. The term var(Y) represents squares of pure error and σ 2 represents estimated variance calculated using a following equation log(σ 2 + 1) = log(A) + J log(y). As indicators for goodness-of-fit, MSL/S sub(e)sup(2), CMD% and WRV (see text) were used. Better regression was obtained in case of alpha-fetoprotein by log-logistic than by logistic. Cortisol standard curve was much better fitted with cubic log-logit than quadratic log-logit. Predicted precision of AFP standard curve was below 5% in log-logistic in stead of 8% in logistic analysis. Predicted precision obtained using cubic log-logit was about five times lower than that with quadratic log-logit. Importance of selecting good models in RIA data processing was stressed in conjunction with intrinsic precision of radioimmunoassay system indicated by predicted precision. (author)
Analysis of artificial fireplace logs by high temperature gas chromatography.
Kuk, Raymond J
2002-11-01
High temperature gas chromatography is used to analyze the wax of artificial fireplace logs (firelogs). Firelogs from several different manufacturers are studied and compared. This study shows that the wax within a single firelog is homogeneous and that the wax is also uniform throughout a multi-firelog package. Different brands are shown to have different wax compositions. Firelogs of the same brand, but purchased in different locations, also have different wax compositions. With this information it may be possible to associate an unknown firelog sample to a known sample, but a definitive statement of the origin cannot be made.
Development of interpretation models for PFN uranium log analysis
International Nuclear Information System (INIS)
Barnard, R.W.
1980-11-01
This report presents the models for interpretation of borehole logs for the PFN (Prompt Fission Neutron) uranium logging system. Two models have been developed, the counts-ratio model and the counts/dieaway model. Both are empirically developed, but can be related to the theoretical bases for PFN analysis. The models try to correct for the effects of external factors (such as probe or formation parameters) in the calculation of uranium grade. The theoretical bases and calculational techniques for estimating uranium concentration from raw PFN data and other parameters are discussed. Examples and discussions of borehole logs are included
Procedures for Geometric Data Reduction in Solid Log Modelling
Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt
1995-01-01
One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Ordinal Log-Linear Models for Contingency Tables
Directory of Open Access Journals (Sweden)
Brzezińska Justyna
2016-12-01
Full Text Available A log-linear analysis is a method providing a comprehensive scheme to describe the association for categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for these cells and provide detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: linear-by-linear association, row effect model, column effect model and RC Goodman’s model. Algorithm, advantages and disadvantages will be discussed in the paper. An empirical analysis will be conducted with the use of R.
A regularity-based modeling of oil borehole logs
Gaci, Said; Zaourar, Naima
2013-04-01
Multifractional Brownian motions (mBms) are successfully used to describe borehole logs behavior. These local fractal models allow to investigate the depth-evolution of regularity of the logs, quantified by the Hölder exponent (H). In this study, a regularity analysis is carried out on datasets recorded in Algerian oil boreholes located in different geological settings. The obtained regularity profiles show a clear correlation with lithology. Each lithological discontinuity corresponds to a jump in H value. Moreover, for a given borehole, all the regularity logs are significantly correlated and lead to similar lithological segmentations. Therefore, the Hölderian regularity is a robust property which can be used to characterize lithological heterogeneities. However, this study does not draw any relation between the recorded physical property and its estimated regularity degree for all the analyzed logs. Keywords: well logs, regularity, Hölder exponent, multifractional Brownian motion
[Using log-binomial model for estimating the prevalence ratio].
Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue
2010-05-01
To estimate the prevalence ratios, using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated by using a log-binomial model were compared with odds ratios estimated by logistic regression model. In the log-binomial modeling, maximum likelihood method was used when there were no continuous covariates and COPY approach was used if the model did not converge, for example due to the existence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women since smoking was not common. In men however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of outcome. The log-binomial model did not converge when age was included as a continuous covariate and COPY method was used to deal with the situation. All analysis was performed by SAS. Prevalence ratio seemed to better measure the association than odds ratio when prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
Log-Normal Turbulence Dissipation in Global Ocean Models
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O (10 km ) . As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
Estimation of oil reservoir thermal properties through temperature log data using inversion method
International Nuclear Information System (INIS)
Cheng, Wen-Long; Nian, Yong-Le; Li, Tong-Tong; Wang, Chang-Long
2013-01-01
Oil reservoir thermal properties not only play an important role in steam injection well heat transfer, but also are the basic parameters for evaluating the oil saturation in reservoir. In this study, for estimating reservoir thermal properties, a novel heat and mass transfer model of steam injection well was established at first, this model made full analysis on the wellbore-reservoir heat and mass transfer as well as the wellbore-formation, and the simulated results by the model were quite consistent with the log data. Then this study presented an effective inversion method for estimating the reservoir thermal properties through temperature log data. This method is based on the heat transfer model in steam injection wells, and can be used to predict the thermal properties as a stochastic approximation method. The inversion method was applied to estimate the reservoir thermal properties of two steam injection wells, it was found that the relative error of thermal conductivity for the two wells were 2.9% and 6.5%, and the relative error of volumetric specific heat capacity were 6.7% and 7.0%,which demonstrated the feasibility of the proposed method for estimating the reservoir thermal properties. - Highlights: • An effective inversion method for predicting the oil reservoir thermal properties was presented. • A novel model for steam injection well made full study on the wellbore-reservoir heat and mass transfer. • The wellbore temperature field and steam parameters can be simulated by the model efficiently. • Both reservoirs and formation thermal properties could be estimated simultaneously by the proposed method. • The estimated steam temperature was quite consistent with the field data
Monte Carlo Numerical Models for Nuclear Logging Applications
Directory of Open Access Journals (Sweden)
Fusheng Li
2012-06-01
Full Text Available Nuclear logging is one of most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided from using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, the neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design, and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined and the transportation and interactions of nuclear particles (such as neutrons, photons and/or electrons in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting. Then the deposited energies of particles entering the detectors are recorded and tallied and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N– Particle (MCNP has been the industry-standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo Models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models
Proposed geologic model based on geophysical well logs
Energy Technology Data Exchange (ETDEWEB)
Diaz C, S.; Puente C, I.; de la Pena L, A.
1981-01-01
An investigation of the subsurface based on a qualitative interpretation of well logs was carried out at Cerro Prieto to obtain information on the distribution of the different lithofacies that make up a deltaic depositional system. The sedimentological interpretation derived from the resistivity and spontaneous potential are shown in several cross-sections of the field. In addition to the sedimentological interpretation, a map of the structural geology of the region based on well logs and available geophysical information was prepared, including the results of gravity and seismic refraction surveys. The depth to the zone of hydrothermal alteration described by Elders (1980) was found by means of temperature, electrical, and radioactive logs. Two maps showing the configuration of the top of this anomaly show a clear correlation with the gravity anomalies found in the area.
Testing and analysis of internal hardwood log defect prediction models
R. Edward. Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Modeling and validating the grabbing forces of hydraulic log grapples used in forest operations
Jingxin Wang; Chris B. LeDoux; Lihai Wang
2003-01-01
The grabbing forces of log grapples were modeled and analyzed mathematically under operating conditions when grabbing logs from compact log piles and from bunch-like log piles. The grabbing forces are closely related to the structural parameters of the grapple, the weight of the grapple, and the weight of the log grabbed. An operational model grapple was designed and...
Validation of an internal hardwood log defect prediction model
R. Edward. Thomas
2011-01-01
The type, size, and location of internal defects dictate the grade and value of lumber sawn from hardwood logs. However, acquiring internal defect knowledge with x-ray/computed-tomography or magnetic-resonance imaging technology can be expensive both in time and cost. An alternative approach uses prediction models based on correlations among external defect indicators...
Modelling discontinuous well log signal to identify lithological ...
Indian Academy of Sciences (India)
In this paper, we have proposed anew wavelet transform-based algorithm to model the abrupt discontinuous changes from well log databy taking care of nonstationary characteristics of the signal. Prior to applying the algorithm on thegeophysical well data, we analyzed the distribution of wavelet coefficients using synthetic ...
Guillermo A. Mendoza; Roger J. Meimban; Philip A. Araman; William G. Luppold
1991-01-01
A log inventory model and a real-time hardwood process simulation model were developed and combined into an integrated production planning and control system for hardwood sawmills. The log inventory model was designed to monitor and periodically update the status of the logs in the log yard. The process simulation model was designed to estimate various sawmill...
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
Directory of Open Access Journals (Sweden)
Bingwei Tian
2015-03-01
Full Text Available Geothermal resources have become an increasingly important source of renewable energy for electrical power generation worldwide. Combined Three Dimension (3D Subsurface Temperature (SST and Land Surface Temperature (LST measurements are essential for accurate assessment of geothermal resources. In this study, subsurface and surface temperature distributions were combined using a dataset comprised of well logs and Thermal Infrared Remote sensing (TIR images from Hokkaido island, northern Japan. Using 28,476 temperature data points from 433 boreholes sites and a method of Kriging with External Drift or trend (KED, SST distribution model from depths of 100 to 1500 m was produced. Regional LST was estimated from 13 scenes of Landsat 8 images. Resultant SST ranged from around 50 °C to 300 °C at a depth of 1500 m. Most of western and part of the eastern Hokkaido are characterized by high temperature gradients, while low temperatures were found in the central region. Higher temperatures in shallower crust imply the western region and part of the eastern region have high geothermal potential. Moreover, several LST zones considered to have high geothermal potential were identified upon clarification of the underground heat distribution according to 3D SST. LST in these zones showed the anomalies, 3 to 9 °C higher than the surrounding areas. These results demonstrate that our combination of TIR and 3D temperature modeling using well logging and geostatistics is an efficient and promising approach to geothermal resource exploration.
Fluid flow model of the Cerro Prieto Geothermal Field based on well log interpretation
Energy Technology Data Exchange (ETDEWEB)
Halfman, S.E.; Lippmann, M.J.; Zelwe, R.; Howard, J.H.
1982-08-10
The subsurface geology of the Cerro Prieto geothermal field was analyzed using geophysical and lithologic logs. The distribution of permeable and relatively impermeable units and the location of faults are shown in a geologic model of the system. By incorporating well completion data and downhole temperature profiles into the geologic model, it was possible to determine the direction of geothermal fluid flow and the role of subsurface geologic features that control this movement.
Jack E. Janisch; Steven M. Wondzell; William J. Ehinger
2012-01-01
We examined stream temperature response to forest harvest in small forested headwater catchments in western Washington, USA over a seven year period (2002-2008). These streams have very low discharge in late summer and many become spatially intermittent. We used a before-after, control-impact (BACl) study design to contrast the effect of clearcut logging with two...
The effect on vegetation and soil temperature of logging flood-plain white spruce.
C.T. Dyrness; L.A. Vlereck; M.J. Foote; J.C. Zasada
1988-01-01
During winter 1982-83, five silvicultural treatments were applied on Willow Island (near Fairbanks, Alaska): two types of shelterwood cuttings, a clearcutting, a clearcutting with broadcast slash burning, and a thinning. The effects of these treatments on vegetation, soil temperature, and frost depth were followed from 1983 through 1985. In 1984 and 1985, logged plots...
Repeated temperature logs from Czech, Slovenian and Portuguese borehole climate observatories
Czech Academy of Sciences Publication Activity Database
Šafanda, Jan; Rajver, D.; Correia, A.; Dědeček, Petr
2007-01-01
Roč. 3, č. 3 (2007), s. 453-462 ISSN 1814-9324 R&D Projects: GA AV ČR(CZ) IAA300120603 Grant - others:NATO(US) PDD(CP)-(EST.CLG 980 152) Institutional research plan: CEZ:AV0Z30120515 Source of funding: V - iné verejné zdroje Keywords : borehole temperatures * temperature logs * borehole climate observatories Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 1.450, year: 2007
International Nuclear Information System (INIS)
Johnson, C.A. Jr.
1983-01-01
A general overview is given of well logging procedures used in the exploration and production of oil and gas wells. Techniques include γ logging, neutron logging, caliper logging, resistivity logging, temperature logging, and production logging. Typical logs are shown for some techniques
A new model for the sonic borehole logging tool
International Nuclear Information System (INIS)
Oelgaard, P.L.
1990-12-01
A number of models for the sonic borehole logging tool has earlier been developed. These models which are mainly based on experimental data, are discussed and compared. On this background the new model is developed. It is based on the assumptions that the pores of low porosity formations and the grains of high porosity media may be approximated by cylinders, and that the dimension of these cylinders are given by distribution functions. From these assumptions the transit time Δt p of low porosity formations and Δt g of high porosity media are calculated by use of the Monte Carlo method. Combining the Δt p and Δt g values obtained by use of selected weighting functions seems to permit the determination of the transit time Δt for the full porosity range (0 ≤ φ ≤ 100%). (author)
Bélanger, Sébastien; Bauce, Eric; Berthiaume, Richard; Long, Bernard; Labrie, Jacques; Daigle, Louis-Frédéric; Hébert, Christian
2013-06-01
The whitespotted sawyer, Monochamus scutellatus scutellatus (Say) (Coleoptera: Ce-rambycidae), is one of the most damaging wood-boring insects in recently burned boreal forests of North America. In Canada, salvage logging after wildfire contributes to maintaining the timber volume required by the forest industry, but larvae of this insect cause significant damage that reduces the economic value of lumber products. This study aimed to estimate damage progression as a function of temperature in recently burned black spruce (Picea mariana (Miller) Britton, Sterns, and Poggenburg) and jack pine (Pinus banksiana Lambert) trees. Using axial tomographic technology, we modeled subcortical development and gallery depth progression rates as functions of temperature for both tree species. Generally, these rates were slightly faster in black spruce than in jack pine logs. Eggs laid on logs kept at 12 degrees C did not hatch or larvae were unable to establish themselves under the bark because no larval development was observed. At 16 degrees C, larvae stayed under the bark for > 200 d before penetrating into the sapwood. At 20 degrees C, half of the larvae entered the sapwood after 30-50 d, but gallery depth progression stopped for approximately 70 d, suggesting that larvae went into diapause. The other half of the larvae entered the sapwood only after 100-200 d. At 24 and 28 degrees C, larvae entered the sapwood after 26-27 and 21 d, respectively. At 28 degrees C, gallery depth progressed at a rate of 1.44 mm/d. Temperature threshold for subcortical development was slightly lower in black spruce (12.9 degrees C) than in jack pine (14.6 degrees C) and it was 1 degrees C warmer for gallery depth progression for both tree species. These results indicate that significant damage may occur within a few months after fire during warm summers, particularly in black spruce, which highlights the importance of beginning postfire salvage logging as soon as possible to reduce economic
Modelling discontinuous well log signal to identify lithological ...
Indian Academy of Sciences (India)
Identification of sharp and discontinuous lithological boundaries from well log signal stemming from heterogeneous subsurface structures assumes a special significance in geo-exploration studies. Well log data acquired from various geological settings generally display nonstationary/nonlinear characteristics with varying ...
Modelling discontinuous well log signal to identify lithological ...
Indian Academy of Sciences (India)
ter level forecasting (Adamowski and Chan 2011). KTB research geoscientist team has examined the fractal behaviour of well-log signal variabil- ity, presuming that well-log signals in the super deep German Continental Deep Drilling Program. (KTB) borehole display nonlinear characteristics. (Leonardi and Kumpel 1999).
Czech Academy of Sciences Publication Activity Database
Majorowicz, J.; Grasby, S. E.; Ferguson, G.; Šafanda, Jan; Skinner, W.
2006-01-01
Roč. 2, č. 1 (2006), s. 1-10 ISSN 1814-9324 Institutional research plan: CEZ:AV0Z30120515 Keywords : palaeoclimatic reconstructions * Canada * borehole temperatures Subject RIV: DC - Siesmology, Volcanology, Earth Structure
Modelling discontinuous well log signal to identify lithological ...
Indian Academy of Sciences (India)
Identification of sharp and discontinuous lithological boundaries from well log signal stemming fromheterogeneous subsurface structures assumes a special significance in geo-exploration studies. Well logdata acquired from various geological settings generally display nonstationary/nonlinear characteristicswith varying ...
Calibration models for density borehole logging - construction report
International Nuclear Information System (INIS)
Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.
1995-10-01
Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm 3 and 2.804 ± 0.002 g/cm 3 for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.) with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75 and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing
Statistical modelling of Poisson/log-normal data
International Nuclear Information System (INIS)
Miller, G.
2007-01-01
In statistical data fitting, self consistency is checked by examining the closeness of the quantity Χ 2 /NDF to 1, where Χ 2 is the sum of squares of data minus fit divided by standard deviation, and NDF is the number of data minus the number of fit parameters. In order to calculate Χ 2 one needs an expression for the standard deviation. In this note several alternative expressions for the standard deviation of data distributed according to a Poisson/log-normal distribution are proposed and evaluated by Monte Carlo simulation. Two preferred alternatives are identified. The use of replicate data to obtain uncertainty is problematic for a small number of replicates. A method to correct this problem is proposed. The log-normal approximation is good for sufficiently positive data. A modification of the log-normal approximation is proposed, which allows it to be used to test the hypothesis that the true value is zero. (authors)
Technology diffusion in hospitals : A log odds random effects regression model
Blank, J.L.T.; Valdmanis, V.G.
2013-01-01
This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to describe the
Technology diffusion in hospitals: A log odds random effects regression model
J.L.T. Blank (Jos); V.G. Valdmanis (Vivian G.)
2015-01-01
textabstractThis study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to
Estimator of a non-Gaussian parameter in multiplicative log-normal models
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2007-10-01
We study non-Gaussian probability density functions (PDF’s) of multiplicative log-normal models in which the multiplication of Gaussian and log-normally distributed random variables is considered. To describe the PDF of the velocity difference between two points in fully developed turbulent flows, the non-Gaussian PDF model was originally introduced by Castaing [Physica D 46, 177 (1990)]. In practical applications, an experimental PDF is approximated with Castaing’s model by tuning a single non-Gaussian parameter, which corresponds to the logarithmic variance of the log-normally distributed variable in the model. In this paper, we propose an estimator of the non-Gaussian parameter based on the q th order absolute moments. To test the estimator, we introduce two types of stochastic processes within the framework of the multiplicative log-normal model. One is a sequence of independent and identically distributed random variables. The other is a log-normal cascade-type multiplicative process. By analyzing the numerically generated time series, we demonstrate that the estimator can reliably determine the theoretical value of the non-Gaussian parameter. Scale dependence of the non-Gaussian parameter in multiplicative log-normal models is also studied, both analytically and numerically. As an application of the estimator, we demonstrate that non-Gaussian PDF’s observed in the S&P500 index fluctuations are well described by the multiplicative log-normal model.
A log-linear multidimensional Rasch model for capture-recapture.
Pelle, E; Hessen, D J; van der Heijden, P G M
2016-02-20
In this paper, a log-linear multidimensional Rasch model is proposed for capture-recapture analysis of registration data. In the model, heterogeneity of capture probabilities is taken into account, and registrations are viewed as dichotomously scored indicators of one or more latent variables that can account for correlations among registrations. It is shown how the probability of a generic capture profile is expressed under the log-linear multidimensional Rasch model and how the parameters of the traditional log-linear model are derived from those of the log-linear multidimensional Rasch model. Finally, an application of the model to neural tube defects data is presented. Copyright © 2015 John Wiley & Sons, Ltd.
Matthew R. Kluber; Deanna H. Olson; Klaus J. Puettmann
2013-01-01
Down wood provides important faunal microhabitat in forests for many invertebrate taxa, small mammals, and amphibians. Habitat suitability of down wood as refugia is an increasing concern in managed forests of the US Pacifi c Northwest, where overstory reduction may result in both reduced down wood recruitment and increased temperatures within logs, which may make them...
[Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of prevalence ratio ( PR ) by using bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence to caregivers' recognition of risk signs of diarrhea in their infants by using bayesian log-binomial regression model in Openbugs software. The results showed that caregivers' recognition of infant' s risk signs of diarrhea was associated significantly with a 13% increase of medical care-seeking. Meanwhile, we compared the differences in PR 's point estimation and its interval estimation of medical care-seeking prevalence to caregivers' recognition of risk signs of diarrhea and convergence of three models (model 1: not adjusting for the covariates; model 2: adjusting for duration of caregivers' education, model 3: adjusting for distance between village and township and child month-age based on model 2) between bayesian log-binomial regression model and conventional log-binomial regression model. The results showed that all three bayesian log-binomial regression models were convergence and the estimated PRs were 1.130(95 %CI : 1.005-1.265), 1.128(95 %CI : 1.001-1.264) and 1.132(95 %CI : 1.004-1.267), respectively. Conventional log-binomial regression model 1 and model 2 were convergence and their PRs were 1.130(95 % CI : 1.055-1.206) and 1.126(95 % CI : 1.051-1.203), respectively, but the model 3 was misconvergence, so COPY method was used to estimate PR , which was 1.125 (95 %CI : 1.051-1.200). In addition, the point estimation and interval estimation of PRs from three bayesian log-binomial regression models differed slightly from those of PRs from conventional log-binomial regression model, but they had a good consistency in estimating PR . Therefore, bayesian log-binomial regression model can effectively estimate PR with less misconvergence and have more advantages in application compared with conventional log-binomial regression model.
International Nuclear Information System (INIS)
Kleboth, P.
1988-11-01
As part of the hydrogeological investigations in the six exploratory boreholes drilled by NAGRA in northern Switzerland until 1986, a comprehensive data collection programme aimed at the localisation and characterization of zones of increased rock permeability was carried out. Within this programme the 'Fluid Logging' campaign investigated temperature, conductivity and flow conditions of the borehole fluid column. The present report is a comprehensive collection of the most important Fluid-logging runs from all six boreholes (Boettstein, Weiach, Riniken, Schafisheim, Kaisten and Leuggern). The first part of the report presents the methodology followed and describes the operating principles of the temperature, salinometer and spinner-flowmeter tools used, the range of their application and their performance. The second part of the report details the individual logging runs for each borehole. It describes the borehole history, which is important for the interpretation of results and states the boundary conditions e.g. water level in well, pump rate, difficulties encountered etc. Results are presented as composite logs. The fluid logs are compared with the relevant geological data indicating high permeability zones and the provisional results from the numerous hydraulic tests. (author) 15 tabs., 35 figs
Energy Technology Data Exchange (ETDEWEB)
Korn, E L
1978-08-01
This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belong to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.
Biomass yield and modeling of logging residues of Terminalia ...
African Journals Online (AJOL)
The use of Dbh as an independent variable in the prediction of models for estimating the biomass residues of the tree species was adjudged best because it performed well. The validation results showed that the selected models satisfied the assumptions of regression analysis. The practical implication of the models is that ...
New descriptive temperature model
Bilitza, D.
The model profiles of the electron and ion temperature that have been proposed in connection with the International Reference Ionosphere (IRI) are surveyed, with a review given of the available data base. Plasma density is seen as exerting great influence, at least during daytime. It does not, however, appear to be appropriate for deriving the temperature unambiguously from the density value. On the basis of a comparison of measured data from the AE-C and Aeros-B satellites and incoherent backscatter stations Millstone Hill and Arecibo (U.S.) and Jicamarca (Peru), a new model relation between temperature and density is proposed for daylight hours. The relation depends on altitude and the modified magnetic dip latitude.
New descriptive temperature model
International Nuclear Information System (INIS)
Bilitza, D.
1982-01-01
The model profiles of the electron and ion temperature that have been proposed in connection with the International Reference Ionosphere (IRI) are surveyed, with a review given of the available data base. Plasma density is seen as exerting great influence, at least during daytime. It does not, however, appear to be appropriate for deriving the temperature unambiguously from the density value. On the basis of a comparison of measured data from the AE-C and Aeros-B satellites and incoherent backscatter stations Millstone Hill and Arecibo (U.S.) and Jicamarca (Peru), a new model relation between temperature and density is proposed for daylight hours. The relation depends on altitude and the modified magnetic dip latitude
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping contains a set of statistical techniques that detail maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measure the relative risk of the disease is called Standardized Morbidity Ratio (SMR). It is the ratio of an observed and expected number of accounts in an area, which has the greatest uncertainty if the disease is rare or if geographical area is small. Therefore, Bayesian models or statistical smoothing based on Log-normal model are introduced which might solve SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and log-normal model, which were fitted to data using WinBUGS software. This study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method. The log-normal model has can overcome the SMR problem when there is no observed bladder cancer in an area.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Bayesian log-periodic model for financial crashes
Rodríguez-Caballero, Carlos Vladimir; Knapik, Oskar
2014-10-01
This paper introduces a Bayesian approach in econophysics literature about financial bubbles in order to estimate the most probable time for a financial crash to occur. To this end, we propose using noninformative prior distributions to obtain posterior distributions. Since these distributions cannot be performed analytically, we develop a Markov Chain Monte Carlo algorithm to draw from posterior distributions. We consider three Bayesian models that involve normal and Student's t-distributions in the disturbances and an AR(1)-GARCH(1,1) structure only within the first case. In the empirical part of the study, we analyze a well-known example of financial bubble - the S&P 500 1987 crash - to show the usefulness of the three methods under consideration and crashes of Merval-94, Bovespa-97, IPCMX-94, Hang Seng-97 using the simplest method. The novelty of this research is that the Bayesian models provide 95% credible intervals for the estimated crash time.
Estimating the weight of Douglas-fir tree boles and logs with an iterative computer model.
Dale R. Waddell; Dale L Weyermann; Michael B. Lambert
1987-01-01
A computer model that estimates the green weights of standing trees was developed and validated for old-growth Douglas-fir. The model calculates the green weight for the entire bole, for the bole to any merchantable top, and for any log length within the bole. The model was validated by estimating the bias and accuracy of an independent subsample selected from the...
Olguin, Carlos José Maria; Sampaio, Silvio César; Dos Reis, Ralpho Rinaldo
2017-10-01
The soil sorption coefficient normalized to the organic carbon content (K oc ) is a physicochemical parameter used in environmental risk assessments and in determining the final fate of chemicals released into the environment. Several models for predicting this parameter have been proposed based on the relationship between log K oc and log P. The difficulty and cost of obtaining experimental log P values led to the development of algorithms to calculate these values, some of which are free to use. However, quantitative structure-property relationship (QSPR) studies did not detail how or why a particular algorithm was chosen. In this study, we evaluated several free algorithms for calculating log P in the modeling of log K oc , using a broad and diverse set of compounds (n = 639) that included several chemical classes. In addition, we propose the adoption of a simple test to verify if there is statistical equivalence between models obtained using different data sets. Our results showed that the ALOGPs, KOWWIN and XLOGP3 algorithms generated the best models for modeling K oc , and these models are statistically equivalent. This finding shows that it is possible to use the different algorithms without compromising statistical quality and predictive capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Log Logistic Survival Model Applied to Hypobaric Decompression Sickness
Conkin, Johnny
2001-01-01
Decompression sickness (DCS) is a complex, multivariable problem. A mathematical description or model of the likelihood of DCS requires a large amount of quality research data, ideas on how to define a decompression dose using physical and physiological variables, and an appropriate analytical approach. It also requires a high-performance computer with specialized software. I have used published DCS data to develop my decompression doses, which are variants of equilibrium expressions for evolved gas plus other explanatory variables. My analytical approach is survival analysis, where the time of DCS occurrence is modeled. My conclusions can be applied to simple hypobaric decompressions - ascents lasting from 5 to 30 minutes - and, after minutes to hours, to denitrogenation (prebreathing). They are also applicable to long or short exposures, and can be used whether the sufferer of DCS is at rest or exercising at altitude. Ultimately I would like my models to be applied to astronauts to reduce the risk of DCS during spacewalks, as well as to future spaceflight crews on the Moon and Mars.
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
Czech Academy of Sciences Publication Activity Database
Šafanda, Jan; Heidinger, P.; Wilhelm, H.; Čermák, Vladimír
2005-01-01
Roč. 2, č. 4 (2005), s. 326-331 ISSN 1742-2132 R&D Projects: GA ČR GA205/03/0997; GA MŠk LA 150 Grant - others:Deutsche Forschungsgemeinschaft(DE) WI 687/17-1,2,3 Institutional research plan: CEZ:AV0Z30120515 Keywords : temperature logging * karst formation * Chicxulub Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 0.860, year: 2005
Bayesian log-periodic model for financial crashes
DEFF Research Database (Denmark)
Rodríguez-Caballero, Carlos Vladimir; Knapik, Oskar
2014-01-01
This paper introduces a Bayesian approach in econophysics literature about financial bubbles in order to estimate the most probable time for a financial crash to occur. To this end, we propose using noninformative prior distributions to obtain posterior distributions. Since these distributions...... cannot be performed analytically, we develop a Markov Chain Monte Carlo algorithm to draw from posterior distributions. We consider three Bayesian models that involve normal and Student’s t-distributions in the disturbances and an AR(1)-GARCH(1,1) structure only within the first case. In the empirical...... part of the study, we analyze a well-known example of financial bubble – the S&P 500 1987 crash – to show the usefulness of the three methods under consideration and crashes of Merval-94, Bovespa-97, IPCMX-94, Hang Seng-97 using the simplest method. The novelty of this research is that the Bayesian...
Directory of Open Access Journals (Sweden)
Hea-Jung Kim
2017-06-01
Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT models with stochastic (or uncertain constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT models (such as log-normal, log-Cauchy, and log-logistic FT models as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt prior, which elicits a priori uncertain constraint and develops Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
Minimizing bias in biomass allometry: Model selection and log transformation of data
Joseph Mascaro; undefined undefined; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer
2011-01-01
Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the raditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....
Rapid Processing of Turner Designs Model 10-Au-005 Internally Logged Fluorescence Data
Continuous recording of dye fluorescence using field fluorometers at selected sampling sites facilitates acquisition of real-time dye tracing data. The Turner Designs Model 10-AU-005 field fluorometer allows for frequent fluorescence readings, data logging, and easy downloading t...
Kooten, van G.C.; Johnston, C.
2014-01-01
Forest product trade analysis is complicated by the inter-relationships among forest products. This paper deals with the development and application of an integrated log-lumber trade model that divides the globe into 20 regions. These regions play a significant role as producers and/or consumers of
Characterisation of non-Gaussian fluctuations in multiplicative log-normal models
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2007-07-01
Within the general framework of multiplicative log-normal models, we propose methods to characterise non-Gaussian and intermittent fluctuations, and study basic characteristics of non-Gaussian stochastic processes displaying slow convergence to a Gaussian with an increasing coarse-grained level of the time series. Here the multiplicative log-normal model stands for a stochastic process described by the multiplication of Gaussian and log-normally distributed variables. In other words, using two Gaussian variables, ξ and ω, the time series {xi} of this process can be described as xi = ξi expωi. Depending on the variance of ω, λ2, the probability density function (PDF) of x exhibits a non-Gaussian shape. As the non-Gaussianity parameter λ2 increases, the non-Gaussian tails become fatter. On the other hand, when λ2 → 0, the PDF converges to a Gaussian distribution. For the purpose of estimating the non-Gaussianity parameter λ2 from the observed time series, we evaluate a novel method based on analytical expressions of the absolute moments for the multiplicative log-normal models.
Large ground warming in the Canadian Arctic inferred from inversions of temperature logs
Czech Academy of Sciences Publication Activity Database
Majorowicz, J. A.; Skinner, W. R.; Šafanda, Jan
2004-01-01
Roč. 221, č. 1 (2004), s. 15-25 ISSN 0012-821X Institutional research plan: CEZ:AV0Z3012916 Keywords : global warming * borehole temperatures * ground temperatures Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 3.499, year: 2004
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
Temperature dependence of scintillation properties of bright oxide scintillators for well-logging
Czech Academy of Sciences Publication Activity Database
Yanagida, T.; Fujimoto, Y.; Kurosawa, S.; Kamada, K.; Takahashi, H.; Fukazawa, Y.; Nikl, Martin; Chani, V.
2013-01-01
Roč. 52, č. 7 (2013), "076401-1"-"076401-6" ISSN 0021-4922 Institutional support: RVO:68378271 Keywords : scintillator * high temperature * light yield Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.057, year: 2013
Sensitivity of Population Size Estimation for Violating Parametric Assumptions in Log-linear Models
Directory of Open Access Journals (Sweden)
Gerritse Susanna C.
2015-09-01
Full Text Available An important quality aspect of censuses is the degree of coverage of the population. When administrative registers are available undercoverage can be estimated via capture-recapture methodology. The standard approach uses the log-linear model that relies on the assumption that being in the first register is independent of being in the second register. In models using covariates, this assumption of independence is relaxed into independence conditional on covariates. In this article we describe, in a general setting, how sensitivity analyses can be carried out to assess the robustness of the population size estimate. We make use of log-linear Poisson regression using an offset, to simulate departure from the model. This approach can be extended to the case where we have covariates observed in both registers, and to a model with covariates observed in only one register. The robustness of the population size estimate is a function of implied coverage: as implied coverage is low the robustness is low. We conclude that it is important for researchers to investigate and report the estimated robustness of their population size estimate for quality reasons. Extensions are made to log-linear modeling in case of more than two registers and the multiplier method
Wang, Bin-Bin; Liu, Cai-Gang; Lu, Ping; Latengbaolide, A; Lu, Yang
2011-06-21
To investigate the efficiency of Cox proportional hazard model in detecting prognostic factors for gastric cancer. We used the log-normal regression model to evaluate prognostic factors in gastric cancer and compared it with the Cox model. Three thousand and eighteen gastric cancer patients who received a gastrectomy between 1980 and 2004 were retrospectively evaluated. Clinic-pathological factors were included in a log-normal model as well as Cox model. The akaike information criterion (AIC) was employed to compare the efficiency of both models. Univariate analysis indicated that age at diagnosis, past history, cancer location, distant metastasis status, surgical curative degree, combined other organ resection, Borrmann type, Lauren's classification, pT stage, total dissected nodes and pN stage were prognostic factors in both log-normal and Cox models. In the final multivariate model, age at diagnosis, past history, surgical curative degree, Borrmann type, Lauren's classification, pT stage, and pN stage were significant prognostic factors in both log-normal and Cox models. However, cancer location, distant metastasis status, and histology types were found to be significant prognostic factors in log-normal results alone. According to AIC, the log-normal model performed better than the Cox proportional hazard model (AIC value: 2534.72 vs 1693.56). It is suggested that the log-normal regression model can be a useful statistical model to evaluate prognostic factors instead of the Cox proportional hazard model.
Energy Technology Data Exchange (ETDEWEB)
Anderson, David W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
1993-12-15
Two logs of EE-3A were performed during the last couple of weeks. The first of which, was a Temperature/Casing-Collar Locator (CCL) log, which took place on Friday, December 10th., 1993. The second log was a Caliper log which was done in cooperation with the Dia-Log Company, of Odessa, TX. on Monday, December, 13th., 1993.
Wu, Wenting; Grana, Dario
2017-11-01
Rock and fluid volumetric properties, such as porosity, saturation, and mineral volumes, are generally estimated from petrophysical measurements such as density, resistivity, neutron porosity and gamma ray, through petrophysical equations. The computed petrophysical properties and sonic log measurements are generally used to estimate the petro-elastic relationship between elastic and rock and fluid volumetric properties used in reservoir characterization. In this paper, we present a unified workflow that includes petrophysical relations and rock physics models for the estimation of rock and fluid properties from elastic, electrical, and petrophysical (porosity, density, and lithology) measurements. The multi-physics model we propose has the advantage of accounting for the coupled effect of rock and fluid properties in the joint petro-elastic and electrical domains, and potentially reduce the uncertainty in the well log interpretation. Furthermore, the presented workflow can be eventually extended to three-dimensional reservoir characterization problems, where seismic and electromagnetic data are available. To demonstrate the validity of the methodology, we show the application of this multi-physics model to both laboratory measurements and well log data.
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
DEFF Research Database (Denmark)
Larsen, Finn; Ormarsson, Sigurdur
2014-01-01
to a moisture content (MC) of 18% before TSt tests at 20°C, 60°C, and 90°C were carried out. The maximum stress results of the disc simulations by FEM were compared with the experimental strength results at the same temperature levels. There is a rather good agreement between the results of modeling......Timber is normally dried by kiln drying, in the course of which moisture-induced stresses and fractures can occur. Cracks occur primarily in the radial direction due to tangential tensile strength (TSt) that exceeds the strength of the material. The present article reports on experiments...... and numerical simulations by finite element modeling (FEM) concerning the TSt and fracture behavior of Norway spruce under various climatic conditions. Thin log disc specimens were studied to simplify the description of the moisture flow in the samples. The specimens designed for TS were acclimatized...
Directory of Open Access Journals (Sweden)
Fatahillah Yosar
2017-01-01
Full Text Available Ngimbang Formation is known as one major source of hydrocarbon supply in the North Eastern Java Basin. Aged Mid-Eocene, Ngimbang is dominated by sedimentary clastic rocks mostly shale, shaly sandstone, and thick layers of limestone (CD Limestone, with thin layers of coal. Although, laboratory analyses show the Ngimbang Formation to be a relatively rich source-rocks, such data are typically too limited to regionally quantify the distribution of organic matter. To adequately sample the formation both horizontally and vertically on a basin–wide scale, large number of costly and time consuming laboratory analyses would be required. Such analyses are prone to errors from a number of sources, and core data are frequently not available at key locations. In this paper, the authors established four TOC (Total Organic Carbon Content logging calculation models; Passey, Schmoker-Hester, Meyer-Nederloff, and Decker/Density Model by considering the geology of Ngimbang. Well data along with its available core data was used to determine the most suitable model to be applied in the well AFA-1, as well as to compare the accuracy of these TOC model values. The result shows good correlation using Decker (TOC Model and Mallick-Raju (Ro- Vitrinite Reflectance Model. Two source rocks potential zones were detected by these log models.
Mathematical model of gamma-ray spectrometry borehole logging for quantitative analysis
Schimschal, Ulrich
1981-01-01
A technique for analyzing gamma-ray spectral-logging data has been developed, in which a digital computer is used to calculate the effects of gamma-ray attentuation in a borehole environment. The computer model allows for the calculation of the effects of lithology, porosity, density, and the thickness of a horizontal layer of uniformly distributed radioactive material surrounding a centralized probe in a cylindrical borehole. The computer program also contains parameters for the calculation of the effects of well casing, drilling fluid, probe housing, and losses through the sodium-iodide crystal. Errors associated with the commonly used mathematical assumption of a point detector are eliminated in this model. (USGS)
STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION
Directory of Open Access Journals (Sweden)
Oleg V. Rusakov
2015-01-01
Full Text Available We construct a stochastic model of real estate pricing. The method of the pricing construction is based on a sequential comparison of the supply prices. We proof that under standard assumptions imposed upon the comparison coefficients there exists an unique non-degenerated limit in distribution and this limit has the lognormal law of distribution. The accordance of empirical distributions of prices to thetheoretically obtained log-normal distribution we verify by numerous statistical data of real estate prices from Saint-Petersburg (Russia. For establishing this accordance we essentially apply the efficient and sensitive test of fit of Kolmogorov-Smirnov. Basing on “The Russian Federal Estimation Standard N2”, we conclude that the most probable price, i.e. mode of distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of log-normal distribution exceeds the mode - most probable value, it follows that the prices valued by the mathematical expectation are systematically overstated.
Anti Rohumaa; Akio Yamamoto; Christopher G. Hunt; Charles R. Frihart; Mark Hughes; Jaan Kers
2016-01-01
Heating logs prior to peeling positively affects the surface properties of veneer as well as the wood-adhesive bond strength. However, the mechanism behind this increase in strength is not fully understood. The aim of the present study was to separate the influence of soaking temperature and peeling temperature on the physical surface properties and bonding quality....
dos Reis, Ralpho Rinaldo; Sampaio, Silvio César; de Melo, Eduardo Borges
2013-10-01
Collecting data on the effects of pesticides on the environment is a slow and costly process. Therefore, significant efforts have been focused on the development of models that predict physical, chemical or biological properties of environmental interest. The soil sorption coefficient normalized to the organic carbon content (Koc) is a key parameter that is used in environmental risk assessments. Thus, several log Koc prediction models that use the hydrophobic parameter log P as a descriptor have been reported in the literature. Often, algorithms are used to calculate the value of log P due to the lack of experimental values for this property. Despite the availability of various algorithms, previous studies fail to describe the procedure used to select the appropriate algorithm. In this study, models that correlate log Koc with log P were developed for a heterogeneous group of nonionic pesticides using different freeware algorithms. The statistical qualities and predictive power of all of the models were evaluated. Thus, this study was conducted to assess the effect of the log P algorithm choice on log Koc modeling. The results clearly demonstrate that the lack of a selection criterion may result in inappropriate prediction models. Seven algorithms were tested, of which only two (ALOGPS and KOWWIN) produced good results. A sensible choice may result in simple models with statistical qualities and predictive power values that are comparable to those of more complex models. Therefore, the selection of the appropriate log P algorithm for modeling log Koc cannot be arbitrary but must be based on the chemical structure of compounds and the characteristics of the available algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
Log-linear model-based multifactor dimensionality reduction method to detect gene gene interactions.
Lee, Seung Yeoun; Chung, Yujin; Elston, Robert C; Kim, Youngchul; Park, Taesung
2007-10-01
The identification and characterization of susceptibility genes that influence the risk of common and complex diseases remains a statistical and computational challenge in genetic association studies. This is partly because the effect of any single genetic variant for a common and complex disease may be dependent on other genetic variants (gene-gene interaction) and environmental factors (gene-environment interaction). To address this problem, the multifactor dimensionality reduction (MDR) method has been proposed by Ritchie et al. to detect gene-gene interactions or gene-environment interactions. The MDR method identifies polymorphism combinations associated with the common and complex multifactorial diseases by collapsing high-dimensional genetic factors into a single dimension. That is, the MDR method classifies the combination of multilocus genotypes into high-risk and low-risk groups based on a comparison of the ratios of the numbers of cases and controls. When a high-order interaction model is considered with multi-dimensional factors, however, there may be many sparse or empty cells in the contingency tables. The MDR method cannot classify an empty cell as high risk or low risk and leaves it as undetermined. In this article, we propose the log-linear model-based multifactor dimensionality reduction (LM MDR) method to improve the MDR in classifying sparse or empty cells. The LM MDR method estimates frequencies for empty cells from a parsimonious log-linear model so that they can be assigned to high-and low-risk groups. In addition, LM MDR includes MDR as a special case when the saturated log-linear model is fitted. Simulation studies show that the LM MDR method has greater power and smaller error rates than the MDR method. The LM MDR method is also compared with the MDR method using as an example sporadic Alzheimer's disease.
Etalle, Sandro; Massacci, Fabio; Yautsiukhin, Artsiom; Lambrinoudakis, Costas; Pernul, Günther; Tjoa, A Min
While logging events is becoming increasingly common in computing, in communication and in collaborative environments, log systems need to satisfy increasingly challenging (if not conflicting) requirements. In this paper we propose a high-level framework for modeling log systems, and reasoning about
Fantazzini, Dean; Geraskin, Petr
2011-01-01
Sornette et al. (1996), Sornette and Johansen (1997), Johansen et al. (2000) and Sornette (2003a) proposed that, prior to crashes, the mean function of a stock index price time series is characterized by a power law decorated with log-periodic oscillations, leading to a critical point that describes the beginning of the market crash. This paper reviews the original Log-Periodic Power Law (LPPL) model for financial bubble modelling, and discusses early criticism and recent generalizations prop...
Technology diffusion in hospitals: a log odds random effects regression model.
Blank, Jos L T; Valdmanis, Vivian G
2015-01-01
This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to describe the diffusion of technologies. We distinguish a number of determinants, such as service, physician, and environmental, financial and organizational characteristics of the 60 Dutch hospitals in our sample. On the basis of this data set on Dutch general hospitals over the period 1995-2002, we conclude that there is a relation between a number of determinants and the diffusion of innovations underlining conclusions from earlier research. Positive effects were found on the basis of the size of the hospitals, competition and a hospital's commitment to innovation. It appears that if a policy is developed to further diffuse innovations, the external effects of demand and market competition need to be examined, which would de facto lead to an efficient use of technology. For the individual hospital, instituting an innovations office appears to be the most prudent course of action. © 2013 The Authors. International Journal of Health Planning and Management published by John Wiley & Sons, Ltd.
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population based optimization technique inspired by social behavior of fishes. In past several years, AFSA has been successfully applied in many research and application areas. The behavior of fishes has a crucial impact on the performance of AFSA, such as global exploration ability and convergence speed. How to construct and select behaviors of fishes are an important task. To solve these problems, an improved artificial fish swarm algorithm based on log-linear model is proposed and implemented in this paper. There are three main works. Firstly, we proposed a new behavior selection algorithm based on log-linear model which can enhance decision making ability of behavior selection. Secondly, adaptive movement behavior based on adaptive weight is presented, which can dynamically adjust according to the diversity of fishes. Finally, some new behaviors are defined and introduced into artificial fish swarm algorithm at the first time to improve global optimization capability. The experiments on high dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities
International Nuclear Information System (INIS)
Waite, D.A.; Denham, D.H.
1975-01-01
The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be sifnificant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and facility general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given these
Zero temperature landscape of the random sine-Gordon model
International Nuclear Information System (INIS)
Sanchez, A.; Bishop, A.R.; Cai, D.
1997-01-01
We present a preliminary summary of the zero temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed by using lattices of moderate size as the behavior of the model turns out to be independent of the size above certain length (∼ 128 x 128 lattices). Subsequently, we show that the behavior of the height difference correlation function is of (log r) 2 type up to a certain correlation length (ξ ∼ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of very many (contradictory) analysis
Energy Technology Data Exchange (ETDEWEB)
Seamount, D.T. Jr.; Elders, W.A.
1981-01-01
Downhole electrical and gamma-gamma density logs from nine wells weere studed and these wireline log parameters with petrologic, temperature, and petrophysical data were correlated. Here, wells M-43, T-366, and M-107 are discussed in detail as typical cases. Log data for shales show good correlation with four zones of hydrothermal alteration previously recognized on the basis of characteristic mineral assemblages and temperatures. These zones are the unaltered montmorillonite zone (< 150/sup 0/C), the illite zone (150/sup 0/C to 230/sup 0/C to 245/sup 0/C), the chlorite zone (235/sup 0/C to 300/sup 0/C, equivalent to the calc-silicate I zone in sands), and the feldspar zone (> 300/sup 0/C, equivalent to the calc-silicate II zone in sands),
Directory of Open Access Journals (Sweden)
Hao Xu
2016-01-01
Full Text Available The tight gas reservoir in the fifth member of the Xujiahe formation contains heterogeneous interlayers of sandstone and shale that are low in both porosity and permeability. Elastic characteristics of sandstone and shale are analyzed in this study based on petrophysics tests. The tests indicate that sandstone and mudstone samples have different stress-strain relationships. The rock tends to exhibit elastic-plastic deformation. The compressive strength correlates with confinement pressure and elastic modulus. The results based on thin-bed log interpretation match dynamic Young’s modulus and Poisson’s ratio predicted by theory. The compressive strength is calculated from density, elastic impedance, and clay contents. The tensile strength is calibrated using compressive strength. Shear strength is calculated with an empirical formula. Finally, log interpretation of rock mechanical properties is performed on the fifth member of the Xujiahe formation. Natural fractures in downhole cores and rock microscopic failure in the samples in the cross section demonstrate that tensile fractures were primarily observed in sandstone, and shear fractures can be observed in both mudstone and sandstone. Based on different elasticity and plasticity of different rocks, as well as the characteristics of natural fractures, a fracture propagation model was built.
Taslimitehrani, Vahid; Dong, Guozhu; Pereira, Naveen L; Panahiazar, Maryam; Pathak, Jyotishman
2016-04-01
Computerized survival prediction in healthcare identifying the risk of disease mortality, helps healthcare providers to effectively manage their patients by providing appropriate treatment options. In this study, we propose to apply a classification algorithm, Contrast Pattern Aided Logistic Regression (CPXR(Log)) with the probabilistic loss function, to develop and validate prognostic risk models to predict 1, 2, and 5year survival in heart failure (HF) using data from electronic health records (EHRs) at Mayo Clinic. The CPXR(Log) constructs a pattern aided logistic regression model defined by several patterns and corresponding local logistic regression models. One of the models generated by CPXR(Log) achieved an AUC and accuracy of 0.94 and 0.91, respectively, and significantly outperformed prognostic models reported in prior studies. Data extracted from EHRs allowed incorporation of patient co-morbidities into our models which helped improve the performance of the CPXR(Log) models (15.9% AUC improvement), although did not improve the accuracy of the models built by other classifiers. We also propose a probabilistic loss function to determine the large error and small error instances. The new loss function used in the algorithm outperforms other functions used in the previous studies by 1% improvement in the AUC. This study revealed that using EHR data to build prediction models can be very challenging using existing classification methods due to the high dimensionality and complexity of EHR data. The risk models developed by CPXR(Log) also reveal that HF is a highly heterogeneous disease, i.e., different subgroups of HF patients require different types of considerations with their diagnosis and treatment. Our risk models provided two valuable insights for application of predictive modeling techniques in biomedicine: Logistic risk models often make systematic prediction errors, and it is prudent to use subgroup based prediction models such as those given by CPXR(Log
Hurwitz, S.; Farrar, C.D.; Williams, C.F.
2010-01-01
Long Valley Caldera in eastern California formed 0.76Ma ago in a cataclysmic eruption that resulted in the deposition of 600km3 of Bishop Tuff. The total current heat flow from the caldera floor is estimated to be ~290MW, and a geothermal power plant in Casa Diablo on the flanks of the resurgent dome (RD) generates ~40MWe. The RD in the center of the caldera was uplifted by ~80cm between 1980 and 1999 and was explained by most models as a response to magma intrusion into the shallow crust. This unrest has led to extensive research on geothermal resources and volcanic hazards in the caldera. Here we present results from precise, high-resolution, temperature-depth profiles in five deep boreholes (327-1,158m) on the RD to assess its thermal state, and more specifically 1) to provide bounds on the advective heat transport as a guide for future geothermal exploration, 2) to provide constraints on the occurrence of magma at shallow crustal depths, and 3) to provide a baseline for future transient thermal phenomena in response to large earthquakes, volcanic activity, or geothermal production. The temperature profiles display substantial non-linearity within each profile and variability between the different profiles. All profiles display significant temperature reversals with depth and temperature gradients temperature in the individual boreholes ranges between 124.7??C and 129.5??C and bottom hole temperatures range between 99.4??C and 129.5??C. The high-temperature units in the three Fumarole Valley boreholes are at the approximate same elevation as the high-temperature unit in borehole M-1 in Casa Diablo indicating lateral or sub-lateral hydrothermal flow through the resurgent dome. Small differences in temperature between measurements in consecutive years in three of the wells suggest slow cooling of the shallow hydrothermal flow system. By matching theoretical curves to segments of the measured temperature profiles, we calculate horizontal groundwater velocities in
Lee, S. K.; Lee, Y.; Lee, C.
2016-12-01
Estimation of deep temperature is significant procedure for exploration, development and sustainable use of geothermal resources in the geothermal area. For estimating subsurface temperature, there have been suggested many techniques for indirect geothermometers, such as mineral geothermometer, hydrochemical geothermometer, isotropic geothermometer, electromagnetic (EM) geothermometer and so forth. In this study, we have tested the feasibility of EM geothermometer using integrated frameworks of geothermal and geo-electromagnetic models. For this purpose, we have developed geothermal temperature model together with EM model based on common earth model, which satisfies all observed geoscientific data set including surface geology, structural geology, well log data, and geophysical data. We develop a series of plugin modules for integration of geo-electromagnetic modeling and inversion algorithms on a common geological modeling platform. The subsurface temperature with time are modeled by solving heat transfer equations using finite element method (FEM). The temperature dependent conductivity model are obtained by the temperature-conductivity relations to perform geo-electromagnetic modeling, such as magnetotelluric to analyze temperature model from EM data.
Rodríguez-Barranco, Miguel; Tobías, Aurelio; Redondo, Daniel; Molina-Portillo, Elena; Sánchez, María José
2017-03-17
Meta-analysis is very useful to summarize the effect of a treatment or a risk factor for a given disease. Often studies report results based on log-transformed variables in order to achieve the principal assumptions of a linear regression model. If this is the case for some, but not all studies, the effects need to be homogenized. We derived a set of formulae to transform absolute changes into relative ones, and vice versa, to allow including all results in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables either normally or asymmetrically distributed. In all the scenarios, and based on different change criteria, the effect size estimated by the derived set of formulae was equivalent to the real effect size. To avoid biased estimates of the effect, this procedure should be used with caution in the case of independent variables with asymmetric distributions that significantly differ from the normal distribution. We illustrate an application of this procedure by an application to a meta-analysis on the potential effects on neurodevelopment in children exposed to arsenic and manganese. The procedure proposed has been shown to be valid and capable of expressing the effect size of a linear regression model based on different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations had been performed on the dependent and/or independent variables.
Directory of Open Access Journals (Sweden)
Miguel Rodríguez-Barranco
2017-03-01
Full Text Available Abstract Background Meta-analysis is very useful to summarize the effect of a treatment or a risk factor for a given disease. Often studies report results based on log-transformed variables in order to achieve the principal assumptions of a linear regression model. If this is the case for some, but not all studies, the effects need to be homogenized. Methods We derived a set of formulae to transform absolute changes into relative ones, and vice versa, to allow including all results in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables either normally or asymmetrically distributed. Results In all the scenarios, and based on different change criteria, the effect size estimated by the derived set of formulae was equivalent to the real effect size. To avoid biased estimates of the effect, this procedure should be used with caution in the case of independent variables with asymmetric distributions that significantly differ from the normal distribution. We illustrate an application of this procedure by an application to a meta-analysis on the potential effects on neurodevelopment in children exposed to arsenic and manganese. Conclusions The procedure proposed has been shown to be valid and capable of expressing the effect size of a linear regression model based on different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations had been performed on the dependent and/or independent variables.
International Nuclear Information System (INIS)
Souza, Edmilson Monteiro de; Silva, Ademir Xavier da; Lopes, Ricardo T.; Correa, Samanda Cristine Arruda; Rocha, Paula L.F.
2011-01-01
This paper evaluates the absorbed dose and the effective dose on operators during the petroleum well logging with nuclear wireless that uses gamma radiation sources. To obtain the data, a typical scenery of a logging procedure will be simulated with MCNPX Monte Carlo code. The simulated logging probe was the Density Gamma Probe - TRISOND produced by Robertson Geolloging. The absorbed dose values were estimated through the anthropomorphic simulator in male voxel MAX. The effective dose values were obtained using the ICRP 103
Applications of a stump-to-mill computer model to cable logging planning
Chris B. LeDoux
1986-01-01
Logging cost simulators and data from logging cost studies have been assembled and converted into a series of simple equations that can be used to estimate the stump-to-mill cost of cable logging in mountainous terrain of the Eastern United States. These equations are based on the use of two small and four medium-sized cable yarders and are applicable for harvests of...
TENVERGERT, E; GILLESPIE, M; KINGMA, J
This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total
Modelling water temperature in TOXSWA
Jacobs, C.M.J.; Deneer, J.W.; Adriaanse, P.I.
2010-01-01
A reasonably accurate estimate of the water temperature is necessary for a good description of the degradation of plant protection products in water which is used in the surface water model TOXSWA. Based on a consideration of basic physical processes that describe the influence of weather on the
A stable and robust calibration scheme of the log-periodic power law model
Filimonov, V.; Sornette, D.
2013-09-01
We present a simple transformation of the formulation of the log-periodic power law formula of the Johansen-Ledoit-Sornette (JLS) model of financial bubbles that reduces it to a function of only three nonlinear parameters. The transformation significantly decreases the complexity of the fitting procedure and improves its stability tremendously because the modified cost function is now characterized by good smooth properties with in general a single minimum in the case where the model is appropriate to the empirical data. We complement the approach with an additional subordination procedure that slaves two of the nonlinear parameters to the most crucial nonlinear parameter, the critical time tc, defined in the JLS model as the end of the bubble and the most probable time for a crash to occur. This further decreases the complexity of the search and provides an intuitive representation of the results of the calibration. With our proposed methodology, metaheuristic searches are not longer necessary and one can resort solely to rigorous controlled local search algorithms, leading to a dramatic increase in efficiency. Empirical tests on the Shanghai Composite index (SSE) from January 2007 to March 2008 illustrate our findings.
Celsie, Alena; Parnis, J Mark; Mackay, Donald
2016-03-01
The effects of temperature, pH, and salinity change on naphthenic acids (NAs) present in oil-sands process wastewater were modeled for 55 representative NAs. COSMO-RS was used to estimate octanol-water (KOW) and octanol-air (KOA) partition ratios and Henry's law constants (H). Validation with experimental carboxylic acid data yielded log KOW and log H RMS errors of 0.45 and 0.55 respectively. Calculations of log KOW, (or log D, for pH-dependence), log KOA and log H (or log HD, for pH-dependence) were made for model NAs between -20 °C and 40 °C, pH between 0 and 14, and salinity between 0 and 3 g NaCl L(-1). Temperature increase by 60 °C resulted in 3-5 log unit increase in H and a similar magnitude decrease in KOA. pH increase above the NA pKa resulted in a dramatic decrease in both log D and log HD. Salinity increase over the 0-3 g NaCl L(-1) range resulted in a 0.3 log unit increase on average for KOW and H values. Log KOW values of the sodium salt and anion of the conjugate base were also estimated to examine their potential for contribution to the overall partitioning of NAs. Sodium salts and anions of naphthenic acids are predicted to have on average 4 log units and 6 log units lower log KOW values, respectively, with respect to the corresponding neutral NA. Partitioning properties are profoundly influenced by the by the relative prevailing pH and the substance's pKa at the relevant temperature. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm
2016-01-01
Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…
Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.
2017-08-01
Markov chain and 3-dimensional log-linear models were attempted to model drought class transitions derived from the newly developed drought index the Standardized Precipitation Evapotranspiration Index (SPEI) at a 12 month time scale for six major drought prone areas of India. Log-linear modelling approach has been used to investigate differences relative to drought class transitions using SPEI-12 time series derived form 48 yeas monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1, 2 or 3 months ahead prediction of average transition time between drought classes and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains which could be used to explain the effect of the potential retreat of drought. Quasi-association and Quasi-symmetry log-linear models have been fitted to the drought class transitions derived from SPEI-12 time series. The estimates of odds along with their confidence intervals were obtained to explain the progression of drought and estimation of drought class transition probabilities. For initial months as the drought severity increases the calculated odds shows lower value and the odds decreases for the succeeding months. This indicates that the ratio of expected frequencies of occurrence of transition from drought class to the non-drought class decreases as compared to transition to any drought class when the drought severity of the present class increases. From 3-dimensional log-linear model it is clear that during the last 24 years the drought probability has increased for almost all the six regions. The findings from the present study will immensely help to assess the impact of drought on the gross primary production and to develop future contingent planning in similar regions worldwide.
Density gamma gamma logging of oil wells
International Nuclear Information System (INIS)
Gulin, Yu.A.
1974-01-01
The application of gamma-gamma density logging for the evaluation of the volume weight and porosity of terrigenous and carbonate rocks in oil and gas boreholes is discussed. A two-probe (155 and 360 mm) apparatus has been developed for this purpose and has been in serial production since 1970. It is designed for use in boreholes between 190 and 300 mm in diameter and down to 4.000 metres deep at a maximum temperature of up to 120 deg C. The radiation source is 137 Cs with an activity of up to 100 kg-eq Ra. To interpret the results, measuring grids have been compiled in accordance with the experimental measurements taken on models of the strata. For carbonate sections combination of gamma-gamma density logging and epithermal-neutron-neutron logging is recommended. A combination of gamma-gamma density logging and neutron-gamma logging is used to evaluate the clayness of terrigenous deposits
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
Anti Rohumaa; Toni Antikainen; Christopher G. Hunt; Charles R. Frihart; Mark Hughes
2016-01-01
Wood material surface properties play an important role in adhesive bond formation and performance. In the present study, a test method was developed to evaluate the integrity of the wood surface, and the results were used to understand bond performance. Materials used were rotary cut birch (Betula pendula Roth) veneers, produced from logs soaked at 20 or 70 Â°C prior...
Czech Academy of Sciences Publication Activity Database
Majorowicz, J.; Šafanda, Jan; Wróblewska, M.; Szewczyk, J.; Čermák, Vladimír
2008-01-01
Roč. 97, č. 2 (2008), s. 307-315 ISSN 1437-3254 Grant - others:Polish Ministry of Science(PL) 4 T12B 00429 Institutional research plan: CEZ:AV0Z30120515 Source of funding: V - iné verejné zdroje Keywords : equilibrium deep temperature logs * heat flow variation with depth * Northwestern Poland Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 1.970, year: 2008
Log-rise of the resistivity in the holographic Kondo model
Padhi, Bikash; Tiwari, Apoorv; Setty, Chandan; Phillips, Philip W.
2018-03-01
We study a single-channel Kondo effect using a recently developed [1-4] holographic large-N technique. In order to obtain resistivity of this model, we introduce a probe field. The gravity dual of a localized fermionic impurity in 1 +1 -dimensional host matter is constructed by embedding a localized two-dimensional Anti-de Sitter (AdS2 )-brane in the bulk of three-dimensional AdS3 . This helps us construct an impurity charge density which acts as a source to the bulk equation of motion of the probe gauge field. The functional form of the charge density is obtained independently by solving the equations of motion for the fields confined to the AdS2 -brane. The asymptotic solution of the probe field is dictated by the impurity charge density, which in turn affects the current-current correlation functions and hence the resistivity. Our choice of parameters tunes the near-boundary impurity current to be marginal, resulting in a log T behavior in the UV resistivity, as is expected for the Kondo problem. The resistivity at the IR fixed point turns out to be zero, signaling a complete screening of the impurity.
International Nuclear Information System (INIS)
Zazula, J.M.
1988-01-01
The self-learning Monte Carlo technique has been implemented to the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution are optimized. The learning process is iteratively performed after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walk. Further applications for modeling of the nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)
Longo, M.; Keller, M.; Scaranello, M. A., Sr.; dos-Santos, M. N.; Xu, Y.; Huang, M.; Morton, D. C.
2017-12-01
Logging and understory fires are major drivers of tropical forest degradation, reducing carbon stocks and changing forest structure, composition, and dynamics. In contrast to deforested areas, sites that are disturbed by logging and fires retain some, albeit severely altered, forest structure and function. In this study we simulated selective logging using the Ecosystem Demography Model (ED-2) to investigate the impact of a broad range of logging techniques, harvest intensities, and recurrence cycles on the long-term dynamics of Amazon forests, including the magnitude and duration of changes in forest flammability following timber extraction. Model results were evaluated using eddy covariance towers at logged sites at the Tapajos National Forest in Brazil and data on long-term dynamics reported in the literature. ED-2 is able to reproduce both the fast (energy fluxes compared to flux tower, and the typical, field-observed, decadal time scales for biomass recovery when no additional logging occurs. Preliminary results using the original ED-2 fire model show that canopy cover loss of forests under high-intensity, conventional logging cause sufficient drying to support more intense fires. These results indicate that under intense degradation, forests may shift to novel disturbance regimes, severely reducing carbon stocks, and inducing long-term changes in forest structure and composition from recurrent fires.
Modeling Temperature and Pricing Weather Derivatives Based on Temperature
Directory of Open Access Journals (Sweden)
Birhan Taştan
2017-01-01
Full Text Available This study first proposes a temperature model to calculate the temperature indices upon which temperature-based derivatives are written. The model is designed as a mean-reverting process driven by a Levy process to represent jumps and other features of temperature. Temperature indices are mainly measured as deviations from a base temperature, and, hence, the proposed model includes jumps because they may constitute an important part of this deviation for some locations. The estimated value of a temperature index and its distribution in this model apply an inversion formula to the temperature model. Second, this study develops a pricing process over calculated index values, which returns a customized price for temperature-based derivatives considering that temperature has unique effects on every economic entity. This personalized price is also used to reveal the trading behavior of a hypothesized entity in a temperature-based derivative trade with profit maximization as the objective. Thus, this study presents a new method that does not need to evaluate the risk-aversion behavior of any economic entity.
Dynamic Model of High Temperature PEM Fuel Cell Stack Temperature
DEFF Research Database (Denmark)
Andreasen, Søren Juhl; Kær, Søren Knudsen
2007-01-01
The present work involves the development of a model for predicting the dynamic temperature of a high temperature PEM (HTPEM) fuel cell stack. The model is developed to test different thermal control strategies before implementing them in the actual system. The test system consists of a prototype...... parts, where also the temperatures are measured. The heat balance of the system involves a fuel cell model to describe the heat added by the fuel cells when a current is drawn. Furthermore the model also predicts the temperatures, when heating the stack with external heating elements for start-up, heat...... the stack at a high stoichiometric air flow. This is possible because of the PBI fuel cell membranes used, and the very low pressure drop in the stack. The model consists of a discrete thermal model dividing the stack into three parts: inlet, middle and end and predicting the temperatures in these three...
Reading Logs and Literature Teaching Models in English Language Teacher Education
Ochoa Delarriva, Ornella; Basabe, Enrique Alejandro
2016-01-01
Reading logs are regularly used in foreign language education since they are not only critical in the development of reading comprehension but may also be instrumental in taking readers beyond the referential into the representational realms of language. In this paper we offer the results of a qualitative analysis of a series of reading logs…
Modeling and Inversion Methods for the Interpretation of Resistivity Logging Tool Response
Anderson, B.I.
2001-01-01
The electrical resistivity measured by well logging tools is one of the most important rock parameters for indicating the amount of hydrocarbons present in a reservoir. The main interpretation challenge is to invert the measured data, solving for the true resistivity values in each zone of a
Correlation Models for Temperature Fields
North, Gerald R.
2011-05-16
This paper presents derivations of some analytical forms for spatial correlations of evolving random fields governed by a white-noise-driven damped diffusion equation that is the analog of autoregressive order 1 in time and autoregressive order 2 in space. The study considers the two-dimensional plane and the surface of a sphere, both of which have been studied before, but here time is introduced to the problem. Such models have a finite characteristic length (roughly the separation at which the autocorrelation falls to 1/e) and a relaxation time scale. In particular, the characteristic length of a particular temporal Fourier component of the field increases to a finite value as the frequency of the particular component decreases. Some near-analytical formulas are provided for the results. A potential application is to the correlation structure of surface temperature fields and to the estimation of large area averages, depending on how the original datastream is filtered into a distribution of Fourier frequencies (e.g., moving average, low pass, or narrow band). The form of the governing equation is just that of the simple energy balance climate models, which have a long history in climate studies. The physical motivation provided by the derivation from a climate model provides some heuristic appeal to the approach and suggests extensions of the work to nonuniform cases.
Salmerón, Diego; Cano, Juan A; Chirlaque, María D
2015-08-30
In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations are inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Mónica F. Díaz
2012-12-01
Full Text Available Volatile organic compounds (VOCs are contained in a variety of chemicals that can be found in household products and may have undesirable effects on health. Thereby, it is important to model blood-to-liver partition coefficients (log Pliver for VOCs in a fast and inexpensive way. In this paper, we present two new quantitative structure-property relationship (QSPR models for the prediction of log Pliver, where we also propose a hybrid approach for the selection of the descriptors. This hybrid methodology combines a machine learning method with a manual selection based on expert knowledge. This allows obtaining a set of descriptors that is interpretable in physicochemical terms. Our regression models were trained using decision trees and neural networks and validated using an external test set. Results show high prediction accuracy compared to previous log Pliver models, and the descriptor selection approach provides a means to get a small set of descriptors that is in agreement with theoretical understanding of the target property.
R. Edward. Thomas
2009-01-01
As a hardwood tree grows and develops, surface defects such as branch stubs and wounds are overgrown. Evidence of these defects remain on the log surface for decades and in many instances for the life of the tree. As the tree grows the defect is encapsulated or grown over by new wood. During this process the appearance of the defect in the tree's bark changes. The...
Directory of Open Access Journals (Sweden)
Mah Boon Yih
2009-06-01
Full Text Available The web log is an exceptionally valuable tool for the teaching of second language writing, particularly written communication skills (Johnson, 2004; Wu, 2005. More and more international educators have applied this easy-to-use technology to classroom instruction and language learning (Campbell, 2003; Johnson, 2004. However, what is largely unknown is Malaysian students’ reaction to writing web logs in English as a Second Language (ESL classrooms. Therefore, this study aims to investigate the perception of writing web logs among Universiti Teknologi MARA (UiTM HM115 diploma students who took the BEL311 English course in their third semester based on the three Technology Acceptance Model (TAM variables. Specifically, the study sought to identify whether the two TAM determinants,Perceived Ease of Use (PEOU and Perceived Usefulness (PU, affected the students’ behavioural intention (BI to use web logs for specific writing tasks. This study employed Davis’s TAM (1989 and its questionnaire-based measurement instrument and three hypotheses were formulated based on the objectives of the study. The pilot test’s result confirmed the reliability of the modified TAM-based questionnaire. The findings showed that students accept writing web logs as a classroom activity since they perceived online journals to be more useful rather than easy to use.Additionally, the findings revealed that TAM can be used to diagnose and interpret the attitude of new technology users and most importantly, PEOU, PU, and BI were positively and highly correlated at a significant level. These results did not reject the three proposed hypotheses.
Hollon, Matthew F
2015-01-01
By using web-based tools in medical education, there are opportunities to innovatively teach important principles from the general competencies of graduate medical education. Postulating that faculty transparency in learning from uncertainties in clinical work could help residents to incorporate the principles of practice-based learning and improvement (PBLI) in their professional development, faculty in this community-based residency program modeled the steps of PBLI on a weekly basis through the use of a web log. The program confidentially surveyed residents before and after this project about actions consistent with PBLI and knowledge acquired through reading the web log. The frequency that residents encountered clinical situations where they felt uncertain declined over the course of the 24 weeks of the project from a mean frequency of uncertainty of 36% to 28% (Wilcoxon signed rank test, p=0.008); however, the frequency with which residents sought answers when faced with uncertainty did not change (Wilcoxon signed rank test, p=0.39), remaining high at approximately 80%. Residents answered a mean of 52% of knowledge questions correct when tested prior to faculty posts to the blog, rising to a mean of 65% of questions correct when tested at the end of the project (paired t-test, p=0.001). Faculty role modeling of PBLI behaviors and posting clinical questions and answers to a web log led to modest improvements in medical knowledge but did not alter behavior that was already taking place frequently among residents.
Directory of Open Access Journals (Sweden)
D. Rawal
2016-06-01
The study highlights the application of GIS in establishing the basic parameters of soil, land use and the distribution of water logging over a period of time and the groundwater modelling identifies the groundwater regime of the area and estimates the total recharge to the area due to surface water irrigation and rainfall and suggests suitable method to control water logging in the area.
Czech Academy of Sciences Publication Activity Database
Majorowicz, J.; Šafanda, Jan
2015-01-01
Roč. 104, č. 6 (2015), s. 1563-1571 ISSN 1437-3254 R&D Projects: GA ČR(CZ) GAP210/11/0183 Institutional support: RVO:67985530 Keywords : surface processes * borehole temperatures * climatic warming * Ice Age * heat flow Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 2.133, year: 2015
Czech Academy of Sciences Publication Activity Database
Majorowicz, J.; Šafanda, Jan; Skinner, W.
2002-01-01
Roč. 107, B10 (2002), s. ETG6 1-12 ISSN 0148-0227 Institutional research plan: CEZ:AV0Z3012916 Keywords : Canada climate warming * borehole temperature * geothermics Subject RIV: DB - Geology ; Mineralogy Impact factor: 2.245, year: 2002
Czech Academy of Sciences Publication Activity Database
Majorowicz, J. A.; Skinner, W. R.; Šafanda, Jan
2005-01-01
Roč. 162, č. 2 (2005), s. 109-128 ISSN 0033-4553 Institutional research plan: CEZ:AV0Z30120515 Keywords : global warming * regional climate variability and change * borehole temperatures Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 0.975, year: 2005
Aragao, Glaucia M F; Corradini, Maria G; Normand, Mark D; Peleg, Micha
2007-11-01
Published survival curves of Escherichia coli in two growth media, with and without the presence of salt, at various temperatures and in a Greek eggplant salad having various levels of essential oil, all had a characteristic downward concavity when plotted on semi logarithmic coordinates. Some also exhibited what appeared as a 'shoulder' of considerable length. Regardless of whether a shoulder was noticed, the survival pattern could be considered as a manifestation of an underlying unimodal distribution of the cells' death times. Mathematically, the data could be described equally well by the Weibull and log normal distribution functions, which had similar modes, means, standard deviations and coefficients of skewness. When plotted in their probability density function (PDF) form, the curves also appeared very similar visually. This enabled us to quantify and compare the effect of temperature or essential oil concentration on the organism's survival in terms of these temporal distributions' characteristics. Increased lethality was generally expressed in a shorter mean and mode, a smaller standard deviation and increased overall symmetry as judged by the distributions' degree of skewness. The 'shoulder', as expected, simply indicated that the distribution's standard deviation was much smaller than its mode. Rate models based on the two distribution functions could be used to predict non isothermal survival patterns. They were derived on the assumption that the momentary inactivation rate is the isothermal rate at the momentary temperature at a time that corresponds to the momentary survival ratio. In this application, however, the Weibullian model with a fixed power was not only simpler and more convenient mathematically than the one based on the log normal distribution, but it also provided more accurate estimates of the dynamic inactivation patterns.
Directory of Open Access Journals (Sweden)
Rosenbaum Peter L
2006-10-01
Full Text Available Abstract Background In this paper we compare the results in an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP. The self-completed questionnaires and the home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the Structural Equation Modeling (SEM results by the Log-linear Modeling (LLM provided some reassurance that the SEM had been adequately specified, and that it broadly fitted the data.
Czech Academy of Sciences Publication Activity Database
Dědeček, Petr; Šafanda, Jan; Rajver, D.
2012-01-01
Roč. 113, č. 3-4 (2012), s. 787-801 ISSN 0165-0009 R&D Projects: GA ČR(CZ) GAP210/11/0183; GA AV ČR KSK3046108; GA ČR GETOP/08/E014 Institutional research plan: CEZ:AV0Z30120515 Keywords : subsurface temperature * thermal conductivity * urbanization Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 3.634, year: 2012
Czech Academy of Sciences Publication Activity Database
Majorowicz, J.; Šafanda, Jan; Skinner, W.
2004-01-01
Roč. 47, - (2004), s. 113-174 ISSN 0065-2687 R&D Projects: GA AV ČR KSK3046108 Institutional research plan: CEZ:AV0Z3012916 Keywords : well temperature * global warming * surface temperature Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 1.667, year: 2004
Directory of Open Access Journals (Sweden)
Yosar Fatahillah
2017-01-01
Full Text Available Abstrak - Analisa laboratorium telah membuktikan Formasi Ngimbang sebagai batuan induk yang relatif kaya kandungan organik, ketersediaan data laboratorium terbatas untuk mengukur distribusi kematangan dan kuantitas material organik dalam skala luas cekungan regional. Hal inilah yang menjadi tujuan utama dalam penelitian ini. Dengan demikian dibutuhkan suatu metode sederhana yang terbukti dan akurat mengukur TOC (kandungan karbon organik pada seluruh kedalaman lubang bor. Penelitian ini mengamati perilaku TOC Model berdasarkan data log sumur. Dengan demikian, akan menghemat banyak waktu dan menekan biaya observasi. Dalam penelitian ini, digunakan metode Passey untuk menentukan besar kandungan karbon organik pada formasi (TOC dan model Mallick-Raju sebagai indikator kematangan formasi.. Tersedia satu data sumur dan data batuan inti yang digunakan untuk menentukan potensi batuan induk di formasi Ngimbang, Nilai LOM (Tingkat Metamorfisme diperlukan untuk digunakan pada Passey Model dengan menggunakan crossplot antara DlogR dan TOC dari data inti. Hasil memperlihatkan formasi Ngimbang merupakan formasi dengan rerata kandungan TOC berada pada tingkat buruk – cukup baik dengan tingkat kematangan immature hingga post mature Kata kunci— TOC, Vitrinite Reflectance, Vshale, ΔLogR
Modeling maximum daily temperature using a varying coefficient regression model
Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith
2014-01-01
Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...
Williams, C.T.; Sheriff, M.J.; Schmutz, J.A.; Kohl, F.; Toien, O.; Buck, C.L.; Barnes, B.M.
2011-01-01
Precise measures of phenology are critical to understanding how animals organize their annual cycles and how individuals and populations respond to climate-induced changes in physical and ecological stressors. We show that patterns of core body temperature (T b) can be used to precisely determine the timing of key seasonal events including hibernation, mating and parturition, and immergence and emergence from the hibernacula in free-living arctic ground squirrels (Urocitellus parryii). Using temperature loggers that recorded T b every 20 min for up to 18 months, we monitored core T b from three females that subsequently gave birth in captivity and from 66 female and 57 male ground squirrels free-living in the northern foothills of the Brooks Range Alaska. In addition, dates of emergence from hibernation were visually confirmed for four free-living male squirrels. Average T b in captive females decreased by 0.5–1.0°C during gestation and abruptly increased by 1–1.5°C on the day of parturition. In free-living females, similar shifts in T b were observed in 78% (n = 9) of yearlings and 94% (n = 31) of adults; females without the shift are assumed not to have given birth. Three of four ground squirrels for which dates of emergence from hibernation were visually confirmed did not exhibit obvious diurnal rhythms in T b until they first emerged onto the surface when T b patterns became diurnal. In free-living males undergoing reproductive maturation, this pre-emergence euthermic interval averaged 20.4 days (n = 56). T b-loggers represent a cost-effective and logistically feasible method to precisely investigate the phenology of reproduction and hibernation in ground squirrels.
Weather Derivatives and Stochastic Modelling of Temperature
Directory of Open Access Journals (Sweden)
Fred Espen Benth
2011-01-01
Full Text Available We propose a continuous-time autoregressive model for the temperature dynamics with volatility being the product of a seasonal function and a stochastic process. We use the Barndorff-Nielsen and Shephard model for the stochastic volatility. The proposed temperature dynamics is flexible enough to model temperature data accurately, and at the same time being analytically tractable. Futures prices for commonly traded contracts at the Chicago Mercantile Exchange on indices like cooling- and heating-degree days and cumulative average temperatures are computed, as well as option prices on them.
Cao, Xiangyu; Fyodorov, Yan V; Le Doussal, Pierre
2018-02-01
We address systematically an apparent nonphysical behavior of the free-energy moment generating function for several instances of the logarithmically correlated models: the fractional Brownian motion with Hurst index H=0 (fBm0) (and its bridge version), a one-dimensional model appearing in decaying Burgers turbulence with log-correlated initial conditions and, finally, the two-dimensional log-correlated random-energy model (logREM) introduced in Cao et al. [Phys. Rev. Lett. 118, 090601 (2017)PRLTAO0031-900710.1103/PhysRevLett.118.090601] based on the two-dimensional Gaussian free field with background charges and directly related to the Liouville field theory. All these models share anomalously large fluctuations of the associated free energy, with a variance proportional to the log of the system size. We argue that a seemingly nonphysical vanishing of the moment generating function for some values of parameters is related to the termination point transition (i.e., prefreezing). We study the associated universal log corrections in the frozen phase, both for logREMs and for the standard REM, filling a gap in the literature. For the above mentioned integrable instances of logREMs, we predict the nontrivial free-energy cumulants describing non-Gaussian fluctuations on the top of the Gaussian with extensive variance. Some of the predictions are tested numerically.
Hollon, Matthew F.
2015-01-01
Background By using web-based tools in medical education, there are opportunities to innovatively teach important principles from the general competencies of graduate medical education. Objectives Postulating that faculty transparency in learning from uncertainties in clinical work could help residents to incorporate the principles of practice-based learning and improvement (PBLI) in their professional development, faculty in this community-based residency program modeled the steps of PBLI on a weekly basis through the use of a web log. Method The program confidentially surveyed residents before and after this project about actions consistent with PBLI and knowledge acquired through reading the web log. Results The frequency that residents encountered clinical situations where they felt uncertain declined over the course of the 24 weeks of the project from a mean frequency of uncertainty of 36% to 28% (Wilcoxon signed rank test, p=0.008); however, the frequency with which residents sought answers when faced with uncertainty did not change (Wilcoxon signed rank test, p=0.39), remaining high at approximately 80%. Residents answered a mean of 52% of knowledge questions correct when tested prior to faculty posts to the blog, rising to a mean of 65% of questions correct when tested at the end of the project (paired t-test, p=0.001). Conclusions Faculty role modeling of PBLI behaviors and posting clinical questions and answers to a web log led to modest improvements in medical knowledge but did not alter behavior that was already taking place frequently among residents. PMID:26653701
Directory of Open Access Journals (Sweden)
Matthew F. Hollon
2015-12-01
Full Text Available Background: By using web-based tools in medical education, there are opportunities to innovatively teach important principles from the general competencies of graduate medical education. Objectives: Postulating that faculty transparency in learning from uncertainties in clinical work could help residents to incorporate the principles of practice-based learning and improvement (PBLI in their professional development, faculty in this community-based residency program modeled the steps of PBLI on a weekly basis through the use of a web log. Method: The program confidentially surveyed residents before and after this project about actions consistent with PBLI and knowledge acquired through reading the web log. Results: The frequency that residents encountered clinical situations where they felt uncertain declined over the course of the 24 weeks of the project from a mean frequency of uncertainty of 36% to 28% (Wilcoxon signed rank test, p=0.008; however, the frequency with which residents sought answers when faced with uncertainty did not change (Wilcoxon signed rank test, p=0.39, remaining high at approximately 80%. Residents answered a mean of 52% of knowledge questions correct when tested prior to faculty posts to the blog, rising to a mean of 65% of questions correct when tested at the end of the project (paired t-test, p=0.001. Conclusions: Faculty role modeling of PBLI behaviors and posting clinical questions and answers to a web log led to modest improvements in medical knowledge but did not alter behavior that was already taking place frequently among residents.
Integration of seismic and well log data for petrophysical modeling of ...
African Journals Online (AJOL)
Zonation of the reservoirs enhanced the sensitivity of the petrophysical properties in every stratigraphic interval of the structural model. Petrophysical model parameters were delineated along each well location, the model was upscaled and populated on the structural model. The features of the reservoir rock were ...
Temperature Modelling of the Biomass Pretreatment Process
DEFF Research Database (Denmark)
Prunescu, Remus Mihail; Blanke, Mogens; Jensen, Jakob M.
2012-01-01
In a second generation biorefinery, the biomass pretreatment stage has an important contribution to the efficiency of the downstream processing units involved in biofuel production. Most of the pretreatment process occurs in a large pressurized thermal reactor that presents an irregular temperature...... distribution. Therefore, an accurate temperature model is critical for observing the biomass pretreatment. More than that, the biomass is also pushed with a constant horizontal speed along the reactor in order to ensure a continuous throughput. The goal of this paper is to derive a temperature model...... that captures the environmental temperature differences inside the reactor using distributed parameters. A Kalman filter is then added to account for any missing dynamics and the overall model is embedded into a temperature soft sensor. The operator of the plant will be able to observe the temperature in any...
Park, Ji Hun; Shin, Min Hwa; Kim, Hong Kook
In this paper, a voice activity detection (VAD) method for dual-channel noisy speech recognition is proposed on the basis of statistical models constructed by spatial cues and log energy. In particular, spatial cues are composed of the interaural time differences and interaural level differences of dual-channel speech signals, and the statistical models for speech presence and absence are based on a Gaussian kernel density. In order to evaluate the performance of the proposed VAD method, speech recognition is performed using only speech signals segmented by the proposed VAD method. The performance of the proposed VAD method is then compared with those of conventional methods such as a signal-to-noise ratio variance based method and a phase vector based method. It is shown from the experiments that the proposed VAD method outperforms conventional methods, providing the relative word error rate reductions of 19.5% and 12.2%, respectively.
DEFF Research Database (Denmark)
Marker, Pernille Aabye; Foged, N.; He, X.
2015-01-01
resistivity and clay fraction are classified into hydrostratigraphic zones using k-means clustering. Hydraulic conductivity values of the zones are estimated by hydrological calibration using hydraulic head and stream discharge observations. The method is applied to a Danish case study. Benchmarking...... hydrological performance by comparison of performance statistics from comparable hydrological models, the cluster model performed competitively. Calibrations of 11 hydrostratigraphic cluster models with 1-11 hydraulic conductivity zones showed improved hydrological performance with an increasing number...... of clusters. Beyond the 5-cluster model hydrological performance did not improve. Due to reproducibility and possibility of method standardization and automation, we believe that hydrostratigraphic model generation with the proposed method has important prospects for groundwater models used in water resources...
Directory of Open Access Journals (Sweden)
Farooq Ahmad
2006-01-01
Full Text Available This is cross sectional study based on 304 households (couples with wives age less than 48 years, chosen from urban locality (city Lahore. Fourteen religious, demographic and socio-economic factors of categorical nature like husband education, wife education, husband’s monthly income, occupation of husband, household size, husband-wife discussion, number of living children, desire for more children, duration of marriage, present age of wife, age of wife at marriage, offering of prayers, political view, and religiously decisions were taken to understand acceptance of family planning. Multivariate log-linear analysis was applied to identify association pattern and interrelationship among factors. The logit model was applied to explore the relationship between predictor factors and dependent factor, and to explore which are the factors upon which acceptance of family planning is highly depending. Log-linear analysis demonstrate that preference of contraceptive use was found to be consistently associated with factors Husband-Wife discussion, Desire for more children, No. of children, Political view and Duration of married life. While Husband’s monthly income, Occupation of husband, Age of wife at marriage and Offering of prayers resulted in no statistical explanation of adoption of family planning methods.
Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models
DEFF Research Database (Denmark)
Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David
2012-01-01
The LIBOR market model is very popular for pricing interest rate derivatives but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). We consider a Lévy-driven ...
Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models
DEFF Research Database (Denmark)
Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David
2012-01-01
The LIBOR market model is very popular for pricing interest rate derivatives but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). We consider a Lévy...
Efficient and accurate log-Lévy approximations to Lévy driven LIBOR models
DEFF Research Database (Denmark)
Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David
2011-01-01
The LIBOR market model is very popular for pricing interest rate derivatives, but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term is growing exponentially fast (as a function of the tenor length). In this work, we...
DEFF Research Database (Denmark)
Marker, Pernille Aabye; Foged, N.; He, X.
2015-01-01
of electrical resistivity and clay fraction are classified into hydrostratigraphic zones using k-means clustering. Hydraulic conductivity values of the zones are estimated by hydrological calibration using hydraulic head and stream discharge observations. The method is applied to a Danish case study....... Benchmarking hydrological performance by comparison of simulated hydrological state variables, the cluster model performed competitively. Calibrations of 11 hydrostratigraphic cluster models with 1–11 hydraulic conductivity zones showed improved hydrological performance with increasing number of clusters....... Beyond the 5-cluster model hydrological performance did not improve. Due to reproducibility and possibility of method standardization and automation, we believe that hydrostratigraphic model generation with the proposed method has important prospects for groundwater models used in water resources...
Vinson, C C; Kanashiro, M; Sebbenn, A M; Williams, T C R; Harris, S A; Boshier, D H
2015-08-01
The impact of logging and subsequent recovery after logging is predicted to vary depending on specific life history traits of the logged species. The Eco-gene simulation model was used to evaluate the long-term impacts of selective logging over 300 years on two contrasting Brazilian Amazon tree species, Dipteryx odorata and Jacaranda copaia. D. odorata (Leguminosae), a slow growing climax tree, occurs at very low densities, whereas J. copaia (Bignoniaceae) is a fast growing pioneer tree that occurs at high densities. Microsatellite multilocus genotypes of the pre-logging populations were used as data inputs for the Eco-gene model and post-logging genetic data was used to verify the output from the simulations. Overall, under current Brazilian forest management regulations, there were neither short nor long-term impacts on J. copaia. By contrast, D. odorata cannot be sustainably logged under current regulations, a sustainable scenario was achieved by increasing the minimum cutting diameter at breast height from 50 to 100 cm over 30-year logging cycles. Genetic parameters were only slightly affected by selective logging, with reductions in the numbers of alleles and single genotypes. In the short term, the loss of alleles seen in J. copaia simulations was the same as in real data, whereas fewer alleles were lost in D. odorata simulations than in the field. The different impacts and periods of recovery for each species support the idea that ecological and genetic information are essential at species, ecological guild or reproductive group levels to help derive sustainable management scenarios for tropical forests.
Temperature stochastic modeling and weather derivatives pricing ...
African Journals Online (AJOL)
... over a sufficient period to apply a stochastic process that describes the evolution of the temperature. A numerical example of a swap contract pricing is presented, using an approximation formula as well as Monte Carlo simulations. Keywords: Weather derivatives, temperature stochastic model, Monte Carlo simulation.
Directory of Open Access Journals (Sweden)
Zhe Wei
2013-01-01
Full Text Available To solve the product data consistency problem which is caused by the portable system that cannot conduct real-time update of product data in mobile environment under the mass customization production mode, a new product data optimistic replication method based on log is presented. This paper focuses on the design thinking provider, probing into a manufacturing resource design thinking cloud platform based on manufacturing resource-locating technologies, and also discuss several application scenarios of cloud locating technologies in the manufacturing environment. The actual demand of manufacturing creates a new mode which is service-oriented and has high efficiency and low consumption. Finally, they are different from the crowd-sourcing application model of Local-Motors. The sharing platform operator is responsible for a master plan for the platform, proposing a open interface standard and establishing a service operation mode.
2d forward modelling of marine CSEM survey geometry for seabed logging
International Nuclear Information System (INIS)
Hussain, N.; Noh, M.; Yahya, N.B.
2011-01-01
Hydrocarbon reserve exploration in deep water is done by geophysical surveys. Previously seismic geophysical surveys were explicitly used but it has indistinct results for both water and hydrocarbon saturated reservoir. Recent development for the detection of hydrocarbon reservoir in deeper water is Marine Controlled Source Electromagnetic (MCSEM) geophysical survey. MCSEM is sensitive to electrical conductivity of rocks by which it can differentiate between hydrocarbon reservoir and water saturated reservoir. MCSEM survey geometry put vital role and may causes for anomalies in synthetic data. Consequentially MCSEM is sensitive to survey geometry (e.g. source dipping, rotation and speed, receivers' orientation etc) which causes anomalies. The interpretation for delineating subsurface structure from survey data need to well understand the effects of survey geometry anomalies. Forward modelling is an alternative rather real time survey to study the aforementioned anomalies. In this paper finite difference method (FDM) is implemented for 2D forward modelling in the sense of qualitative understanding to how induced Electromagnetic (EM) signal changes its overall pattern while interact with physical earth properties. A stratified earth structure is developed and modelled in MatLabTM software to study the behaviour of EM field with physical earth properties. Obtained results of 2D geological models are also discussed in this paper. (author)
DEFF Research Database (Denmark)
Marker, Pernille Aabye; Bauer-Gottwein, Peter; Mosegaard, Klaus
a k-means cluster analysis of the principal components of resistivity and clay-fraction values. Under the assumption that the units have uniform hydrological properties, the units constitute the hydrostratigraphy for a groundwater model. Only aquifer structures are obtained from geophysical...... and lithological data, while the estimation of the hydrological properties of the units is inversely derived from the groundwater model and hydrological data. A synthetic analysis was performed to investigate the principles underlying the clustering approach using three petrophysical relationships between...... electrical conductivity and hydraulic conductivity. Aquifer structures obtained from clustering on electrical conductivity and clay fraction resulted in mismatch with the true pumping well capture isochrone of 8 to 13 percent. Results for clustering only on electrical conductivity were not stable...
Matsubara, Hiroki; Hirano, Hiroki; Hirano, Harutoyo; Soh, Zu; Nakamura, Ryuji; Saeki, Noboru; Kawamoto, Masashi; Yoshizumi, Masao; Yoshino, Atsuo; Sasaoka, Takafumi; Yamawaki, Shigeto; Tsuji, Toshio
2018-02-15
In clinical practice, subjective pain evaluations, e.g., the visual analogue scale and the numeric rating scale, are generally employed, but these are limited in terms of their ability to detect inaccurate reports, and are unsuitable for use in anesthetized patients or those with dementia. We focused on the peripheral sympathetic nerve activity that responds to pain, and propose a method for evaluating pain sensation, including intensity, sharpness, and dullness, using the arterial stiffness index. In the experiment, electrocardiogram, blood pressure, and photoplethysmograms were obtained, and an arterial viscoelastic model was applied to estimate arterial stiffness. The relationships among the stiffness index, self-reported pain sensation, and electrocutaneous stimuli were examined and modelled. The relationship between the stiffness index and pain sensation could be modelled using a sigmoid function with high determination coefficients, where R 2 ≥ 0.88, p < 0.01 for intensity, R 2 ≥ 0.89, p < 0.01 for sharpness, and R 2 ≥ 0.84, p < 0.01 for dullness when the stimuli could appropriately evoke dull pain.
Multiple Temperature Model for Near Continuum Flows
International Nuclear Information System (INIS)
XU, Kun; Liu, Hongwei; Jiang, Jianzheng
2007-01-01
In the near continuum flow regime, the flow may have different translational temperatures in different directions. It is well known that for increasingly rarefied flow fields, the predictions from continuum formulation, such as the Navier-Stokes equations, lose accuracy. These inaccuracies may be partially due to the single temperature assumption in the Navier-Stokes equations. Here, based on the gas-kinetic Bhatnagar-Gross-Krook (BGK) equation, a multitranslational temperature model is proposed and used in the flow calculations. In order to fix all three translational temperatures, two constraints are additionally proposed to model the energy exchange in different directions. Based on the multiple temperature assumption, the Navier-Stokes relation between the stress and strain is replaced by the temperature relaxation term, and the Navier-Stokes assumption is recovered only in the limiting case when the flow is close to the equilibrium with the same temperature in different directions. In order to validate the current model, both the Couette and Poiseuille flows are studied in the transition flow regime
The high temperature Ising model is a critical percolation model
Meester, R.W.J.; Camia, F.; Balint, A.
2010-01-01
We define a new percolation model by generalising the FK representation of the Ising model, and show that on the triangular lattice and at high temperatures, the critical point in the new model corresponds to the Ising model. Since the new model can be viewed as Bernoulli percolation on a random
SEALEX — Internal reef chronology and virtual drill logs from a spreadsheet-based reef growth model
Koelling, Martin; Webster, Jody Michael; Camoin, Gilbert; Iryu, Yasufumi; Bard, Edouard; Seard, Claire
2009-03-01
A reef growth model has been developed using an Excel spreadsheet. The 1D forward model is driven by a user definable sea-level curve. Other adjustable model parameters include maximum coral growth rate, coral growth rate depth dependence and light attenuation, subaerial erosion and subsidence. A time lag for the establishment of significant reef accretion may also be set. During the model run, both, the external shape and the internal chronologic structure of the growing reef as well as the paleo-water-depths are continuously displayed and recorded. We tested the model on fossil reef systems growing in a range of different tectonic settings such as both on slowly subsiding islands like Tahiti (subsidence rates of 0.25 m ka - 1 ) and rapidly subsiding islands like Hawaii (subsidence rate of 2.5 mka - 1 ) as well as rapidly uplifting coastal settings like Huon Peninsula (uplift rates of 0.5 to 4 m ka - 1 ) and more slowly uplifting settings like Haiti (uplift rates of 0.55 mka - 1 ). The model runs show the sensitivity of the resulting overall morphology and internal age structure to different model parameters. Additionally the water depth at the time of deposition is recorded. This allows the constructions of virtual borehole logs with the coral age profiles and the paleo water depth at the time of growth both displayed and recorded. Because the model is implemented as a macro in a popular spreadsheet program, it may be easily adapted or extended to model the growth of different reef and carbonate platform settings. Single model runs take a few minutes on a standard (2 GHz CoreDuo) desktop computer under Windows XP. The model may be used to investigate the effects of different boundary conditions such as maximum reef growth, erosion rates, subsidence or uplift on both, the general morphology of the reefs, and the internal chronologic structure. These results can then be compared to observed data allowing different hypothesis concerning reefs development to be
Drinking Water Temperature Modelling in Domestic Systems
Moerman, A.; Blokker, M.; Vreeburg, J.; van der Hoek, J.P.
2014-01-01
Domestic water supply systems are the final stage of the transport process to deliver potable water to the customers’ tap. Under the influence of temperature, residence time and pipe materials the drinking water quality can change while the water passes the domestic drinking water system. According to the Dutch Drinking Water Act the drinking water temperature may not exceed the 25 °C threshold at point-of-use level. This paper provides a mathematical approach to model the heating of drinking...
Groundwater temperature estimation and modeling using hydrogeophysics.
Nguyen, F.; Lesparre, N.; Hermans, T.; Dassargues, A.; Klepikova, M.; Kemna, A.; Caers, J.
2017-12-01
Groundwater temperature may be of use as a state variable proxy for aquifer heat storage, highlighting preferential flow paths, or contaminant remediation monitoring. However, its estimation often relies on scarce temperature data collected in boreholes. Hydrogeophysical methods such as electrical resistivity tomography (ERT) and distributed temperature sensing (DTS) may provide more exhaustive spatial information of the bulk properties of interest than samples from boreholes. If a properly calibrated DTS reading provides direct measurements of the groundwater temperature in the well, ERT requires one to determine the fractional change per degree Celsius. One advantage of this petrophysical relationship is its relative simplicity: the fractional change is often found to be around 0.02 per degree Celcius, and represents mainly the variation of electrical resistivity due to the viscosity effect. However, in presence of chemical and kinetics effects, the variation may also depend on the duration of the test and may neglect reactions occurring between the pore water and the solid matrix. Such effects are not expected to be important for low temperature systems (<30 °C), at least for short experiments. In this contribution, we use different field experiments under natural and forced flow conditions to review developments for the joint use of DTS and ERT to map and monitor the temperature distribution within aquifers, to characterize aquifers in terms of heterogeneity and to better understand processes. We show how temperature time-series measurements might be used to constraint the ERT inverse problem in space and time and how combined ERT-derived and DTS estimation of temperature may be used together with hydrogeological modeling to provide predictions of the groundwater temperature field.
Enhanced battery model including temperature effects
Rosca, B.; Wilkins, S.
2013-01-01
Within electric and hybrid vehicles, batteries are used to provide/buffer the energy required for driving. However, battery performance varies throughout the temperature range specific to automotive applications, and as such, models that describe this behaviour are required. This paper presents a
Temperature Calculations in the Coastal Modeling System
2017-04-01
with the change of water turbidity in coastal and estuarine systems. Water quality and ecological models often require input of water temperature...of the American Society of Civil Engineers 81(717): 1–11. Sánchez, A., W. Wu, H. Li, M. E. Brown, C. W. Reed, J. D. Rosati, and Z. Demirbilek. 2014
Modeling Soil Temperature Variations | Ogunlela | Journal of ...
African Journals Online (AJOL)
This paper reports on modeling soil temperature variations. Transient heat flow principles were used in the study, with the assumptions that the heat flow was one-dimensional, the soil was homogenous and that the thermal diffusivity was constant. Average conditions are also assumed. The annual and diurnal (daily) soil ...
Neighbour, Gordon
2013-04-01
In 2012 Computing and Information Technology was disapplied from the English National Curriculum and therefore no longer has a compulsory programme of study. Data logging and data modelling are still essential components of the curriculum in the Computing and Information Technology classroom. Once the students have mastered the basics of both spreadsheet and information handling software they need to be further challenged. All too often the data used with relation to data-logging and data-handling is not realistic enough to really challenge very able students. However, using data from seismology allows students to manipulate "real" data and enhances their experience of geo-science, developing their skills and then allowing them to build on this work in both the science and geography classroom. This new scheme of work "Seismology at School" has allowed the students to work and develop skills beyond those normally expected for their age group and has allowed them to better appreciate their learning experience of "Natural Hazards" in the science and geography classroom in later years. The students undertake research to help them develop their understanding of earthquakes. This includes using materials from other nations within the European Economic Area, to also develop and challenge their use of Modern Foreign Languages. They are then challenged to create their own seismometers using simple kits and 'free' software - this "problem-solving" approach to their work is designed to enhance team-work and to extend the challenge they experience in the classroom. The students are then are asked to manipulate a "real" set of data using international earthquake data from the most recent whole year. This allows the students to make use of many of the analytical and statistical functions of both spreadsheet software and information handling software in a meaningful way. The students will need to have developed a hypothesis which their work should have provided either validation
Rodriguez, Brian D.; Sawyer, David A.; Hudson, Mark R.; Grauch, V.J.S.
2013-01-01
Two- and three-dimensional electrical resistivity models derived from the magnetotelluric method were interpreted to provide more accurate hydrogeologic parameters for the Albuquerque and Española Basins. Analysis and interpretation of the resistivity models are aided by regional borehole resistivity data. Examination of the magnetotelluric response of hypothetical stratigraphic cases using resistivity characterizations from the borehole data elucidates two scenarios where the magnetotelluric method provides the strongest constraints. In the first scenario, the magnetotelluric method constrains the thickness of extensive volcanic cover, the underlying thickness of coarser-grained facies of buried Santa Fe Group sediments, and the depth to Precambrian basement or overlying Pennsylvanian limestones. In the second scenario, in the absence of volcanic cover, the magnetotelluric method constrains the thickness of coarser-grained facies of buried Santa Fe Group sediments and the depth to Precambrian basement or overlying Pennsylvanian limestones. Magnetotelluric surveys provide additional constraints on the relative positions of basement rocks and the thicknesses of Paleozoic, Mesozoic, and Tertiary sedimentary rocks in the region of the Albuquerque and Española Basins. The northern extent of a basement high beneath the Cerros del Rio volcanic field is delineated. Our results also reveal that the largest offset of the Hubbell Spring fault zone is located 5 km west of the exposed scarp. By correlating our resistivity models with surface geology and the deeper stratigraphic horizons using deep well log data, we are able to identify which of the resistivity variations in the upper 2 km belong to the upper Santa Fe Group sediment
Measurements of temperature on LHC thermal models
Darve, C
2001-01-01
Full-scale thermal models for the Large Hadron Collider (LHC) accelerator cryogenic system have been studied at CERN and at Fermilab. Thermal measurements based on two different models permitted us to evaluate the performance of the LHC dipole cryostats as well as to validate the LHC Interaction Region (IR) inner triplet cooling scheme. The experimental procedures made use of temperature sensors supplied by industry and assembled on specially designed supports. The described thermal models took the advantage of advances in cryogenic thermometry which will be implemented in the future LHC accelerator to meet the strict requirements of the LHC for precision, accuracy, reliability, and ease-of-use. The sensors used in the temperature measurement of the superfluid (He II) systems are the primary focus of this paper, although some aspects of the LHC control system and signal conditioning are also reviewed. (15 refs).
George M. Banzhaf; Thomas G. Matney; Emily B. Schultz; James S. Meadows; J. Paul Jeffreys; William C. Booth; Gan Li; Andrew W. Ezell; Theodor D. Leininger
2016-01-01
Red oak (Quercus section Labatae)-sweetgum (Liquidambar styraciflua L.) stands growing on mid-south bottomland sites in the United States are well known for producing high-quality grade hardwood logs, but models for estimating the quantity and quality of standing grade wood in these stands have been unavailable. Prediction...
Phillips, Joe Scutt; Patterson, Toby A; Leroy, Bruno; Pilling, Graham M; Nicol, Simon J
2015-07-01
many different types of noisy autocorrelated data, as typically found across a range of ecological systems. Summarizing time-series data into a multivariate assemblage of dimensions relevant to the desired classification provides a means to examine these data in an appropriate behavioral space. We discuss how outputs of these models can be applied to bio-logging and other imperfect behavioral data, providing easily interpretable models for hypothesis testing.
Modeling quantum fluid dynamics at nonzero temperatures
Berloff, Natalia G.; Brachet, Marc; Proukakis, Nick P.
2014-01-01
The detailed understanding of the intricate dynamics of quantum fluids, in particular in the rapidly growing subfield of quantum turbulence which elucidates the evolution of a vortex tangle in a superfluid, requires an in-depth understanding of the role of finite temperature in such systems. The Landau two-fluid model is the most successful hydrodynamical theory of superfluid helium, but by the nature of the scale separations it cannot give an adequate description of the processes involving vortex dynamics and interactions. In our contribution we introduce a framework based on a nonlinear classical-field equation that is mathematically identical to the Landau model and provides a mechanism for severing and coalescence of vortex lines, so that the questions related to the behavior of quantized vortices can be addressed self-consistently. The correct equation of state as well as nonlocality of interactions that leads to the existence of the roton minimum can also be introduced in such description. We review and apply the ideas developed for finite-temperature description of weakly interacting Bose gases as possible extensions and numerical refinements of the proposed method. We apply this method to elucidate the behavior of the vortices during expansion and contraction following the change in applied pressure. We show that at low temperatures, during the contraction of the vortex core as the negative pressure grows back to positive values, the vortex line density grows through a mechanism of vortex multiplication. This mechanism is suppressed at high temperatures. PMID:24704874
Directory of Open Access Journals (Sweden)
Tengiz Mdzinarishvili
2009-12-01
Full Text Available A simple, computationally efficient procedure for analyses of the time period and birth cohort effects on the distribution of the age-specific incidence rates of cancers is proposed. Assuming that cohort effects for neighboring cohorts are almost equal and using the Log-Linear Age-Period-Cohort Model, this procedure allows one to evaluate temporal trends and birth cohort variations of any type of cancer without prior knowledge of the hazard function. This procedure was used to estimate the influence of time period and birth cohort effects on the distribution of the age-specific incidence rates of first primary, microscopically confirmed lung cancer (LC cases from the SEER9 database. It was shown that since 1975, the time period effect coefficients for men increase up to 1980 and then decrease until 2004. For women, these coefficients increase from 1975 up to 1990 and then remain nearly constant. The LC birth cohort effect coefficients for men and women increase from the cohort of 1890–94 until the cohort of 1925–29, then decrease until the cohort of 1950–54 and then remain almost unchanged. Overall, LC incidence rates, adjusted by period and cohort effects, increase up to the age of about 72–75, turn over, and then fall after the age of 75–78. The peak of the adjusted rates in men is around the age of 77–78, while in women, it is around the age of 72–73. Therefore, these results suggest that the age distribution of the incidence rates in men and women fall at old ages.
Carrillo-Rubio, Eduardo; Kéry, Marc; Morreale, Stephen J; Sullivan, Patrick J; Gardner, Beth; Cooch, Evan G; Lassoie, James P
2014-08-01
Forest degradation is arguably the greatest threat to biodiversity, ecosystem services, and rural livelihoods. Therefore, increasing understanding of how organisms respond to degradation is essential for management and conservation planning. We were motivated by the need for rapid and practical analytical tools to assess the influence of management and degradation on biodiversity and system state in areas subject to rapid environmental change. We compared bird community composition and size in managed (ejido, i.e., communally owned lands) and unmanaged (national park) forests in the Sierra Tarahumara region, Mexico, using multispecies occupancy models and data from a 2-year breeding bird survey. Unmanaged sites had on average higher species occupancy and richness than managed sites. Most species were present in low numbers as indicated by lower values of detection and occupancy associated with logging-induced degradation. Less than 10% of species had occupancy probabilities >0.5, and degradation had no positive effects on occupancy. The estimated metacommunity size of 125 exceeded previous estimates for the region, and sites with mature trees and uneven-aged forest stand characteristics contained the highest species richness. Higher estimation uncertainty and decreases in richness and occupancy for all species, including habitat generalists, were associated with degraded young, even-aged stands. Our findings show that multispecies occupancy methods provide tractable measures of biodiversity and system state and valuable decision support for landholders and managers. These techniques can be used to rapidly address gaps in biodiversity information, threats to biodiversity, and vulnerabilities of species of interest on a landscape level, even in degraded or fast-changing environments. Moreover, such tools may be particularly relevant in the assessment of species richness and distribution in a wide array of habitats. © 2014 Society for Conservation Biology.
Shi, Y.; Jiang, G.; Hu, S.
2017-12-01
Daqing, as the largest oil field of China with more than 50 years of exploration and production history for oil and gas, its geothermal energy utilization was started in 2000, with a main focus on district heating and direct use. In our ongoing study, data from multiple sources are collected, including BHT, DST, steady state temperature measurements in deep wells and thermophysical properties of formations. Based on these measurements, an elaborate investigation of the temperature field of Daqing Oilfield is made. Moreover, through exploration for oil and gas, subsurface geometry, depth, thickness and properties of the stratigraphic layers have been extensively delineated by well logs and seismic profiles. A 3D model of the study area is developed incorporating the information of structure, stratigraphy, basal heat flow, and petrophysical and thermophysical properties of strata. Based on the model, a simulation of the temperature field of Daqing Oilfield is generated. A purely conductive regime is presumed, as demonstrated by measured temperature log in deep wells. Wells W1, W2 and SK2 are used as key wells for model calibration. Among them, SK2, as part of the International Continental Deep Drilling Program, has a designed depth of 6400m, the steady state temperature measurement in the borehole has reached the depth of 4000m. The results of temperature distribution generated from simulation and investigation are compared, in order to evaluate the potential of applying the method to other sedimentary basins with limited borehole temperature measurements but available structural, stratigraphic and thermal regime information.
Directory of Open Access Journals (Sweden)
Ivana Đurđević Babić
2015-03-01
Full Text Available Student satisfaction with courses in academic institutions is an important issue and is recognized as a form of support in ensuring effective and quality education, as well as enhancing student course experience. This paper investigates whether there is a connection between student satisfaction with courses and log data on student courses in a virtual learning environment. Furthermore, it explores whether a successful classification model for predicting student satisfaction with course can be developed based on course log data and compares the results obtained from implemented methods. The research was conducted at the Faculty of Education in Osijek and included analysis of log data and course satisfaction on a sample of third and fourth year students. Multilayer Perceptron (MLP with different activation functions and Radial Basis Function (RBF neural networks as well as classification tree models were developed, trained and tested in order to classify students into one of two categories of course satisfaction. Type I and type II errors, and input variable importance were used for model comparison and classification accuracy. The results indicate that a successful classification model using tested methods can be created. The MLP model provides the highest average classification accuracy and the lowest preference in misclassification of students with a low level of course satisfaction, although a t-test for the difference in proportions showed that the difference in performance between the compared models is not statistically significant. Student involvement in forum discussions is recognized as a valuable predictor of student satisfaction with courses in all observed models.
DEFF Research Database (Denmark)
Bøving, Kristian Billeskov; Simonsen, Jesper
2004-01-01
This article documents how log analysis can inform qualitative studies concerning the usage of web-based information systems (WIS). No prior research has used http log files as data to study collaboration between multiple users in organisational settings. We investigate how to perform http log...... analysis; what http log analysis says about the nature of collaborative WIS use; and how results from http log analysis may support other data collection methods such as surveys, interviews, and observation. The analysis of log files initially lends itself to research designs, which serve to test...... hypotheses using a quantitative methodology. We show that http log analysis can also be valuable in qualitative research such as case studies. The results from http log analysis can be triangulated with other data sources and for example serve as a means of supporting the interpretation of interview data...
Directory of Open Access Journals (Sweden)
Johnston Neil W
2010-06-01
Full Text Available Abstract Background A zero-inflated continuous outcome is characterized by occurrence of "excess" zeros that more than a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data. Moreover, for repeated measures zero-inflated data the clustering structure should also be modeled for an adequate analysis. Methods Diary of Asthma and Viral Infections Study (DAVIS was a one year (2004 cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one year period. Due to sparse data daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew normal. Results Viral infection status was highly significant in both probit and log-skew normal model components respectively. The probability of being symptom free was much lower for the week a child was viral positive relative to the week she/he was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was
Etalle, Sandro; Massacci, Fabio; Yautsiukhin, Artsiom
2007-01-01
While logging events is becoming increasingly common in computing, in communication and in collaborative work, log systems need to satisfy increasingly challenging (if not conflicting) requirements.Despite the growing pervasiveness of log systems, to date there is no high-level framework which
Anti Rohumaa; Christopher G. Hunt; Charles R. Frihart; Pekka Saranpää; Martin Ohlmeyer; Mark Hughes
2014-01-01
Most adhesive studies employing wood veneer as the substrate assume that it is a relatively uniform material if wood species and veneer thickness are constant. In the present study, veneers from rotary cut birch (Betula pendula Roth) were produced from logs harvested in spring, autumn and winter, and soaked at 20Â°C and 70Â°C prior to peeling. Firstly...
Temperature Buffer Test. Final THM modelling
Energy Technology Data Exchange (ETDEWEB)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)
2012-01-15
The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and to model the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degree of process complexity. Two different numerical codes, Code{sub B}right and Abaqus, have been used. The modelling performed by UPC-Cimne using Code{sub B}right, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
Temperature Buffer Test. Final THM modelling
International Nuclear Information System (INIS)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel
2012-01-01
The Temperature Buffer Test (TBT) is a joint project between SKB/ANDRA and supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and to model the thermo-hydro-mechanical behavior of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degree of process complexity. Two different numerical codes, Code B right and Abaqus, have been used. The modelling performed by UPC-Cimne using Code B right, has been divided in three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
A Soil Temperature Model for Closed Canopied Forest Stands
James M. Vose; Wayne T. Swank
1991-01-01
A microcomputer-based soil temperature model was developed to predict temperature at the litter-soil interface and soil temperatures at three depths (0.10 m, 0.20 m, and 1.25 m) under closed forest canopies. Comparisons of predicted and measured soil temperatures indicated good model performance under most conditions. When generalized parameters describing soil...
Modeling the wafer temperature profile in a multiwafer LPCVD furnace
Energy Technology Data Exchange (ETDEWEB)
Badgwell, T.A. [Rice Univ., Houston, TX (United States). Dept. of Chemical Engineering; Trachtenberg, I.; Edgar, T.F. [Univ. of Texas, Austin, TX (United States). Dept. of Chemical Engineering
1994-01-01
A mathematical model has been developed to predict wafer temperatures within a hot-wall multiwafer low pressure chemical vapor deposition (LPCVD) reactor. The model predicts both axial (wafer-to-wafer) and radial (across-wafer) temperature profiles. Model predictions compare favorably with in situ wafer temperature measurements described in an earlier paper. Measured axial and radial temperature nonuniformities are explained in terms of radiative heat-transfer effects. A simulation study demonstrates how changes in the outer tube temperature profile and reactor geometry affect wafer temperatures. Reactor design changes which could improve the wafer temperature profile are discussed.
Extracting operative temperatures from temperatures of physical models with thermal inertia.
O'Connor
2000-10-01
Temperatures of operative temperature models, particularly those of thick-walled models of larger ectotherms, lag behind and are more restricted in range than the operative temperatures they estimate.Algorithms are provided to extract estimates of instantaneous operative temperatures from model temperatures.A simple deconvolution method can be used when wind speeds are constant.An iterative estimation method must be used when wind speed varies during the monitoring period.The iterative method is sensitive to measurement error, and so uses a smoothing filter to limit instabilities. The smoothing also limits the short-term fluctuations in the estimated operative temperature.Iterative estimates of operative temperature suggested time lags of up to 90 min between predicted operative temperatures and model temperatures for desert tortoises (mass=3 kg). Differences this large could affect estimates of time available for foraging.
Directory of Open Access Journals (Sweden)
Yan Tang
2018-01-01
Full Text Available A business process or workflow is an assembly of tasks that accomplishes a business goal. Business process management is the study of the design, configuration/implementation, enactment and monitoring, analysis, and re-design of workflows. The traditional methodology for the re-design and improvement of workflows relies on the well-known sequence of extract, transform, and load (ETL, data/process warehousing, and online analytical processing (OLAP tools. In this paper, we study the ad hoc queryiny of process enactments for (data-centric business processes, bypassing the traditional methodology for more flexibility in querying. We develop an algebraic query language based on “incident patterns” with four operators inspired from Business Process Model and Notation (BPMN representation, allowing the user to formulate ad hoc queries directly over workflow logs. A formal semantics of this query language, a preliminary query evaluation algorithm, and a group of elementary properties of the operators are provided.
Westhoff, M. C.; Savenije, H.G.; Luxemburg, W. M. J.; Stelling, G.S.; van de Giesen, N.C.; Selker, J. S.; Pfister, L.; Uhlenbrook, S.
2007-01-01
Distributed temperature data are used as input and as calibration data for an energy based temperature model of a first order stream in Luxembourg. A DTS (Distributed Temperature Sensing) system with a fiber optic cable of 1500m was used to measure stream water temperature with 1m resolution each 2
A study of electric field components in shallow water and water half-space models in seabed logging
Rostami, Amir; Soleimani, Hassan; Yahya, Noorhana; Nyamasvisva, Tadiwa Elisha; Rauf, Muhammad
2016-11-01
Seabed logging (SBL) is an electromagnetic (EM) method to detect hydrocarbon (HC) laid beneath the seafloor, which is a development of marine controlled source electromagnetic (CSEM) method. CSEM is a method to show resistivity log of geological layers, transmitting ultra-low frequency EM wave. In SBL a net of receivers, placed on the seafloor, detect reflected and refracted EM wave by layers with different resistivity. Contrast of electrical resistivity of layers impacts on amplitude and phase of the EM wave response. The most indispensable concern in SBL is to detect guided wave via high resistive layer under the seafloor that can be an HC reservoir. Guided wave by HC creates a remarkable difference in received signal when HC reservoir does not exist. While the major contribution of received EM wave in large offset, especially in shallow water environment, is airwave, which is refracted by sea surface due to extremely high resistivity of atmosphere, airwave can affect received guided wave, dramatically. Our objective for this work is to compare HC delineation of tangential and normal components of electric field in shallow water area, using finite element method simulation. Will be reported that, in shallow water environment, minor contribution of air wave in normal component of E field (Ey) versus its major contribution in the tangential component (Ex), causes a considerable contrast on HC delineation of Ey for deeply buried reservoirs (more than 3000 m), while Ex is unable to show different contrasts of received data for with and without HC media at the same condition.
Effect of Flux Adjustments on Temperature Variability in Climate Models
International Nuclear Information System (INIS)
Duffy, P.; Bell, J.; Covey, C.; Sloan, L.
1999-01-01
It has been suggested that ''flux adjustments'' in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability
Processing well logging data, for example for verification and calibration of well logs
International Nuclear Information System (INIS)
Suau, J.; Boutemy, Y.
1981-01-01
A method is described of machine processing well logging data derived from borehole exploring devices which investigate earth formations traversed by boreholes. The method can be used for verifying and recalibrating logs, reconstructing missing logs and combining the data to form a statistical model of the traversed earth formations. (U.K.)
Decomposition of log crepant birational morphisms between log terminal surfaces
Fukuda, Shigetaka
1999-01-01
We prove that every log crepant birational morphism between log terminal surfaces is decomposed into log-flopping type divisorial contraction morphisms and log blow-downs. Repeating these two kinds of contractions we reach a minimal log minimal surface from any log minimal surface.
Directory of Open Access Journals (Sweden)
Mitchell Laura E
2011-04-01
Full Text Available Abstract Background Several platforms for the analysis of genome-wide association data are available. However, these platforms focus on the evaluation of the genotype inherited by affected (i.e. case individuals, whereas for some conditions (e.g. birth defects the genotype of the mothers of affected individuals may also contribute to risk. For such conditions, it is critical to evaluate associations with both the maternal and the inherited (i.e. case genotype. When genotype data are available for case-parent triads, a likelihood-based approach using log-linear modeling can be used to assess both the maternal and inherited genotypes. However, available software packages for log-linear analyses are not well suited to the analysis of typical genome-wide association data (e.g. including missing data. Results An integrated platform, Maternal and Inherited Analyses for Genome-wide Association Studies (MI-GWAS for log-linear analyses of maternal and inherited genetic effects in large, genome-wide datasets, is described. MI-GWAS uses SAS and LEM software in combination to appropriately format data, perform the log-linear analyses and summarize the results. This platform was evaluated using existing genome-wide data and was shown to perform accurately and relatively efficiently. Conclusions The MI-GWAS platform provides a valuable tool for the analysis of association of a phenotype or condition with maternal and inherited genotypes using genome-wide data from case-parent triads. The source code for this platform is freely available at http://www.sph.uth.tmc.edu/sbrr/mi-gwas.htm.
Modelling survival of Salmonella Enteritidis during storage of yoghurt at different temperatures.
Savran, Derya; Pérez-Rodríguez, Fernando; Kadir Halkman, A
2018-04-20
The aim of this study was to evaluate the behaviour of Salmonella Enteritidis during the storage of yoghurt at different temperatures (4, 12, 20, and 25 °C), and to develop mathematical models to predict the behaviour of this bacterium as a function of storage temperature. Results indicated that Salmonella was able to survive longer during storage when temperature was low (e.g. 304 h at 4 °C, 60 h at 25 °C). The Geeraerd model with log-decrease and tailing was selected as the most suitable model to describe survival. To evaluate the effect of storage temperature on kinetic parameters such as death rate (k max ) secondary models were developed. The k max was maximum at 25 °C and minimum at 4 °C with k max = 0.28 and 0.039 h -1 , respectively. The residual population (N res ) ranged 0.5 and 1.8 log CFU/g but there was no temperature dependency of this parameter. A probabilistic example was conduced based on the developed model to assess the exposure to Salmonella by consumption of traditional Turkish yoghurt. Copyright © 2018 Elsevier B.V. All rights reserved.
Pulsed neutron generator for logging
International Nuclear Information System (INIS)
Thibideau, F.D.
1977-01-01
A pulsed neutron generator for uranium logging is described. This generator is one component of a prototype uranium logging probe which is being developed by SLA to detect, and assay, uranium by borehole logging. The logging method is based on the measurement of epithermal neutrons resulting from the prompt fissioning of uranium from a pulsed source of 17.6 MeV neutrons. An objective of the prototype probe was that its diameter not exceed 2.75 inches, which would allow its use in conventional rotary drill holes of 4.75-inch diameter. This restriction limited the generator to a maximum 2.375-inch diameter. The performance requirements for the neutron generator specified that it operate with a nominal output of 5 x 10 6 neutrons/pulse at up to 100 pulses/second for a one-hour period. The development of a neutron generator meeting the preliminary design goals was completed and two prototype models were delivered to SLA. These two generators have been used by SLA to log a number of boreholes in field evaluation of the probe. The results of the field evaluations have led to the recommendation of several changes to improve the probe's operation. Some of these changes will require additional development effort on the neutron generator. It is expected that this work will be performed during 1977. The design and operation of the first prototype neutron generators is described
CERN. Geneva; MACMAHON, Joseph
2015-01-01
Are you tired of using grep, vi and emacs to read your logs? Do you feel like you’re missing the big picture? Does the word "statistics" put a smile on your face? Then it’s time to give power to the logs!
DEFF Research Database (Denmark)
Karthikeyan, Matheswaran; Blemmer, Morten; Mortensen, Julie Flor
2011-01-01
Surface water–groundwater interactions at the stream interface influences, and at times controls the stream temperature, a critical water property driving biogeochemical processes. This study investigates the effects of these interactions on temperature of Stream Elverdamsåen in Denmark using...... the Distributed Temperature Sensing (DTS) system and instream temperature modelling. Locations of surface water–groundwater interactions were identified from the temperature data collected over a 2-km stream reach using a DTS system with 1-m spatial and 5-min temporal resolution. The stream under consideration...... surface water–groundwater interactions on heterogeneous behaviour of stream temperature....
Balin, Riccardo; Spalart, Philippe R.; Jansen, Kenneth E.
2017-11-01
Hybrid RANS/LES modeling approaches used in the context of wall-modeled LES (WMLES) of channel flows and boundary layers often suffer from a mismatch in the RANS and LES log-layer intercepts of the mean velocity profile. In the vicinity of the interface between the RANS and LES regions, the mean velocity gradient is too steep causing a departure from the log-law, an over-prediction of the velocity in the outer layer and an under-prediction of the skin-friction. This steep gradient is attributed to inadequate modeled Reynolds stresses in the upper portion of the RANS layer and at the interface. Channel flow computations were carried out with the IDDES approach of Shur et al. in WMLES mode based on the Spalart-Allmaras RANS model. This talk investigates the robustness of this approach for unstructured grids and explores changes required for grids where insufficient elevation of the Reynolds stresses is observed. Awards of computer time were provided by Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and Early Science programs. Resources of the Argonne Leadership Computing Facility, a DOE Office of Science User Facility, were used.
A temperature dependent slip factor based thermal model for friction ...
Indian Academy of Sciences (India)
This paper proposes a new slip factor based three-dimensional thermal model to predict the temperature distribution during friction stir welding of 304L stainless steel plates. The proposed model employs temperature and radius dependent heat source to study the thermal cycle, temperature distribution, power required, the ...
A temperature dependent slip factor based thermal model for friction
Indian Academy of Sciences (India)
This paper proposes a new slip factor based three-dimensional thermal model to predict the temperature distribution during friction stir welding of 304L stainless steel plates. The proposed model employs temperature and radius dependent heat source to study the thermal cycle, temperature distribution, power required, the ...
Modeling shoot-tip temperature in the greenhouse environment
International Nuclear Information System (INIS)
Faust, J.E.; Heins, R.D.
1998-01-01
An energy-balance model is described that predicts vinca (Catharanthus roseus L.) shoot-tip temperature using four environmental measurements: solar radiation and dry bulb, wet bulb, and glazing material temperature. The time and magnitude of the differences between shoot-tip and air temperature were determined in greenhouses maintained at air temperatures of 15, 20, 25, 30, or 35 °C. At night, shoot-tip temperature was always below air temperature. Shoot-tip temperature decreased from 0.5 to 5 °C below air temperature as greenhouse glass temperature decreased from 2 to 15 °C below air temperature. During the photoperiod under low vapor-pressure deficit (VPD) and low air temperature, shoot-tip temperature increased ≈4 °C as solar radiation increased from 0 to 600 W·m -2 . Under high VPD and high air temperature, shoot-tip temperature initially decreased 1 to 2 °C at sunrise, then increased later in the morning as solar radiation increased. The model predicted shoot-tip temperatures within ±1 °C of 81% of the observed 1-hour average shoot-tip temperatures. The model was used to simulate shoot-tip temperatures under different VPD, solar radiation, and air temperatures. Since the rate of leaf and flower development are influenced by the temperature of the meristematic tissues, a model of shoot-tip temperature will be a valuable tool to predict plant development in greenhouses and to control the greenhouse environment based on a plant temperature setpoint. (author)
Modeling sea-surface temperature and its variability
Sarachik, E. S.
1985-01-01
A brief review is presented of the temporal scales of sea surface temperature variability. Progress in modeling sea surface temperature, and remaining obstacles to the understanding of the variability is discussed.
Temperature bounds in a model of laminar flames
International Nuclear Information System (INIS)
Kirane, M.; Badraoui, S.
1994-06-01
We consider reaction-diffusion equations coupling temperature and mass fraction in one-step-reaction model of combustion in R N . Uniform temperature bounds are derived when the Lewis number is less than one. This result completes the case of Lewis number greater than one studied by J.D. Avrin and M. Kirane ''Temperature growth and temperature bounds in special cases of combustion models'' (to appear in Applicable Analysis). (author). 5 refs
Digital mineral logging system
International Nuclear Information System (INIS)
West, J.B.
1980-01-01
A digital mineral logging system acquires data from a mineral logging tool passing through a borehole and transmits the data uphole to an electronic digital signal processor. A predetermined combination of sensors, including a deviometer, is located in a logging tool for the acquisition of the desired data as the logging tool is raised from the borehole. Sensor data in analog format is converted in the logging tool to a digital format and periodically batch transmitted to the surface at a predetermined sampling rate. An identification code is provided for each mineral logging tool, and the code is transmitted to the surface along with the sensor data. The self-identifying tool code is transmitted to the digital signal processor to identify the code against a stored list of the range of numbers assigned to that type of tool. The data is transmitted up the d-c power lines of the tool by a frequency shift key transmission technique. At the surface, a frequency shift key demodulation unit transmits the decoupled data to an asynchronous receiver interfaced to the electronic digital signal processor. During a recording phase, the signals from the logging tool are read by the electronic digital signal processor and stored for later processing. During a calculating phase, the stored data is processed by the digital signal processor and the results are outputted to a printer or plotter, or both
DEFF Research Database (Denmark)
Karthikeyan, Matheswaran; Blemmer, Morten; Mortensen, Julie Flor
2011-01-01
Surface water–groundwater interactions at the stream interface influences, and at times controls the stream temperature, a critical water property driving biogeochemical processes. This study investigates the effects of these interactions on temperature of Stream Elverdamsåen in Denmark using...... the Distributed Temperature Sensing (DTS) system and instream temperature modelling. Locations of surface water–groundwater interactions were identified from the temperature data collected over a 2-km stream reach using a DTS system with 1-m spatial and 5-min temporal resolution. The stream under consideration...... exhibits three distinct thermal regimes within a 2 km reach length due to two major interactions. An energy balance model is used to simulate the instream temperature and to quantify the effect of these interactions on the stream temperature. This research demonstrates the effect of reach level small scale...
Modeling of the temperature field of the casting ladle lining
Zabolotsky, A. V.
2011-03-01
We propose a method for calculating the temperature field of the casting ladle lining by a modified relaxation method. Given such initial data as the metal temperature in the ladle, the ambient temperature, and the lining structure, this method permits calculating the stationary temperature fields both inside the lining and on the surface of the ladle jacket. The model was tested by comparing experimentally measured temperature values on the surface of the ladle jacket with calculated temperatures. A satisfactory agreement between calculated and experimental temperature values of the ladle surface has been obtained.
Yang, Qhi-xiao; Peng, Si-long; Shan, Peng; Bi, Yi-ming; Tang, Liang; Xie, Qiong
2015-05-01
In the present paper, a new model-based method was proposed for temperature prediction and correction. First, a temperature prediction model was obtained from training samples; then, the temperature of test samples were predicted; and finally, the correction model was used to reduce the nonlinear effects of spectra from temperature variations. Two experiments were used to verify the proposed method, including a water-ethanol mixture experiment and a ternary mixture experiment. The results show that, compared with classic method such as continuous piecewise direct standardization (CPDS), our method is efficient for temperature correction. Furthermore, the temperatures of test samples are not necessary in the proposed method, making it easier to use in real applications.
Ambient temperature modelling with soft computing techniques
Energy Technology Data Exchange (ETDEWEB)
Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); De Felice, Matteo [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); University of Rome ' ' Roma 3' ' , Dipartimento di Informatica e Automazione (DIA), Via della Vasca Navale 79, 00146 Rome (Italy)
2010-07-15
This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
Temperature-dependent rate models of vascular cambium cell mortality
Matthew B. Dickinson; Edward A. Johnson
2004-01-01
We use two rate-process models to describe cell mortality at elevated temperatures as a means of understanding vascular cambium cell death during surface fires. In the models, cell death is caused by irreversible damage to cellular molecules that occurs at rates that increase exponentially with temperature. The models differ in whether cells show cumulative effects of...
Fraschetti, F.; Pohl, M.
2017-10-01
We develop a model of the steady-state spectrum of the Crab nebula encompassing both the radio/soft X-ray and the GeV/multi-TeV observations. By solving the transport equation for TeV electrons injected at the wind termination shock as a log-parabola momentum distribution and evolved via energy losses, we determine analytically the resulting photon differential energy spectrum. We find an impressive agreement with the observations in the synchrotron region. The predicted synchrotron self-Compton accommodates the previously unsolved origin of the broad 200 GeV peak that matches the Fermi/LAT data beyond 1 GeV with the MAGIC data. A natural interpretation of the deviation from power-law of the photon spectrum customarily fit with empirical broken power-laws is provided. This model can be applied to the radio-to- multi-TeV spectra of a variety of astrophysical outflows, including pulsar wind nebulae and supernova remnants. We also show that MeV-range energetic particle distribution at interplanetary shocks typically fit with broken-power laws or Band function can be accurately reproduced by log-parabolas.
National Oceanic and Atmospheric Administration, Department of Commerce — The Mariners Weather Log (MWL) is a publication containing articles, news and information about marine weather events and phenomena, worldwide environmental impact...
A Temperature-Dependent Hysteresis Model for Relaxor Ferroelectric Compounds
National Research Council Canada - National Science Library
Raye, Julie K; Smith, Ralph C
2004-01-01
This paper summarizes the development of a homogenized free energy model which characterizes the temperature-dependent hysteresis and constitutive nonlinearities inherent to relaxor ferroelectric materials...
Madison, Matthew J.; Bradshaw, Laine P.
2015-01-01
Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other…
Dynamic Model of the High Temperature Proton Exchange Membrane Fuel Cell Stack Temperature
DEFF Research Database (Denmark)
Andreasen, Søren Juhl; Kær, Søren Knudsen
2009-01-01
The present work involves the development of a model for predicting the dynamic temperature of a high temperature proton exchange membrane (HTPEM) fuel cell stack. The model is developed to test different thermal control strategies before implementing them in the actual system. The test system....... The temperature is predicted in these three parts, where they also are measured. The heat balance of the system involves a fuel cell model to describe the heat added by the fuel cells when a current is drawn. Furthermore the model also predicts the temperatures when heating the stack with external heating...... elements for start-up, heat conduction through stack insulation, cathode air convection, and heating of the inlet gases in the manifold. Various measurements are presented to validate the model predictions of the stack temperatures....
A model of evaluating the pseudogap temperature for high ...
Indian Academy of Sciences (India)
We have presented a model of evaluating the pseudogap temperature for high temperature superconductors using paraconductivity approach. The theoretical analysis is based on the crossing point technique of the conductivity expressions. The pseudogap temperature T ∗ is found to depend on dimension and is ...
Hybrid Pre-Log and Post-Log Image Reconstruction for Computed Tomography.
Wang, Guobao; Zhou, Jian; Yu, Zhou; Wang, Wenli; Qi, Jinyi
2017-12-01
Tomographic image reconstruction for low-dose computed tomography (CT) is increasingly challenging as dose continues to reduce in clinical applications. Pre-log domain methods and post-log domain methods have been proposed individually and each method has its own disadvantage. While having the potential to improve image quality for low-dose data by using an accurate imaging model, pre-log domain methods suffer slow convergence in practice due to the nonlinear transformation from the image to measurements. In contrast, post-log domain methods have fast convergence speed but the resulting image quality is suboptimal for low dose CT data because the log transformation is extremely unreliable for low-count measurements and undefined for negative values. This paper proposes a hybrid method that integrates the pre-log model and post-log model together to overcome the disadvantages of individual pre-log and post-log methods. We divide a set of CT data into high-count and low-count regions. The post-log weighted least squares model is used for measurements in the high-count region and the pre-log shifted Poisson model for measurements in the low-count region. The hybrid likelihood function can be optimized using an existing iterative algorithm. Computer simulations and phantom experiments show that the proposed hybrid method can achieve faster early convergence than the pre-log shifted Poisson likelihood method and better signal-to-noise performance than the post-log weighted least squares method.
Integrated flow and temperature modeling at the catchment scale
DEFF Research Database (Denmark)
Loinaz, Maria Christina; Davidsen, Hasse Kampp; Butts, Michael
2013-01-01
Changes in natural stream temperature levels can be detrimental to the health of aquatic ecosystems. Water use and land management directly affect the distribution of diffuse heat sources and thermal loads to streams, while riparian vegetation and geomorphology play a critical role in how thermal......–groundwater dynamics affect stream temperature. A coupled surface water–groundwater and temperature model has therefore been developed to quantify the impacts of land management and water use on stream flow and temperatures. The model is applied to the simulation of stream temperature levels in a spring-fed stream...... loads are buffered. In many areas, groundwater flow is a significant contribution to river flow, particularly during low flows and therefore has a strong influence on stream temperature levels and dynamics. However, previous stream temperature models do not properly simulate how surface water...
Water temperature modeling in the Garonne River (France
Directory of Open Access Journals (Sweden)
Larnier K.
2010-10-01
Full Text Available Stream water temperature is one of the most important parameters for water quality and ecosystem studies. Temperature can influence many chemical and biological processes and therefore impacts on the living conditions and distribution of aquatic ecosystems. Simplified models such as statistical models can be very useful for practitioners and water resource management. The present study assessed two statistical models – an equilibrium-based model and stochastic autoregressive model with exogenous inputs – in modeling daily mean water temperatures in the Garonne River from 1988 to 2005. The equilibrium temperature-based model is an approach where net heat flux at the water surface is expressed as a simpler form than in traditional deterministic models. The stochastic autoregressive model with exogenous inputs consists of decomposing the water temperature time series into a seasonal component and a short-term component (residual component. The seasonal component was modeled by Fourier series and residuals by a second-order autoregressive process (Markov chain with use of short-term air temperatures as exogenous input. The models were calibrated using data of the first half of the period 1988–2005 and validated on the second half. Calibration of the models was done using temperatures above 20 °C only to ensure better prediction of high temperatures that are currently at stake for the aquatic conditions of the Garonne River, and particularly for freshwater migrating fishes such as Atlantic Salmon (Salmo salar L.. The results obtained for both approaches indicated that both models performed well with an average root mean square error for observed temperatures above 20 °C that varied on an annual basis from 0.55 °C to 1.72 °C on validation, and good predictions of temporal occurrences and durations of three temperature threshold crossings linked to the conditions of migration and survival of Atlantic Salmon.
A concise wall temperature model for DI Diesel engines
Energy Technology Data Exchange (ETDEWEB)
Torregrosa, A.; Olmeda, P.; Degraeuwe, B. [CMT-Motores Termicos, Universidad Politecnica de Valencia (Spain); Reyes, M. [Centro de Mecanica de Fluidos y Aplicaciones, Universidad Simon Bolivar (Venezuela)
2006-08-15
A concise resistor model for wall temperature prediction in diesel engines with piston cooling is presented here. The model uses the instantaneous in-cylinder pressure and some usually measured operational parameters to predict the temperature of the structural elements of the engine. The resistor model was adjusted by means of temperature measurements in the cylinder head, the liner and the piston. For each model parameter, an expression as a function of the engine geometry, operational parameters and material properties was derived to make the model applicable to other similar engines. The model predicts well the cylinder head, liner and piston temperature and is sensitive to variations of operational parameters such as the start of injection, coolant and oil temperature and engine speed and load. (author)
International Nuclear Information System (INIS)
Czubek, J.A.; Lenda, A.
1979-01-01
The minimum dimensions have been calculated assuring 91, 96 and 98 % of the probe response in respect to the infinite medium. The models are of cylindrical form, the probe (source-to-detector distance equal to 60 or 90 cm) being placed on the model axis, symmetrically with respect to the two end-faces. All the models are ''embedded'' in various media, such as: air, sand of 40% porosity and completely saturated with water, sand of 30 % porosity and of moisture content equal to 10 %, and water. The models are of three types of material: sandstone, limestone and dolomite, with various porosities, ranging from 0 to 100 %. The probe response is due to gamma rays arising from the radiativecapture of thermal neutrons. The calculations were carried out for the highest energy line of gamma rays arising in given litology. Gamma-ray flux from the neutron radiative capture has been calculated versus rock porosity and model dimensions and radiation migration lengths determined for given litologies. The minimum dimensions of cylindrical models are given as functions of: porosity, probe length (source-to-detector distance) lithology of model and type of medium surrounding our model. (author)
Uranium logging in earth formations
International Nuclear Information System (INIS)
Givens, W.W.
1979-01-01
A technique is provided for assaying the formations surrounding a borehole for uranium. A borehole logging tool cyclically irradiates the formations with neutrons and responds to neutron fluxes produced during the period of time that prompt neutrons are being produced by the neutron fission of uranium in the formations. A borehole calibration tool employs a steady-state (continuous output) neutron source, firstly, to produce a response to neutron fluxes in models having known concentrations of uranium and, secondly, to to produce a response to neutron fluxes in the formations surrounding the borehole. The neutron flux responses of the borehole calibration tool in both the model and the formations surrounding the borehole are utilized to correct the neutron flux response of the borehole logging tool for the effects of epithermal/thermal neutron moderation, scattering, and absorption within the borehole itself
MODELS OF HOURLY DRY BULB TEMPERATURE AND ...
African Journals Online (AJOL)
Hourly meteorological data of both dry bulb temperature and relative humidity for 18 locations in Nigeria for the period 1995 to 2009 were analysed to obtain the mean monthly average and monthly hourly average of each of the two meteorological variables for each month for each location. The difference between the ...
Drinking Water Temperature Modelling in Domestic Systems
Moerman, A.; Blokker, M.; Vreeburg, J.; Van der Hoek, J.P.
2014-01-01
Domestic water supply systems are the final stage of the transport process to deliver potable water to the customers’ tap. Under the influence of temperature, residence time and pipe materials the drinking water quality can change while the water passes the domestic drinking water system. According
Directory of Open Access Journals (Sweden)
Bhanupriya Dash
2017-09-01
Full Text Available Background: Replenishment policy for entropic order quantity model with two component demand and partial backlogging under inflation is an important subject in the stock management. Methods: In this paper an inventory model for non-instantaneous deteriorating items with stock dependant consumption rate and partial back logged in addition the effect of inflection and time value of money on replacement policy with zero lead time consider was developed. Profit maximization model is formulated by considering the effects of partial backlogging under inflation with cash discounts. Further numerical example presented to evaluate the relative performance between the entropic order quantity and EOQ models separately. Numerical example is present to demonstrate the developed model and to illustrate the procedure. Lingo 13.0 version software used to derive optimal order quantity and total cost of inventory. Finally sensitivity analysis of the optimal solution with respect to different parameters of the system carried out. Results and conclusions: The obtained inventory model is very useful in retail business. This model can extend to total backorder.
3D Temperature Distribution Model Based on Thermal Infrared Image
Directory of Open Access Journals (Sweden)
Tong Jia
2017-01-01
Full Text Available This paper aims to study the construction of 3D temperature distribution reconstruction system based on binocular vision technology. Initially, a traditional calibration method cannot be directly used, because the thermal infrared camera is only sensitive to temperature. Therefore, the thermal infrared camera is calibrated separately. Belief propagation algorithm is also investigated and its smooth model is improved in terms of stereo matching to optimize mismatching rate. Finally, the 3D temperature distribution model is built based on the matching of 3D point cloud and 2D thermal infrared information. Experimental results show that the method can accurately construct the 3D temperature distribution model and has strong robustness.
New Temperature-based Models for Predicting Global Solar Radiation
International Nuclear Information System (INIS)
Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.
2016-01-01
Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20-years measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with other three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Mostly common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
International Nuclear Information System (INIS)
Sato, Hiroaki
2014-01-01
In the Niigata area, which suffered from several large earthquakes such as the 2007 Chuetsu-oki earthquake, geographical observation that elucidates the S-wave structure of the underground is advancing. Modeling of S-wave velocity structure in the subsurface is underway to enable simulation of long-period ground motion. The one-dimensional velocity model by inverse analysis of micro-tremors is sufficiently appropriate for long-period site response but not for short-period, which is important for ground motion evaluation at NPP sites. The high-frequency site responses may be controlled by the strength of heterogeneity of underground structure because the heterogeneity of the 1D model plays an important role in estimating high-frequency site responses and is strongly related to the damping factor of the 1D layered velocity model. (author)
A physically based analytical spatial air temperature and humidity model
Yang Yang; Theodore A. Endreny; David J. Nowak
2013-01-01
Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...
3D subsurface temperature model of Europe for geothermal exploration
Limberger, J.; Wees, J.D. van
2014-01-01
For the assessment of geothermal resources in Europe we constructed a digital 3D temperature model of the European crust and sedimentary basins, incorporating publicly available temperature data. Using European crustal thickness models and indirect parameters such as surface heat flow measurements,
International Nuclear Information System (INIS)
Sundberg, Jan; Back, Paer-Erik; Laendell, Maerta; Sundberg, Anders
2009-05-01
This report presents modelling of temperature and temperature gradients in boreholes in Laxemar and Forsmark and fitting to measured temperature data. The modelling is performed with an analytical expression including thermal conductivity, thermal diffusivity, heat flow, internal heat generation and climate events in the past. As a result of the fitting procedure it is also possible to evaluate local heat flow values for the two sites. However, since there is no independent evaluation of the heat flow, uncertainties in for example thermal conductivity, diffusivity and the palaeoclimate temperature curve are transferred into uncertainties in the heat flow. Both for Forsmark and Laxemar, reasonably good fits were achieved between models and data on borehole temperatures. However, none of the general models achieved a fit within the 95% confidence intervals of the measurements. This was achieved in some cases for the additional optimised models. Several of the model parameters are uncertain. A good model fit does not automatically imply that 'correct' values have been used for these parameters. Similar model fits can be expected with different sets of parameter values. The palaeoclimatically corrected surface mean heat flow at Forsmark and Laxemar is suggested to be 61 and 56 mW/m 2 respectively. If all uncertainties are combined, including data uncertainties, the total uncertainty in the heat flow determination is judged to be within +12% to -14% for both sites. The corrections for palaeoclimate are quite large and verify the need of site-specific climate descriptions. Estimations of the current ground surface temperature have been made by extrapolations from measured temperature logging. The mean extrapolated ground surface temperature in Forsmark and Laxemar is estimated to 6.5 deg and 7.3 deg C respectively. This is approximately 1.7 deg C higher for Forsmark, and 1.6 deg C higher for Laxemar compared to data in the report SKB-TR-06-23. Comparison with air
Perry, Steven
2009-01-01
Log4j has been around for a while now, and it seems like so many applications use it. I've used it in my applications for years now, and I'll bet you have too. But every time I need to do something with log4j I've never done before I find myself searching for examples of how to do whatever that is, and I don't usually have much luck. I believe the reason for this is that there is a not a great deal of useful information about log4j, either in print or on the Internet. The information is too simple to be of real-world use, too complicated to be distilled quickly (which is what most developers
Modeling the inactivation of ascaris eggs as a function of ammonia concentration and temperature.
Fidjeland, J; Nordin, A; Pecson, B M; Nelson, K L; Vinnerås, B
2015-10-15
Ammonia sanitization is a promising technology for sanitizing human excreta intended for use as a fertilizer in agriculture. Ascaris eggs are the most persistent pathogens regarding ammonia inactivation and are commonly present in fecal sludge in low- and middle-income countries. In this study, a model for predicting ammonia inactivation of ascaris eggs was developed. Data from four previous studies were compiled and analyzed statistically, and a mathematical model for the treatment time required for inactivation was created. The inactivation rate increased with NH3 activity to the power of 0.7. The required treatment time was found to decrease 10-fold for each 16 °C temperature increase. Dry matter (DM) content and pH had no direct effect on inactivation, but had an indirect effect due to their impact on NH3 activity, which was estimated using the Pitzer approach. An additional model giving an approximation of Pitzer NH3 activity but based on the Emerson approach, DM content and total ammonia (NHTot) was also developed. The treatment time required for different log10 reductions of ascaris egg viability can thus easily be estimated by the model as a function of NH3 activity and temperature. The impact on treatment time by different treatment options can then be theoretically evaluated, promoting improvements of the treatment e.g. by adding urea or alkaline agents, or increasing the temperature by solar heating. Copyright © 2015 Elsevier Ltd. All rights reserved.
Initialization of a mesoscale model with satellite derived temperature profiles
Kalb, Michael W.
1986-01-01
The abilities of rawinsonde data and Tiros-N satellite derived temperature profile data to depict mesoscale precipitation accumulation are evaluated. Four mesoscale simulations using combinations of temperature, low-level wind, and low-level wind initialization were performed with the limited area mesoscale prediction system (LAMPS) model. Comparisons of the simulations with operational LFM forecast accumulations reveal that the LAMPS model simulations provide a better depiction of the observed precipitation accumulation than the LFM forecasts, and the satellite temperature profiles produce better mesoscale precipitation accumulation forecasts than the rawinsonde temperature data.
Energy based model for temperature dependent behavior of ferromagnetic materials
International Nuclear Information System (INIS)
Sah, Sanjay; Atulasimha, Jayasimha
2017-01-01
An energy based model for temperature dependent anhysteretic magnetization curves of ferromagnetic materials is proposed and benchmarked against experimental data. This is based on the calculation of macroscopic magnetic properties by performing an energy weighted average over all possible orientations of the magnetization vector. Most prior approaches that employ this method are unable to independently account for the effect of both inhomogeneity and temperature in performing the averaging necessary to model experimental data. Here we propose a way to account for both effects simultaneously and benchmark the model against experimental data from ~5 K to ~300 K for two different materials in both annealed (fewer inhomogeneities) and deformed (more inhomogeneities) samples. This demonstrates that this framework is well suited to simulate temperature dependent experimental magnetic behavior. - Highlights: • Energy based model for temperature dependent ferromagnetic behavior. • Simultaneously accounts for effect of temperature and inhomogeneities. • Benchmarked against experimental data from 5 K to 300 K.
Bergshoeff, Eric A.; Hohm, Olaf; Rosseel, Jan; Townsend, Paul K.
2011-01-01
The physical modes of a recently proposed D-dimensional "critical gravity'', linearized about its anti-de Sitter vacuum, are investigated. All "log mode'' solutions, which we categorize as "spin-2'' or "Proca'', arise as limits of the massive spin-2 modes of the noncritical theory. The linearized
International Nuclear Information System (INIS)
Allen, L.S.
1988-01-01
A radioactive borehole logging tool employs an epithermal neutron detector having a neutron counter surrounded by an inner thermal neutron filter and an outer thermal neutron filter. Located between the inner and outer filters is a neutron moderating material for extending the lifetime of epithermal neutrons to enhance the counting rate of such epithermal neutrons by the neutron counter
National Aeronautics and Space Administration, Washington, DC.
The major events of the first manned moon landing mission, Apollo 11, are presented in chronological order from launch time until arrival of the astronauts aboard the U.S.S. Hornet. The log is descriptive, non-technical, and includes numerous color photographs of the astronauts on the moon. (PR)
Yustin Kamah, Muhammad; Armando, Adilla; Larasati Rahmani, Dinda; Paramitha, Shabrina
2017-12-01
Geophysical methods such as gravity and magnetotelluric methods commonly used in conventional and unconventional energy exploration, notably for exploring geothermal prospect. They used to identify the subsurface geology structures which is estimated as a path of fluid flow. This study was conducted in Kamojang Geothermal Field with the aim of highlighting the volcanic lineament in West Java, precisely in Guntur-Papandayan chain where there are three geothermal systems. Kendang Fault has predominant direction NE-SW, identified by magnetotelluric techniques and gravity data processing techniques. Gravity techniques such as spectral analysis, derivative solutions, and Euler deconvolution indicate the type and geometry of anomaly. Magnetotelluric techniques such as inverse modeling and polar diagram are required to know subsurface resistivity charactersitics and major orientation. Furthermore, the result from those methods will be compared to geology information and some section of well data, which is sufficiently suitable. This research is very useful to trace out another potential development area.
Modelling of a multi-temperature plasma composition
International Nuclear Information System (INIS)
Liani, B.; Benallal, R.; Bentalha, Z.
2005-01-01
Knowledge of plasma composition is very important for various plasma applications and prediction of plasma properties. The authors use the Saha equation and Debye length equation to calculate the non-local thermodynamic-equilibrium plasma composition. It has been shown that the model to 2T with T representing the temperature (electron temperature and heavy-particle temperature) described by Chen and Han [J. Phys. D 32(1999)1711] can be applied for a mixture of gases, where each atomic species has its own temperature, but the model to 4T is more general because it can be applicable to temperatures distant enough of the heavy particles. This can occur in a plasma composed of big- or macro-molecules. The electron temperature T e varies in the range 8000∼20000 K at atmospheric pressure. (authors)
A physically based model of global freshwater surface temperature
van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.
2012-09-01
Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through their tolerance to parasites and diseases. Models used to predict surface water temperature range between physically based deterministic models and statistical approaches. Here we present the initial results of a physically based deterministic model of global freshwater surface temperature. The model adds a surface water energy balance to river discharge modeled by the global hydrological model PCR-GLOBWB. In addition to advection of energy from direct precipitation, runoff, and lateral exchange along the drainage network, energy is exchanged between the water body and the atmosphere by shortwave and longwave radiation and sensible and latent heat fluxes. Also included are ice formation and its effect on heat storage and river hydraulics. We use the coupled surface water and energy balance model to simulate global freshwater surface temperature at daily time steps with a spatial resolution of 0.5° on a regular grid for the period 1976-2000. We opt to parameterize the model with globally available data and apply it without calibration in order to preserve its physical basis with the outlook of evaluating the effects of atmospheric warming on freshwater surface temperature. We validate our simulation results with daily temperature data from rivers and lakes (U.S. Geological Survey (USGS), limited to the USA) and compare mean monthly temperatures with those recorded in the Global Environment Monitoring System (GEMS) data set. Results show that the model is able to capture the mean monthly surface temperature for the majority of the GEMS stations, while the interannual variability as derived from the USGS and NOAA data was captured reasonably well. Results are poorest for
Dynamic modeling of temperature change in outdoor operated tubular photobioreactors.
Androga, Dominic Deo; Uyar, Basar; Koku, Harun; Eroglu, Inci
2017-07-01
In this study, a one-dimensional transient model was developed to analyze the temperature variation of tubular photobioreactors operated outdoors and the validity of the model was tested by comparing the predictions of the model with the experimental data. The model included the effects of convection and radiative heat exchange on the reactor temperature throughout the day. The temperatures in the reactors increased with increasing solar radiation and air temperatures, and the predicted reactor temperatures corresponded well to the measured experimental values. The heat transferred to the reactor was mainly through radiation: the radiative heat absorbed by the reactor medium, ground radiation, air radiation, and solar (direct and diffuse) radiation, while heat loss was mainly through the heat transfer to the cooling water and forced convection. The amount of heat transferred by reflected radiation and metabolic activities of the bacteria and pump work was negligible. Counter-current cooling was more effective in controlling reactor temperature than co-current cooling. The model developed identifies major heat transfer mechanisms in outdoor operated tubular photobioreactors, and accurately predicts temperature changes in these systems. This is useful in determining cooling duty under transient conditions and scaling up photobioreactors. The photobioreactor design and the thermal modeling were carried out and experimental results obtained for the case study of photofermentative hydrogen production by Rhodobacter capsulatus, but the approach is applicable to photobiological systems that are to be operated under outdoor conditions with significant cooling demands.
Energy Technology Data Exchange (ETDEWEB)
Flores Armenta, Magaly; Jaimes Maldonado, Guillermo [Gerencia de Proyectos Geotermoelectricos, Comision Federal de Electricidad, Morelia, Michoacan (Mexico)
1999-08-01
In this article are exposed the results of an electronic tool to measure pressure-temperature and spinner profiles in the geothermal wells of Mexico, utilized in order to identify unobservable phenomena with traditional Kuster type pressure and temperature logs. Some examples of the applications are the identifications of production zones, interaction from between two or more zones of contribution under several conditions of operation, casing damages and apparition of sink flow intervals into the formation in producer wells. It is also presented the quantitative method utilized to calculate the masic contribution of the intervals of interest. [Spanish] En este articulo se exponen los resultados obtenidos mediante el uso de una sonda electronica para la medicion de presion-temperatura y flujo en los pozos geotermicos de Mexico, utilizada para identificar fenomenos que no son observables con las mediciones tradicionales tipo Kuster de presion y temperatura. Se ejemplifican algunas de las aplicaciones hechas, tales como la identificacion de zonas de produccion, forma de interaccion entre dos o mas zonas de aporte bajo diferentes condiciones de operacion, roturas en tuberias y aparicion de zonas ladronas en pozos. Se presenta brevemente el metodo cuantitativo utilizado para calcular el aporte masico de las intervalos de interes.
Modelling fracture of aged graphite bricks under radiation and temperature
Directory of Open Access Journals (Sweden)
Atheer Hashim
2017-05-01
Full Text Available The graphite bricks of the UK carbon dioxide gas cooled nuclear reactors are subjected to neutron irradiation and radiolytic oxidation during operation which will affect thermal and mechanical material properties and may lead to structural failure. In this paper, an empirical equation is obtained and used to represent the reduction in the thermal conductivity as a result of temperature and neutron dose. A 2D finite element thermal analysis was carried out using Abaqus to obtain temperature distribution across the graphite brick. Although thermal conductivity could be reduced by up to 75% under certain conditions of dose and temperature, analysis has shown that it has no significant effect on the temperature distribution. It was found that the temperature distribution within the graphite brick is non-radial, different from the steady state temperature distribution used in the previous studies [1,2]. To investigate the significance of this non-radial temperature distribution on the failure of graphite bricks, a subsequent mechanical analysis was also carried out with the nodal temperature information obtained from the thermal analysis. To predict the formation of cracks within the brick and the subsequent propagation, a linear traction–separation cohesive model in conjunction with the extended finite element method (XFEM is used. Compared to the analysis with steady state radial temperature distribution, the crack initiation time for the model with non-radial temperature distribution is delayed by almost one year in service, and the maximum crack length is also shorter by around 20%.
Temperature sensitivity of a numerical pollen forecast model
Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone
2016-04-01
Allergic rhinitis has become a global health problem especially affecting children and adolescence. Timely and reliable warning before an increase of the atmospheric pollen concentration means a substantial support for physicians and allergy suffers. Recently developed numerical pollen forecast models have become means to support the pollen forecast service, which however still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which is identical with the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the entry of flowering and during flowering. Phenological models are sensitive to a bias of the temperature. A mean bias of -1°C of the input temperature can shift the entry date of a phenological phase for about a week into the future. A bias of such an order of magnitude is still possible in case of numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air / surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.
A model of evaluating the pseudogap temperature for high ...
Indian Academy of Sciences (India)
DOI: 10.1007/s12043-015-1088-3; ePublication: 30 September 2015. Abstract. We have presented a model of evaluating the pseudogap temperature for high- temperature superconductors using paraconductivity approach. The theoretical analysis is based on the crossing point technique of the conductivity expressions.
Modelling atmospheric temperature rise due to pollutants and its ...
African Journals Online (AJOL)
Using a mathematical model we show that temperature increases (warming) as the Hartman number due to pollutant increases. Thus, temperature and pollutants contribute to global warming. We also discuss the implications of the result on agriculture and forestry. Journal of the Nigerian Association of Mathematical ...
Relativistic finite-temperature Thomas-Fermi model
Faussurier, Gérald
2017-11-01
We investigate the relativistic finite-temperature Thomas-Fermi model, which has been proposed recently in an astrophysical context. Assuming a constant distribution of protons inside the nucleus of finite size avoids severe divergence of the electron density with respect to a point-like nucleus. A formula for the nuclear radius is chosen to treat any element. The relativistic finite-temperature Thomas-Fermi model matches the two asymptotic regimes, i.e., the non-relativistic and the ultra-relativistic finite-temperature Thomas-Fermi models. The equation of state is considered in detail. For each version of the finite-temperature Thomas-Fermi model, the pressure, the kinetic energy, and the entropy are calculated. The internal energy and free energy are also considered. The thermodynamic consistency of the three models is considered by working from the free energy. The virial question is also studied in the three cases as well as the relationship with the density functional theory. The relativistic finite-temperature Thomas-Fermi model is far more involved than the non-relativistic and ultra-relativistic finite-temperature Thomas-Fermi models that are very close to each other from a mathematical point of view.
Electronic Modeling and Design for Extreme Temperatures Project
National Aeronautics and Space Administration — We are developing CAD tools, models and methodologies for electronics design for circuit operation in extreme environments with focus on very low temperatures...
A high temperature interparticle potential for an alternative gauge model
International Nuclear Information System (INIS)
Doria, R.M.
1984-01-01
A thermal Wilson loop for a model with two gauge fields associated with the same gauge group is discussed. Deconfinement appears at high temperature. It is not possible however specify the colour of the deconfined matter. (Author) [pt
Applying Time Series Analysis Model to Temperature Data in Greenhouses
Directory of Open Access Journals (Sweden)
Abdelhafid Hasni
2011-03-01
Full Text Available The objective of the research is to find an appropriate Seasonal Auto-Regressive Integrated Moving Average (SARIMA Model for fitting the inside air temperature (Tin of a naturally ventilated greenhouse under Mediterranean conditions by considering the minimum of Akaike Information Criterion (AIC. The results of fitting were as follows: the best SARIMA Model for fitting air temperature of greenhouse is SARIMA (1,0,0 (1,0,224.
Modeling the Temperature Effect of Orientations in Residential Buildings
Directory of Open Access Journals (Sweden)
Sabahat Arif
2012-07-01
Full Text Available Indoor thermal comfort in a building has been an important issue for the environmental sustainability. It is an accepted fact that their designs and planning consume a lot of energy in the modern architecture of 20th and 21st centuries. An appropriate orientation of a building can provide thermally comfortable indoor temperatures which otherwise can consume extra energy to condition these spaces through all the seasons. This experimental study investigates the potential effect of this solar passive design strategy on indoor temperatures and a simple model is presented for predicting indoor temperatures based upon the ambient temperatures.
Dapson, Richard W; Horobin, Richard W
2013-11-01
The log P descriptor, despite its usefulness, can be difficult to use, especially for researchers lacking skills in physical chemistry. Moreover this classic measure has been determined in numerous ways, which can result in inconsistant estimates of log P values, especially for relatively complex molecules such as fluorescent probes. Novel measures of hydrophilicity/lipophilicity (the Hydrophilic/Lipophilic Index, HLI) and amphiphilicity (hydrophilic/lipophilic indices for the head group and tail, HLIT and HLIHG, respectively) therefore have been devised. We compare these descriptors with measures based on log P, the standard method for quantitative structure activity relationships (QSAR) studies. HLI can be determined using widely available molecular modeling software, coupled with simple arithmetic calculations. It is based on partial atomic charges and is intended to be a stand-alone measure of hydrophilicity/lipophilicity. Given the wide application of log P, however, we investigated the correlation between HLI and log P using a test set of 56 fluorescent probes of widely different physicochemical character. Overall correlation was poor; however, correlation of HLI and log P for probes of narrowly specified charge types, i.e., non-ionic compounds, anions, conjugated cations, or zwitterions, was excellent. Values for probes with additional nonconjugated quaternary cations, however, were less well correlated. The newly devised HLI can be divided into domain-specific descriptors, HLIT and HLIHG in amphiphilic probes. Determinations of amphiphilicity, made independently by the authors using their respective methods, showed excellent agreement. Quantifying amphiphilicity from partial log P values of the head group (head group hydrophilicity; HGH) and tail (amphiphilicity index; AI) has proved useful for understanding fluorescent probe action. The same limitations of log P apply to HGH and AI, however. The novel descriptors, HLIT and HLIHG, offer analogous advantages
Log-Concavity and Strong Log-Concavity: a review.
Saumard, Adrien; Wellner, Jon A
We review and formulate results concerning log-concavity and strong-log-concavity in both discrete and continuous settings. We show how preservation of log-concavity and strongly log-concavity on ℝ under convolution follows from a fundamental monotonicity result of Efron (1969). We provide a new proof of Efron's theorem using the recent asymmetric Brascamp-Lieb inequality due to Otto and Menz (2013). Along the way we review connections between log-concavity and other areas of mathematics and statistics, including concentration of measure, log-Sobolev inequalities, convex geometry, MCMC algorithms, Laplace approximations, and machine learning.
International Nuclear Information System (INIS)
McCann, D.; Barton, K.J.; Hearn, K.
1981-08-01
Most of the available literature on geophysical borehole logging refers to studies carried out in sedimentary rocks. It is only in recent years that any great interest has been shown in geophysical logging in boreholes in metamorphic and igneous rocks following the development of research programmes associated with geothermal energy and nuclear waste disposal. This report is concerned with the programme of geophysical logging carried out on the three deep boreholes at Altnabreac, Caithness, to examine the effectiveness of these methods in crystalline rock. Of particular importance is the assessment of the performance of the various geophysical sondes run in the boreholes in relation to the rock mass properties. The geophysical data can be used to provide additional in-situ information on the geological, hydrogeological and engineering properties of the rock mass. Fracturing and weathering in the rock mass have a considerable effect on both the design parameters for an engineering structure and the flow of water through the rock mass; hence, the relation between the geophysical properties and the degree of fracturing and weathering is examined in some detail. (author)
RAYSAW: a log sawing simulator for 3D laser-scanned hardwood logs
R. Edward. Thomas
2013-01-01
Laser scanning of hardwood logs provides detailed high-resolution imagery of log surfaces. Characteristics such as sweep, taper, and crook, as well as most surface defects, are visible to the eye in the scan data. In addition, models have been developed that predict interior knot size and position based on external defect information. Computerized processing of...
Peltier cells as temperature control elements: Experimental characterization and modeling
International Nuclear Information System (INIS)
Mannella, Gianluca A.; La Carrubba, Vincenzo; Brucato, Valerio
2014-01-01
The use of Peltier cells to realize compact and precise temperature controlled devices is under continuous extension in recent years. In order to support the design of temperature control systems, a simplified modeling of heat transfer dynamics for thermoelectric devices is presented. By following a macroscopic approach, the heat flux removed at the cold side of Peltier cell can be expressed as Q . c =γ(T c −T c eq ), where γ is a coefficient dependent on the electric current, T c and T c eq are the actual and steady state cold side temperature, respectively. On the other hand, a microscopic modeling approach was pursued via finite element analysis software packages. To validate the models, an experimental apparatus was designed and build-up, consisting in a sample vial with the surfaces in direct contact with Peltier cells. Both modeling approaches led to reliable prediction of transient and steady state sample temperature. -- Highlights: • Simplified modeling of heat transfer dynamics in Peltier cells. • Coupled macroscopic and microscopic approach. • Experimental apparatus: temperature control of a sample vial. • Both modeling approaches predict accurately the transient and steady state sample temperature
A THERMODYNAMIC CAVITATION MODEL APPLICABLE TO HIGH TEMPERATURE FLOW
Directory of Open Access Journals (Sweden)
De-Min Liu
2011-01-01
Full Text Available Cavitation is not only related with pressure, but also affected by temperature. Under high temperature, temperature depression of liquids is caused by latent heat of vaporization. The cavitation characteristics under such condition are different from those under room temperature. The paper focuses on thermodynamic cavitation based on the Rayleigh-Plesset equation and modifies the mass transfer equation with fully consideration of the thermodynamic effects and physical properties. To validate the modified model, the external and internal flow fields, such as hydrofoil NACA0015 and nozzle, are calculated, respectively. The hydrofoil NACA0015's cavitation characteristic is calculated by the modified model at different temperatures. The pressure coefficient is found in accordance with the experimental data. The nozzle cavitation under the thermodynamic condition is calculated and compared with the experiment.
Modeling the wet bulb globe temperature using standard meteorological measurements.
Liljegren, James C; Carhart, Richard A; Lawday, Philip; Tschopp, Stephen; Sharp, Robert
2008-10-01
The U.S. Army has a need for continuous, accurate estimates of the wet bulb globe temperature to protect soldiers and civilian workers from heat-related injuries, including those involved in the storage and destruction of aging chemical munitions at depots across the United States. At these depots, workers must don protective clothing that increases their risk of heat-related injury. Because of the difficulty in making continuous, accurate measurements of wet bulb globe temperature outdoors, the authors have developed a model of the wet bulb globe temperature that relies only on standard meteorological data available at each storage depot for input. The model is composed of separate submodels of the natural wet bulb and globe temperatures that are based on fundamental principles of heat and mass transfer, has no site-dependent parameters, and achieves an accuracy of better than 1 degree C based on comparisons with wet bulb globe temperature measurements at all depots.
A new weighted mean temperature model in China
Liu, Jinghong; Yao, Yibin; Sang, Jizhang
2018-01-01
The Global Positioning System (GPS) has been applied in meteorology to monitor the change of Precipitable Water Vapor (PWV) in atmosphere, transformed from Zenith Wet Delay (ZWD). A key factor in converting the ZWD into the PWV is the weighted mean temperature (Tm), which has a direct impact on the accuracy of the transformation. A number of Bevis-type models, like Tm -Ts and Tm -Ts,Ps type models, have been developed by statistics approaches, and are not able to clearly depict the relationship between Tm and the surface temperature, Ts . A new model for Tm , called weighted mean temperature norm model (abbreviated as norm model), is derived as a function of Ts , the lapse rate of temperature, δ, the tropopause height, htrop , and the radiosonde station height, hs . It is found that Tm is better related to Ts through an intermediate temperature. The small effects of lapse rate can be ignored and the tropopause height be obtained from an empirical model. Then the norm model is reduced to a simplified form, which causes fewer loss of accuracy and needs two inputs, Ts and hs . In site-specific fittings, the norm model performs much better, with RMS values reduced averagely by 0.45 K and the Mean of Absolute Differences (MAD) values by 0.2 K. The norm model is also found more appropriate than the linear models to fit Tm in a large area, not only with the RMS value reduced from 4.3 K to 3.80 K, correlation coefficient R2 increased from 0.84 to 0.88, and MAD decreased from 3.24 K to 2.90 K, but also with the distribution of simplified model values to be more reasonable. The RMS and MAD values of the differences between reference and computed PWVs are reduced by on average 16.3% and 14.27%, respectively, when using the new norm models instead of the linear model.
Analysis of Low Temperature Preheating Effect Based on Battery Temperature-Rise Model
Directory of Open Access Journals (Sweden)
Xiaogang Wu
2017-08-01
Full Text Available It is difficult to predict the heating time and power consumption associated with the self-heating process of lithium-ion batteries at low temperatures. A temperature-rise model considering the dynamic changes in battery temperature and state of charge is thus proposed. When this model is combined with the ampere-hour integral method, the quantitative relationship among the discharge rate, heating time, and power consumption, during the constant-current discharge process in an internally self-heating battery, is realized. Results show that the temperature-rise model can accurately reflect actual changes in battery temperature. The results indicate that the discharge rate and the heating time present an exponential decreasing trend that is similar to the discharge rate and the power consumption. When a 2 C discharge rate is selected, the battery temperature can rise from −10 °C to 5 °C in 280 s. In this scenario, power consumption of the heating process does not exceed 15% of the rated capacity. As the discharge rate gradually reduced, the heating time and power consumption of the heating process increase slowly. When the discharge rate is 1 C, the heating time is more than 1080 s and the power consumption approaches 30% of the rated capacity. The effect of discharge rate on the heating time and power consumption during the heating process is significantly enhanced when it is less than 1 C.
Yoon, Yohan; Geornaras, Ifigenia; Kendall, Patricia A; Sofos, John N
2009-01-01
This study modeled the effect of drying temperature in combination with predrying marination treatments to inactivate Salmonella on beef jerky. Beef inside round slices were inoculated with Salmonella and treated with (1) nothing (C), (2) traditional marinade (M), or (3) dipped into a 5% acetic acid solution for 10 min before exposure to M (AM). After 24 h of marination at 4 degrees C, samples were dehydrated at 52, 57, or 63 degrees C. Total counts (tryptic soy agar supplemented with 0.1% sodium pyruvate, TSAP) and Salmonella (XLD agar) were enumerated after inoculation and at 0, 2, 4, 6, 8, and 10 h during drying. For calculation of death rates (DR, log CFU/cm(2)/h), shoulder period (h), low asymptote, and upper asymptote, cell counts from TSAP were fitted to the Baranyi model. The DRs were then further expressed as a function of storage temperature. Inactivation occurred without an initial lag phase (shoulder period), while correlation (R(2)) values of fitted curves were >/= 0.861. The DRs of C (-0.29 to -0.62) and M (-0.36 to -0.63) treatments were similar, while DRs of the AM treatment were higher (-1.22 to -1.46). The DRs were then fitted to a polynomial equation as a function of temperature. After validation, good (C and M) or acceptable (AM) model performances were observed (R(2)= 0.954 to 0.987; bias factors: 1.03 [C], 1.01 [M], 0.71 [AM]; accuracy factors: 1.05 [C], 1.06 [M], 1.41 [AM]). The developed models may be useful in selecting drying temperatures and times in combination with predrying treatments for adequate inactivation of Salmonella in beef jerky.
McConnell, Jennifer A; Schaffner, Donald W
2014-07-01
Temperature is a primary factor in controlling the growth of microorganisms in food. The current U. S. Food and Drug Administration Model Food Code guidelines state that food can be kept out of temperature control for up to 4 h without qualifiers, or up to 6 h, if the food product starts at an initial 41 °F (5 °C) temperature and does not exceed 70 °F (21 °C) at 6 h. This project validates existing ComBase computer models for Salmonella growth under changing temperature conditions modeling scenarios using raw ground beef as a model system. A cocktail of Salmonella serovars isolated from different meat products ( Salmonella Copenhagen, Salmonella Montevideo, Salmonella Typhimurium, Salmonella Saintpaul, and Salmonella Heidelberg) was made rifampin resistant and used for all experiments. Inoculated samples were held in a programmable water bath at 4.4 °C (40 °F) and subjected to linear temperature changes to different final temperatures over various lengths of time and then returned to 4.4 °C (40 °F). Maximum temperatures reached were 15.6, 26.7, or 37.8 °C (60, 80, or 100 °F), and the temperature increases took place over 4, 6, and 8 h, with varying cooling times. Our experiments show that when maximum temperatures were lower (15.6 or 26.7 °C), there was generally good agreement between the ComBase models and experiments: when temperature increases of 15.6 or 26.7 °C occurred over 8 h, experimental data were within 0.13 log CFU of the model predictions. When maximum temperatures were 37 °C, predictive models were fail-safe. Overall bias of the models was 1.11. and accuracy was 2.11. Our experiments show the U.S. Food and Drug Administration Model Food Code guidelines for holding food out of temperature control are quite conservative. Our research also shows that the ComBase models for Salmonella growth are accurate or fail-safe for dynamic temperature conditions as might be observed due to power loss from natural disasters or during transport out of
Modelling the effect of temperature on seed germination in some ...
African Journals Online (AJOL)
USER
2010-03-01
Mar 1, 2010 ... The prediction of germination percentage (GP) and germination speed (GS) of the seeds for some cucurbits (watermelon, melon, cucumber, summer squash, pumpkin and winter squash) was investigated by mathematical model based on temperature. The model, D = [a - (b x T) + (c x T2)] of Uzun et al.
Modelling individual temperature profiles from an isolated perfused bovine tongue
Raaymakers, B. W.; Crezee, J.; Lagendijk, J. J.
2000-01-01
To predict the temperature distribution during hyperthermia treatments a thermal model that accounts for the thermal effect of blood flow is mandatory. The DIscrete VAsculature (DIVA) thermal model developed at our department is able to do so; geometrically described vessels are handled individually
Modeling of Sokoto Daily Average Temperature: A Fractional ...
African Journals Online (AJOL)
Modeling of Sokoto Daily Average Temperature: A Fractional Integration Approach. 22 extension of the class of ARIMA processes stemming from Box and Jenkins methodology. One of their originalities is the explicit modeling of the long term correlation structure (Diebolt and. Guiraud, 2000). Autoregressive fractionally.
Modelling the effect of temperature on seed germination in some ...
African Journals Online (AJOL)
The prediction of germination percentage (GP) and germination speed (GS) of the seeds for some cucurbits (watermelon, melon, cucumber, summer squash, pumpkin and winter squash) was investigated by mathematical model based on temperature. The model, D = [a - (b x T) + (c x T2)] of Uzun et al. (2001), was adapted ...
Mathematical modelling of steam generator and design of temperature regulator
Energy Technology Data Exchange (ETDEWEB)
Bogdanovic, S.S. [EE Institute Nikola Tesla, Belgrade (Yugoslavia)
1999-07-01
The paper considers mathematical modelling of once-through power station boiler and numerical algorithm for simulation of the model. Fast and numerically stable algorithm based on the linearisation of model equations and on the simultaneous solving of differential and algebraic equations is proposed. The paper also presents the design of steam temperature regulator by using the method of projective controls. Dynamic behaviour of the system closed with optimal linear quadratic regulator is taken as the reference system. The desired proprieties of the reference system are retained and solutions for superheated steam temperature regulator are determined. (author)
Fuchs, Sven; Bording, Thue S.; Balling, Niels
2015-04-01
Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties is prerequisites for the parameterisation of boundary conditions and layer properties. In contrast to hydrogeological ground-water models, where parameterization of the major rock property (i.e. hydraulic conductivity) is generally conducted considering lateral variations within geological layers, parameterization of thermal models (in particular regarding thermal conductivity but also radiogenic heat production and specific heat capacity) in most cases is conducted using constant parameters for each modelled layer. For such constant thermal parameter values, moreover, initial values are normally obtained from rare core measurements and/or literature values, which raise questions for their representativeness. Some few studies have considered lithological composition or well log information, but still keeping the layer values constant. In the present thermal-modelling scenario analysis, we demonstrate how the use of different parameter input type (from literature, well logs and lithology) and parameter input style (constant or laterally varying layer values) affects the temperature model prediction in sedimentary basins. For this purpose, rock thermal properties are deduced from standard petrophysical well logs and lithological descriptions for several wells in a project area. Statistical values of thermal properties (mean, standard deviation, moments, etc.) are calculated at each borehole location for each geological formation and, moreover, for the entire dataset. Our case study is located at the Danish-German border region (model dimension: 135 x115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (i
A simple lumped model to convert air temperature into surface water temperature in lakes
Directory of Open Access Journals (Sweden)
S. Piccolroaz
2013-08-01
Full Text Available Water temperature in lakes is governed by a complex heat budget, where the estimation of the single fluxes requires the use of several hydro-meteorological variables that are not generally available. In order to address this issue, we developed Air2Water, a simple physically based model to relate the temperature of the lake superficial layer (epilimnion to air temperature only. The model has the form of an ordinary differential equation that accounts for the overall heat exchanges with the atmosphere and the deeper layer of the lake (hypolimnion by means of simplified relationships, which contain a few parameters (from four to eight in the different proposed formulations to be calibrated with the combined use of air and water temperature measurements. The calibration of the parameters in a given case study allows for one to estimate, in a synthetic way, the influence of the main processes controlling the lake thermal dynamics, and to recognize the atmospheric temperature as the main factor driving the evolution of the system. In fact, under certain hypotheses the air temperature variation implicitly contains proper information about the other major processes involved, and hence in our approach is considered as the only input variable of the model. In particular, the model is suitable to be applied over long timescales (from monthly to interannual, and can be easily used to predict the response of a lake to climate change, since projected air temperatures are usually available by large-scale global circulation models. In this paper, the model is applied to Lake Superior (USA–Canada considering a 27 yr record of measurements, among which 18 yr are used for calibration and the remaining 9 yr for model validation. The calibration of the model is obtained by using the generalized likelihood uncertainty estimation (GLUE methodology, which also allows for a sensitivity analysis of the parameters. The results show remarkable agreement with
Potential for shared log transport services
Tim McDonald; Steve Taylor; Jorge Valenzuela
2001-01-01
A simulation model of a log transport logistics network was developed. The model could be structured to either share truck capacity among a group of loggers, or to assign a fixed number of trucks to individual loggers. Another variation of the model allowed the use of a staging yard to set out loaded trailers and deliver them to destinations using dedicated shuttle...
A model of the ground surface temperature for micrometeorological analysis
Leaf, Julian S.; Erell, Evyatar
2017-07-01
Micrometeorological models at various scales require ground surface temperature, which may not always be measured in sufficient spatial or temporal detail. There is thus a need for a model that can calculate the surface temperature using only widely available weather data, thermal properties of the ground, and surface properties. The vegetated/permeable surface energy balance (VP-SEB) model introduced here requires no a priori knowledge of soil temperature or moisture at any depth. It combines a two-layer characterization of the soil column following the heat conservation law with a sinusoidal function to estimate deep soil temperature, and a simplified procedure for calculating moisture content. A physically based solution is used for each of the energy balance components allowing VP-SEB to be highly portable. VP-SEB was tested using field data measuring bare loess desert soil in dry weather and following rain events. Modeled hourly surface temperature correlated well with the measured data (r 2 = 0.95 for a whole year), with a root-mean-square error of 2.77 K. The model was used to generate input for a pedestrian thermal comfort study using the Index of Thermal Stress (ITS). The simulation shows that the thermal stress on a pedestrian standing in the sun on a fully paved surface, which may be over 500 W on a warm summer day, may be as much as 100 W lower on a grass surface exposed to the same meteorological conditions.
Alizadeh, Bahram; Najjari, Saeid; Kadkhodaie-Ilkhchi, Ali
2012-08-01
Intelligent and statistical techniques were used to extract the hidden organic facies from well log responses in the Giant South Pars Gas Field, Persian Gulf, Iran. Kazhdomi Formation of Mid-Cretaceous and Kangan-Dalan Formations of Permo-Triassic Data were used for this purpose. Initially GR, SGR, CGR, THOR, POTA, NPHI and DT logs were applied to model the relationship between wireline logs and Total Organic Carbon (TOC) content using Artificial Neural Networks (ANN). The correlation coefficient (R2) between the measured and ANN predicted TOC equals to 89%. The performance of the model is measured by the Mean Squared Error function, which does not exceed 0.0073. Using Cluster Analysis technique and creating a binary hierarchical cluster tree the constructed TOC column of each formation was clustered into 5 organic facies according to their geochemical similarity. Later a second model with the accuracy of 84% was created by ANN to determine the specified clusters (facies) directly from well logs for quick cluster recognition in other wells of the studied field. Each created facies was correlated to its appropriate burial history curve. Hence each and every facies of a formation could be scrutinized separately and directly from its well logs, demonstrating the time and depth of oil or gas generation. Therefore potential production zone of Kazhdomi probable source rock and Kangan- Dalan reservoir formation could be identified while well logging operations (especially in LWD cases) were in progress. This could reduce uncertainty and save plenty of time and cost for oil industries and aid in the successful implementation of exploration and exploitation plans.
Low-temperature behavior of the quark-meson model
Tripolt, Ralf-Arno; Schaefer, Bernd-Jochen; von Smekal, Lorenz; Wambach, Jochen
2018-02-01
We revisit the phase diagram of strong-interaction matter for the two-flavor quark-meson model using the functional renormalization group. In contrast to standard mean-field calculations, an unusual phase structure is encountered at low temperatures and large quark chemical potentials. In particular, we identify a regime where the pressure decreases with increasing temperature and discuss possible reasons for this unphysical behavior.
A constitutive model with damage for high temperature superalloys
Sherwood, J. A.; Stouffer, D. C.
1988-01-01
A unified constitutive model is searched for that is applicable for high temperature superalloys used in modern gas turbines. Two unified inelastic state variable constitutive models were evaluated for use with the damage parameter proposed by Kachanov. The first is a model (Bodner, Partom) in which hardening is modeled through the use of a single state variable that is similar to drag stress. The other (Ramaswamy) employs both a drag stress and back stress. The extension was successful for predicting the tensile, creep, fatigue, torsional and nonproportional response of Rene' 80 at several temperatures. In both formulations, a cumulative damage parameter is introduced to model the changes in material properties due to the formation of microcracks and microvoids that ultimately produce a macroscopic crack. A back stress/drag stress/damage model was evaluated for Rene' 95 at 1200 F and is shown to predict the tensile, creep, and cyclic loading responses reasonably well.
Du, Xinzhong; Shrestha, Narayan Kumar; Ficklin, Darren L.; Wang, Junye
2018-04-01
Stream temperature is an important indicator for biodiversity and sustainability in aquatic ecosystems. The stream temperature model currently in the Soil and Water Assessment Tool (SWAT) only considers the impact of air temperature on stream temperature, while the hydroclimatological stream temperature model developed within the SWAT model considers hydrology and the impact of air temperature in simulating the water-air heat transfer process. In this study, we modified the hydroclimatological model by including the equilibrium temperature approach to model heat transfer processes at the water-air interface, which reflects the influences of air temperature, solar radiation, wind speed and streamflow conditions on the heat transfer process. The thermal capacity of the streamflow is modeled by the variation of the stream water depth. An advantage of this equilibrium temperature model is the simple parameterization, with only two parameters added to model the heat transfer processes. The equilibrium temperature model proposed in this study is applied and tested in the Athabasca River basin (ARB) in Alberta, Canada. The model is calibrated and validated at five stations throughout different parts of the ARB, where close to monthly samplings of stream temperatures are available. The results indicate that the equilibrium temperature model proposed in this study provided better and more consistent performances for the different regions of the ARB with the values of the Nash-Sutcliffe Efficiency coefficient (NSE) greater than those of the original SWAT model and the hydroclimatological model. To test the model performance for different hydrological and environmental conditions, the equilibrium temperature model was also applied to the North Fork Tolt River Watershed in Washington, United States. The results indicate a reasonable simulation of stream temperature using the model proposed in this study, with minimum relative error values compared to the other two models
Modeling bleaching of tomato derivatives at subzero temperatures.
Manzocco, Lara; Calligaris, Sonia; Nicoli, Maria Cristina
2006-02-22
This work was addressed to obtain a predictive model of the rate of bleaching in tomato derivatives at subzero temperatures. To this aim, a tomato puree was freeze-dried and equilibrated at increasing solid fractions. The bleaching rate was assessed by measuring tomato color during storage for up to 18 months at temperatures from -30 to 0 degrees C. The temperature dependence of the tomato-bleaching rate was neither predictable using the Arrhenius equation nor simply related to tomato physical state. The lack of a clear Arrhenius relation was attributed to the occurrence of temperature-dependent phenomena, such as ice crystallization and oxygen solubility modifications, which strongly changed the local concentration of reactants. A modified Arrhenius equation predicting the tomato-bleaching rate in the entire temperature range was proposed. Tomato concentration, and hence its physical state, affected the temperature dependence of bleaching, modifying apparent activation energy and frequency factor of the modified Arrhenius equation. In light of these considerations, a mathematical model was set up and validated to accurately predict the tomato-bleaching rate on the basis of only its concentration and storage temperature.
Directory of Open Access Journals (Sweden)
H. Portner
2010-11-01
Full Text Available Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. Thus, we performed an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS.
We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff. We determined the parameter confidence ranges of the formulations by nonlinear regression analysis based on eight experimental datasets from Northern Hemisphere ecosystems. We sampled over the confidence ranges of the parameters and ran simulations for each pair of temperature response function and calibration site. We analyzed both the long-term and the short-term heterotrophic soil carbon dynamics over a virtual elevation gradient in southern Switzerland.
The temperature relationship of Lloyd-Taylor fitted the overall data set best as the other functions either resulted in poor fits (Exponential, Arrhenius or were not applicable for all datasets (Gaussian, Van't Hoff. There were two main sources of uncertainty for model simulations: (1 the lack of confidence in the parameter estimates of the temperature response, which increased with increasing temperature, and (2 the size of the simulated soil carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. Our results therefore indicate that such projections are more uncertain for higher elevations and hence also higher latitudes, which are of key importance for the global terrestrial carbon budget.
Portner, H.; Wolf, A.; Bugmann, H.
2009-04-01
Many biogeochemical models have been applied to study the response of the carbon cycle to changes in climate, whereby the process of carbon uptake (photosynthesis) has usually gained more attention than the equally important process of carbon release by respiration. The decomposition of soil organic matter is driven by a combination of factors with a prominent one being soil temperature [Berg and Laskowski(2005)]. One uncertainty concerns the response function used to describe the sensitivity of soil organic matter decomposition to temperature. This relationship is often described by one out of a set of similar exponential functions, but it has not been investigated how uncertainties in the choice of the response function influence the long term predictions of biogeochemical models. We built upon the well-established LPJ-GUESS model [Smith et al.(2001)]. We tested five candidate functions and calibrated them against eight datasets from different Ameriflux and CarboEuropeIP sites [Hibbard et al.(2006)]. We used a simple Exponential function with a constant Q10, the Arrhenius function, the Gaussian function [Tuomi et al.(2008), O'Connell(1990)], the Van't Hoff function [Van't Hoff(1901)] and the Lloyd&Taylor function [Lloyd and Taylor(1994)]. We assessed the impact of uncertainty in model formulation of temperature response on estimates of present and future long-term carbon storage in ecosystems and hence on the CO2 feedback potential to the atmosphere. We specifically investigated the relative importance of model formulation and the error introduced by using different data sets for the parameterization. Our results suggested that the Exponential and Arrhenius functions are inappropriate, as they overestimated the respiration rates at lower temperatures. The Gaussian, Van't Hoff and Lloyd&Taylor functions all fit the observed data better, whereby the functions of Gaussian and Van't Hoff underestimated the response at higher temperatures. We suggest, that the
Heat Transfer Modeling for Rigid High-Temperature Fibrous Insulation
Daryabeigi, Kamran; Cunnington, George R.; Knutson, Jeffrey R.
2012-01-01
Combined radiation and conduction heat transfer through a high-temperature, high-porosity, rigid multiple-fiber fibrous insulation was modeled using a thermal model previously used to model heat transfer in flexible single-fiber fibrous insulation. The rigid insulation studied was alumina enhanced thermal barrier (AETB) at densities between 130 and 260 kilograms per cubic meter. The model consists of using the diffusion approximation for radiation heat transfer, a semi-empirical solid conduction model, and a standard gas conduction model. The relevant parameters needed for the heat transfer model were estimated from steady-state thermal measurements in nitrogen gas at various temperatures and environmental pressures. The heat transfer modeling methodology was evaluated by comparison with standard thermal conductivity measurements, and steady-state thermal measurements in helium and carbon dioxide gases. The heat transfer model is applicable over the temperature range of 300 to 1360 K, pressure range of 0.133 to 101.3 x 10(exp 3) Pa, and over the insulation density range of 130 to 260 kilograms per cubic meter in various gaseous environments.
Geothermal Project Den Haag - 3-D models for temperature prediction and reservoir characterization
Mottaghy, D.; Pechnig, R.; Willemsen, G.; Simmelink, H. J.; Vandeweijer, V.
2009-04-01
In the framework of the "Den Haag Zuidwest" geothermal district heating system a deep geothermal installation is projected. The target horizon of the planned doublet is the "Delft sandstone" which has been extensively explored for oil- and gas reservoirs in the last century. In the target area, this upper Jurassic sandstone layer is found at a depth of about 2300 m with an average thickness of about 50 m. The study presented here focuses on the prediction of reservoir temperatures and production behavior which is crucial for planning a deep geothermal installation. In the first phase, the main objective was to find out whether there is a significant influence of the 3-dimensional structures of anticlines and synclines on the temperature field, which could cause formation temperatures deviating from the predicted extrapolated temperature data from oil and gas exploration wells. To this end a regional model was set up as a basis for steady state numerical simulations. Since representative input parameters are decisive for reliable model results, all available information was compiled: a) the subsurface geometry, depth and thickness of the stratigraphic layers known from seismic data sets 2) borehole geophysical data and c) geological and petrographical information from exploration wells. In addition 50 cuttings samples were taken from two selected key wells in order to provide direct information on thermal properties of the underlying strata. Thermal conductivity and rock matrix density were measured in the laboratory. These data were combined with a petrophysical log analysis (Gamma Ray, Sonic, Density and Resistivity), which resulted in continuous profiles of porosity, effective thermal conductivity and radiogenetic heat production. These profiles allowed to asses in detail the variability of the petrophysical properties with depth and to check for lateral changes between the wells. All this data entered the numerical simulations which were performed by a 3-D
Genetic Programming and Standardization in Water Temperature Modelling
Directory of Open Access Journals (Sweden)
Maritza Arganis
2009-01-01
Full Text Available An application of Genetic Programming (an evolutionary computational tool without and with standardization data is presented with the aim of modeling the behavior of the water temperature in a river in terms of meteorological variables that are easily measured, to explore their explanatory power and to emphasize the utility of the standardization of variables in order to reduce the effect of those with large variance. Recorded data corresponding to the water temperature behavior at the Ebro River, Spain, are used as analysis case, showing a performance improvement on the developed model when data are standardized. This improvement is reflected in a reduction of the mean square error. Finally, the models obtained in this document were applied to estimate the water temperature in 2004, in order to provide evidence about their applicability to forecasting purposes.
An exospheric temperature model from CHAMP thermospheric density
Weng, Libin; Lei, Jiuhou; Sutton, Eric; Dou, Xiankang; Fang, Hanxian
2017-02-01
In this study, the effective exospheric temperature, named as T∞, derived from thermospheric densities measured by the CHAMP satellite during 2002-2010 was utilized to develop an exospheric temperature model (ETM) with the aid of the NRLMSISE-00 model. In the ETM, the temperature variations are characterized as a function of latitude, local time, season, and solar and geomagnetic activities. The ETM is validated by the independent GRACE measurements, and it is found that T∞ and thermospheric densities from the ETM are in better agreement with the GRACE data than those from the NRLMSISE-00 model. In addition, the ETM captures well the thermospheric equatorial anomaly feature, seasonal variation, and the hemispheric asymmetry in the thermosphere.
A two-temperature model for shocked porous explosive
Lambourn, Brian; Handley, Caroline
2017-01-01
Mesoscale calculations of hotpots created by a shock wave in a porous explosive show that the hotspots do not cool in times of order at least a microsecond. This suggests that simple models of porosity like the Snowplough model, which assume that a shocked porous explosive jumps to a point on the Hugoniot that is instantaneously in thermodynamic equilibrium, are not correct. A two-temperature model of shocked porous explosive has been developed in which a small fraction of the material, representing the hotspots, has a high temperature, but the bulk of the material is cooler than the temperature calculated by the Snowplough model. In terms of the mean state of the material, it is shown that the two-temperature model only minimally affects the pressure vs. volume and shock velocity vs. particle velocity plot of the Hugoniot, but that the mean state lies slightly off the equation of state surface. The results of the model are compared with two dimensional mesoscale calculations.
Estimation of the non records logs from existing logs using artificial neural networks
Directory of Open Access Journals (Sweden)
Mehdi Mohammad Salehi
2017-12-01
Full Text Available Finding the information of the hydrocarbon reservoirs from well logs is one of the main objectives of the engineers. But, missing the log records (due to many reasons such as broken instruments, unsuitable borehole and etc. is a major challenge to achieve it. Prediction of the density and resistivity logs (Rt, DT and LLS from the conventional wire-line logs in one of the Iranian southwest oil fields is the main purpose of this study. Multilayer neural network was applied to develop an intelligent predictive model for prediction of the logs. A total of 3000 data sets from 3 wells (A, B and C of the studied field were used. Among them, the data of A, B and C wells were used to constructing and testing the model, respectively. To evaluate the performance of the model, the mean square error (MSE and correlation coefficient (R2 in the test data were calculated. A comparison between the MSE of the proposed model and recently intelligent models shows that the proposed model is more accurate than others. Acceptable accuracy and using conventional well logging data are the highlight advantages of the proposed intelligent model.
Dynamic Planar Convex Hull with Optimal Query Time and O(log n · log log n ) Update Time
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Jakob, Riko
2000-01-01
The dynamic maintenance of the convex hull of a set of points in the plane is one of the most important problems in computational geometry. We present a data structure supporting point insertions in amortized O(log n · log log log n) time, point deletions in amortized O(log n · log log n) time, a...
International Nuclear Information System (INIS)
Calliger, R.J.; Suski, G.J.
1981-01-01
Nova is a 200 terawatt, 10-beam High Energy Glass Laser currently under construction at LLNL. This facility, designed to demonstrate the feasibility of laser driven inertial confinement fusion, contains over 5000 elements requiring coordinated control, data acquisition, and analysis functions. The large amounts of data that will be generated must be maintained over the life of the facility. Often the most useful but inaccessible data is that related to time dependent events associated with, for example, operator actions or experiment activity. We have developed an Event Logging System to synchronously record, maintain, and analyze, in part, this data. We see the system as being particularly useful to the physics and engineering staffs of medium and large facilities in that it is entirely separate from experimental apparatus and control devices. The design criteria, implementation, use, and benefits of such a system will be discussed
Thermal modelling of PV module performance under high ambient temperatures
Energy Technology Data Exchange (ETDEWEB)
Diarra, D.C.; Harrison, S.J. [Queen' s Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering Solar Calorimetry Lab; Akuffo, F.O. [Kwame Nkrumah Univ. of Science and Technology, Kumasi (Ghana). Dept. of Mechanical Engineering
2005-07-01
When predicting the performance of photovoltaic (PV) generators, the actual performance is typically lower than test results conducted under standard test conditions because the radiant energy absorbed in the module under normal operation raises the temperature of the cell and other multilayer components. The increase in temperature translates to a lower conversion efficiency of the solar cells. In order to address these discrepancies, a thermal model of a characteristic PV module was developed to assess and predict its performance under real field-conditions. The PV module consisted of monocrystalline silicon cells in EVA between a glass cover and a tedlar backing sheet. The EES program was used to compute the equilibrium temperature profile in the PV module. It was shown that heat is dissipated towards the bottom and the top of the module, and that its temperature can be much higher than the ambient temperature. Modelling results indicate that 70-75 per cent of the absorbed solar radiation is dissipated from the solar cells as heat, while 4.7 per cent of the solar energy is absorbed in the glass cover and the EVA. It was also shown that the operating temperature of the PV module decreases with increased wind speed. 2 refs.
Geomicrobial Optical Logging Detectors (GOLD)
Bramall, N. E.; Stoker, C. R.; Price, P. B.; Coates, J. D.; Allamandola, L. J.; Mattioda, A. L.
2008-12-01
We will present concepts for downhole instrumentation that could be used in the Deep Underground Science and Engineering Laboratory (DUSEL). We envision optical borehole-logging instruments that could monitor bacterial concentration, mineralogy, aromatic organics, temperature and oxygen concentration, allowing for the in situ monitoring of time-dependent microbial and short-scale geologic processes and provide valuable in situ data on stratigraphy to supplement core analyses, especially where instances of missing or damaged core sections make such studies difficult. Incorporated into these instruments will be a sampling/inoculation tool to allow for the recovery and/or manipulation of particularly interesting sections of the borehole wall for further study, enabling a series of microbiological studies. The borehole tools we will develop revolve around key emerging technologies and methods, some of which are briefly described below: 1) Autofluorescence Spectroscopy: Building on past instruments, we will develop a new borehole logger that searches for microbial life and organics using fluorescence spectroscopy. Many important organic compounds (e.g. PAHs) and biomolecules (e.g. aromatic amino acids, proteins, methanogenic coenzymes) fluoresce when excited with ultraviolet and visible light. Through the careful selection of excitation wavelength(s) and temporal gating parameters, a borehole logging instrument can detect and differentiate between these different compounds and the mineral matrix in which they exist. 2) Raman Spectroscopy: Though less sensitive than fluorescence spectroscopy, Raman spectroscopy is more definitive: it can provide important mineral phase distribution/proportions and other chemical data enabling studies of mineralogy and microbe-mineral interactions (when combined with fluorescence). 3) Borehole Camera: Imaging of the borehole wall with extended information in the UV, visible, and NIR for a more informative view can provide a lot of insight
Modeling Apple Surface Temperature Dynamics Based on Weather Data
Directory of Open Access Journals (Sweden)
Lei Li
2014-10-01
Full Text Available The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed was recorded for seven hours between 11:00–18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of “Fuji” apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management.
A complex autoregressive model and application to monthly temperature forecasts
Directory of Open Access Journals (Sweden)
X. Gu
2005-11-01
Full Text Available A complex autoregressive model was established based on the mathematic derivation of the least squares for the complex number domain which is referred to as the complex least squares. The model is different from the conventional way that the real number and the imaginary number are separately calculated. An application of this new model shows a better forecast than forecasts from other conventional statistical models, in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model, where the real number and the imaginary number are separately disposed, an autoregressive model in the real number domain, and a persistence-forecast model.
Can spatial statistical river temperature models be transferred between catchments?
Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.
2017-09-01
There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across
Modeling temperature and moisture state effects on acoustic velocity in wood
Shan Gao; X. Wang; L. Wang; R.B. Bruce
2011-01-01
Previous research has proved the concept of acoustic wave propagation methods for evaluating wood quality of trees and logs during forest operations. As commercial acoustic equipment is implemented in field for various purposes, one has to consider the influence of operating temperature on acoustic velocity â a key parameter for wood property prediction. Our field...
Braking System Modeling and Brake Temperature Response to Repeated Cycle
Directory of Open Access Journals (Sweden)
Zaini Dalimus
2014-12-01
Full Text Available Braking safety is crucial while driving the passenger or commercial vehicles. Large amount of kinetic energy is absorbed by four brakes fitted in the vehicle. If the braking system fails to work, road accident could happen and may result in death. This research aims to model braking system together with vehicle in Matlab/Simulink software and measure actual brake temperature. First, brake characteristic and vehicle dynamic model were generated to estimate friction force and dissipated heat. Next, Arduino based prototype brake temperature monitoring was developed and tested on the road. From the experiment, it was found that brake temperature tends to increase steadily in long repeated deceleration and acceleration cycle.
Modeling and Forecasting Average Temperature for Weather Derivative Pricing
Directory of Open Access Journals (Sweden)
Zhiliang Wang
2015-01-01
Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature on the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of weather derivatives market and then use the 62 years of daily historical data to apply the mean-reverting Ornstein-Uhlenbeck process to describe the evolution of the temperature. Finally, Monte Carlo simulations are used to price heating degree day (HDD call option for this city, and the slow convergence of the price of the HDD call can be found through taking 100,000 simulations. The methods of the research will provide a frame work for modeling temperature and pricing weather derivatives in other similar places in China.
On the Temperature Dependence of the UNIQUAC/UNIFAC Models
DEFF Research Database (Denmark)
Skjold-Jørgensen, Steen; Rasmussen, Peter; Fredenslund, Aage
1980-01-01
of the simultaneous correlation. The temperature dependent parameters have, however, little physical meaning and very odd results are frequently obtained when the interaction parameters obtained from excess enthalpy information alone are used for the prediction of vapor-liquid equilibria. The UNIQUAC/UNIFAC models...... are modified in this work by the introduction of a general temperature dependence of the coordination number. The modified UNIQUAC/UNIFAC models are especially suited for the representation of mixtures containing non-associating components. The modified models contain the same number of interaction parameters...... parameters based on excess enthalpy data, and the prediction of excess enthalpy information from only one isothermal set of vapor-liquid equilibrium data is qualitatively acceptable. A parameter table for the modified UNIFAC model is given for the five main groups: CH2, C = C, ACH, ACCH2 and CH2O....
The Model of Temperature Dynamics of Pulsed Fuel Assembly
Bondarchenko, E A; Popov, A K
2002-01-01
Heat exchange process differential equations are considered for a subcritical fuel assembly with an injector. The equations are obtained by means of the use of the Hermit polynomial. The model is created for modelling of temperature transitional processes. The parameters and dynamics are estimated for hypothetical fuel assembly consisting of real mountings: the powerful proton accelerator and the reactor IBR-2 core at its subcritica l state.
Last interglacial temperature evolution – a model inter-comparison
Directory of Open Access Journals (Sweden)
P. Bakker
2013-03-01
Full Text Available There is a growing number of proxy-based reconstructions detailing the climatic changes that occurred during the last interglacial period (LIG. This period is of special interest, because large parts of the globe were characterized by a warmer-than-present-day climate, making this period an interesting test bed for climate models in light of projected global warming. However, mainly because synchronizing the different palaeoclimatic records is difficult, there is no consensus on a global picture of LIG temperature changes. Here we present the first model inter-comparison of transient simulations covering the LIG period. By comparing the different simulations, we aim at investigating the common signal in the LIG temperature evolution, investigating the main driving forces behind it and at listing the climate feedbacks which cause the most apparent inter-model differences. The model inter-comparison shows a robust Northern Hemisphere July temperature evolution characterized by a maximum between 130–125 ka BP with temperatures 0.3 to 5.3 K above present day. A Southern Hemisphere July temperature maximum, −1.3 to 2.5 K at around 128 ka BP, is only found when changes in the greenhouse gas concentrations are included. The robustness of simulated January temperatures is large in the Southern Hemisphere and the mid-latitudes of the Northern Hemisphere. For these regions maximum January temperature anomalies of respectively −1 to 1.2 K and −0.8 to 2.1 K are simulated for the period after 121 ka BP. In both hemispheres these temperature maxima are in line with the maximum in local summer insolation. In a number of specific regions, a common temperature evolution is not found amongst the models. We show that this is related to feedbacks within the climate system which largely determine the simulated LIG temperature evolution in these regions. Firstly, in the Arctic region, changes in the summer sea-ice cover control the evolution of LIG winter
Control and modelling of vertical temperature distribution in greenhouse crops
Kempkes, F.L.K.; Bakker, J.C.; Braak, van de N.J.
1998-01-01
Based on physical transport processes (radiation, convection and latent heat transfer) a model has been developed to describe the vertical temperature distribution of a greenhouse crop. The radiation exchange factors between heating pipes, crop layers, soil and roof were determined as a function of
Modelling near subsurface temperature with mixed type boundary ...
Indian Academy of Sciences (India)
... of transfer coefficient and groundwater flux. There are significant changes in temperature and depth profiles due to changes in the transfer coefficient and groundwater flux. The analytical model will find applications in the interpretation of the borehole geothermal data to extract both climate and groundwater flow signals.
Modeling temperature variations in a pilot plant thermophilic anaerobic digester.
Valle-Guadarrama, Salvador; Espinosa-Solares, Teodoro; López-Cruz, Irineo L; Domaschko, Max
2011-05-01
A model that predicts temperature changes in a pilot plant thermophilic anaerobic digester was developed based on fundamental thermodynamic laws. The methodology utilized two simulation strategies. In the first, model equations were solved through a searching routine based on a minimal square optimization criterion, from which the overall heat transfer coefficient values, for both biodigester and heat exchanger, were determined. In the second, the simulation was performed with variable values of these overall coefficients. The prediction with both strategies allowed reproducing experimental data within 5% of the temperature span permitted in the equipment by the system control, which validated the model. The temperature variation was affected by the heterogeneity of the feeding and extraction processes, by the heterogeneity of the digestate recirculation through the heating system and by the lack of a perfect mixing inside the biodigester tank. The use of variable overall heat transfer coefficients improved the temperature change prediction and reduced the effect of a non-ideal performance of the pilot plant modeled.
Models of Solar Irradiance Variability and the Instrumental Temperature Record
Marcus, S. L.; Ghil, M.; Ide, K.
1998-01-01
The effects of decade-to-century (Dec-Cen) variations in total solar irradiance (TSI) on global mean surface temperature Ts during the pre-Pinatubo instrumental era (1854-1991) are studied by using two different proxies for TSI and a simplified version of the IPCC climate model.
Wang, Ruzhuan; Li, Weiguo; Ji, Baohua; Fang, Daining
2017-10-01
The particulate-reinforced ultra-high temperature ceramics (pUHTCs) have been particularly developed for fabricating the leading edge and nose cap of hypersonic vehicles. They have drawn intensive attention of scientific community for their superior fracture strength at high temperatures. However, there is no proper model for predicting the fracture strength of the ceramic composites and its dependency on temperature. In order to account for the effect of temperature on the fracture strength, we proposed a concept called energy storage capacity, by which we derived a new model for depicting the temperature dependent fracture toughness of the composites. This model gives a quantitative relationship between the fracture toughness and temperature. Based on this temperature dependent fracture toughness model and Griffith criterion, we developed a new fracture strength model for predicting the temperature dependent fracture strength of pUHTCs at different temperatures. The model takes into account the effects of temperature, flaw size and residual stress without any fitting parameters. The predictions of the fracture strength of pUHTCs in argon or air agreed well with the experimental measurements. Additionally, our model offers a mechanism of monitoring the strength of materials at different temperatures by testing the change of flaw size. This study provides a quantitative tool for design, evaluation and monitoring of the fracture properties of pUHTCs at high temperatures.
Sensitivities and uncertainties of modeled ground temperatures in mountain environments
Directory of Open Access Journals (Sweden)
S. Gubler
2013-08-01
Full Text Available Model evaluation is often performed at few locations due to the lack of spatially distributed data. Since the quantification of model sensitivities and uncertainties can be performed independently from ground truth measurements, these analyses are suitable to test the influence of environmental variability on model evaluation. In this study, the sensitivities and uncertainties of a physically based mountain permafrost model are quantified within an artificial topography. The setting consists of different elevations and exposures combined with six ground types characterized by porosity and hydraulic properties. The analyses are performed for a combination of all factors, that allows for quantification of the variability of model sensitivities and uncertainties within a whole modeling domain. We found that model sensitivities and uncertainties vary strongly depending on different input factors such as topography or different soil types. The analysis shows that model evaluation performed at single locations may not be representative for the whole modeling domain. For example, the sensitivity of modeled mean annual ground temperature to ground albedo ranges between 0.5 and 4 °C depending on elevation, aspect and the ground type. South-exposed inclined locations are more sensitive to changes in ground albedo than north-exposed slopes since they receive more solar radiation. The sensitivity to ground albedo increases with decreasing elevation due to shorter duration of the snow cover. The sensitivity in the hydraulic properties changes considerably for different ground types: rock or clay, for instance, are not sensitive to uncertainties in the hydraulic properties, while for gravel or peat, accurate estimates of the hydraulic properties significantly improve modeled ground temperatures. The discretization of ground, snow and time have an impact on modeled mean annual ground temperature (MAGT that cannot be neglected (more than 1 °C for several
Temperature-Corrected Model of Turbulence in Hot Jet Flows
Abdol-Hamid, Khaled S.; Pao, S. Paul; Massey, Steven J.; Elmiligui, Alaa
2007-01-01
An improved correction has been developed to increase the accuracy with which certain formulations of computational fluid dynamics predict mixing in shear layers of hot jet flows. The CFD formulations in question are those derived from the Reynolds-averaged Navier-Stokes equations closed by means of a two-equation model of turbulence, known as the k-epsilon model, wherein effects of turbulence are summarized by means of an eddy viscosity. The need for a correction arises because it is well known among specialists in CFD that two-equation turbulence models, which were developed and calibrated for room-temperature, low Mach-number, plane-mixing-layer flows, underpredict mixing in shear layers of hot jet flows. The present correction represents an attempt to account for increased mixing that takes place in jet flows characterized by high gradients of total temperature. This correction also incorporates a commonly accepted, previously developed correction for the effect of compressibility on mixing.
A model for quantification of temperature profiles via germination times
DEFF Research Database (Denmark)
Pipper, Christian Bressen; Adolf, Verena Isabelle; Jacobsen, Sven-Erik
2013-01-01
Current methodology to quantify temperature characteristics in germination of seeds is predominantly based on analysis of the time to reach a given germination fraction, that is, the quantiles in the distribution of the germination time of a seed. In practice interpolation between observed...... germination fractions at given monitoring times is used to obtain the time to reach a given germination fraction. As a consequence the obtained value will be highly dependent on the actual monitoring scheme used in the experiment. In this paper a link between currently used quantile models for the germination...... time and a specific type of accelerated failure time models is provided. As a consequence the observed number of germinated seeds at given monitoring times may be analysed directly by a grouped time-to-event model from which characteristics of the temperature profile may be identified and estimated...
Baryon number dissipation at finite temperature in the standard model
International Nuclear Information System (INIS)
Mottola, E.; Raby, S.; Starkman, G.
1990-01-01
We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate, γ is given in terms of real time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate, Γ: γ preceq n f Γ/T 3 . Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs
Recycler model magnet test on temperature compensation for strontium ferrite
International Nuclear Information System (INIS)
Yamada, R.; Foster, W.; Ostiguy, F.; Wake, M.
1995-10-01
The Recycler ring magnet will be made of Strontium ferrite permanent magnets. A strontium ferrite permanent magnet without compensation has a temperature coefficient of -0.2 % in dB/dT. To compensate this effect, we are utilizing 30 % Ni 70 % Fe alloy, a temperature compensation ferromagnetic material with a low Curie point. To search for optimum commercially available material and optimum condition, we made a couple of simple model magnets, and tested with several different compensating material. The test results are reported and its optimal conditions are shown. Several different configurations were tested including a possible 2 kG magnet configuration
Modeling high temperature materials behavior for structural analysis
Naumenko, Konstantin
2016-01-01
This monograph presents approaches to characterize inelastic behavior of materials and structures at high temperature. Starting from experimental observations, it discusses basic features of inelastic phenomena including creep, plasticity, relaxation, low cycle and thermal fatigue. The authors formulate constitutive equations to describe the inelastic response for the given states of stress and microstructure. They introduce evolution equations to capture hardening, recovery, softening, ageing and damage processes. Principles of continuum mechanics and thermodynamics are presented to provide a framework for the modeling materials behavior with the aim of structural analysis of high-temperature engineering components.
Elevated temperature alters carbon cycling in a model microbial community
Mosier, A.; Li, Z.; Thomas, B. C.; Hettich, R. L.; Pan, C.; Banfield, J. F.
2013-12-01
Earth's climate is regulated by biogeochemical carbon exchanges between the land, oceans and atmosphere that are chiefly driven by microorganisms. Microbial communities are therefore indispensible to the study of carbon cycling and its impacts on the global climate system. In spite of the critical role of microbial communities in carbon cycling processes, microbial activity is currently minimally represented or altogether absent from most Earth System Models. Method development and hypothesis-driven experimentation on tractable model ecosystems of reduced complexity, as presented here, are essential for building molecularly resolved, benchmarked carbon-climate models. Here, we use chemoautotropic acid mine drainage biofilms as a model community to determine how elevated temperature, a key parameter of global climate change, regulates the flow of carbon through microbial-based ecosystems. This study represents the first community proteomics analysis using tandem mass tags (TMT), which enable accurate, precise, and reproducible quantification of proteins. We compare protein expression levels of biofilms growing over a narrow temperature range expected to occur with predicted climate changes. We show that elevated temperature leads to up-regulation of proteins involved in amino acid metabolism and protein modification, and down-regulation of proteins involved in growth and reproduction. Closely related bacterial genotypes differ in their response to temperature: Elevated temperature represses carbon fixation by two Leptospirillum genotypes, whereas carbon fixation is significantly up-regulated at higher temperature by a third closely related genotypic group. Leptospirillum group III bacteria are more susceptible to viral stress at elevated temperature, which may lead to greater carbon turnover in the microbial food web through the release of viral lysate. Overall, this proteogenomics approach revealed the effects of climate change on carbon cycling pathways and other
A High Temperature Liquid Plasma Model of the Sun
Directory of Open Access Journals (Sweden)
Robitaille P.-M.
2007-01-01
Full Text Available In this work, a liquid model of the Sun is presented wherein the entire solar mass is viewed as a high density/high energy plasma. This model challenges our current understanding of the densities associated with the internal layers of the Sun, advocating a relatively constant density, almost independent of radial position. The incompressible nature of liquids is advanced to prevent solar collapse from gravitational forces. The liquid plasma model of the Sun is a non-equilibrium approach, where nuclear reactions occur throughout the solar mass. The primary means of addressing internal heat transfer are convection and conduction. As a result of the convective processes on the solar surface, the liquid model brings into question the established temperature of the solar photosphere by highlighting a violation of Kirchhoff’s law of thermal emission. Along these lines, the model also emphasizes that radiative emission is a surface phenomenon. Evidence that the Sun is a high density/high energy plasma is based on our knowledge of Planckian thermal emission and condensed matter, including the existence of pressure ionization and liquid metallic hydrogen at high temperatures and pressures. Prior to introducing the liquid plasma model, the historic and scientific justifications for the gaseous model of the Sun are reviewed and the gaseous equations of state are also discussed.
Constitutive model of discontinuous plastic flow at cryogenic temperatures
Skoczen, B; Bielski, J; Marcinek, D
2010-01-01
FCC metals and alloys are frequently used in cryogenic applications, nearly down to the temperature of absolute zero, because of their excellent physical and mechanical properties including ductility. Some of these materials, often characterized by the low stacking fault energy (LSFE), undergo at low temperatures three distinct phenomena: dynamic strain ageing (DSA), plastic strain induced transformation from the parent phase (gamma) to the secondary phase (alpha) and evolution of micro-damage. The constitutive model presented in the paper is focused on the discontinuous plastic flow (serrated yielding) and takes into account the relevant thermodynamic background. The discontinuous plastic flow reflecting the DSA effect is described by the mechanism of local catastrophic failure of Lomer-Cottrell (LC) locks under the stress fields related to the accumulating edge dislocations (below the transition temperature from the screw dislocations to the edge dislocations mode T-1). The failure of LC locks leads to mass...
Model predictive control of room temperature with disturbance compensation
Kurilla, Jozef; Hubinský, Peter
2017-08-01
This paper deals with temperature control of multivariable system of office building. The system is simplified to several single input-single output systems by decoupling their mutual linkages, which are separately controlled by regulator based on generalized model predictive control. Main part of this paper focuses on the accuracy of the office temperature with respect to occupancy profile and effect of disturbance. Shifting of desired temperature and changing of weighting coefficients are used to achieve the desired accuracy of regulation. The final structure of regulation joins advantages of distributed computing power and possibility to use network communication between individual controllers to consider the constraints. The advantage of using decoupled MPC controllers compared to conventional PID regulators is demonstrated in a simulation study.
Mathematical model of the metal mould surface temperature optimization
International Nuclear Information System (INIS)
Mlynek, Jaroslav; Knobloch, Roman; Srb, Radek
2015-01-01
The article is focused on the problem of generating a uniform temperature field on the inner surface of shell metal moulds. Such moulds are used e.g. in the automotive industry for artificial leather production. To produce artificial leather with uniform surface structure and colour shade the temperature on the inner surface of the mould has to be as homogeneous as possible. The heating of the mould is realized by infrared heaters located above the outer mould surface. The conceived mathematical model allows us to optimize the locations of infrared heaters over the mould, so that approximately uniform heat radiation intensity is generated. A version of differential evolution algorithm programmed in Matlab development environment was created by the authors for the optimization process. For temperate calculations software system ANSYS was used. A practical example of optimization of heaters locations and calculation of the temperature of the mould is included at the end of the article
Directory of Open Access Journals (Sweden)
Edder Parody Camargo
2015-01-01
Full Text Available El objetivo de este artículo de investigación es la predicción de las cotizaciones de las acciones del sector bancario que cotizaron en el índice general de la bolsa de valores de Colombia (IGBC durante el periodo 2008 – 2012. Para tal efecto, metodológicamente se hizo uso del modelo Log-normal complementado con simulaciones de Monte-Carlo a fin de determinar pruebas de bondad de ajuste del modelo mediante la raíz del error métrico cuadrado (RMSE. Los resultados encontrados indican que si bien es cierto el modelo sirve para tener una aproximación a los posibles valores mínimos y máximos que puede tomar las acciones, sus resultados carecen de la suficiente precisión para inducir la compra certera de este tipo de activo financiero, razón por la cual se recomienda en próximas investigaciones la aplicación de modelos con promedios móviles de suavizamiento exponencial y modelos de la familia Arch y Garch que generan mayor capacidad de predicción.
Ceramic vacuum tubes for geothermal well logging
Energy Technology Data Exchange (ETDEWEB)
Kelly, R.D.
1977-01-01
Useful design data acquired in the evaluation of ceramic vacuum tubes for the development of a 500/sup 0/C instrumentation amplifier are presented. The general requirements for ceramic vacuum tubes are discussed for application to the development of high temperature well logs. Commercially available tubes are described and future contract activities that specifically relate to ceramic vacuum tubes are detailed. Supplemental data are presented in the appendix.
A multifluid model extended for strong temperature nonequilibrium
Energy Technology Data Exchange (ETDEWEB)
Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-08
We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.
Logging Concessions Enable Illegal Logging Crisis in the Peruvian Amazon
Finer, Matt; Jenkins, Clinton N.; Sky, Melissa A. Blue; Pine, Justin
2014-04-01
The Peruvian Amazon is an important arena in global efforts to promote sustainable logging in the tropics. Despite recent efforts to achieve sustainability, such as provisions in the US-Peru Trade Promotion Agreement, illegal logging continues to plague the region. We present evidence that Peru's legal logging concession system is enabling the widespread illegal logging via the regulatory documents designed to ensure sustainable logging. Analyzing official government data, we found that 68.3% of all concessions supervised by authorities were suspected of major violations. Of the 609 total concessions, nearly 30% have been cancelled for violations and we expect this percentage to increase as investigations continue. Moreover, the nature of the violations indicate that the permits associated with legal concessions are used to harvest trees in unauthorized areas, thus threatening all forested areas. Many of the violations pertain to the illegal extraction of CITES-listed timber species outside authorized areas. These findings highlight the need for additional reforms.
Fedders, E. R.; Anderson, W. P., Jr.; Hengst, A. M.; Gu, C.
2017-12-01
Boone Creek is a headwater stream of low to moderate gradient located in Boone, North Carolina, USA. Total impervious surface coverage in the 5.2 km2 catchment drained by the 1.9 km study reach increases from 13.4% in the upstream half of the reach to 24.3% in the downstream half. Other markers of urbanization, including culverting, lack of riparian shade vegetation, and bank armoring also increase downstream. Previous studies have shown the stream to be prone to temperature surges on short timescales (minutes to hours) caused by summer runoff from the urban hardscaping. This study investigates the effects of urbanization on the stream's thermal regime at daily to yearly timescales. To do this, we developed an analytical model of daily average stream temperatures based on daily average air temperatures. We utilized a two-part model comprising annual and biannual components and a daily component consisting of a 3rd-order Markov process in order to fit the thermal dynamics of our small, gaining stream. Optimizing this model at each of our study sites in each studied year (78 total site-years of data) yielded annual thermal exchange coefficients (K) for each site. These K values quantify the strength of the relationship between stream and air temperature, or inverse thermal stability. In a uniform, pristine catchment environment, K values are expected to decrease downstream as the stream gains discharge volume and, therefore, thermal inertia. Interannual average K values for our study reach, however, show an overall increase from 0.112 furthest upstream to 0.149 furthest downstream, despite a near doubling of stream discharge between these monitoring points. K values increase only slightly in the upstream, less urban, half of the reach. A line of best fit through these points on a plot of reach distance versus K value has a slope of 2E-6. But the K values of downstream, more urbanized sites increase at a rate of 2E-5 per meter of reach distance, an order of magnitude
On the fate of the Standard Model at finite temperature
Energy Technology Data Exchange (ETDEWEB)
Rose, Luigi Delle; Marzo, Carlo [Università del Salento, Dipartimento di Matematica e Fisica “Ennio De Giorgi' ,Via Arnesano, 73100 Lecce (Italy); INFN - Sezione di Lecce,via Arnesano, 73100 Lecce (Italy); Urbano, Alfredo [SISSA - International School for Advanced Studies,via Bonomea 256, 34136 Trieste (Italy)
2016-05-10
In this paper we revisit and update the computation of thermal corrections to the stability of the electroweak vacuum in the Standard Model. At zero temperature, we make use of the full two-loop effective potential, improved by three-loop beta functions with two-loop matching conditions. At finite temperature, we include one-loop thermal corrections together with resummation of daisy diagrams. We solve numerically — both at zero and finite temperature — the bounce equation, thus providing an accurate description of the thermal tunneling. Assuming a maximum temperature in the early Universe of the order of 10{sup 18} GeV, we find that the instability bound excludes values of the top mass M{sub t}≳173.6 GeV, with M{sub h}≃125 GeV and including uncertainties on the strong coupling. We discuss the validity and temperature-dependence of this bound in the early Universe, with a special focus on the reheating phase after inflation.
Directory of Open Access Journals (Sweden)
Caro, Norma Patricia
2013-01-01
Full Text Available Este trabajo replica y adapta el modelo de Jones y Hensher (2004 a los datos de una economía emergente con el propósito de evaluar su validez externa. Se compara el desempeño del modelo logístico estándar en relación con el modelo logístico mixto para predecir el riesgo de crisis en el periodo 1993-2000, utilizando estados contables de empresas argentinas y ratios definidos en estudios de Altman y Jones y Hensher. Como en estudios anteriores, rentabilidad, rotación, endeudamiento y flujo de fondos operativos explican la probabilidad de crisis financiera. La contribución de esta nueva metodología reduce la tasa de error del tipo I a un 9 %. Se demuestra que el modelo logístico mixto, que tiene en cuenta la heterogeneidad no observada, supera ampliamente el desempeño del modelo logístico estándar. || This study is a replication and adaptation of Jones and Hensher (2004 model in an emerging economy with the purpose of testing its eternal validity. It compares the logistic standard model's performance with the logistic mixed model to predict bankruptcy risk of Argentinean companies between 1993-2000 by using financial statements and ratios defined in previous studies by Altman and Jones and Hensher. Similar to previous studies, profitability, asset turnover, debt and cash flow from operations explain financial distress' probability. The main contribution of this new methodology is the important reduction of error type I to the 9 %. This study asserts that the logistic mixed model, that considers the effect of non-observed heterogeneity, significantly improves the performance of the logistic standard model.
Modelling Ischemic Stroke and Temperature Intervention Using Vascular Porous Method
Blowers, Stephen; Valluri, Prashant; Marshall, Ian; Andrews, Peter; Harris, Bridget; Thrippleton, Michael
2017-11-01
In the event of cerebral infarction, a region of tissue is supplied with insufficient blood flow to support normal metabolism. This can lead to an ischemic reaction which incurs cell death. Through a reduction of temperature, the metabolic demand can be reduced, which then offsets the onset of necrosis. This allows extra time for the patient to receive medical attention and could help prevent permanent brain damage from occurring. Here, we present a vascular-porous (VaPor) blood flow model that can simulate such an event. Cerebral blood flow is simulated using a combination of 1-Dimensional vessels embedded in 3-Dimensional porous media. This allows for simple manipulation of the structure and determining the effect of an obstructed vessel. Results show regional temperature increase of 1-1.5°C comparable with results from literature (in contrast to previous simpler models). Additionally, the application of scalp cooling in such an event dramatically reduces the temperature in the affected region to near hypothermic temperatures, which points to a potential rapid form of first intervention.
Dewarless Logging Tool - 1st Generation
Energy Technology Data Exchange (ETDEWEB)
HENFLING,JOSEPH A.; NORMANN,RANDY A.
2000-08-01
This report focuses on Sandia National Laboratories' effort to create high-temperature logging tools for geothermal applications without the need for heat shielding. One of the mechanisms for failure in conventional downhole tools is temperature. They can only survive a limited number of hours in high temperature environments. For the first time since the evolution of integrated circuits, components are now commercially available that are qualified to 225 C with many continuing to work up to 300 C. These components are primarily based on Silicon-On-Insulator (SOI) technology. Sandia has developed and tested a simple data logger based on this technology that operates up to 300 C with a few limiting components operating to only 250 C without thermal protection. An actual well log to 240 C without shielding is discussed. The first prototype high-temperature tool measures pressure and temperature using a wire-line for power and communication. The tool is based around the HT83C51 microcontroller. A brief discussion of the background and status of the High Temperature Instrumentation program at Sandia, objectives, data logger development, and future project plans are given.
A neural network model for predicting weighted mean temperature
Ding, Maohua
2018-02-01
Water vapor is an important element of the Earth's atmosphere, and most of it concentrates at the bottom of the troposphere. Knowledge of the water vapor measured by Global Navigation Satellite Systems (GNSS) is an important direction of GNSS research. In particular, when the zenith wet delay is converted to precipitable water vapor, the weighted mean temperature T_m is a variable parameter to be determined in this conversion. The purpose of the study is getting a more accurate T_m model for global users by a combination of two different characteristics of T_m (i.e., the T_m seasonal variations and the relationships between T_m and surface meteorological elements). The modeling process was carried out by using the neural network technology. A multilayer feedforward neural network model (the NN) was established. The NN model is used with measurements of only surface temperature T_S . The NN was validated and compared with four other published global T_m models. The results show that the NN performed better than any of the four compared models on the global scale.
Measurement of Laser Weld Temperatures for 3D Model Input
Energy Technology Data Exchange (ETDEWEB)
Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-10-01
Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.
A short-range objective nocturnal temperature forecasting model
Sutherland, R. A.
1980-01-01
A relatively simple, objective, nocturnal temperature forecasting model suitable for freezing and near-freezing conditions has been designed so that a user, presumably a weather forecaster, can put in standard meteorological data at a particular location and receive an hour-by-hour prediction of surface and air temperatures for that location for an entire night. The user has the option of putting in his own estimates of wind speeds and background sky radiation which are treated as independent variables. An analysis of 141 test runs show that 57.4% of the time the model predicts to within 1 C for the best cases and to within 3 C for 98.0% of all cases.
Systems Modeling for Crew Core Body Temperature Prediction Postlanding
Cross, Cynthia; Ochoa, Dustin
2010-01-01
The Orion Crew Exploration Vehicle, NASA s latest crewed spacecraft project, presents many challenges to its designers including ensuring crew survivability during nominal and off nominal landing conditions. With a nominal water landing planned off the coast of San Clemente, California, off nominal water landings could range from the far North Atlantic Ocean to the middle of the equatorial Pacific Ocean. For all of these conditions, the vehicle must provide sufficient life support resources to ensure that the crew member s core body temperatures are maintained at a safe level prior to crew rescue. This paper will examine the natural environments, environments created inside the cabin and constraints associated with post landing operations that affect the temperature of the crew member. Models of the capsule and the crew members are examined and analysis results are compared to the requirement for safe human exposure. Further, recommendations for updated modeling techniques and operational limits are included.
Foundations of modelling of nonequilibrium low-temperature plasmas
Alves, L. L.; Bogaerts, A.; Guerra, V.; Turner, M. M.
2018-02-01
This work explains the need for plasma models, introduces arguments for choosing the type of model that better fits the purpose of each study, and presents the basics of the most common nonequilibrium low-temperature plasma models and the information available from each one, along with an extensive list of references for complementary in-depth reading. The paper presents the following models, organised according to the level of multi-dimensional description of the plasma: kinetic models, based on either a statistical particle-in-cell/Monte-Carlo approach or the solution to the Boltzmann equation (in the latter case, special focus is given to the description of the electron kinetics); multi-fluid models, based on the solution to the hydrodynamic equations; global (spatially-average) models, based on the solution to the particle and energy rate-balance equations for the main plasma species, usually including a very complete reaction chemistry; mesoscopic models for plasma–surface interaction, adopting either a deterministic approach or a stochastic dynamical Monte-Carlo approach. For each plasma model, the paper puts forward the physics context, introduces the fundamental equations, presents advantages and limitations, also from a numerical perspective, and illustrates its application with some examples. Whenever pertinent, the interconnection between models is also discussed, in view of multi-scale hybrid approaches.
Non-local Thirring model at finite temperature
Energy Technology Data Exchange (ETDEWEB)
Manias, M.V.; Naon, C.M.; Trobo, M.L. [La Plata Univ. Nacional (Argentina). Dept. de Fisica]|[Consejo Nacional de Investigaciones Cientificas y Tecnicas (Argentina)
1998-08-17
We extend a recently proposed non-local and non-covariant version of the Thirring model to the finite-temperature case. We obtain a completely bosonized expression for the partition function, describing the thermodynamics of the collective modes which are the underlying excitations of this system. From this result we derive closed formulae for the free-energy, specific-heat, two-point correlation functions and momentum distribution, as functionals of electron-electron coupling potentials. (orig.) 15 refs.
Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.
2014-01-01
Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.
SMOS brightness temperature assimilation into the Community Land Model
Rains, Dominik; Han, Xujun; Lievens, Hans; Montzka, Carsten; Verhoest, Niko E. C.
2017-11-01
SMOS (Soil Moisture and Ocean Salinity mission) brightness temperatures at a single incident angle are assimilated into the Community Land Model (CLM) across Australia to improve soil moisture simulations. Therefore, the data assimilation system DasPy is coupled to the local ensemble transform Kalman filter (LETKF) as well as to the Community Microwave Emission Model (CMEM). Brightness temperature climatologies are precomputed to enable the assimilation of brightness temperature anomalies, making use of 6 years of SMOS data (2010-2015). Mean correlation R with in situ measurements increases moderately from 0.61 to 0.68 (11 %) for upper soil layers if the root zone is included in the updates. A reduced improvement of 5 % is achieved if the assimilation is restricted to the upper soil layers. Root-zone simulations improve by 7 % when updating both the top layers and root zone, and by 4 % when only updating the top layers. Mean increments and increment standard deviations are compared for the experiments. The long-term assimilation impact is analysed by looking at a set of quantiles computed for soil moisture at each grid cell. Within hydrological monitoring systems, extreme dry or wet conditions are often defined via their relative occurrence, adding great importance to assimilation-induced quantile changes. Although still being limited now, longer L-band radiometer time series will become available and make model output improved by assimilating such data that are more usable for extreme event statistics.
SMOS brightness temperature assimilation into the Community Land Model
Directory of Open Access Journals (Sweden)
D. Rains
2017-11-01
Full Text Available SMOS (Soil Moisture and Ocean Salinity mission brightness temperatures at a single incident angle are assimilated into the Community Land Model (CLM across Australia to improve soil moisture simulations. Therefore, the data assimilation system DasPy is coupled to the local ensemble transform Kalman filter (LETKF as well as to the Community Microwave Emission Model (CMEM. Brightness temperature climatologies are precomputed to enable the assimilation of brightness temperature anomalies, making use of 6 years of SMOS data (2010–2015. Mean correlation R with in situ measurements increases moderately from 0.61 to 0.68 (11 % for upper soil layers if the root zone is included in the updates. A reduced improvement of 5 % is achieved if the assimilation is restricted to the upper soil layers. Root-zone simulations improve by 7 % when updating both the top layers and root zone, and by 4 % when only updating the top layers. Mean increments and increment standard deviations are compared for the experiments. The long-term assimilation impact is analysed by looking at a set of quantiles computed for soil moisture at each grid cell. Within hydrological monitoring systems, extreme dry or wet conditions are often defined via their relative occurrence, adding great importance to assimilation-induced quantile changes. Although still being limited now, longer L-band radiometer time series will become available and make model output improved by assimilating such data that are more usable for extreme event statistics.
Reheating temperature and gauge mediation models of supersymmetry breaking
International Nuclear Information System (INIS)
Olechowski, Marek; Pokorski, Stefan; Turzynski, Krzysztof; Wells, James D.
2009-01-01
For supersymmetric theories with gravitino dark matter, the maximal reheating temperature consistent with big bang nucleosynthesis bounds arises when the physical gaugino masses are degenerate. We consider the cases of a stau or sneutrino next-to-lightest superpartner, which have relatively less constraint from big bang nucleosynthesis. The resulting parameter space is consistent with leptogenesis requirements, and can be reached in generalized gauge mediation models. Such models illustrate a class of theories that overcome the well-known tension between big bang nucleosynthesis and leptogenesis.
Small velocity and finite temperature variations in kinetic relaxation models
Markowich, Peter
2010-01-01
A small Knuden number analysis of a kinetic equation in the diffusive scaling is performed. The collision kernel is of BGK type with a general local Gibbs state. Assuming that the flow velocity is of the order of the Knudsen number, a Hilbert expansion yields a macroscopic model with finite temperature variations, whose complexity lies in between the hydrodynamic and the energy-transport equations. Its mathematical structure is explored and macroscopic models for specific examples of the global Gibbs state are presented. © American Institute of Mathematical Sciences.
High-temperature series expansions for random Potts models
Directory of Open Access Journals (Sweden)
M.Hellmund
2005-01-01
Full Text Available We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2 and 4-state Potts model in three dimensions up to the order 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.
Effects of electrostatic discharge on three cryogenic temperature sensor models
International Nuclear Information System (INIS)
Courts, S. Scott; Mott, Thomas B.
2014-01-01
Cryogenic temperature sensors are not usually thought of as electrostatic discharge (ESD) sensitive devices. However, the most common cryogenic thermometers in use today are thermally sensitive diodes or resistors - both electronic devices in their base form. As such, they are sensitive to ESD at some level above which either catastrophic or latent damage can occur. Instituting an ESD program for safe handling and installation of the sensor is costly and it is desirable to balance the risk of ESD damage against this cost. However, this risk cannot be evaluated without specific knowledge of the ESD vulnerability of the devices in question. This work examines three types of cryogenic temperature sensors for ESD sensitivity - silicon diodes, Cernox(trade mark, serif) resistors, and wire wound platinum resistors, all manufactured by Lake Shore Cryotronics, Inc. Testing was performed per TIA/EIA FOTP129 (Human Body Model). Damage was found to occur in the silicon diode sensors at discharge levels of 1,500 V. For Cernox(trade mark, serif) temperature sensors, damage was observed at 3,500 V. The platinum temperature sensors were not damaged by ESD exposure levels of 9,900 V. At the lower damage limit, both the silicon diode and the Cernox(trade mark, serif) temperature sensors showed relatively small calibration shifts of 1 to 3 K at room temperature. The diode sensors were stable with time and thermal cycling, but the long term stability of the Cernox(trade mark, serif) sensors was degraded. Catastrophic failure occurred at higher levels of ESD exposure
MODELING OF TEMPERATURE FIELDS IN A SOLID HEAT ACCUMULLATORS
Directory of Open Access Journals (Sweden)
S. S. Belimenko
2016-10-01
Full Text Available Purpose. Currently, one of the priorities of energy conservation is a cost savings for heating in commercial and residential buildings by the stored thermal energy during the night and its return in the daytime. Economic effect is achieved due to the difference in tariffs for the cost of electricity in the daytime and at night. One of the most common types of devices that allow accumulating and giving the resulting heat are solid heat accumulators. The main purpose of the work: 1 software development for the calculation of the temperature field of a flat solid heat accumulator, working due to the heat energy accumulation in the volume of thermal storage material without phase transition; 2 determination the temperature distribution in its volumes at convective heat transfer. Methodology. To achieve the study objectives a heat transfer theory and Laplace integral transform were used. On its base the problems of determining the temperature fields in the channels of heat accumulators, having different cross-sectional shapes were solved. Findings. Authors have developed the method of calculation and obtained solutions for the determination of temperature fields in channels of the solid heat accumulator in conditions of convective heat transfer. Temperature fields over length and thickness of channels were investigated. Experimental studies on physical models and industrial equipment were conducted. Originality. For the first time the technique of calculating the temperature field in the channels of different cross-section for the solid heat accumulator in the charging and discharging modes was proposed. The calculation results are confirmed by experimental research. Practical value. The proposed technique is used in the design of solid heat accumulators of different power as well as full-scale production of them was organized.
Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally
2018-02-01
1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. Area under plasma concentration versus time curve (AUC inf ) of dalbavancin is a key parameter and AUC inf /MIC ratio is a critical pharmacodynamic marker. 2. Using end of intravenous infusion concentration (i.e. C max ) C max versus AUC inf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. The predictions of the AUC inf were performed using published C max data by application of regression equations. The quotient of observed/predicted values rendered fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. The C max versus AUC inf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with a RMSE regression models, a single time point strategy of using C max (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting AUC inf of dalbavancin in patients.
Precipitates/Salts Model Calculations for Various Drift Temperature Environments
International Nuclear Information System (INIS)
Marnier, P.
2001-01-01
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b)
Low reheating temperatures in monomial and binomial inflationary models
International Nuclear Information System (INIS)
Rehagen, Thomas; Gelmini, Graciela B.
2015-01-01
We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied ϕ 2 inflationary potential is no longer favored by current CMB data, as well as ϕ p with p>2, a ϕ 1 potential and canonical reheating (w re =0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n s , implies an upper bound on the reheating temperature of T re ≲6×10 10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a ϕ 1 potential. We find that as a subdominant ϕ 2 term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T re =4 MeV is excluded by the Planck 2015 68% confidence limit
Referenceless magnetic resonance temperature imaging using Gaussian process modeling.
Yung, Joshua P; Fuentes, David; MacLellan, Christopher J; Maier, Florian; Liapis, Yannis; Hazle, John D; Stafford, R Jason
2017-07-01
During magnetic resonance (MR)-guided thermal therapies, water proton resonance frequency shift (PRFS)-based MR temperature imaging can quantitatively monitor tissue temperature changes. It is widely known that the PRFS technique is easily perturbed by tissue motion, tissue susceptibility changes, magnetic field drift, and modality-dependent applicator-induced artifacts. Here, a referenceless Gaussian process modeling (GPM)-based estimation of the PRFS is investigated as a methodology to mitigate unwanted background field changes. The GPM offers a complementary trade-off between data fitting and smoothing and allows prior information to be used. The end result being the GPM provides a full probabilistic prediction and an estimate of the uncertainty. GPM was employed to estimate the covariance between the spatial position and MR phase measurements. The mean and variance provided by the statistical model extrapolated background phase values from nonheated neighboring voxels used to train the model. MR phase predictions in the heating ROI are computed using the spatial coordinates as the test input. The method is demonstrated in ex vivo rabbit liver tissue during focused ultrasound heating with manually introduced perturbations (n = 6) and in vivo during laser-induced interstitial thermal therapy to treat the human brain (n = 1) and liver (n = 1). Temperature maps estimated using the GPM referenceless method demonstrated a RMS error of <0.8°C with artifact-induced reference-based MR thermometry during ex vivo heating using focused ultrasound. Nonheated surrounding areas were <0.5°C from the artifact-free MR measurements. The GPM referenceless MR temperature values and thermally damaged regions were within the 95% confidence interval during in vivo laser ablations. A new approach to estimation for referenceless PRFS temperature imaging is introduced that allows for an accurate probabilistic extrapolation of the background phase. The technique demonstrated reliable
Computer Modeling of Planetary Surface Temperatures in Introductory Astronomy Courses
Barker, Timothy; Goodman, J.
2013-01-01
Barker, T., and Goodman, J. C., Wheaton College, Norton, MA Computer modeling is an essential part of astronomical research, and so it is important that students be exposed to its powers and limitations in the first (and, perhaps, only) astronomy course they take in college. Building on the ideas of Walter Robinson (“Modeling Dynamic Systems,” Springer, 2002) we have found that STELLA software (ISEE Systems) allows introductory astronomy students to do sophisticated modeling by the end of two classes of instruction, with no previous experience in computer programming or calculus. STELLA’s graphical interface allows students to visualize systems in terms of “flows” in and out of “stocks,” avoiding the need to invoke differential equations. Linking flows and stocks allows feedback systems to be constructed. Students begin by building an easily understood system: a leaky bucket. This is a simple negative feedback system in which the volume in the bucket (a “stock”) depends on a fixed inflow rate and an outflow that increases in proportion to the volume in the bucket. Students explore how changing inflow rate and feedback parameters affect the steady-state volume and equilibration time of the system. This model is completed within a 50-minute class meeting. In the next class, students are given an analogous but more sophisticated problem: modeling a planetary surface temperature (“stock”) that depends on the “flow” of energy from the Sun, the planetary albedo, the outgoing flow of infrared radiation from the planet’s surface, and the infrared return from the atmosphere. Students then compare their STELLA model equilibrium temperatures to observed planetary temperatures, which agree with model ones for worlds without atmospheres, but give underestimates for planets with atmospheres, thus introducing students to the concept of greenhouse warming. We find that if we give the students part of this model at the start of a 50-minute class they are
[Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].
Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan
2005-06-01
Effect of enviroment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperaturs and with the temperature as an external variable. The constant temperature model was calibated with 45 wheat samples at the same temperature. The predicted results of two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, but the SEP of constant temperature (22 degrees C) model increased as the temperature difference enlarged, and the SEP is up to 0.602 when using this model at 4 degrees C. It was suggested that the temperature correctional model improves the analysis precision.
Tantalum strength model incorporating temperature, strain rate and pressure
Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt
Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Model for low temperature oxidation during long term interim storage
International Nuclear Information System (INIS)
Desgranges, Clara; Bertrand, Nathalie; Gauvain, Danielle; Terlain, Anne; Poquillon, Dominique; Monceau, Daniel
2004-01-01
For high-level nuclear waste containers in long-term interim storage, dry oxidation will be the first and the main degradation mode during about one century. The metal lost by dry oxidation over such a long period must be evaluated with a good reliability. To achieve this goal, modelling of the oxide scale growth is necessary and this is the aim of the dry oxidation studies performed in the frame of the COCON program. An advanced model based on the description of elementary mechanisms involved in scale growth at low temperatures, like partial interfacial control of the oxidation kinetics and/or grain boundary diffusion, is developed in order to increase the reliability of the long term extrapolations deduced from basic models developed from short time experiments. Since only few experimental data on dry oxidation are available in the temperature range of interest, experiments have also been performed to evaluate the relevant input parameters for models like grain size of oxide scale, considering iron as simplified material. (authors)
Theoretical temperature model with experimental validation for CLIC Accelerating Structures
AUTHOR|(CDS)2126138; Vamvakas, Alex; Alme, Johan
Micron level stability of the Compact Linear Collider (CLIC) components is one of the main requirements to meet the luminosity goal for the future $48 \\,km$ long underground linear accelerator. The radio frequency (RF) power used for beam acceleration causes heat generation within the aligned structures, resulting in mechanical movements and structural deformations. A dedicated control of the air- and water- cooling system in the tunnel is therefore crucial to improve alignment accuracy. This thesis investigates the thermo-mechanical behavior of the CLIC Accelerating Structure (AS). In CLIC, the AS must be aligned to a precision of $10\\,\\mu m$. The thesis shows that a relatively simple theoretical model can be used within reasonable accuracy to predict the temperature response of an AS as a function of the applied RF power. During failure scenarios or maintenance interventions, the RF power is turned off resulting in no heat dissipation and decrease in the overall temperature of the components. The theoretica...
Design and Modelling of Small Scale Low Temperature Power Cycles
DEFF Research Database (Denmark)
Wronski, Jorrit
he work presented in this report contributes to the state of the art within design and modelling of small scale low temperature power cycles. The study is divided into three main parts: (i) fluid property evaluation, (ii) expansion device investigations and (iii) heat exchanger performance...... times and below 10−7 away from the phase boundaries.Regarding expansion devices for small scale organic Rankine cycle (ORC) systems,this work focussed on reciprocating machines. A prototype of a reciprocating expander with a swept volume of 736 cm3 was tested and modelled. he model was written in object......-oriented Modelica code and was included in the thermo Cycle framework for small scale ORC systems. Special attention was paid to the valve system and a control method for variable expansion ratios was introduced based on a cogeneration scenario. Admission control based on evaporator and condenser conditions...
Engineering aspects of radiometric logging
International Nuclear Information System (INIS)
Huppert, P.
1982-01-01
Engineering problems encountered in the development of nuclear borehole logging techniques are discussed. Spectrometric techniques require electronic stability of the equipment. In addition the electronics must be capable of handling high count rates of randomly distributed pulses of fast rise time from the detector and the systems must be designed so that precise calibration is possible under field operating conditions. Components of a logging system are discussed in detail. They include the logging probe (electronics, detector, high voltage supply, preamplifier), electronic instrumentation for data collection and processing and auxiliary equipment
SDSS Log Viewer: visual exploratory analysis of large-volume SQL log data
Zhang, Jian; Chen, Chaomei; Vogeley, Michael S.; Pan, Danny; Thakar, Ani; Raddick, Jordan
2012-01-01
User-generated Structured Query Language (SQL) queries are a rich source of information for database analysts, information scientists, and the end users of databases. In this study a group of scientists in astronomy and computer and information scientists work together to analyze a large volume of SQL log data generated by users of the Sloan Digital Sky Survey (SDSS) data archive in order to better understand users' data seeking behavior. While statistical analysis of such logs is useful at aggregated levels, efficiently exploring specific patterns of queries is often a challenging task due to the typically large volume of the data, multivariate features, and data requirements specified in SQL queries. To enable and facilitate effective and efficient exploration of the SDSS log data, we designed an interactive visualization tool, called the SDSS Log Viewer, which integrates time series visualization, text visualization, and dynamic query techniques. We describe two analysis scenarios of visual exploration of SDSS log data, including understanding unusually high daily query traffic and modeling the types of data seeking behaviors of massive query generators. The two scenarios demonstrate that the SDSS Log Viewer provides a novel and potentially valuable approach to support these targeted tasks.
High temperature viscoplastic ratchetting: Material response or modeling artifact
International Nuclear Information System (INIS)
Freed, A.D.
1991-01-01
Ratchetting, the net accumulation of strain over a loading cycle, is a deformation mechanism that leads to distortions in shape, often resulting in a loss of function that culminates in structural failure. Viscoplastic ratchetting is prevalent at high homologous temperatures where viscous characteristics are prominent in material response. This deformation mechanism is accentuated by the presence of a mean stress; a consequence of interaction between thermal gradients and structural constraints. Favorable conditions for viscoplastic ratchetting exist in the Stirling engines being developed by the National Aeronautics and Space Administration (NASA) and the Department of Energy (DOE) for space and terrestrial power applications. To assess the potential for ratchetting and its effect on durability of high temperature structures requires a viscoplastic analysis of the design. But ratchetting is a very difficult phenomenon to accurately model. One must therefore ask whether the results from such an analysis are indicative of actual material behavior, or if they are artifacts of the theory being used in the analysis. There are several subtle aspects in a viscoplastic model that must be dealt with in order to accurately model ratchetting behavior, and therefore obtain meaningful predictions from it. In this paper, some of these subtlties and the necessary ratchet experiments needed to obtain an accurate viscoplastic representation of a material are discussed
Model for temperature-dependent magnetization of nanocrystalline materials
Energy Technology Data Exchange (ETDEWEB)
Bian, Q.; Niewczas, M. [Department of Materials Science and Engineering, McMaster University, Hamilton, Ontario L8S4M1 (Canada)
2015-01-07
A magnetization model of nanocrystalline materials incorporating intragrain anisotropies, intergrain interactions, and texture effects has been extended to include the thermal fluctuations. The method relies on the stochastic Landau–Lifshitz–Gilbert theory of magnetization dynamics and permits to study the magnetic properties of nanocrystalline materials at arbitrary temperature below the Currie temperature. The model has been used to determine the intergrain exchange constant and grain boundary anisotropy constant of nanocrystalline Ni at 100 K and 298 K. It is found that the thermal fluctuations suppress the strength of the intergrain exchange coupling and also reduce the grain boundary anisotropy. In comparison with its value at 2 K, the interparticle exchange constant decreases by 16% and 42% and the grain boundary anisotropy constant decreases by 28% and 40% at 100 K and 298 K, respectively. An application of the model to study the grain size-dependent magnetization indicates that when the thermal activation energy is comparable to the free energy of grains, the decrease in the grain size leads to the decrease in the magnetic permeability and saturation magnetization. The mechanism by which the grain size influences the magnetic properties of nc–Ni is discussed.
Cased-hole log analysis and reservoir performance monitoring
Bateman, Richard M
2015-01-01
This book addresses vital issues, such as the evaluation of shale gas reservoirs and their production. Topics include the cased-hole logging environment, reservoir fluid properties; flow regimes; temperature, noise, cement bond, and pulsed neutron logging; and casing inspection. Production logging charts and tables are included in the appendices. The work serves as a comprehensive reference for production engineers with upstream E&P companies, well logging service company employees, university students, and petroleum industry training professionals. This book also: · Provides methods of conveying production logging tools along horizontal well segments as well as measurements of formation electrical resistivity through casing · Covers new information on fluid flow characteristics in inclined pipe and provides new and improved nuclear tool measurements in cased wells · Includes updates on cased-hole wireline formation testing
Yang, Yan; Onishi, Takeo; Hiramatsu, Ken
2014-01-01
Simulation results of the widely used temperature index snowmelt model are greatly influenced by input air temperature data. Spatially sparse air temperature data remain the main factor inducing uncertainties and errors in that model, which limits its applications. Thus, to solve this problem, we created new air temperature data using linear regression relationships that can be formulated based on MODIS land surface temperature data. The Soil Water Assessment Tool model, which includes an improved temperature index snowmelt module, was chosen to test the newly created data. By evaluating simulation performance for daily snowmelt in three test basins of the Amur River, performance of the newly created data was assessed. The coefficient of determination (R (2)) and Nash-Sutcliffe efficiency (NSE) were used for evaluation. The results indicate that MODIS land surface temperature data can be used as a new source for air temperature data creation. This will improve snow simulation using the temperature index model in an area with sparse air temperature observations.
New materials for fireplace logs
Kieselback, D. J.; Smock, A. W.
1971-01-01
Fibrous insulation and refractory concrete are used for logs as well as fireproof walls, incinerator bricks, planters, and roof shingles. Insulation is lighter and more shock resistant than fireclay. Lightweight slag bonded with refractory concrete serves as aggregrate.
Enhanced Modeling of Remotely Sensed Annual Land Surface Temperature Cycle
Zou, Z.; Zhan, W.; Jiang, L.
2017-09-01
Satellite thermal remote sensing provides access to acquire large-scale Land surface temperature (LST) data, but also generates missing and abnormal values resulting from non-clear-sky conditions. Given this limitation, Annual Temperature Cycle (ATC) model was employed to reconstruct the continuous daily LST data over a year. The original model ATCO used harmonic functions, but the dramatic changes of the real LST caused by the weather changes remained unclear due to the smooth sine curve. Using Aqua/MODIS LST products, NDVI and meteorological data, we proposed enhanced model ATCE based on ATCO to describe the fluctuation and compared their performances for the Yangtze River Delta region of China. The results demonstrated that, the overall root mean square errors (RMSEs) of the ATCE was lower than ATCO, and the improved accuracy of daytime was better than that of night, with the errors decreased by 0.64 K and 0.36 K, respectively. The improvements of accuracies varied with different land cover types: the forest, grassland and built-up areas improved larger than water. And the spatial heterogeneity was observed for performance of ATC model: the RMSEs of built-up area, forest and grassland were around 3.0 K in the daytime, while the water attained 2.27 K; at night, the accuracies of all types significantly increased to similar RMSEs level about 2 K. By comparing the differences between LSTs simulated by two models in different seasons, it was found that the differences were smaller in the spring and autumn, while larger in the summer and winter.
DEFF Research Database (Denmark)
Matheswaran, K.; Blemmer, M.; Mortensen, J.
2011-01-01
Surface water–groundwater interactions at the stream interface influences, and at times controls the stream temperature, a critical water property driving biogeochemical processes. This study investigates the effects of these interactions on temperature of Stream Elverdamsåen in Denmark using...... the Distributed Temperature Sensing (DTS) system and instream temperature modelling. Locations of surface water–groundwater interactions were identified from the temperature data collected over a 2-km stream reach using a DTS system with 1-m spatial and 5-min temporal resolution. The stream under consideration...... exhibits three distinct thermal regimes within a 2 km reach length due to two major interactions. An energy balance model is used to simulate the instream temperature and to quantify the effect of these interactions on the stream temperature. This research demonstrates the effect of reach level small scale...
Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.
2017-08-01
Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwartz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
Cheng, Jianhua; Qi, Bing; Chen, Daidai; Landry, René
2015-05-13
This paper presents modification of Radial Basis Function Artificial Neural Network (RBF ANN)-based temperature compensation models for Interferometric Fiber Optical Gyroscopes (IFOGs). Based on the mathematical expression of IFOG output, three temperature relevant terms are extracted, which include: (1) temperature of fiber loops; (2) temperature variation of fiber loops; (3) temperature product term of fiber loops. Then, the input-modified RBF ANN-based temperature compensation scheme is established, in which temperature relevant terms are transferred to train the RBF ANN. Experimental temperature tests are conducted and sufficient data are collected and post-processed to form the novel RBF ANN. Finally, we apply the modified RBF ANN based on temperature compensation model in two IFOGs with temperature compensation capabilities. The experimental results show the proposed temperature compensation model could efficiently reduce the influence of environment temperature on the output of IFOG, and exhibit a better temperature compensation performance than conventional scheme without proposed improvements.
Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen
2017-05-01
This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity were taken into consideration. The simulated core and mean skin temperatures were compared to the published experimental results of subject tests at three levels of ambient temperatures of 20 °C, 30 °C, and 40 °C. Although lower signal-to-noise-ratio was observed, the developed model demonstrated positive performance at predicting core temperatures with a maximum difference between the simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress. Copyright © 2017 Elsevier Ltd. All rights reserved.
40 CFR 146.66 - Logging, sampling, and testing prior to new well operation.
2010-07-01
...: (A) Resistivity, spontaneous potential, and caliper logs before the casing is installed; and (B) A...) Upon installation of the long string casing: (A) Resistivity, spontaneous potential, porosity, caliper... radioactive tracer survey; (iii) A temperature or noise log; (iv) A casing inspection log, if required by the...
Zhang, Tao; Jiang, Feng; Yan, Lan; Xu, Xipeng
2017-12-26
The high-temperature hardness test has a wide range of applications, but lacks test standards. The purpose of this study is to develop a finite element method (FEM) model of the relationship between the high-temperature hardness and high-temperature, quasi-static compression experiment, which is a mature test technology with test standards. A high-temperature, quasi-static compression test and a high-temperature hardness test were carried out. The relationship between the high-temperature, quasi-static compression test results and the high-temperature hardness test results was built by the development of a high-temperature indentation finite element (FE) simulation. The simulated and experimental results of high-temperature hardness have been compared, verifying the accuracy of the high-temperature indentation FE simulation.The simulated results show that the high temperature hardness basically does not change with the change of load when the pile-up of material during indentation is ignored. The simulated and experimental results show that the decrease in hardness and thermal softening are consistent. The strain and stress of indentation were analyzed from the simulated contour. It was found that the strain increases with the increase of the test temperature, and the stress decreases with the increase of the test temperature.
Directory of Open Access Journals (Sweden)
Tao Zhang
2017-12-01
Full Text Available The high-temperature hardness test has a wide range of applications, but lacks test standards. The purpose of this study is to develop a finite element method (FEM model of the relationship between the high-temperature hardness and high-temperature, quasi-static compression experiment, which is a mature test technology with test standards. A high-temperature, quasi-static compression test and a high-temperature hardness test were carried out. The relationship between the high-temperature, quasi-static compression test results and the high-temperature hardness test results was built by the development of a high-temperature indentation finite element (FE simulation. The simulated and experimental results of high-temperature hardness have been compared, verifying the accuracy of the high-temperature indentation FE simulation.The simulated results show that the high temperature hardness basically does not change with the change of load when the pile-up of material during indentation is ignored. The simulated and experimental results show that the decrease in hardness and thermal softening are consistent. The strain and stress of indentation were analyzed from the simulated contour. It was found that the strain increases with the increase of the test temperature, and the stress decreases with the increase of the test temperature.
Modelling of the temperature field that accompanies friction stir welding
Directory of Open Access Journals (Sweden)
Nosal Przemysław
2017-01-01
Full Text Available The thermal modelling of the Friction Stir Welding process allows for better recognition and understanding of phenomena occurring during the joining process of different materials. It is of particular importance considering the possibilities of process technology parameters, optimization and the mechanical properties of the joint. This work demonstrates the numerical modelling of temperature distribution accompanying the process of friction stir welding. The axisymmetric problem described by Fourier’s type equation with internal heat source is considered. In order to solve the diffusive initial value problem a fully implicit scheme of the finite difference method is applied. The example under consideration deals with the friction stir welding of a plate (0.7 cm thick made of Al 6082-T6 by use of a tool made of tungsten alloy, whereas the material subjected to welding was TiC powder. Obtained results confirm both quantitatively and qualitatively experimental observations that the superior temperature corresponds to the zone where the pin joints the shoulder.
Geothermal well log interpretation state of the art. Final report
Energy Technology Data Exchange (ETDEWEB)
Sanyal, S.K.; Wells, L.E.; Bickham, R.E.
1980-01-01
An in-depth study of the state of the art in Geothermal Well Log Interpretation has been made encompassing case histories, technical papers, computerized literature searches, and actual processing of geothermal wells from New Mexico, Idaho, and California. A classification scheme of geothermal reservoir types was defined which distinguishes fluid phase and temperature, lithology, geologic province, pore geometry, salinity, and fluid chemistry. Major deficiencies of Geothermal Well Log Interpretation are defined and discussed with recommendations of possible solutions or research for solutions. The Geothermal Well Log Interpretation study and report has concentrated primarily on Western US reservoirs. Geopressured geothermal reservoirs are not considered.
International Nuclear Information System (INIS)
Farnan, R.A.; Mc Hattie, C.M.
1984-01-01
To improve the monitoring of logging company performance, computer programs were developed to assess information en masse from log quality check lists completed on wellsite by the service company engineer and Phillips representative. A study of all logging jobs performed by different service companies for Phillips in Oklahoma (panhandle excepted) during 1982 enabled several pertinent and beneficial interpretations to be made. Company A provided the best tool and crew service. Company B incurred an excessive amount of lost time related to tool failure, in particular the neutron-density tool combination. Company C, although used only three times, incurred no lost time. With a reasonable data base valid conclusions were made pertaining, for example, to repeated tool malfunctions. The actual logs were then assessed for quality
Inversion of a lateral log using neural networks
International Nuclear Information System (INIS)
Garcia, G.; Whitman, W.W.
1992-01-01
In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method which in turn is used as an input to a backpropagation neural network. An initial guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral is in good agreement with the actual log data
Data-Model Comparison of Pliocene Sea Surface Temperature
Dowsett, H. J.; Foley, K.; Robinson, M. M.; Bloemers, J. T.
2013-12-01
The mid-Piacenzian (late Pliocene) climate represents the most geologically recent interval of long-term average warmth and shares similarities with the climate projected for the end of the 21st century. As such, its fossil and sedimentary record represents a natural experiment from which we can gain insight into potential climate change impacts, enabling more informed policy decisions for mitigation and adaptation. We present the first systematic comparison of Pliocene sea surface temperatures (SST) between an ensemble of eight climate model simulations produced as part of PlioMIP (Pliocene Model Intercomparison Project) and the PRISM (Pliocene Research, Interpretation and Synoptic Mapping) Project mean annual SST field. Our results highlight key regional (mid- to high latitude North Atlantic and tropics) and dynamic (upwelling) situations where there is discord between reconstructed SST and the PlioMIP simulations. These differences can lead to improved strategies for both experimental design and temporal refinement of the palaeoenvironmental reconstruction. Scatter plot of multi-model-mean anomalies (squares) and PRISM3 data anomalies (large blue circles) by latitude. Vertical bars on data anomalies represent the variability of warm climate phase within the time-slab at each locality. Small colored circles represent individual model anomalies and show the spread of model estimates about the multi-model-mean. While not directly comparable in terms of the development of the means nor the meaning of variability, this plot provides a first order comparison of the anomalies. Encircled areas are a, PRISM low latitude sites outside of upwelling areas; b, North Atlantic coastal sequences and Mediterranean sites; c, large anomaly PRISM sites from the northern hemisphere. Numbers identify Ocean Drilling Program sites.
B. B. B. Booth; D. Bernie; D. McNeall; E. Hawkins; J. Caesar; C. Boulton; P. Friedlingstein; D. Sexton
2012-01-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from emission driven rather than concentration driven perturbed parameter ensemble of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission rather than concen...
Juneja, Vijay K; Valenzuela-Melendres, Martin; Heperkan, Dilek; Bautista, Derrick; Anderson, David; Hwang, Cheng-An; Peña-Ramos, Aida; Camou, Juan Pedro; Torrentera-Olivera, Noemi
2016-11-07
The objective of this study was to develop a predictive model for the inactivation of Salmonella spp. in ground beef jerky as a function of temperature (T), pH, potassium sorbate (PS), and final water activity (aw). Following a central composite design, ground beef was combined with PS (0 to 0.3%, w/w), pH adjusted from 5 to 7, inoculated with a cocktail of 6 serotypes of Salmonella spp. and heat processed at temperatures between 65 and 85°C until the final aw ranging from 0.65 to 0.85 was achieved. Surviving Salmonella cells were enumerated on tryptic soy agar overlaid with xylose lysine deoxycholate agar (pre-tempered to 47°C) after incubation for 48h at 30°C. Bacterial inactivation was quantified in terms of logarithmic reductions of Salmonella counts (log10CFU/g) and inactivation rate (log10(CFU/g)/h). The results indicated that pH, PS and T significantly (pbeef jerky. Decreasing meat pH significantly (pBeef jerky processed at 82°C, pH5.5, with 0.25% PS to a final aw of 0.7 resulted in a maximum Salmonella logarithmic reduction of 5.0log10CFU/g and an inactivation rate of 1.3log10(CFU/g)/h. The predictive model developed can be used to effectively design drying processes for beef jerky under low humidity conditions and thereby, ensuring an adequate degree of protection against risks associated with Salmonella spp. Copyright © 2016. Published by Elsevier B.V.
Deterministic Modeling of the High Temperature Test Reactor
Energy Technology Data Exchange (ETDEWEB)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal z full core solver used in this study and is based on the Green’s Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
FEM modelling of firing temperature and stress zones
Energy Technology Data Exchange (ETDEWEB)
Schulle, W.; Schultz, K. [Bergakademie Freiberg (Germany)
1999-03-01
In the introduction, the principal possibilities of using finite element modelling (FEM) for problem solving in the firing processes of ceramics are given. Subsequently, two concrete application examples are described. In the first example, the temperature and stress calculation during biscuit firing of porcelain is discussed. The results are given for the influence of the heating rate for stacked firing and single-layer firing of flat tableware, the influence of the positioning of the individual plates in the plate stack, and changes in shape of the article during the formation of temperature and stress fields could be estimated. In a second example, the heat stresses that arise during the firing of high voltage insulators were calculated. It can be shown how the progression of the stresses is influenced by the body geometry, especially with the insulators ''cup'' design. During the course of the firing process, a regulating influence is possible. The examples should encourage further problem solving by specific use of FEM. (orig.)
Estimating radiation and temperature data for crop simulation model
International Nuclear Information System (INIS)
Ferrer, A.B.; Centeno, H.G.S.; Sheehy, J.E.
1996-01-01
Weather (radiation and temperature) and crop characteristics determine the potential production of an irrigated rice crop. Daily weather data are important inputs to ORYZA 1, an eco-physiological crop model. However, in most cases, missing values occur and sometimes daily weather data are not readily available. More than 20 years of historic daily weather data had been collected from six stations in the Philippines -- Albay, Butuan, Munoz, Batac, Aborlan, and Los Banos. Methods to estimate daily weather data values were made by deriving long-term monthly means and (1) using the same value per month, (2) linearly interpolating between months, and (3) using SIMMETEO weather generator. A validated ORYZA 1 was run using actual daily weather data. The model was run again using weather data obtained from each estimation procedure and the predicted yields from the different simulation runs were compared. The yield predicted using the different weather data sets for each site difference by as much as 20 percent. Among the three estimation procedures used, the interpolated monthly mean values of weather data gave results comparable with those of model runs using actual weather data
Elevated Temperature Testing and Modeling of Advanced Toughened Ceramic Materials
Keith, Theo G.
2005-01-01
The purpose of this report is to provide a final report for the period of 12/1/03 through 11/30/04 for NASA Cooperative Agreement NCC3-776, entitled "Elevated Temperature Testing and Modeling of Advanced Toughened Ceramic Materials." During this final period, major efforts were focused on both the determination of mechanical properties of advanced ceramic materials and the development of mechanical test methodologies under several different programs of the NASA-Glenn. The important research activities made during this period are: 1. Mechanical properties evaluation of two gas-turbine grade silicon nitrides. 2) Mechanical testing for fuel-cell seal materials. 3) Mechanical properties evaluation of thermal barrier coatings and CFCCs and 4) Foreign object damage (FOD) testing.
Sousa, Paulo Teixeira de
2002-01-01
Dissertação (mestrado) - Universidade Federal de Santa Catarina, Centro Tecnológico. Programa de Pós-Graduação em Engenharia de Produção. Essa dissertação tem como objetivo demonstrar a importância da logística interna nas organizações prestadoras de serviço. A logística é vista como um processo capaz de aumentar a eficiência organizacional através da redução dos custos operacionais bem como agiliza os processos de movimentação das mercadorias em toda cadeia de abastecimento. A logística é...
A Model of Temperature-Dependent Young's Modulus for Ultrahigh Temperature Ceramics
Directory of Open Access Journals (Sweden)
Weiguo Li
2011-01-01
Full Text Available Based on the different sensitivities of material properties to temperature between ultrahigh temperature ceramics (UHTCs and traditional ceramics, the original empirical formula of temperature-dependent Young's modulus of ceramic materials is unable to describe the temperature dependence of Young's modulus of UHTCs which are used as thermal protection materials. In this paper, a characterization applied to Young's modulus of UHTC materials under high temperature which is revised from the original empirical formula is established. The applicable temperature range of the characterization extends to the higher temperature zone. This study will provide a basis for the characterization for strength and fracture toughness of UHTC materials and provide theoretical bases and technical reserves for the UHTC materials' design and application in the field of spacecraft.
Mud Logging; Control geologico en perforaciones petroliferas (Mud Logging)
Energy Technology Data Exchange (ETDEWEB)
Pumarega Lafuente, J.C.
1994-12-31
Mud Logging is an important activity in the oil field and it is a key job in drilling operations, our duties are the acquisition, collection and interpretation of the geological and engineering data at the wellsite, also inform the client immediately of any significant changes in the well. (Author)
Energy Technology Data Exchange (ETDEWEB)
Marcon, Diogo Reato; Souza, Ana Paula Martins de; Vieira, Alexandre J.M. [PETROBRAS, Rio de Janeiro, RJ (Brazil)
2008-07-01
This work presents a new methodology for characterizing an unsampled oil interval, using basically production log data and PVT analyses available in the well. The methodology was applied to a real case, where the live oil samples were collected during a well test run in three different depths, revealing some evidence of a compositional grading due to gravity. Each individual sample was a mixture of the fluid produced from the reservoir bottom to the sampling point, since the whole interval was perforated and the isolation had to be made with a packer. The first sample was corresponding to the mixture of lower and all upper oils. The other two samples are only the heavier and that oil with part of the one from the upper interval. In order to identify the fluid properties from the upper interval, needed for production development studies, the following procedure was devised: equation-of-state tuning, reproducing the sampled fluid properties; conversion of volumetric flowrates from production log into mass and molar flowrates; flowrate ratio calculation, between the upper and lower intervals; upper interval fluid composition estimative; upper interval fluid properties simulation, using the previously tuned equation-of-state, thus generating what was considered a representative, synthetic PVT analysis. (author)
Pulsed neutron porosity logging system
International Nuclear Information System (INIS)
Smith, H.D. Jr.; Smith, M.P.; Schultz, W.E.
1978-01-01
An improved pulsed neutron porosity logging system is provided in the present invention. A logging tool provided with a 14 MeV pulsed neutron source, an epithermal neutron detector, and a fast neutron detector is moved through a borehole. Repetitive bursts of neutrons irradiate the earth formations and, during the bursts, the fast neutron population is sampled. During the interval between bursts the epithermal neutron population is sampled along with background gamma radiation due to lingering thermal neutrons. The fast and epithermal neutron population measurements are combined to provide a measurement of formation porosity
Directory of Open Access Journals (Sweden)
G. Polt
2015-10-01
Full Text Available In-situ X-ray diffraction was applied to isotactic polypropylene with a high volume fraction of α-phase (α-iPP while it has been compressed at temperatures below and above its glass transition temperature Tg. The diffraction patterns were evaluated by the Multi-reflection X-ray Profile Analysis (MXPA method, revealing microstructural parameters such as the density of dislocations and the size of coherently scattering domains (CSD-size. A significant difference in the development of the dislocation density was found compared to compression at temperatures above Tg, pointing at a different plastic deformation mechanism at these temperatures. Based on the individual evolutions of the dislocation density and CSD-size observed as a function of compressive strain, suggestions for the deformation mechanisms occurring below and above Tg are made.
New temperature model of the Netherlands from new data and novel modelling methodology
Bonté, Damien; Struijk, Maartje; Békési, Eszter; Cloetingh, Sierd; van Wees, Jan-Diederik
2017-04-01
Deep geothermal energy has grown in interest in Western Europe in the last decades, for direct use but also, as the knowledge of the subsurface improves, for electricity generation. In the Netherlands, where the sector took off with the first system in 2005, geothermal energy is seen has a key player for a sustainable future. The knowledge of the temperature subsurface, together with the available flow from the reservoir, is an important factor that can determine the success of a geothermal energy project. To support the development of deep geothermal energy system in the Netherlands, we have made a first assessment of the subsurface temperature based on thermal data but also on geological elements (Bonté et al, 2012). An outcome of this work was ThermoGIS that uses the temperature model. This work is a revision of the model that is used in ThermoGIS. The improvement from the first model are multiple, we have been improving not only the dataset used for the calibration and structural model, but also the methodology trough an improved software (called b3t). The temperature dataset has been updated by integrating temperature on the newly accessible wells. The sedimentary description in the basin has been improved by using an updated and refined structural model and an improved lithological definition. A major improvement in from the methodology used to perform the modelling, with b3t the calibration is made not only using the lithospheric parameters but also using the thermal conductivity of the sediments. The result is a much more accurate definition of the parameters for the model and a perfected handling of the calibration process. The result obtain is a precise and improved temperature model of the Netherlands. The thermal conductivity variation in the sediments associated with geometry of the layers is an important factor of temperature variations and the influence of the Zechtein salt in the north of the country is important. In addition, the radiogenic heat
Whittington, A. G.; Romine, W. L.
2014-12-01
Understanding the dynamics of rhyolitic conduits and lava flows, requires precise knowledge of how viscosity (η) varies with temperature (T), pressure (P) and volatile content (X). In order to address the paucity of viscosity data for high-silica rhyolite at low water contents, which represent water saturation at near-surface conditions, we made 245 viscosity measurements on Mono Craters (California) rhyolites containing between 0.01 and 1.1 wt.% H2O, at temperatures between 796 and 1774 K using parallel plate and concentric cylinder methods at atmospheric pressure. We then developed and calibrated a new empirical model for the log of the viscosity of rhyolitic melts, where non-linear variations due to temperature and water content are nested within a linear dependence of log η on P. The model was fitted to a total of 563 data points: our 245 new data, 255 published data from rhyolites across a wide P-T-X space, and 63 data on haplogranitic and granitic melts under high P-T conditions. Statistically insignificant parameters were eliminated from the model in an effort to increase parsimony and the final model is simple enough for use in numerical models of conduit or lava flow dynamics: log η = -5.142+(13080-2982log(w+0.229))/(T-(98.9-175.9 log(w+0.229)))- P(0.0007-0.76/T ) where η is in Pa s, w is water content in wt.%, P is in MPa and T is in K. The root mean square deviation (rmsd) between the model predictions and the 563 data points used in calibration is 0.39 log units. Experimental constraints have led previously to spurious correlations between P, T, X and η in viscosity data sets, so that predictive models may struggle to correctly resolve the individual effects of P, T and X, and especially their cross-correlations. The increasing water solubility with depth inside a simple isothermal sheet of obsidian suggests that viscosity should decrease by ~1 order of magnitude at ~20m depth and by ~2 orders of magnitude at ~100m depth. If equilibrium water
A physics-based temperature model for ultrasonic vibration-assisted pelleting of cellulosic biomass.
Song, Xiaoxu; Yu, Xiaoming; Zhang, Meng; Pei, Z J; Wang, Donghai
2014-09-01
Temperature in ultrasonic vibration-assisted (UV-A) pelleting of cellulosic biomass has a significant impact on pellet quality. However, there are no reports on temperature models for UV-A pelleting of cellulosic biomass. The development of a physics-based temperature model can help to explain experimentally determined relations between UV-A pelleting process variables and temperature, and provide guidelines to optimize these process variables in order to produce pellets of good quality. This paper presents such a model for UV-A pelleting of cellulosic biomass. Development of the model is described first. Then temperature distribution is investigated using the model, and temperature difference between the top and the bottom surfaces of a pellet is explained. Based on this model, relations between process variables (ultrasonic power and pelleting duration) and temperature are predicted. Experiments were conducted for model verification, and the results agreed well with model predictions. Copyright © 2014 Elsevier B.V. All rights reserved.
High resolution gamma spectroscopy well logging system
International Nuclear Information System (INIS)
Giles, J.R.; Dooley, K.J.
1997-01-01
A Gamma Spectroscopy Logging System (GSLS) has been developed to study sub-surface radionuclide contamination. The absolute counting efficiencies of the GSLS detectors were determined using cylindrical reference sources. More complex borehole geometries were modeled using commercially available shielding software and correction factors were developed based on relative gamma-ray fluence rates. Examination of varying porosity and moisture content showed that as porosity increases, and as the formation saturation ratio decreases, relative gamma-ray fluence rates increase linearly for all energies. Correction factors for iron and water cylindrical shields were found to agree well with correction factors determined during previous studies allowing for the development of correction factors for type-304 stainless steel and low-carbon steel casings. Regression analyses of correction factor data produced equations for determining correction factors applicable to spectral gamma-ray well logs acquired under non-standard borehole conditions
Modelling property changes in graphite irradiated at changing irradiation temperature
CSIR Research Space (South Africa)
Kok, S
2011-01-01
Full Text Available A new method is proposed to predict the irradiation induced property changes in nuclear; graphite, including the effect of a change in irradiation temperature. The currently used method; to account for changes in irradiation temperature, the scaled...
Log-Euclidean Metrics for Contrast Preserving Decolorization.
Liu, Qiegen; Shao, Guangpu; Wang, Yuhao; Gao, Junbin; Leung, Henry
2017-08-25
This paper presents a novel Log-Euclidean metric inspired color-to-gray conversion model for faithfully preserving the contrast details of color image, which differs from the traditional Euclidean metric approaches. In the proposed model, motivated by the fact that Log-Euclidean metric has promising invariance properties such as inversion invariant and similarity invariant, we present a Log-Euclidean metric based maximum function to model the decolorization procedure. The Gaussian-like penalty function consisting of the log-Euclidean metric between gradients of the input color image and transformed grayscale image is incorporated to better reflect the degree of preserving feature discriminability and color ordering in color-to-gray conversion. A discrete searching algorithm is employed to solve the proposed model with linear parametric and non-negative constraints. Extensive evaluation experiments show that the proposed method outperforms the state-of-the-art methods both quantitatively and qualitatively.
Fluid temperatures: Modeling the thermal regime of a river network
Rhonda Mazza; Ashley Steel
2017-01-01
Water temperature drives the complex food web of a river network. Aquatic organisms hatch, feed, and reproduce in thermal niches within the tributaries and mainstem that comprise the river network. Changes in water temperature can synchronize or asynchronize the timing of their life stages throughout the year. The water temperature fluctuates over time and place,...
Soil Wetness Influences Log Skidding
William N. Darwin
1960-01-01
One of the least explored variables in timber harvesting is the effect of ground conditions on log production . The Southern Hardwoods Laboratory is studying this variable and its influence on performance of skidding vehicles in Southern bottom lands. The test reported here was designed to evaluate the effects of bark features on skidding coefficients, but it also...
Debra D. Warren
1989-01-01
Volumes and average values of log exports by port have been compiled by quarter for 1987. The tables show the four Northwest customs districts by ports, species, and destinations. These data were received from the U.S. Department of Commerce too late to be published in the 1987 quarterly reports, "Production, Prices, Employment, and Trade in Northwest Forest...
Postfire logging in riparian areas.
Gordon H. Reeves; Peter A. Bisson; Bruce E. Rieman; Lee E. Benda
2006-01-01
We reviewed the behavior of wildfire in riparian zones, primarily in the western United States, and the potential ecological consequences of postfire logging. Fire behavior in riparian zones is complex, but many aquatic and riparian organisms exhibit a suite of adaptations that allow relatively rapid recovery after fire. Unless constrained by other factors, fish tend...
Well-log based prediction of thermal conductivity
DEFF Research Database (Denmark)
Fuchs, Sven; Förster, Andrea
Rock thermal conductivity (TC) is paramount for the determination of heat flow and the calculation of temperature profiles. Due to the scarcity of drill cores compared to the availability of petrophysical well logs, methods are desired to indirectly predict TC in sedimentary basins. Most of the w......Rock thermal conductivity (TC) is paramount for the determination of heat flow and the calculation of temperature profiles. Due to the scarcity of drill cores compared to the availability of petrophysical well logs, methods are desired to indirectly predict TC in sedimentary basins. Most...
Roth, T. R.; Westhoff, M. C.; Huwald, H.; Huff, J. A.; Rubin, J. F.; Barrenetxea, G.; Vetterli, M.; Parriaux, A.; Selker, J. S.; Parlange, M.B.
2010-01-01
Elevated in-stream temperature has led to a surge in the occurrence of parasitic intrusion proliferative kidney disease and has resulted in fish kills throughout Switzerland’s waterways. Data from distributed temperature sensing (DTS) in-stream measurements for three cloud-free days in August 2007
Nuclear well logging in hydrology
International Nuclear Information System (INIS)
1971-01-01
The optimum development of regional and local groundwater resources requires a quantitative evaluation of its aquifers and aquicludes, and of the physical and chemical properties relevant to the recharge to and withdrawal of water from them. If an understanding of the groundwater regime is to be obtained, geological observations at outcrop must be augmented by subsurface measurements of the strata and the waters they contain. Measurements of many hydrological and geological parameters can be made in situ by nuclear geophysical well-logging methods. Very simply, well logging consists of lowering a measuring probe into a well and making a continuous record of the variations of a particular parameter with depth. In most circumstances, repetition of the measurements under differing hydrodynamic conditions results in a better definition of the flow regime in the aquifer. Nuclear well-logging techniques have for some years been capable of solving a number of the sub-surface measurement problems faced by hydrogeologists. However, the present usage of these methods varies from country to country and the literature concerning applications is scattered in the professional journals of several disciplines. The objective of this report is to include in a single reference volume descriptions of the physical principles of nuclear logging methods, their applications to hydrogeological problems and their limitations on a level suitable for the practising hydrologists with a limited knowledge of nuclear physics. The Working Group responsible for compiling the report recommended that it should cover a broad spectrum of hydrogeological investigations and problems. For example, it saw no valid reason to distinguish for the purposes of the report between well-logging applications for water-supply purposes and for water-flooding studies in the petroleum industry. Neutron measurements made for soil-moisture determinations in the unsaturated zone have been specifically omitted, however, as
A Three-Compartment Model Describing Temperature Changes in Tethered Flying Blowflies
Stavenga, D.G.; Schwering, P.B.W.; Tinbergen, J.
1993-01-01
A three-compartment model is presented that describes temperature measurements of tethered flying blowflies, obtained by thermal imaging. During rest, the body temperature is approximately equal to the ambient temperature. At the start of flight, the thorax temperature increases exponentially with a
On conditional independence and log-convexity
Czech Academy of Sciences Publication Activity Database
Matúš, František
2012-01-01
Roč. 48, č. 4 (2012), s. 1137-1147 ISSN 0246-0203 R&D Projects: GA AV ČR IAA100750603; GA ČR GA201/08/0539 Institutional support: RVO:67985556 Keywords : Conditional independence * Markov properties * factorizable distributions * graphical Markov models * log-convexity * Gibbs- Markov equivalence * Markov fields * Gaussian distributions * positive definite matrices * covariance selection model Subject RIV: BA - General Mathematics Impact factor: 0.933, year: 2012 http://library.utia.cas.cz/separaty/2013/MTR/matus-0386229.pdf
Hardwood log supply: a broader perspective
Iris Montague; Adri Andersch; Jan Wiedenbeck; Urs. Buehlmann
2015-01-01
At regional and state meetings we talk with others in our business about the problems we face: log exports, log quality, log markets, logger shortages, cash flow problems, the weather. These are familiar talking points and real and persistent problems. But what is the relative importance of these problems for log procurement in different regions of...
Palm distributions for log Gaussian Cox processes
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge
2017-01-01
This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that only differs from the original log Gaussian Cox process in the intensity function. This new...
Silva, Leonardo P; Gonzales-Barron, Ursula; Cadavez, Vasco; Sant'Ana, Anderson S
2015-04-01
In this work, all publicly-accessible published findings on Alicyclobacillus acidoterrestris heat resistance in fruit beverages as affected by temperature and pH were compiled. Then, study characteristics (protocols, fruit and variety, °Brix, pH, temperature, heating medium, culture medium, inactivation method, strains, etc.) were extracted from the primary studies, and some of them incorporated to a meta-analysis mixed-effects linear model based on the basic Bigelow equation describing the heat resistance parameters of this bacterium. The model estimated mean D* values (time needed for one log reduction at a temperature of 95 °C and a pH of 3.5) of Alicyclobacillus in beverages of different fruits, two different concentration types, with and without bacteriocins, and with and without clarification. The zT (temperature change needed to cause one log reduction in D-values) estimated by the meta-analysis model were compared to those ('observed' zT values) reported in the primary studies, and in all cases they were within the confidence intervals of the model. The model was capable of predicting the heat resistance parameters of Alicyclobacillus in fruit beverages beyond the types available in the meta-analytical data. It is expected that the compilation of the thermal resistance of Alicyclobacillus in fruit beverages, carried out in this study, will be of utility to food quality managers in the determination or validation of the lethality of their current heat treatment processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modeling seasonal surface temperature variations in secondary tropical dry forests
Cao, Sen; Sanchez-Azofeifa, Arturo
2017-10-01
Secondary tropical dry forests (TDFs) provide important ecosystem services such as carbon sequestration, biodiversity conservation, and nutrient cycle regulation. However, their biogeophysical processes at the canopy-atmosphere interface remain unknown, limiting our understanding of how this endangered ecosystem influences, and responds to the ongoing global warming. To facilitate future development of conservation policies, this study characterized the seasonal land surface temperature (LST) behavior of three successional stages (early, intermediate, and late) of a TDF, at the Santa Rosa National Park (SRNP), Costa Rica. A total of 38 Landsat-8 Thermal Infrared Sensor (TIRS) data and the Surface Reflectance (SR) product were utilized to model LST time series from July 2013 to July 2016 using a radiative transfer equation (RTE) algorithm. We further related the LST time series to seven vegetation indices which reflect different properties of TDFs, and soil moisture data obtained from a Wireless Sensor Network (WSN). Results showed that the LST in the dry season was 15-20 K higher than in the wet season at SRNP. We found that the early successional stages were about 6-8 K warmer than the intermediate successional stages and were 9-10 K warmer than the late successional stages in the middle of the dry season; meanwhile, a minimum LST difference (0-1 K) was observed at the end of the wet season. Leaf phenology and canopy architecture explained most LST variations in both dry and wet seasons. However, our analysis revealed that it is precipitation that ultimately determines the LST variations through both biogeochemical (leaf phenology) and biogeophysical processes (evapotranspiration) of the plants. Results of this study could help physiological modeling studies in secondary TDFs.
A critical view on temperature modelling for application in weather derivatives markets
International Nuclear Information System (INIS)
Šaltytė Benth, Jūratė; Benth, Fred Espen
2012-01-01
In this paper we present a stochastic model for daily average temperature. The model contains seasonality, a low-order autoregressive component and a variance describing the heteroskedastic residuals. The model is estimated on daily average temperature records from Stockholm (Sweden). By comparing the proposed model with the popular model of Campbell and Diebold (2005), we point out some important issues to be addressed when modelling the temperature for application in weather derivatives market. - Highlights: ► We present a stochastic model for daily average temperature, containing seasonality, a low-order autoregressive component and a variance describing the heteroskedastic residuals. ► We compare the proposed model with the popular model of Campbell and Diebold (2005). ► Some important issues to be addressed when modelling the temperature for application in weather derivatives market are pointed out.
DEFF Research Database (Denmark)
Mantzouni, Irene; Sørensen, Helle; O'Hara, Robert B.
2010-01-01
and Beverton and Holt stock–recruitment (SR) models were extended by applying hierarchical methods, mixed-effects models, and Bayesian inference to incorporate the influence of these ecosystem factors on model parameters representing cod maximum reproductive rate and carrying capacity. We identified...... the pattern of temperature effects on cod productivity at the species level and estimated SR model parameters with increased precision. Temperature impacts vary geographically, being positive in areas where temperatures are...
DEFF Research Database (Denmark)
Maiorano, Andrea; Martre, Pierre; Asseng, Senthold
2017-01-01
ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures >24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME...... uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model...
Rumiantsev, G V
2004-04-01
On created in laboratory heat-physical model of a rabbit body reflecting basic heat-physical parameters of the body such as: weight, size of a relative surface, heat absorption and heat conduction, heat capacity etc., a change of radial distribution of temperature and size was found across a superficial layer of evaporation of water from its surface, that simulates sweating, with various ratio of environmental temperature and capacity of electrical heater simulating heat production in animal. The experiments have shown that with evaporation of moisture from a surface of model in all investigated cases, there is an increase of superficial layer of body of a temperature gradient and simultaneous decrease of temperature of a model inside and on the surface. It seems that, with evaporation of a moisture from a surface of a body, the size of a temperature gradient in a thin superficial layer dependent in our experiments on capacity for heat production and environmental temperature, is increased and can be used in a live organism for definition of change in general heat content of the body with the purpose of maintenance of its thermal balance with environment.
A linear regression model for predicting PNW estuarine temperatures in a changing climate
Pacific Northwest coastal regions, estuaries, and associated ecosystems are vulnerable to the potential effects of climate change, especially to changes in nearshore water temperature. While predictive climate models simulate future air temperatures, no such projections exist for...
Feres Sahid
1987-01-01
RESUMEN El concepto logístico, se pudo ver reflejado con exactitud desde el punto de vista etimológico e histórico a través de la revista de la E.A.N; ya que tiene cierto carácter militar que lo hace característico a la gestión empresarial y de esto se formula un debate definitivo de este concepto.
Directory of Open Access Journals (Sweden)
Feres Sahid
1987-04-01
Full Text Available RESUMEN El concepto logístico, se pudo ver reflejado con exactitud desde el punto de vista etimológico e histórico a través de la revista de la E.A.N; ya que tiene cierto carácter militar que lo hace característico a la gestión empresarial y de esto se formula un debate definitivo de este concepto.
Chemical logging of geothermal wells
Allen, C.A.; McAtee, R.E.
The presence of geothermal aquifers can be detected while drilling in geothermal formations by maintaining a chemical log of the ratio of the concentrations of calcium to carbonate and bicarbonate ions in the return drilling fluid. A continuous increase in the ratio of the concentrations of calcium to carbonate and bicarbonate ions is indicative of the existence of a warm or hot geothermal aquifer at some increased depth.
Neutron capture in borehole logging
International Nuclear Information System (INIS)
Randall, R.R.
1981-01-01
The use is described of a pulsed source of fast neutrons and a radiation detector to measure the thermal neutron population decay rate in a well logging instrument. The macroscopic neutron absorption cross-section is calculated by taking the natural logarithm of the ratio of the detected radiation counts occurring within two measurement intervals of fixed duration and starting at a fixed time after a neutron burst. (U.K.)
Calibration of the radionuclide logging system germanium detector
International Nuclear Information System (INIS)
Randall, R.R.
1994-01-01
High resolution passive gamma-ray logging, high resolution gamma-ray-emitting nuclides in areas surrounding underground waste disposal facilities on the US Department of Energy's Hanford Site. Gamma-ray source concentrations are derived from log data by calculations that employ the calibration factors and correction functions described in this report. Calibration data were collected with a Radionuclide Logging System. Analyses of the calibration data established: (1) calibration factors for potassium, uranium, and thorium, and (2) a calibration function that permits assessments of cesium-137, cobalt-60, and other artificial nuclides not represented in the calibration models
Evaluation of brightness temperature from a forward model of ...
Indian Academy of Sciences (India)
Ground-based microwave radiometers are getting great attention in recent years due to their capability to profile the temperature and humidity at high temporal and vertical resolution in the lower troposphere. The process of retrieving these parameters from the measurements of radiometric brightness temperature.
Prediction of water temperature metrics using spatial modelling in ...
African Journals Online (AJOL)
Water temperature regime dynamics should be viewed regionally, where regional divisions have an inherent underpinning by an understanding of natural thermal variability. The aim of this research was to link key water temperature metrics to readily-mapped environmental surrogates, and to produce spatial images of ...
A model of evaluating the pseudogap temperature for high ...
Indian Academy of Sciences (India)
The observation of pseudogap in normal-state properties of high-temperature supercon- ducting (HTS) oxide materials has raised many questions about the origin and its relation with superconductivity. Emery and Kevilson [1] first used the term pseudogap temper- ature for underdoped high-Tc materials. The temperature at ...
Modelling and analysis of radial thermal stresses and temperature ...
African Journals Online (AJOL)
A theoretical investigation has been undertaken to study operating temperatures, heat fluxes and radial thermal stresses in the valves of a modern diesel engine with and without air-cavity. Temperatures, heat fluxes and radial thermal stresses were measured theoretically for both cases under all four thermal loading ...
Bio-logging of physiological parameters in higher marine vertebrates
Ponganis, Paul J.
2007-02-01
Bio-logging of physiological parameters in higher marine vertebrates had its origins in the field of bio-telemetry in the 1960s and 1970s. The development of microprocessor technology allowed its first application to bio-logging investigations of Weddell seal diving physiology in the early 1980s. Since that time, with the use of increased memory capacity, new sensor technology, and novel data processing techniques, investigators have examined heart rate, temperature, swim speed, stroke frequency, stomach function (gastric pH and motility), heat flux, muscle oxygenation, respiratory rate, diving air volume, and oxygen partial pressure (P) during diving. Swim speed, heart rate, and body temperature have been the most commonly studied parameters. Bio-logging investigation of pressure effects has only been conducted with the use of blood samplers and nitrogen analyses on animals diving at isolated dive holes. The advantages/disadvantages and limitations of recording techniques, probe placement, calibration techniques, and study conditions are reviewed.
Regional temperature models are needed for characterizing and mapping stream thermal regimes, establishing reference conditions, predicting future impacts and identifying critical thermal refugia. Spatial statistical models have been developed to improve regression modeling techn...
Glass Transition Temperature- and Specific Volume- Composition Models for Tellurite Glasses
Energy Technology Data Exchange (ETDEWEB)
Riley, Brian J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vienna, John D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2017-09-01
This report provides models for predicting composition-properties for tellurite glasses, namely specific gravity and glass transition temperature. Included are the partial specific coefficients for each model, the component validity ranges, and model fit parameters.
Zhou, Siyuan; Sheen, Shiowshuh; Pang, Yu-Hsin; Liu, Linshu; Yam, Kit L
2015-02-01
Salmonella is a microorganism of concern on a global basis for raw shrimp. This research modeled the impact of vapor thymol concentration (0, 0.8, and 1.6 mg/liter), storage temperature (8, 12, and 16°C), and modified atmosphere condition (0.04 as in the natural atmosphere and 59.5% CO2) against the growth behavior of a Salmonella cocktail (six strains) on raw shrimp. Lag time (hour) and maximum growth rate (log CFU per gram per hour), chosen as two growth indicators, were obtained through DMFit software and then developed into polynomial as well as nonlinear modified secondary models (dimensional and/or dimensionless), consisting of two or even three impact factors in the equations. The models were validated, and results showed that the predictive values from both models demonstrated good matches to the observed experimental values, yet the prediction based on lag time was more accurate than maximum growth rate. The information will provide the food industry with insight into the potential safety risk of Salmonella growth on raw shrimp under stressed conditions.
Accurately determining log and bark volumes of saw logs using high-resolution laser scan data
R. Edward Thomas; Neal D. Bennett
2014-01-01
Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...
AUTOMATED TECHNIQUE FOR CREATING LITHOLOGIC LOG PLOTS
Directory of Open Access Journals (Sweden)
Kristijan Posavec
2006-12-01
Full Text Available Paper presents automated technique for creating lithologic log plots. Technique is based on three computer tools: Microsoft (MS Access program, LogPlot program, and Visual Basic (VB macros for MS Excel. MS Access ensures professional storage of lithologic data which can be in that way easier and faster entered, searched, updated, and also used for different purposes, while LogPlot provides tools for creating lithologic log plots. VB macros enable transfer of lithologic data from MS Access to LogPlot. Data stored in MS Access are exported in ASCII files which are later used by LogPlot for creation of lithologic log plots. Presented concept facilitates creation of lithologic log plots, and automated technique enables processing of a large number of data i.e. creation of lareg number lithologic log plots in a short period of time (the paper is published in Croatian.
MCEM algorithm for the log-Gaussian Cox process
Delmas, Celine; Dubois-Peyrard, Nathalie; Sabbadin, Regis
2014-01-01
Log-Gaussian Cox processes are an important class of models for aggregated point patterns. They have been largely used in spatial epidemiology (Diggle et al., 2005), in agronomy (Bourgeois et al., 2012), in forestry (Moller et al.), in ecology (sightings of wild animals) or in environmental sciences (radioactivity counts). A log-Gaussian Cox process is a Poisson process with a stochastic intensity depending on a Gaussian random eld. We consider the case where this Gaussian random eld is ...
Application of oil-field well log interpretation techniques to the Cerro Prieto Geothermal Field
Energy Technology Data Exchange (ETDEWEB)
Ershaghi, I.; Phillips, L.B.; Dougherty, E.L.; Handy, L.L.
1979-10-01
An example is presented of the application of oil-field techniques to the Cerro Prieto Field, Mexico. The lithology in this field (sand-shale lithology) is relatively similar to oil-field systems. The study was undertaken as a part of the first series of case studies supported by the Geothermal Log Interpretation Program (GLIP) of the US Department of Energy. The suites of logs for individual wells were far from complete. This was partly because of adverse borehole conditions but mostly because of unavailability of high-temperature tools. The most complete set of logs was a combination of Dual Induction Laterolog, Compensated Formation Density Gamma Ray, Compensated Neutron Log, and Saraband. Temperature data about the wells were sketchy, and the logs had been run under pre-cooled mud condition. A system of interpretation consisting of a combination of graphic and numerical studies was used to study the logs. From graphical studies, evidence of hydrothermal alteration may be established from the trend analysis of SP (self potential) and ILD (deep induction log). Furthermore, the cross plot techniques using data from density and neutron logs may help in establishing compaction as well as rock density profile with depth. In the numerical method, R/sub wa/ values from three different resistivity logs were computed and brought into agreement. From this approach, values of formation temperature and mud filtrate resistivity effective at the time of logging were established.
C/O logging neutron generator with floating power supply
Jiang Zhon Gjin; Li Wen Sheng; Guo Jing Fu; Wei Bao Jie
2002-01-01
Floating power supply is used to neutron tube in logging neutron generator, which minimizes the space of high-strength electric field, and reduces the voltage gradient by 1/3 in the high-strength electric field. Under condition of floating power supply pulse width modulation autocontrol is used to stabilize operation of neutron generator in high temperature circumstance
Electronic Modeling and Design for Extreme Temperatures Project
National Aeronautics and Space Administration — We propose to develop electronics for operation at temperatures that range from -230oC to +130oC. This new technology will minimize the requirements for external...
Kinetic Modeling of Temperature Driven Flows in Short Microchannels
National Research Council Canada - National Science Library
Alexeenko, Alina A; Muntz, E. P; Gimelshein, Sergey F; Ketsdever, Andrew D
2005-01-01
The temperature driven gas flow in a two-dimensional finite length microchannel and a cylindrical tube are studied numerically with the goal of performance optimization of a nanomembrane-based Knudsen compressor...
Electronic Modeling and Design for Extreme Temperatures, Phase I
National Aeronautics and Space Administration — We propose to develop electronics for operation at temperatures that range from -230oC to +130oC. This new technology will minimize the requirements for external...
Quasispin model of itinerant magnetism: High-temperature theory
International Nuclear Information System (INIS)
Liu, S.H.
1977-01-01
The high-temperature properties of itinerant magnetic systems are examined by using the coherent-potential approximation. We assume a local moment on each atom so that at elevated temperatures there is a number of reversed spins. The coherent potential is solved, and from that the moment on each atom is determined self-consistently. It is found that when the condition for ferromagnetic ordering is satisfied, the local moments persist even above the critical temperature. Conversely, if local moments do not exist at high temperatures, the system can at most condense into a spin-density-wave state. Furthermore, spin-flip scatterings of the conduction electrons from the local moments give rise to additional correlation not treated in the coherent-potential approximation. This correlation energy is an important part of the coupling energy of the local moments. The relations between our work and the theories of Friedel, Hubbard, and others are discussed
Energy Technology Data Exchange (ETDEWEB)
Lee, S.Y.; Coronella, C.J.; Bhadkamkar, A.S.; Seader, J.D. [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Chemical and Fuels Engineering
1993-12-01
A two-stage, thermally coupled fluidized-bed reactor system has been developed for energy-efficient conversion of tar-sand bitumen to synthetic crude oil. Modeling and temperature control of a system are addressed in this study. A process model and transfer function are determined by a transient response technique and the reactor temperature are controlled by PI controllers with tuning settings determined by an internal model control (IMC) strategy. Using the IMC tuning method, sufficiently good control performance was experimentally observed without lengthy on-line tuning. It is shown that IMC strategy provides a means to directly use process knowledge to make a control decision. Although this control method allows for fine tuning by adjusting a single tuning parameter, it is not easy to determine the optimal value of this tuning parameter, which must be specified by the user. A novel method is presented to evaluate that parameter, which must be specified by the user. A novel method is presented to evaluate that parameter in this study. It was selected based on the magnitude of elements on the off-diagonal of the relative gain array to account for the effect of thermal coupling on control performance. It is shown that this method provides stable and fast control of reactor temperatures. By successfully decoupling the system, a simple method of extending the IMC tuning technique to multiinput/multioutput systems is obtained.
Two-dimensional model of laser alloying of binary alloy powder with interval of melting temperature
Knyzeva, A. G.; Sharkeev, Yu. P.
2017-10-01
The paper contains two-dimensional model of laser beam melting of powders from binary alloy. The model takes into consideration the melting of alloy in some temperature interval between solidus and liquidus temperatures. The external source corresponds to laser beam with energy density distributed by Gauss law. The source moves along the treated surface according to given trajectory. The model allows investigating the temperature distribution and thickness of powder layer depending on technological parameters.
Temperature-Dependent Conformations of Model Viscosity Index Improvers
Energy Technology Data Exchange (ETDEWEB)
Ramasamy, Uma Shantini; Cosimbescu, Lelia; Martini, Ashlie
2015-05-01
Lubricants are comprised of base oils and additives where additives are chemicals that are deliberately added to the oil to enhance properties and inhibit degradation of the base oils. Viscosity index (VI) improvers are an important class of additives that reduce the decline of fluid viscosity with temperature [1], enabling optimum lubricant performance over a wider range of operating temperatures. These additives are typically high molecular weight polymers, such as, but not limited to, polyisobutylenes, olefin copolymer, and polyalkylmethacrylates, that are added in concentrations of 2-5% (w/w). Appropriate polymers, when dissolved in base oil, expand from a coiled to an uncoiled state with increasing temperature [2]. The ability of VI additives to increase their molar volume and improve the temperature-viscosity dependence of lubricants suggests there is a strong relationship between molecular structure and additive functionality [3]. In this work, we aim to quantify the changes in polymer size with temperature for four polyisobutylene (PIB) based molecular structures at the nano-scale using molecular simulation tools. As expected, the results show that the polymers adopt more conformations at higher temperatures, and there is a clear indication that the expandability of a polymer is strongly influenced by molecular structure.
International Nuclear Information System (INIS)
Xu Jun; You Bo; Li Xin; Cui Juan
2007-01-01
To accurately measure temperatures, a novel temperature sensor based on a quartz tuning fork resonator has been designed. The principle of the quartz tuning fork temperature sensor is that the resonant frequency of the quartz resonator changes with the variation in temperature. This type of tuning fork resonator has been designed with a new doubly rotated cut work at flexural vibration mode as temperature sensor. The characteristics of the temperature sensor were evaluated and the results sufficiently met the target of development for temperature sensor. The theoretical model for temperature sensing has been developed and built. The sensor structure was analysed by finite element method (FEM) and optimized, including tuning fork geometry, tine electrode pattern and the sensor's elements size. The performance curve of output versus measured temperature is given. The results from theoretical analysis and experiments indicate that the sensor's sensitivity can reach 60 ppm 0 C -1 with the measured temperature range varying from 0 to 100 0 C
Wide Temperature Range Kinetics of Elementary Combustion Reactions for Army Models
National Research Council Canada - National Science Library
Fontijn, Arthur
2002-01-01
The goals of this program are to provide accurate kinetic data on isolated elementary reactions at temperatures relevant to Army combustion models, particularly for propellant combustion dark zones...
Dynamic temperature modeling of an SOFC using least squares support vector machines
Energy Technology Data Exchange (ETDEWEB)
Kang, Ying-Wei; Li, Jun; Cao, Guang-Yi; Tu, Heng-Yong [Institute of Fuel Cell, Shanghai Jiao Tong University, Shanghai 200240 (China); Li, Jian; Yang, Jie [School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)
2008-05-01
Cell temperature control plays a crucial role in SOFC operation. In order to design effective temperature control strategies by model-based control methods, a dynamic temperature model of an SOFC is presented in this paper using least squares support vector machines (LS-SVMs). The nonlinear temperature dynamics of the SOFC is represented by a nonlinear autoregressive with exogenous inputs (NARXs) model that is implemented using an LS-SVM regression model. Issues concerning the development of the LS-SVM temperature model are discussed in detail, including variable selection, training set construction and tuning of the LS-SVM parameters (usually referred to as hyperparameters). Comprehensive validation tests demonstrate that the developed LS-SVM model is sufficiently accurate to be used independently from the SOFC process, emulating its temperature response from the only process input information over a relatively wide operating range. The powerful ability of the LS-SVM temperature model benefits from the approaches of constructing the training set and tuning hyperparameters automatically by the genetic algorithm (GA), besides the modeling method itself. The proposed LS-SVM temperature model can be conveniently employed to design temperature control strategies of the SOFC. (author)
Energy Technology Data Exchange (ETDEWEB)
Souza, Edmilson Monteiro de; Silva, Ademir Xavier da; Lopes, Ricardo T., E-mail: emonteiro@nuclear.ufrj.b, E-mail: ademir@nuclear.ufrj.b, E-mail: ricardo@lin.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Lima, Inaya C.B., E-mail: inaya@lin.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Instituto Politecnico do Rio de Janeiro (IPRJ/UERJ), Nova Friburgo, RJ (Brazil); Correa, Samanda Cristine Arruda, E-mail: scorrea@cnen.gov.b [Comissao Nacional de Energia Nuclear (DIAPI/CGMI/CNEN), Rio de Janeiro, RJ (Brazil); Rocha, Paula L.F., E-mail: ferrucio@acd.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ)., RJ (Brazil). Dept. de Geologia
2011-10-26
This paper evaluates the absorbed dose and the effective dose on operators during the petroleum well logging with nuclear wireless that uses gamma radiation sources. To obtain the data, a typical scenery of a logging procedure will be simulated with MCNPX Monte Carlo code. The simulated logging probe was the Density Gamma Probe - TRISOND produced by Robertson Geolloging. The absorbed dose values were estimated through the anthropomorphic simulator in male voxel MAX. The effective dose values were obtained using the ICRP 103
Ricard, Ludovic P.; Chanu, Jean-Baptiste
2013-08-01
The evaluation of potential and resources during geothermal exploration requires accurate and consistent temperature characterization and modelling of the sub-surface. Existing interpretation and modelling approaches of 1D temperature measurements are mainly focusing on vertical heat conduction with only few approaches that deals with advective heat transport. Thermal regimes are strongly correlated to rock and fluid properties. Currently, no consensus exists for the identification of the thermal regime and the analysis of such dataset. We developed a new framework allowing the identification of thermal regimes by rock formations, the analysis and modelling of wireline logging and discrete temperature measurements by taking into account the geological, geophysical and petrophysics data. This framework has been implemented in the GeoTemp software package that allows the complete thermal characterization and modelling at the formation scale and that provides a set of standard tools for the processing wireline and discrete temperature data. GeoTempTM operates via a user friendly graphical interface written in Matlab that allows semi-automatic calculation, display and export of the results. Output results can be exported as Microsoft Excel spreadsheets or vector graphics of publication quality. GeoTemp™ is illustrated here with an example geothermal application from Western Australia and can be used for academic, teaching and professional purposes.
Indonesian commercial bus drum brake system temperature model
Wibowo, D. B.; Haryanto, I.; Laksono, N. P.
2016-03-01
Brake system is the most significant aspect of an automobile safety. It must be able to slow the vehicle, quickly intervening and reliable under varying conditions. Commercial bus in Indonesia, which often stops suddenly and has a high initial velocity, will raise the temperature of braking significantly. From the thermal analysis it is observed that for the bus with the vehicle laden mass of 15 tons and initial velocity of 80 km/h the temperature is increasing with time and reaches the highest temperature of 270.1 °C when stops on a flat road and reaches 311.2 °C on a declination road angle, ø, 20°. These temperatures exceeded evaporation temperature of brake oil DOT 3 and DOT 4. Besides that, the magnitude of the braking temperature also potentially lowers the friction coefficient of more than 30%. The brakes are pressed repeatedly and high-g decelerations also causes brake lining wear out quickly and must be replaced every 1 month as well as the emergence of a large thermal stress which can lead to thermal cracking or thermal fatigue crack. Brake fade phenomenon that could be the cause of many buses accident in Indonesia because of the failure of the braking function. The chances of accidents will be even greater when the brake is worn and not immediately replaced which could cause hot spots as rivets attached to the brake drum and brake oil is not changed for more than 2 years that could potentially lower the evaporation temperature because of the effect hygroscopic.
Indonesian commercial bus drum brake system temperature model
Energy Technology Data Exchange (ETDEWEB)
Wibowo, D. B., E-mail: rmt.bowo@gmail.com; Haryanto, I., E-mail: ismoyo2001@yahoo.de; Laksono, N. P., E-mail: priyolaksono89@gmail.com [Mechanical Engineering Dept., Faculty of Engineering, Diponegoro University (Indonesia)
2016-03-29
Brake system is the most significant aspect of an automobile safety. It must be able to slow the vehicle, quickly intervening and reliable under varying conditions. Commercial bus in Indonesia, which often stops suddenly and has a high initial velocity, will raise the temperature of braking significantly. From the thermal analysis it is observed that for the bus with the vehicle laden mass of 15 tons and initial velocity of 80 km/h the temperature is increasing with time and reaches the highest temperature of 270.1 °C when stops on a flat road and reaches 311.2 °C on a declination road angle, ø, 20°. These temperatures exceeded evaporation temperature of brake oil DOT 3 and DOT 4. Besides that, the magnitude of the braking temperature also potentially lowers the friction coefficient of more than 30%. The brakes are pressed repeatedly and high-g decelerations also causes brake lining wear out quickly and must be replaced every 1 month as well as the emergence of a large thermal stress which can lead to thermal cracking or thermal fatigue crack. Brake fade phenomenon that could be the cause of many buses accident in Indonesia because of the failure of the braking function. The chances of accidents will be even greater when the brake is worn and not immediately replaced which could cause hot spots as rivets attached to the brake drum and brake oil is not changed for more than 2 years that could potentially lower the evaporation temperature because of the effect hygroscopic.
Indonesian commercial bus drum brake system temperature model
International Nuclear Information System (INIS)
Wibowo, D. B.; Haryanto, I.; Laksono, N. P.
2016-01-01
Brake system is the most significant aspect of an automobile safety. It must be able to slow the vehicle, quickly intervening and reliable under varying conditions. Commercial bus in Indonesia, which often stops suddenly and has a high initial velocity, will raise the temperature of braking significantly. From the thermal analysis it is observed that for the bus with the vehicle laden mass of 15 tons and initial velocity of 80 km/h the temperature is increasing with time and reaches the highest temperature of 270.1 °C when stops on a flat road and reaches 311.2 °C on a declination road angle, ø, 20°. These temperatures exceeded evaporation temperature of brake oil DOT 3 and DOT 4. Besides that, the magnitude of the braking temperature also potentially lowers the friction coefficient of more than 30%. The brakes are pressed repeatedly and high-g decelerations also causes brake lining wear out quickly and must be replaced every 1 month as well as the emergence of a large thermal stress which can lead to thermal cracking or thermal fatigue crack. Brake fade phenomenon that could be the cause of many buses accident in Indonesia because of the failure of the braking function. The chances of accidents will be even greater when the brake is worn and not immediately replaced which could cause hot spots as rivets attached to the brake drum and brake oil is not changed for more than 2 years that could potentially lower the evaporation temperature because of the effect hygroscopic.
Collazo, Carlimar
2011-01-01
The statement of purpose is to analyze network monitoring logs to support the computer incident response team. Specifically, gain a clear understanding of the Uniform Resource Locator (URL) and its structure, and provide a way to breakdown a URL based on protocol, host name domain name, path, and other attributes. Finally, provide a method to perform data reduction by identifying the different types of advertisements shown on a webpage for incident data analysis. The procedures used for analysis and data reduction will be a computer program which would analyze the URL and identify and advertisement links from the actual content links.
Neutron borehole logging correction technique
International Nuclear Information System (INIS)
Goldman, L.H.
1978-01-01
In accordance with an illustrative embodiment of the present invention, a method and apparatus is disclosed for logging earth formations traversed by a borehole in which an earth formation is irradiated with neutrons and gamma radiation produced thereby in the formation and in the borehole is detected. A sleeve or shield for capturing neutrons from the borehole and producing gamma radiation characteristic of that capture is provided to give an indication of the contribution of borehole capture events to the total detected gamma radiation. It is then possible to correct from those borehole effects the total detected gamma radiation and any earth formation parameters determined therefrom
On detonation initiation by a temperature gradient for a detailed chemical reaction models
International Nuclear Information System (INIS)
Liberman, M.A.; Kiverin, A.D.; Ivanov, M.F.
2011-01-01
The evolution from a temperature gradient to a detonation is investigated for combustion mixture whose chemistry is governed by a detailed chemical kinetics. We show that a detailed chemical reaction model has a profound effect on the spontaneous wave concept for detonation initiation by a gradient of reactivity. The evolution to detonation due to a temperature gradient is considered for hydrogen-oxygen and hydrogen-air mixtures at different initial pressures. It is shown that the minimal length of the temperature gradient for which a detonation can be ignited is much larger than that predicted from a one-step chemical model. - Highlights: → We study detonation initiation by temperature gradient for detailed chemical models. → Detailed chemical models have a profound effect on the spontaneous wave concept. → Initiating detonation by temperature gradient differs from one-step model. → In real fuels DDT can not be initiated by temperature gradient.
Ulmer, Christopher J.; Motta, Arthur T.
2017-11-01
The development of TEM-visible damage in materials under irradiation at cryogenic temperatures cannot be explained using classical rate theory modeling with thermally activated reactions since at low temperatures thermal reaction rates are too low. Although point defect mobility approaches zero at low temperature, the thermal spikes induced by displacement cascades enable some atom mobility as it cools. In this work a model is developed to calculate "athermal" reaction rates from the atomic mobility within the irradiation-induced thermal spikes, including both displacement cascades and electronic stopping. The athermal reaction rates are added to a simple rate theory cluster dynamics model to allow for the simulation of microstructure evolution during irradiation at cryogenic temperatures. The rate theory model is applied to in-situ irradiation of ZrC and compares well at cryogenic temperatures. The results show that the addition of the thermal spike model makes it possible to rationalize microstructure evolution in the low temperature regime.
Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D.
2012-09-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from emission driven rather than concentration driven perturbed parameter ensemble of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission rather than concentration driven simulations (with 10-90 percentile ranges of 1.7 K for the aggressive mitigation scenario up to 3.9 K for the high end business as usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 degrees (RCP8.5) and even under aggressive mitigation (RCP2.6) temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission driven experiments, they do not change existing expectations (based on previous concentration driven experiments) on the timescale that different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. Both in case of SRES A1B and the Representative Concentration Pathways (RCPs), the concentration pathways used to drive GCM ensembles lies towards the lower end of our simulated distribution. This design decision (a legecy of previous assessments) is likely to lead concentration driven experiments to under-sample strong feedback responses in concentration driven projections. Our ensemble of emission driven simulations span the global temperature response of other multi-model frameworks except at the low end, where combinations of low climate sensitivity and low carbon cycle feedbacks lead to responses outside our ensemble range. The ensemble simulates a number of high end responses which lie above the CMIP5 carbon
Jacques Regniere; James Powell; Barbara Bentz; Vincent Nealis
2012-01-01
The developmental response of insects to temperature is important in understanding the ecology of insect life histories. Temperature-dependent phenology models permit examination of the impacts of temperature on the geographical distributions, population dynamics and management of insects. The measurement of insect developmental, survival and reproductive responses to...
Directory of Open Access Journals (Sweden)
Darko Medved
2015-01-01
Full Text Available With the introduction of Solvency II a consistent market approach to the valuation of insurance assets and liabilities is required. For the best estimate of life annuity provisions one should estimate the longevity risk of the insured population in Slovenia. In this paper the current minimum standard in Slovenia for calculating pension annuities is tested using the Lee-Carter model. In particular, the mortality of the Slovenian population is projected using the best fit from the stochastic mortality projections method. The projected mortality statistics are then corrected with the selection effect and compared with the current minimum standard.
Artificial intelligence approach to interwell log correlation
Energy Technology Data Exchange (ETDEWEB)
Lim, Jong-Se [Korea Maritime University, Pusan(Korea); Kang, Joo Myung [Seoul National University, Seoul(Korea); Kim, Jung Whan [Korea National Oil Corp., Anyang(Korea)
2000-04-30
This paper describes a new approach to automated interwell log correlation using artificial intelligence and principal component analysis. The approach to correlate wire line logging data is on the basis of a large set of subjective rules that are intended to represent human logical processes. The data processed are mainly the qualitative information such as the characteristics of the shapes extracted along log traces. The apparent geologic zones are identified by pattern recognition for the specific characteristics of log trace collected as a set of objects by object oriented programming. The correlation of zones between wells is made by rule-based inference program. The reliable correlation can be established from the first principal component logs derived from both the important information around well bore and the largest common part of variances of all available well log data. Correlation with field log data shows that this approach can make interwell log correlation more reliable and accurate. (author). 6 refs., 7 figs.
A stream temperature model for the Peace-Athabasca River basin
Morales-Marin, L. A.; Rokaya, P.; Wheater, H. S.; Lindenschmidt, K. E.
2017-12-01
Water temperature plays a fundamental role in water ecosystem functioning. Because it regulates flow energy and metabolic rates in organism productivity over a broad spectrum of space and time scales, water temperature constitutes an important indicator of aquatic ecosystems health. In cold region basins, stream water temperature modelling is also fundamental to predict ice freeze-up and break-up events in order to improve flood management. Multiple model approaches such as linear and multivariable regression methods, neural network and thermal energy budged models have been developed and implemented to simulate stream water temperature. Most of these models have been applied to specific stream reaches and trained using observed data, but very little has been done to simulate water temperature in large catchment river networks. We present the coupling of RBM model, a semi-Lagrangian water temperature model for advection-dominated river system, and MESH, a semi-distributed hydrological model, to simulate stream water temperature in river catchments. The coupled models are implemented in the Peace-Athabasca River basin in order to analyze the variation in stream temperature regimes under changing hydrological and meteorological conditions. Uncertainty of stream temperature simulations is also assessed in order to determine the degree of reliability of the estimates.
Directory of Open Access Journals (Sweden)
K. Rankinen
2004-01-01
Full Text Available Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values for soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981–August 1990 were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R2-values of the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R2-values of the testing period were between 0.87 and 0.94 at a depth of 20cm, and between 0.80 and 0.98 at 50cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for addition to catchment scale models. Keywords: soil temperature, snow model
Mathematical model for the strip temperature evolution on a continuous finishing hot mill
International Nuclear Information System (INIS)
Camurri, C.
2003-01-01
The goal of this work is to construct a mathematical model to describe the strip temperature evolution at a continuous finishing hot rolling mill. the model predicts in a satisfactory way the strip temperature, with a finishing one (exit of stand 6) with a deviation of + 6,5 degree centigrade for a mean temperature drop of 150 degree centigrade at the continuous finishing hot mill and a mean error of 4.3 %. It also predicts a coiler temperature with a difference of + 9,2 degree centigrade a mean temperature drop of 240 degree centigrade in the cooling table and a mean error of 3.8%. (Author) 16 refs
Nau, Patrick; Yin, Zhiyao; Geigle, Klaus Peter; Meier, Wolfgang
2017-12-01
Wall temperatures were measured with thermographic phosphors on the quartz walls of a model combustor in ethylene/air swirl flames at 3 bar. Three operating conditions were investigated with different stoichiometries and with or without additional injection of oxidation air downstream of the primary combustion zone. YAG:Eu and YAG:Dy were used to cover a total temperature range of 1000-1800 K. Measurements were challenging due to the high thermal background from soot and window degradation at high temperatures. The heat flux through the windows was estimated from the temperature gradient between the in- and outside of the windows. Differences in temperature and heat flux density profiles for the investigated cases can be explained very well with the previously measured differences in flame temperatures and flame shapes. The heat loss relative to thermal load is quite similar for all investigated flames (15-16%). The results complement previous measurements in these flames to investigate soot formation and oxidation. It is expected, that the data set is a valuable input for numerical simulations of these flames.
Climate change, global warming and coral reefs: modelling the effects of temperature.
Crabbe, M James C
2008-10-01
Climate change and global warming have severe consequences for the survival of scleractinian (reef-building) corals and their associated ecosystems. This review summarizes recent literature on the influence of temperature on coral growth, coral bleaching, and modelling the effects of high temperature on corals. Satellite-based sea surface temperature (SST) and coral bleaching information available on the internet is an important tool in monitoring and modelling coral responses to temperature. Within the narrow temperature range for coral growth, corals can respond to rate of temperature change as well as to temperature per se. We need to continue to develop models of how non-steady-state processes such as global warming and climate change will affect coral reefs.
Rodriguez-Sinobas, Leonor; Zubelzu, Sergio; Sobrino, Fernando Fernando; Sánchez, Raúl
2017-04-01
Most of the studies dealing with the development of water flow simulation models in soils, are calibrated using experimental data measured by soil probe sensors or tensiometers which locate at specific points in the study area. However since the beginning of the XXI century, the use of Distributed Fiber Optic Temperature Measurement for estimating temperature variation along a cable of fiber optic has been assessed in multiple environmental applications. Recently, its application combined with an active heating pulses technique (AHFO) has been reported as a sensor to estimate soil moisture. This method applies a known amount of heat to the soil and monitors the temperature evolution, which mainly depends on the soil moisture content. Thus, it allows estimations of soil water content every 12.5 cm along the fiber optic cable, as long as 1500 m , with 2 % accuracy , every second. This study presents the calibration of a soil water flow model (developed in Hydrus 2D) with the AHFO technique. The model predicts the distribution of soil water content of a green area irrigated by sprinkler irrigation. Several irrigation events have been evaluated in a green area located at the ETSI Agronómica, Agroalimentaria y Biosistemas in Madrid where an installation of 147 m of fiber optic cable at 15 cm depth is deployed. The Distribute Temperature Sensing unit was a SILIXA ULTIMA SR (Silixa Ltd, UK) and has spatial and temporal resolution of 0.29 m. Data logged in the DTS unit before, during and after the irrigation event were used to calibrate the estimations in the Hydrus 2D model during the infiltration and redistribution of soil water content within the irrigation interval. References: Karandish, F., & Šimůnek, J. (2016). A field-modeling study for assessing temporal variations of soil-water-crop interactions under water-saving irrigation strategies. Agricultural Water Management, 178, 291-303. Li, Y., Šimůnek, J., Jing, L., Zhang, Z., & Ni, L. (2014). Evaluation of
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Station log. 73.1820 Section 73.1820... Rules Applicable to All Broadcast Stations § 73.1820 Station log. (a) Entries must be made in the station log either manually by a person designated by the licensee who is in actual charge of the...
29 CFR 1917.18 - Log handling.
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Log handling. 1917.18 Section 1917.18 Labor Regulations...) MARINE TERMINALS Marine Terminal Operations § 1917.18 Log handling. (a) The employer shall ensure that structures (bunks) used to contain logs have rounded corners and rounded structural parts to avoid sling...
Linking log quality with product performance
D. W. Green; Robert Ross
1997-01-01
In the United States, log grading procedures use visual assessment of defects, in relation to the log scaling diameter, to estimate the yield of lumber that maybe expected from the log. This procedure was satisfactory when structural grades were based only on defect size and location. In recent years, however, structural products have increasingly been graded using a...
Selective logging in the Brazilian Amazon.
G. P. Asner; D. E. Knapp; E. N. Broadbent; P. J. C. Oliveira; M Keller; J. N. Silva
2005-01-01
Amazon deforestation has been measured by remote sensing for three decades. In comparison, selective logging has been mostly invisible to satellites. We developed a large-scale, high-resolution, automated remote-sensing analysis of selective logging in the top five timber-producing states of the Brazilian Amazon. Logged areas ranged from 12,075 to 19,823 square...
Selective logging and its relation to deforestation
Gregory P. Asner; Michael Keller; Marco Lentini; Frank Merry; Souza Jr. Carlos
2009-01-01
Selective logging is a major contributor to the social, economic, and ecological dynamics of Brazilian Amazonia. Logging activities have expanded from low-volume floodplain harvests in past centuries to high-volume operations today that take about 25 million m3 of wood from the forest each year. The most common high-impact conventional and often illegal logging...
High Temperature Flow Response Modeling of Ultra-Fine Grained Titanium
Directory of Open Access Journals (Sweden)
Seyed Vahid Sajadifar
2015-07-01
Full Text Available This work presents the mechanical behavior modeling of commercial purity titanium subjected to severe plastic deformation (SPD during post-SPD compression, at temperatures of 600-900 °C and at strain rates of 0.001-0.1 s−1. The flow response of the ultra-fine grained microstructure is modeled using the modified Johnson-Cook model as a predictive tool, aiding high temperature forming applications. It was seen that the model was satisfactory at all deformation conditions except for the deformation temperature of 600 °C. In order to improve the predictive capability, the model was extended with a corrective term for predictions at temperatures below 700 °C. The accuracy of the model was displayed with reasonable agreement, resulting in error levels of less than 5% at all deformation temperatures.
Modelling of flame temperature of solution combustion synthesis of ...
Indian Academy of Sciences (India)
Hydroxyapatite (HAp), an important bio-ceramic was successfully synthesized by combustion in the aqueous system containing calcium nitrate-di-ammonium hydrogen orthophosphate-urea. The combustion flame temperature of solution combustion reaction depends on various process parameters, and it plays a significant ...
A temperature dependent slip factor based thermal model for friction ...
Indian Academy of Sciences (India)
alloy using the finite element software ABACUS considering constant frictional heat source. Chao & Tang (2003) simulated a temperature distribution of FSW process using finite element software ABACUS .... Nandan et al (2006) stated that the material flow is significant when the viscosity is less than. 4 MPa-s for the ...
Mathematical Modelling of Effect of Ambient Temperature and ...
African Journals Online (AJOL)
Temperature distributions on the soil surface strongly depend on the state of the processes of mass and energy exchanges (radiation and convection, evaporation and water condensation, supply of water through precipitation and gaseous exchange). It was assumed that soil medium is homogeneous and parameters ...
Modeling and analytical simulation of high-temperature gas filtration ...
African Journals Online (AJOL)
High temperature filtration in combustion and gasification processes is a highly interdisciplinary field. Thus, particle technology in general has to be supported by elements of physics, chemistry, thermodynamics and heat and mass transfer processes. Presented in this paper is the analytical method for describing ...
Modelling of flame temperature of solution combustion synthesis of ...
Indian Academy of Sciences (India)
Administrator
other, the health risks associated with the use of allograft. (Hing et al 1999). .... very important reaction parameter. The lower value of furnace temperature was selected as 400°C and the upper value as 700°C. (iii) The precursor batch size was also identified as an ... of the value leads to unstable combustion flame. The con-.
A temperature dependent slip factor based thermal model for friction ...
Indian Academy of Sciences (India)
the tool shoulder and pin to predict the thermal history of aluminium alloy was developed by. Rajamanickam et al .... where σy is the temperature dependent yield stress of the workpiece material as shown in table 2. ... greater than the material yield shear stress, hence the material accelerates to a velocity less than the tool ...
Analytical model of transient temperature and thermal stress in ...
Indian Academy of Sciences (India)
for certain tensile strain as modulus of elasticity increases. The most effective factor that may influence maximum hoop stress is the absorption coefficient, since it is the reciprocal of the effective depth that absorbs power. Increasing absorption power within a small depth means high temperature gradient and consequently, ...
A physically-based model of global freshwater surface temperature
van Beek, L.P.H.; Eikelboom, T.; van Vliet, M.T.H.; Bierkens, M.F.P.
2012-01-01
Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through
A physically based model of global freshwater surface temperature
Beek, van L.P.H.; Eikelboom, T.; Vliet, van M.T.H.; Bierkens, M.F.P.
2012-01-01
Temperature determines a range of physical properties of water and exerts a strong control on surface water biogeochemistry. Thus, in freshwater ecosystems the thermal regime directly affects the geographical distribution of aquatic species through their growth and metabolism and indirectly through
Identifying the optimal supply temperature in district heating networks - A modelling approach
DEFF Research Database (Denmark)
Mohammadi, Soma; Bojesen, Carsten
2014-01-01
dynamically while the flow and pressure are calculated on the basis of steady state conditions. The implicit finite element method is applied to simulate the transient temperature behaviour in the network. Pipe network heat losses, pressure drop in the network and return temperature to the plant...... of this study is to develop a model for thermo-hydraulic calculation of low temperature DH system. The modelling is performed with emphasis on transient heat transfer in pipe networks. The pseudo-dynamic approach is adopted to model the District Heating Network [DHN] behaviour which estimates the temperature...
Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli
2016-01-01
To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in a MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e. smaller MMEs to be used effectively.
A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.
Directory of Open Access Journals (Sweden)
Guillaume Bal
Full Text Available Providing generic and cost effective modelling approaches to reconstruct and forecast freshwater temperature using predictors as air temperature and water discharge is a prerequisite to understanding ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts as it does not disentangle seasonality and long term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time varying means and amplitudes. Fitting and forecasting performances of this approach are compared with that of simple linear regression between water and air temperatures using i an emotive simulated example, ii application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach better fit data and does not exhibit forecasting bias in long term trends contrary to the linear regression. This new model also allows for more accurate forecasts of water temperature than linear regression together with a fair assessment of the uncertainty around forecasting. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
Modeling and simulation of a wheatstone bridge pressure sensor in high temperature with VHDL-AMS
Baccar, Sahbi; Levi, Timothée; Dallet, Dominique; Barbara, François
2013-01-01
International audience; This paper presents a model of a Wheatstone bridge sensor in VHDL-AMS. This model is useful to take into account the temperature effect on the sensor accuracy. The model is developed on the basis of a resistor model. Simulations are performed for three different combinations of parameters values. They confirm the resistors mismatch effect on the sensor accuracy in high temperature (HT).
A model-data comparison of the Holocene global sea surface temperature evolution
Lohmann, G.; Pfeiffer, M.; Laepple, T.; Leduc, G.; Kim, J.-H.
2013-01-01
We compare the ocean temperature evolution of the Holocene as simulated by climate models and reconstructed from marine temperature proxies. We use transient simulations from a coupled atmosphere-ocean general circulation model, as well as an ensemble of time slice simulations from the Paleoclimate
Modeling of Schottky Barrier Diode Millimeter-Wave Multipliers at Cryogenic Temperatures
DEFF Research Database (Denmark)
Johansen, Tom K.; Rybalko, Oleksandr; Zhurbenko, Vitaliy
2015-01-01
We report on the evaluation of Schottky barrier diode GaAs multipliers at cryogenic temperatures. A GaAs Schottky barrier diode model is developed for theoretical estimation of doubler performance. The model is used to predict efficiency of doublers from room to cryogenic temperatures. The theore...
Experimental Validation of Various Temperature Modells for Semi-Physical Tyre Model Approaches
Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia
2017-10-01
With increasing level of complexity and automation in the area of automotive engineering, the simulation of safety relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvement in tyre simulation, the needs for coping with tyre temperatures and the resulting changes in tyre characteristics are rising significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To investigate or rather evaluate the range of application of the presented approaches in combination with respect of further implementation in semi-physical tyre models, the main focus lies on the a physical parameterisation. Aside from good modelling accuracy, focus is held on computational time and complexity of the parameterisation process. To evaluate this process and discuss the results, measurements from a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally the simulation results are compared with the measurement data.
van Wees, Jan-Diederik; Bonte, Damien; Verweij, Hanneke; Kramers, Leslie
2010-05-01
Key to geothermal exploration success is sufficiently high temperature. This paper focusses on high resolution temperature prediction for geothermal exploration in sedimentary basins. In existing thermal basin models for oil and gas exploration, the focus is on predicting past temperature histories in the sedimentary cover for assessment of oil and gas maturation and expulsion. For detailed 3D models (i.e. involving millions of temperature nodes) these models take long to run and are hard to calibrate to both temperature data in wells and lithosphere boundary conditions. Moreover, spatial variations in basal heat flow is generally not controlled by tectonic boundary conditions. Tectonic models, capable of modelling the thermal consequences of basin evolution, allow to asses spatial heat flow variability based on lithosphere deformation, and provide additional constraints and better quantitative understanding of temperature anomalies. In order to improve modeling capability in terms of model resolution and incorporating tectonic effects, we have developed a novel 3D thermal basin model. In the model transient temperatures are calculated over the last 20 Million years for a 3D heat equation on a regular 3D finite difference grid, allowing for spatial variation in thermal properties, temporal variation in surface temperature and spatial and temporal variations in basal heat flow. Furthermore the model takes into account heat advection, including effects of sedimentation, and lithosphere deformation. The model is iteratively calibrated to temperature data at the well locations, typically taking less than 5 runs. In addition well locations basal heat flow conditions are interpolated based on tectonic constraints. The capabilities of the model are demonstrated for various sedimentary basins, including the Netherlands. The models have been calibrated to extensive well data, showing considerable spatial variability which appears to be related to both tectonic variation as
Development of well logging neutron tube with diameter 22 mm
International Nuclear Information System (INIS)
Xiao Kunxiang; Xiang Wei; Mei Lin; Liang Chuan
2014-01-01
The structure design and performance of well logging neutron tube with diameter 22 mm are introduced. The ion source of neutron tube is penning ion source, the alnico is placed in the elicitation cathode which make the ion densely and concentration in elicitation hole. The single electrode is used in ion optics and the resistor is used in target electrode, which increase the stably performances of neutron tube. The discharge current is over 200 microampere when pressure is 10 -2 Pa in ion source and the neutron output is over l.5 × lO 8 neutron per second. The well logging neutron tube has good characteristics, such as small volume, enduring high temperature, high neutron yield etc. The development neutron tube can be used in well logging in oil field. (authors)
Wood moisture monitoring during log house thermal insulation mounting
Directory of Open Access Journals (Sweden)
Pavla Kotásková
2011-01-01
Full Text Available The current designs of thermal insulation for buildings concentrate on the achievement of the required heat transmission coefficient. However, another factor that cannot be neglected is the assessment of the possible water vapour condensation inside the construction. The aim of the study was to find out whether the designed modification of the cladding structure of an existing log house will or will not lead to a risk of possible water vapour condensation in the walls after an additional thermal insulation mounting. The condensation could result in the increase in moisture of the walls and consequently the constructional timber, which would lead to the reduction of the timber construction strength, wood degradation by biotic factors – wood-destroying insects, mildew or wood-destroying fungi. The main task was to compare the theoretically established values of moisture of the constructional timber with the values measured inside the construction using a specific example of a thermal insulated log house. Three versions of thermal insulation were explored to find the solution of a log house reconstruction which would be the optimum for living purposes. Two versions deal with the cladding structure with the insulation from the interior, the third version deals with an external insulation.In a calculation model the results can be affected to a great degree by input values (boundary conditions. This especially concerns the factor of vapour barrier diffusion resistance, which is entered in accordance with the producer’s specifications; however, its real value can be lower as it depends on the perfectness and correctness of the technological procedure. That is why the study also includes thermal technical calculations of all designed insulation versions in the most unfavourable situation, which includes the degradation of the vapour barrier down to 10% efficiency, i.e. the reduction of the diffusion resistance factor to 10% of the original value
3D Discrete Dislocation Modelling of High Temperature Plasticity
Czech Academy of Sciences Publication Activity Database
Záležák, Tomáš; Dlouhý, Antonín
2011-01-01
Roč. 465, - (2011), s. 115-118 ISSN 1013-9826. [MSMF /6./ Materials Structure and Micromechanics of Fracture. Brno, 28.06.2010-30.06.2010] R&D Projects: GA MŠk OC 162 Institutional research plan: CEZ:AV0Z20410507 Keywords : discrete dislocation dynamics * high temperature deformation * meso-scale simulations of plasticity * diffusion Subject RIV: BE - Theoretical Physics
Modelling of peak temperature during friction stir processing of magnesium alloy AZ91
Vaira Vignesh, R.; Padmanaban, R.
2018-02-01
Friction stir processing (FSP) is a solid state processing technique with potential to modify the properties of the material through microstructural modification. The study of heat transfer in FSP aids in the identification of defects like flash, inadequate heat input, poor material flow and mixing etc. In this paper, transient temperature distribution during FSP of magnesium alloy AZ91 was simulated using finite element modelling. The numerical model results were validated using the experimental results from the published literature. The model was used to predict the peak temperature obtained during FSP for various process parameter combinations. The simulated peak temperature results were used to develop a statistical model. The effect of process parameters namely tool rotation speed, tool traverse speed and shoulder diameter of the tool on the peak temperature was investigated using the developed statistical model. It was found that peak temperature was directly proportional to tool rotation speed and shoulder diameter and inversely proportional to tool traverse speed.
Fei, Bennie; Eloff, Jan; Olivier, Martin; Venter, Hein
Network forensics involves capturing, recording and analysing network audit trails. A crucial part of network forensics is to gather evidence at the server level, proxy level and from other sources. A web proxy relays URL requests from clients to a server. Analysing web proxy logs can give unobtrusive insights to the browsing behavior of computer users and provide an overview of the Internet usage in an organisation. More importantly, in terms of network forensics, it can aid in detecting anomalous browsing behavior. This paper demonstrates the use of a self-organising map (SOM), a powerful data mining technique, in network forensics. In particular, it focuses on how a SOM can be used to analyse data gathered at the web proxy level.
International Nuclear Information System (INIS)
1980-01-01
A pulsed neutron well logging system using a sealed off neutron generator tube is provided with a programmable digital neutron output control system. The control system monitors the target beam current and compares a function of this current with a pre-programmed control function to develop a control signal for the neutron generator. The control signal is used in a series regulator to control the average replenisher current of the neutron generator tube. The programmable digital control system of the invention also provides digital control signals as a function of time to provide ion source voltages. This arrangement may be utilized to control neutron pulses durations and repetition rates or to produce other modulated wave forms for intensity modulating the output of the neutron generator as a function of time. (Auth.)
Formation of Hydroxylamine in Low-Temperature Interstellar Model Ices.
Tsegaw, Yetsedaw A; Góbi, Sándor; Förstel, Marko; Maksyutenko, Pavlo; Sander, Wolfram; Kaiser, Ralf I
2017-10-12
We irradiated binary ice mixtures of ammonia (NH 3 ) and oxygen (O 2 ) ices at astrophysically relevant temperatures of 5.5 K with energetic electrons to mimic the energy transfer process that occurs in the track of galactic cosmic rays. By monitoring the newly formed molecules online and in situ utilizing Fourier transform infrared spectroscopy complemented by temperature-programmed desorption studies with single-photon photoionization reflectron time-of-flight mass spectrometry, the synthesis of hydroxylamine (NH 2 OH), water (H 2 O), hydrogen peroxide (H 2 O 2 ), nitrosyl hydride (HNO), and a series of nitrogen oxides (NO, N 2 O, NO 2 , N 2 O 2 , N 2 O 3 ) was evident. The synthetic pathway of the newly formed species, along with their rate constants, is discussed exploiting the kinetic fitting of the coupled differential equations representing the decomposition steps in the irradiated ice mixtures. Our studies suggest the hydroxylamine is likely formed through an insertion mechanism of suprathermal oxygen into the nitrogen-hydrogen bond of ammonia at such low temperatures. An isotope-labeled experiment examining the electron-irradiated D3-ammonia-oxygen (ND 3 -O 2 ) ices was also conducted, which confirmed our findings. This study provides clear, concise evidence of the formation of hydroxylamine by irradiation of interstellar analogue ices and can help explain the question how potential precursors to complex biorelevant molecules may form in the interstellar medium.
Mechanistic modeling of transition temperature shift of Japanese RPV materials
Energy Technology Data Exchange (ETDEWEB)
Hiranuma, N. [Tokyo Electric Power Co., Tokyo (Japan); Soneda, N.; Dohi, K.; Ishino, S. [Central Research Inst. of Electric Power Industry, Tokyo (Japan); Dohi, N. [Kansai Electric Power Co., Osaka (Japan); Ohata, H. [The Japan Atomic Power Co., Tokyo (Japan)
2004-07-01
A new correlation method to predict neutron irradiation embrittlement of reactor pressure vessel (RPV) materials of Japanese nuclear power plants is developed based on the understandings of the embrittlement mechanisms. A set of rate equations is constructed to describe the microstructural changes in the RPV materials during irradiation. Formation of copper-enriched clusters (CEC) and matrix damage (MD) are considered as the two primary causes of the embrittlement. Not only the effects of chemical compositions, such as copper and nickel, and neutron fluence, but also the effects of irradiation temperature as well as neutron flux are formulated in the rate equations to describe the evolution of CEC and MD. Transition temperature shifts corresponding to the microstructural changes are calculated using the predicted number densities of the CEC and MD. Coefficients of the rate equations are optimized using the Japanese surveillance database with a specific attention to reproduce the embrittlement trend of each material of the Japanese RPVs. The standard deviation of 12.1 C of the current Japanese correlation method, JEAC 4201, is reduced down to 10.6 C in the proposed new correlation method. Possibility of adjusting the uncertainty in the initial transition temperatures is discussed. (orig.)
Modelling of Temperature Profiles and Transport Scaling in Auxiliary Heated Tokamaks
DEFF Research Database (Denmark)
Callen, J.D.; Christiansen, J.P.; Cordey, J.G.
1987-01-01
in detail: (i) a heat pinch or excess temperature gradient model with constant coefficients; and (ii) a non-linear heat diffusion coefficient (χ) model. Both models predict weak (lesssim20%) temperature profile responses to physically relevant changes in the heat deposition profile – primarily because...... that result from the models clarify why temperature profiles in many tokamaks are often characterized as exhibiting a high degree of 'profile consistency'. Global transport scaling laws are also derived from the two models. The non-linear model with χ ∝ dT/dr produces a non-linear energy confinement time (L......-mode) scaling with input power, . The constant heat pinch or excess temperature gradient model leads to the offset linear law for the total stored energy W with Pin, W = τinc Pin + W(0), which describes JET auxiliary heating data quite well. It also provides definitions for the incremental energy confinement...
Directory of Open Access Journals (Sweden)
Ricardo Hamad
2011-01-01
Full Text Available A consideração dos custos de inventário tem se tornado cada vez mais importante na análise dos trade-offs e na tomada de decisões sobre redes logísticas. O artigo propõe uma metodologia para tratamento do custo de inventário a partir da cobertura de estoque esperada em cada instalação, incorporada na modelagem matemática da localização de fábricas e/ou centros de distribuição em redes logísticas com vários elos, representando uma nova versão do modelo apresentado em Hamad e Gualda (2008. As principais contribuições desta metodologia em relação a outros métodos encontrados na literatura são a facilidade de aplicação do método, a inclusão de restrições ligadas à capacidade de armazenagem e a consideração do custo total de inventário, e não apenas do custo relacionado aos produtos modelados.Inventory carrying cost has become very important in the analysis of the trade-offs and an important component to be considered when taking decisions about a logistic network. This paper proposes a methodology incorporated in a multi-echelon sourcing decision model, to be considered as a new version of the model presented in Hamad, Gualda (2008. It treats the carrying costs using the inventory Days-on-hand estimate in each echelon of the chain (plants and/or Distribution Centers. The main contributions of this methodology compared to other options found in the literature are the simplicity of its application, the consideration of all inventory costs (not only the ones related to the products being modeled and, also, the inclusion of constraints related to warehousing capacity.
Static dictionaries on AC0 RAMs: query time (√log n/log log n) is necessary and sufficient
DEFF Research Database (Denmark)
Andersson, Arne; Miltersen, Peter Bro; Riis, Søren
1996-01-01
In this paper we consider solutions to the static dictionary problem on AC0 RAMs, i.e. random access machines where the only restriction on the finite instruction set is that all computational instructions are in AC0. Our main result is a tight upper and lower bound of θ(√log n/log log n) on the ......In this paper we consider solutions to the static dictionary problem on AC0 RAMs, i.e. random access machines where the only restriction on the finite instruction set is that all computational instructions are in AC0. Our main result is a tight upper and lower bound of θ(√log n/log log n......, we show a tradeoff between time and circuit depth under the unit-cost assumption: any RAM instruction set which permits a linear space, constant query time solution to the static dictionary problem must have an instruction of depth Ω(log w/log log to), where w is the word size of the machine (and log...
Temperature-Dependent Kinetic Model for Nitrogen-Limited Wine Fermentations▿
Coleman, Matthew C.; Fish, Russell; Block, David E.
2007-01-01
A physical and mathematical model for wine fermentation kinetics was adapted to include the influence of temperature, perhaps the most critical factor influencing fermentation kinetics. The model was based on flask-scale white wine fermentations at different temperatures (11 to 35°C) and different initial concentrations of sugar (265 to 300 g/liter) and nitrogen (70 to 350 mg N/liter). The results show that fermentation temperature and inadequate levels of nitrogen will cause stuck or sluggish fermentations. Model parameters representing cell growth rate, sugar utilization rate, and the inactivation rate of cells in the presence of ethanol are highly temperature dependent. All other variables (yield coefficient of cell mass to utilized nitrogen, yield coefficient of ethanol to utilized sugar, Monod constant for nitrogen-limited growth, and Michaelis-Menten-type constant for sugar transport) were determined to vary insignificantly with temperature. The resulting mathematical model accurately predicts the observed wine fermentation kinetics with respect to different temperatures and different initial conditions, including data from fermentations not used for model development. This is the first wine fermentation model that accurately predicts a transition from sluggish to normal to stuck fermentations as temperature increases from 11 to 35°C. Furthermore, this comprehensive model provides insight into combined effects of time, temperature, and ethanol concentration on yeast (Saccharomyces cerevisiae) activity and physiology. PMID:17616615
Wang, Yanru; Li, Bincheng
2011-03-20
In the international standard (International Organization for Standardization 11551) for measuring the absorptance of optical components (i.e., laser calorimetry), the absorptance is obtained by fitting the temporal behavior of laser irradiation-induced temperature rise to a homogeneous temperature model in which the infinite thermal conductivity of the sample is assumed. In this paper, an accurate temperature model, in which both the finite thermal conductivity and size of the sample are taken into account, is developed to fit the experimental temperature data for a more precise determination of the absorptance. The difference and repeatability of the results fitted with the two theoretical models for the same experimental data are compared. The optimum detection position when the homogeneous model is employed in the data-fitting procedure is also analyzed with the accurate temperature model. The results show that the optimum detection location optimized for a wide thermal conductivity range of 0.2-50W/m·K moves toward the center of the sample as the sample thickness increases and deviates from the center as the radius and irradiation time increase. However, if the detection position is optimized for an individual sample with known sample size and thermal conductivity by applying the accurate temperature model, the influence of the finite thermal conductivity and sample size on the absorptance determination can be fully compensated for by fitting the temperature data recorded at the optimum detection position to the homogeneous temperature model.
Nonlinear plasticity model for structural alloys at elevated temperature. [LMFBR
Energy Technology Data Exchange (ETDEWEB)
Robinson, D N
1978-11-01
A nonlinear, time-independent plasticity model is presented which incorporates some aspects of both isotropic and kinematic hardening. The model characterizes a material with limited memory, i.e., in the sense that part of the deformation history as recorded in the internal dislocation structure is erased at stress reversals. This feature ensures that the predicted response eventually reaches a limit cycle under cyclic stressing, even in the presence of creep and relaxation. The model is intended as a candidate for replacing the nonlinear model now residing in Sect. 4.3.6 of RDT Standard F9-5T.
2d Model Field Theories at Finite Temperature and Density
Schoen, Verena; Thies, Michael
2000-01-01
In certain 1+1 dimensional field theoretic toy models, one can go all the way from microscopic quarks via the hadron spectrum to the properties of hot and dense baryonic matter in an essentially analytic way. This "miracle" is illustrated through case studies of two popular large N models, the Gross-Neveu and the 't Hooft model - caricatures of the Nambu-Jona-Lasinio model and real QCD, respectively. The main emphasis will be on aspects related to spontaneous symmetry breaking (discrete or co...
International Nuclear Information System (INIS)
Debbarma, Ajoy; Pandey, Krishna Murari
2016-01-01
Numerical investigation of the rewetting of single sector fuel assembly of Advanced Heavy Water Reactor (AHWR) has been carried out to exhibit the effect of coolant jet diameters (2, 3 and 4 mm) and jet directions (Model: M, X and X2). The rewetting phenomena with various jet models are compared on the basis of rewetting temperature and wetting delay. Temperature-time curve have been evaluated from rods surfaces at different circumference, radial and axial locations of rod bundle. The cooling curve indicated the presence of vapor in respected location, where it prevents the contact between the firm and fluid phases. The peak wall temperature represents as rewetting temperature. The time period observed between initial to rewetting temperature point is wetting delay. It was noted that as improved in various jet models, rewetting temperature and wetting delay reduced, which referred the coolant stipulation in the rod bundle dominant vapor formation.
Modeling and Numerical Simulation of the Grinding Temperature Field with Nanoparticle Jet of MQL
Directory of Open Access Journals (Sweden)
C. H. Li
2013-01-01
Full Text Available In this research, the heat transfer model of surface grinding temperature field with nanoparticle jet flow of MQL as well as the proportionality coefficient model of energy input workpiece was established, respectively. The numerical simulation of surface grinding temperature field of three workpiece materials was conducted. The results present that, in the workpiece, the surface temperature was significantly higher than the subsurface temperature, presenting relatively large temperature gradient along the direction of workpiece thickness. The impact of the grinding depth on grinding temperature was significant. With the increase of the cut depth, peak values of the grinding temperature rocketed. Distribution rules of the temperature field of 2Cr13 in four cooling and lubrication approaches were the same. Based on the excellent heat transfer property of nanofluids, the output heat through the grinding medium acquired an increasingly high proportion, leading to the drop of the temperature in the grinding zone. For the same cooling and lubrication conditions, grinding temperature presented insignificant changes along the direction of grinding width. Yet, under different cooling conditions, the temperature variation was significant. MQL grinding conditions with additive nanoparticles demonstrated great impact on the weakening of temperature effect on the grinding zone.
A Comparative Study of Cox Regression vs. Log-Logistic ...
African Journals Online (AJOL)
Colorectal cancer is common and lethal disease with different incidence rate in different parts of the world which is taken into account as the third cause of cancer-related deaths. In the present study, using non-parametric Cox model and parametric Log-logistic model, factors influencing survival of patients with colorectal ...
Evaluation of brightness temperature from a forward model of ...
Indian Academy of Sciences (India)
sensing technique from ground-based instruments by which high vertical resolution measurements at surface levels are ... the planetary boundary layer, this technique may get importance by providing high resolution data ... Elements of forward model for an upward-looking microwave radiometer. this forward model with a ...
High Temperature Test Facility Preliminary RELAP5-3D Input Model Description
Energy Technology Data Exchange (ETDEWEB)
Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-12-01
A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.
The analytical calibration model of temperature effects on a silicon piezoresistive pressure sensor
Directory of Open Access Journals (Sweden)
Meng Nie
2017-03-01
Full Text Available Presently, piezoresistive pressure sensors are highly demanded for using in various microelectronic devices. The electrical behavior of these pressure sensor is mainly dependent on the temperature gradient. In this paper, various factors,which includes effect of temperature, doping concentration on the pressure sensitive resistance, package stress, and temperature on the Young’s modulus etc., are responsible for the temperature drift of the pressure sensor are analyzed. Based on the above analysis, an analytical calibration model of the output voltage of the sensor is proposed and the experimental data is validated through a suitable model.
Directory of Open Access Journals (Sweden)
Orlova K.Y.
2017-01-01
Full Text Available The goal of the presented research is to perform numerical modelling of fuel low-temperature vortex combustion in once-through industrial steam boiler. Full size and scaled-down furnace model created with FIRE 3D software and was used for the research. All geometrical features were observed. The baseline information for the low-temperature vortex furnace process are velocity and temperature of low, upper and burner blast, air-fuel ratio, fuel consumption, coal dust size range. The obtained results are: temperature and velocity three dimensional fields, furnace gases and solid fuel ash particles concentration.
Modeling Transit Patterns Via Mobile App Logs.
2016-01-01
Transit planners need detailed information of the trips people take using public transit in : order to design more optimal routes, address new construction projects, and address the : constantly changing needs of a city and metro region. Better trans...
Neural models on temperature regulation for cold-stressed animals
Horowitz, J. M.
1975-01-01
The present review evaluates several assumptions common to a variety of current models for thermoregulation in cold-stressed animals. Three areas covered by the models are discussed: signals to and from the central nervous system (CNS), portions of the CNS involved, and the arrangement of neurons within networks. Assumptions in each of these categories are considered. The evaluation of the models is based on the experimental foundations of the assumptions. Regions of the nervous system concerned here include the hypothalamus, the skin, the spinal cord, the hippocampus, and the septal area of the brain.
International Nuclear Information System (INIS)
Barukčić, M.; Ćorluka, V.; Miklošević, K.
2015-01-01
Highlights: • The temperature and irradiance dependent model for the I–V curve estimation is presented. • The purely mathematical model based on the analysis of the I–V curve shape is presented. • The model includes the Gompertz function with temperature and irradiance dependent parameters. • The input data are extracted from the data sheet I–V curves. - Abstract: The temperature and irradiance dependent mathematical model for photovoltaic panel performances estimation is proposed in the paper. The base of the model is the mathematical function of the photovoltaic panel current–voltage curve. The model of the current–voltage curve is based on the sigmoid function with temperature and irradiance dependent parameters. The temperature and irradiance dependencies of the parameters are proposed in the form of analytic functions. The constant parameters are involved in the analytical functions. The constant parameters need to be estimated to get the temperature and irradiance dependent current–voltage curve. The mathematical model contains 12 constant parameters and they are estimated by using the evolutionary algorithm. The optimization problem is defined for this purpose. The optimization problem objective function is based on estimated and extracted (measured) current and voltage values. The current and voltage values are extracted from current–voltage curves given in datasheet of the photovoltaic panels. The new procedure for estimation of open circuit voltage value at any temperature and irradiance is proposed in the model. The performance of the proposed mathematical model is presented for three different photovoltaic panel technologies. The simulation results indicate that the proposed mathematical model is acceptable for estimation of temperature and irradiance dependent current–voltage curve and photovoltaic panel performances within temperature and irradiance ranges
Temperature dependency of the hysteresis behaviour of PZT actuators using Preisach model
DEFF Research Database (Denmark)
Mangeot, Charles; Zsurzsan, Tiberiu-Gabriel
2016-01-01
The Preisach model is a powerful tool for modelling the hysteresis phenomenon on multilayer piezo actuators under large signal excitation. In this paper, measurements at different temperatures are presented, showing the effect on the density of the Preisach matrix. An energy-based approach...... is presented, aiming at defining a temperature-dependent phenomenological model of hysteresis for a better understanding of the non-linear effects in piezo actuators....
Zhang, Jian; Yang, Xiao-hua; Chen, Xiao-juan
2015-01-01
Due to nonlinear and multiscale characteristics of temperature time series, a new model called wavelet network model based on multiple criteria decision making (WNMCDM) has been proposed, which combines the advantage of wavelet analysis, multiple criteria decision making, and artificial neural network. One case for forecasting extreme monthly maximum temperature of Miyun Reservoir has been conducted to examine the performance of WNMCDM model. Compared with nearest neighbor bootstrapping regr...
Quark mass density- and temperature- dependent model for bulk strange quark matter
al, Yun Zhang et.
2002-01-01
It is shown that the quark mass density-dependent model can not be used to explain the process of the quark deconfinement phase transition because the quark confinement is permanent in this model. A quark mass density- and temperature-dependent model in which the quark confinement is impermanent has been suggested. We argue that the vacuum energy density B is a function of temperature. The dynamical and thermodynamical properties of bulk strange quark matter for quark mass density- and temper...
Modeling of temperature profile during magnetic thermotherapy for cancer treatment
Sawyer, Carolyn A.; Habib, Ashfaque H.; Miller, Kelsey; Collier, Kelly N.; Ondeck, Courtney L.; McHenry, Michael E.
2009-04-01
Magnetic nanoparticles (MNPs) used as heat sources for cancer thermotherapy have received much recent attention. While the mechanism for power dissipation in MNPs in a rf field is well understood, a challenge in moving to clinical trials is an inadequate understanding of the power dissipation in MNP-impregnated systems and the discrepancy between the predicted and observed heating rates in the same. Here we use the Rosensweig [J. Magn. Magn. Mater. 252, 370 (2002)] model for heat generation in a single MNP, considering immediate heating of the MNPs, and the double spherical-shell heat transfer equations developed by Andrä et al. [J. Magn. Magn. Mater. 194, 197 (1999)] to model the heat distribution in and around a ferrofluid sample or a tumor impregnated with MNPs. We model the heat generated at the edge of a 2.15 cm spherical sample of FeCo/(Fe,Co)3O4 agglomerates containing 95 vol % MNPs with mean radius of 9 nm, dispersed at 1.5-1.6 vol % in bisphenol F. We match the model against experimental data for a similar system produced in our laboratory and find good agreement. Finite element models, extensible to more complex systems, have also been developed and checked against the analytical model and the data.