Pedotransfer functions to estimate soil water content at field capacity ...
Indian Academy of Sciences (India)
Pedotransfer functions estimating soil hydraulic properties using different soil parameters
DEFF Research Database (Denmark)
Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye
2008-01-01
Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity parameters. The PTFs are based on neural networks and the Bootstrap method, use different sets of predictors, and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic conductivity parameters. A larger data set (1618 horizons) with a broader textural range was used in the development of PTFs to predict the van Genuchten parameters. The PTFs using either three or seven textural classes combined with soil organic matter and bulk density gave the most reliable predictions...
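Once a PTF has predicted the van Genuchten parameters, the retention curve itself is a closed-form expression. A minimal sketch of evaluating it, with illustrative parameter values (not taken from the Danish data set):

```python
def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) water retention: volumetric water content as a
    function of suction head h [cm], with m = 1 - 1/n (Mualem restriction)."""
    if h <= 0:
        return theta_s  # saturated at zero or positive pressure head
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Hypothetical parameter set for a sandy loam (illustrative only)
theta_r, theta_s, alpha, n = 0.05, 0.40, 0.02, 1.6

fc = vg_theta(330.0, theta_r, theta_s, alpha, n)     # field capacity (-33 kPa ~ 330 cm)
pwp = vg_theta(15000.0, theta_r, theta_s, alpha, n)  # wilting point (-1500 kPa)
print(round(fc, 3), round(pwp, 3))
```

A PTF of the kind described above supplies theta_r, theta_s, alpha and n from texture, organic matter and bulk density; the curve then yields water content at any suction of interest.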
A PEDOTRANSFER FUNCTION FOR ESTIMATING THE SOIL ERODIBILITY FACTOR IN SICILY
Directory of Open Access Journals (Sweden)
Vincenzo Bagarello
2009-09-01
The soil erodibility factor, K, of the Universal Soil Loss Equation (USLE) is a simple descriptor of the soil susceptibility to rill and interrill erosion. The original procedure for determining K requires knowledge of the soil particle size distribution (PSD), soil organic matter (OM) content, and soil structure and permeability characteristics. However, OM data are often missing, and soil structure and permeability are not easily evaluated in regional analyses. The objective of this investigation was to develop a pedotransfer function (PTF) for estimating the K factor of the USLE in Sicily (southern Italy) using only soil textural data. The nomograph soil erodibility factor and its associated first approximation, K', were determined at 471 sampling points distributed throughout the island of Sicily. Two existing relationships for estimating K on the basis of the measured geometric mean particle diameter were initially tested. Then, two alternative PTFs for estimating K' and K, respectively, on the basis of the measured PSD were derived. Testing analysis showed that the K estimate by the proposed PTF (eq. 11), which was characterized by a Nash-Sutcliffe efficiency index (NSEI) varying between 0.68 and 0.76, depending on the data set considered, was appreciably more accurate than the one obtained by the other existing equations, which yielded NSEI values varying between 0.21 and 0.32.
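The Nash-Sutcliffe efficiency index used above to rank the erodibility estimates is straightforward to compute: 1 minus the ratio of residual variance to the variance of the observations about their mean. A minimal sketch with toy data (not the Sicilian measurements):

```python
def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency index (NSEI): 1 = perfect fit,
    0 = no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Toy erodibility-like values (illustrative only)
obs = [0.02, 0.03, 0.05, 0.04, 0.06]
good = [0.021, 0.032, 0.048, 0.041, 0.058]   # close to observations
poor = [0.04, 0.04, 0.04, 0.04, 0.04]        # constant at the mean
print(nash_sutcliffe(obs, good), nash_sutcliffe(obs, poor))
```

A constant prediction at the observed mean scores exactly 0, which is why NSEI values of 0.21-0.32 versus 0.68-0.76 indicate a substantial difference in skill.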
Directory of Open Access Journals (Sweden)
João Carlos Medeiros
2014-06-01
Knowledge of the soil water retention curve (SWRC) is essential for understanding and modeling hydraulic processes in the soil. However, direct determination of the SWRC is time consuming and costly. In addition, it requires a large number of samples, due to the high spatial and temporal variability of soil hydraulic properties. An alternative is the use of models, called pedotransfer functions (PTFs), which estimate the SWRC from easy-to-measure properties. The aim of this paper was to test the accuracy of 16 point or parametric PTFs reported in the literature on different soils from the south and southeast of the State of Pará, Brazil. The PTFs tested were proposed by Pidgeon (1972), Lal (1979), Aina & Periaswamy (1985), Arruda et al. (1987), Dijkerman (1988), Vereecken et al. (1989), Batjes (1996), van den Berg et al. (1997), Tomasella et al. (2000), Hodnett & Tomasella (2002), Oliveira et al. (2002), and Barros (2010). We used a database that includes soil texture (sand, silt, and clay), bulk density, soil organic carbon, soil pH, cation exchange capacity, and the SWRC. Most of the PTFs tested did not show good performance in estimating the SWRC. The parametric PTFs, however, performed better than the point PTFs in assessing the SWRC in the tested region. Among the parametric PTFs, those proposed by Tomasella et al. (2000) achieved the best accuracy in estimating the empirical parameters of the van Genuchten (1980) model, especially when tested in the top soil layer.
Directory of Open Access Journals (Sweden)
Amir LAKZIAN
2010-09-01
This paper presents a comparison of three different approaches to estimating soil water content at defined values of soil water potential based on selected parameters of the soil solid phase. Forty different sampling locations in northeastern Iran were selected, and undisturbed samples were taken to measure the water content at field capacity (FC, -33 kPa) and permanent wilting point (PWP, -1500 kPa). At each location, the solid particles of each sample, including the percentages of sand, silt and clay, were measured. Organic carbon percentage and soil texture were also determined for each soil sample at each location. Three different techniques, including a pattern recognition approach (k nearest neighbour, k-NN), an Artificial Neural Network (ANN) and pedotransfer functions (PTF), were used to predict the soil water at each sampling location. Mean square deviation (MSD) and its components, index of agreement (d), root mean square difference (RMSD) and normalized RMSD (RMSDr) were used to evaluate the performance of all three approaches. Our results showed that k-NN and PTF performed better than ANN in predicting water content at both the FC and PWP matric potentials. Various statistical criteria for simulation performance also indicated that, between k-NN and PTF, the former predicted water content at PWP more accurately than PTF; however, both approaches showed similar accuracy in predicting water content at FC.
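Of the three approaches, the k nearest neighbour estimator is the simplest to sketch: predict the water content of a new sample as the average over the k most similar profiles in the training set. A toy illustration (values are invented, not the Iranian data set):

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """k nearest neighbour estimate: average the target values of the
    k training samples closest (Euclidean distance) to the query point."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_X, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

# Toy training set: (sand %, clay %, organic C %) -> water content at FC
# (illustrative values only)
X = [(70, 10, 0.5), (40, 30, 1.0), (20, 45, 1.5), (55, 20, 0.8)]
y = [0.12, 0.22, 0.30, 0.17]
print(knn_predict(X, y, (45, 28, 0.9), k=2))
```

In practice the predictors would be standardized before computing distances, since sand percentage and organic carbon are on very different scales; that step is omitted here for brevity.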
Pedotransfer functions to estimate water retention parameters of soils in northeastern Brazil
Directory of Open Access Journals (Sweden)
Alexandre Hugo Cezar Barros
2013-04-01
Pedotransfer functions (PTF) were developed to estimate the parameters (α, n, θr and θs) of the van Genuchten (1980) model used to describe soil water retention curves. The data came from various sources, mainly from studies conducted by universities in Northeast Brazil, by the Brazilian Agricultural Research Corporation (Embrapa) and by a corporation for the development of the São Francisco and Parnaíba river basins (Codevasf), totaling 786 retention curves, which were divided into two data sets: 85 % for the development of PTFs, and 15 % for testing and validation, considered independent data. Aside from the development of general PTFs for all soils together, specific PTFs were developed for the soil classes Ultisols, Oxisols, Entisols, and Alfisols by multiple regression techniques, using a stepwise procedure (forward and backward) to select the best predictors. Two types of PTFs were developed: the first included all predictors (soil density and the proportions of sand, silt, clay, and organic matter), and the second only the proportions of sand, silt and clay. The evaluation of the adequacy of the PTFs was based on the correlation coefficient (R) and the Willmott index (d). To evaluate the PTFs for the moisture content at specific pressure heads, we used the root mean square error (RMSE). The PTF-predicted retention curve is relatively poor, except for the residual water content. The inclusion of organic matter as a PTF predictor improved the prediction of the van Genuchten parameter α. The performance of the soil-class-specific PTFs was not better than that of the general PTF. Except for the water content of saturated soil estimated by particle size distribution, the tested models for water content prediction at specific pressure heads proved satisfactory. Predictions of water content at pressure heads more negative than -0.6 m, using a PTF considering particle size distribution, are only slightly lower than those obtained by PTFs including bulk density and organic matter.
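The Willmott index of agreement (d) used above compares squared prediction errors against the total potential error around the observed mean. A minimal sketch (toy data, not the Brazilian retention curves):

```python
def willmott_d(observed, predicted):
    """Willmott (1981) index of agreement d, in [0, 1]; 1 = perfect agreement."""
    mo = sum(observed) / len(observed)
    num = sum((p - o) ** 2 for o, p in zip(observed, predicted))
    den = sum((abs(p - mo) + abs(o - mo)) ** 2
              for o, p in zip(observed, predicted))
    return 1.0 - num / den

# Toy observed vs. predicted water contents (illustrative only)
obs = [0.02, 0.03, 0.05, 0.04, 0.06]
pred = [0.021, 0.032, 0.048, 0.041, 0.058]
print(round(willmott_d(obs, pred), 3))
```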
Directory of Open Access Journals (Sweden)
Theophilo Benedicto Ottoni Filho
2014-12-01
Taking into account the nature of the hydrological processes involved in in situ measurement of field capacity (FC), this study proposes a variation of the definition of FC aimed not only at minimizing the inadequacies of its determination, but also at maintaining its original, practical meaning. Analysis of FC data for 22 Brazilian soils and additional FC data from the literature, all measured according to the proposed definition, which is based on a 48-h drainage time after infiltration by shallow ponding, indicates a weak dependency on the amount of infiltrated water, antecedent moisture level, soil morphology, and the level of the groundwater table, but a strong dependency on basic soil properties. The dependence on basic soil properties allowed determination of the FC of the 22 soil profiles by pedotransfer functions (PTFs) using the input variables usually adopted in the prediction of soil water retention. Among the input variables, the soil moisture content θ(6 kPa) had the greatest impact. Indeed, a linear PTF based only on it resulted in an FC with a root mean squared residue of less than 0.04 m³ m⁻³ for most soils individually. Such a PTF proved to be a better FC predictor than the traditional method of using the moisture content at an arbitrary suction. Our FC data were compatible with an equivalent and broader USA database found in the literature, mainly for medium-texture soil samples. One reason for the differences between the FCs of the two data sets for fine-textured soils is their different drainage times. Thus, a standardized procedure for in situ determination of FC is recommended.
Santra, Priyabrata; Kumar, Mahesh; Kumawat, R. N.; Painuli, D. K.; Hati, K. M.; Heuvelink, G. B. M.; Batjes, N. H.
2018-04-01
Characterization of soil water retention, e.g., water content at field capacity (FC) and permanent wilting point (PWP), over a landscape plays a key role in efficient utilization of available scarce water resources in dryland agriculture; however, direct measurement thereof for multiple locations in the field is not always feasible. Therefore, pedotransfer functions (PTFs) were developed to estimate soil water retention at FC and PWP for dryland soils of India. A soil database available for Arid Western India (N = 370) was used to develop the PTFs. The developed PTFs were tested on two independent datasets from arid regions of India (N = 36) and an arid region of the USA (N = 1789). While testing these PTFs using the independent data from India, the root mean square error (RMSE) was found to be 2.65 and 1.08 for FC and PWP, respectively, whereas for most of the tested "established" PTFs the RMSE was >3.41 and >1.15, respectively. The performance of the developed PTFs on the independent dataset from the USA was comparable with estimates derived from "established" PTFs. For wide applicability of the developed PTFs, a user-friendly soil moisture calculator was developed. The PTFs developed in this study may be quite useful to farmers for scheduling irrigation water according to soil type.
Saturated hydraulic conductivity Ksat is a fundamental characteristic in modeling flow and contaminant transport in soils and sediments. Therefore, many models have been developed to estimate Ksat from easily measurable parameters, such as textural properties, bulk density, etc. However, Ksat is no...
PEDO-TRANSFER FUNCTIONS FOR ESTIMATING SOIL BULK DENSITY IN CENTRAL AMAZONIA
Directory of Open Access Journals (Sweden)
Henrique Seixas Barros
2015-04-01
Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density based on parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R²) and the Akaike information criterion (AIC) and were validated with a dataset consisting of 125 plots different from those used to generate the models. The model with the best performance in estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables and had R² = 0.73 and AIC = -250.29. The performance of this model for predicting soil density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
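For least-squares models with Gaussian errors, the AIC used above to rank candidate regressions can be computed from the residual sum of squares. A minimal sketch (this common constant-dropping variant of the formula is an assumption, and the residuals are illustrative, not the Manaus data):

```python
import math

def aic_gaussian(residuals, n_params):
    """AIC for a least-squares fit with Gaussian errors:
    n * ln(RSS / n) + 2k, with additive constants dropped.
    Lower is better; the 2k term penalizes extra parameters."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * n_params

# Residuals from two hypothetical bulk-density models on the same plots
res_simple = [0.05, -0.04, 0.06, -0.05, 0.03, -0.06]   # 2-parameter model
res_full = [0.02, -0.01, 0.03, -0.02, 0.01, -0.02]     # 3-parameter model
print(aic_gaussian(res_simple, 2), aic_gaussian(res_full, 3))
```

Here the fuller model earns its extra parameter: its residuals shrink enough that its AIC is lower despite the penalty.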
Reynolds, C. A.; Jackson, T. J.; Rawls, W. J.
2000-12-01
Spatial soil water-holding capacities were estimated for the Food and Agriculture Organization (FAO) digital Soil Map of the World (SMW) by employing continuous pedotransfer functions (PTF) within global pedon databases and linking these results to the SMW. The procedure first estimated representative soil properties for the FAO soil units by statistical analyses and taxotransfer depth algorithms [Food and Agriculture Organization (FAO), 1996]. The representative soil properties estimated for two depth layers (0-30 and 30-100 cm) included particle-size distribution, dominant soil texture, organic carbon content, coarse fragments, bulk density, and porosity. After representative soil properties for the FAO soil units were estimated, these values were substituted into three different PTF models by Rawls et al. [1982], Saxton et al. [1986], and Batjes [1996a]. The Saxton PTF model was finally selected to calculate available water content because it only required particle-size distribution data and its results closely agreed with those of the Rawls and Batjes PTF models, which used both particle-size distribution and organic matter data. Soil water-holding capacities were then estimated by multiplying the available water content by the soil layer thickness and integrating over an effective crop root depth of 1 m or less (i.e., where shallow impermeable layers were encountered) and another soil depth data layer of 2.5 m or less.
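The final step above, multiplying available water content by layer thickness and integrating down to an effective root depth, is simple arithmetic. A sketch with illustrative FC/PWP values (not the FAO-derived properties):

```python
def water_holding_capacity(layers, root_depth_cm):
    """Integrate available water content (FC - PWP, cm3/cm3) over soil
    layers down to an effective root depth; returns water depth in cm."""
    total, depth = 0.0, 0.0
    for thickness_cm, fc, pwp in layers:
        if depth >= root_depth_cm:
            break
        use = min(thickness_cm, root_depth_cm - depth)  # clip at root depth
        total += (fc - pwp) * use
        depth += use
    return total

# Two layers as in the procedure above (0-30 and 30-100 cm),
# with illustrative FC and PWP values
layers = [(30, 0.32, 0.15), (70, 0.28, 0.14)]
print(water_holding_capacity(layers, 100))  # full 1 m effective root depth
```

With a shallower impermeable layer, the same routine is simply called with a smaller root depth and the integration stops there.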
Functional evaluation of pedotransfer functions derived from different scales of data collection
Nemes, A.; Schaap, M.G.; Wösten, J.H.M.
2003-01-01
Estimation of soil hydraulic properties by pedotransfer functions (PTFs) can be an alternative to troublesome and expensive measurements. New approaches to develop PTFs are continuously being introduced; however, PTF applicability in locations other than those of data collection has rarely been ...
TESTING SOME PEDO-TRANSFER FUNCTIONS (PTFs) IN THE APULIA REGION
Directory of Open Access Journals (Sweden)
Floriano Buccigrossi
2009-03-01
Knowledge of soil water retention vs. soil water matric potential is used to study irrigation and drainage scheduling, soil water storage capacity (plant-available water), solute movement, plant growth and water stress. Measuring soil hydraulic properties is expensive, laborious and time-consuming, so mathematical models, called pedo-transfer functions (PTFs), are frequently used to estimate hydraulic soil properties from soil chemical and physical characteristics. Six pedo-transfer functions were evaluated (Gupta & Larson, 1979; Rawls et al., 1982; De Jong et al., 1983; Rawls & Brakensiek, 1985; Saxton et al., 1986; Vereecken et al., 1989) by comparing estimated with measured soil moisture values at soil water matric potentials of -33 and -1500 kPa for 361 soil samples collected from 185 pedons of the Apulia Region (southern Italy), having various combinations of particle-size distribution, soil organic matter content and bulk density. The accuracy of the soil moisture predictions was evaluated by statistical indices such as the weighted standard error of estimate (WSEE), mean deviation (MD), root mean squared deviation (RMSD) and the coefficient of determination (R²) between estimated and measured water retention values. The Rawls PTF model showed the lowest values of the WSEE, MD and RMSD indices (0.044, -0.007 and 0.059 m³ H2O m⁻³ soil, respectively) at -33 kPa soil water matric potential (field capacity), while for estimating soil moisture at the wilting point (-1500 kPa) the Rawls & Brakensiek model is adequate (WSEE, MD and RMSD of 0.034, -0.016 and 0.046 m³ H2O m⁻³ soil). The De Jong, Saxton and Rawls & Brakensiek models at -33 kPa soil water matric potential, and the Gupta & Larson and De Jong models at -1500 kPa soil water matric potential, showed the largest statistical errors.
Simulations of soil water flow are often carried out with parameters estimated using pedotransfer functions (PTFs), which are empirical relationships between the soil hydraulic properties and more easily obtainable basic soil properties available, for example, from soil surveys. The use of pedotrans...
Informing soil models using pedotransfer functions: challenges and perspectives
Pachepsky, Yakov; Romano, Nunzio
2015-04-01
Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As alternatives to direct measurements, they bridge the data we have and the data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following is an attempt to provoke a discussion by listing current issues faced by PTF development. 1. As more intricate biogeochemical processes are being modeled, development of PTFs for the parameters of those processes becomes essential. 2. Since the equations to express PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear equations, e.g., neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of a large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs, and PTF development for each of the resulting groups, may provide simple, transparent, and more reliable pedotransfer equations. 3. The multiplicity of models, i.e., the presence of several models producing the same output variables, is commonly found in soil modeling, and is a typical feature of the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that the coefficients of PTFs based on machine-learning methods are usually not reported. 4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs and, more generally, developing physics-based PTFs remains to be explored. 5. Estimating the variability of soil model parameters becomes increasingly important, as the newer modeling technologies such as data assimilation, ensemble modeling, and model
Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods
Araya, S. N.; Ghezzehei, T. A.
2017-12-01
Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome, and pedotransfer functions (PTFs) are often used to estimate it instead. Despite a lot of progress over the years, generic PTFs that estimate hydraulic conductivity generally do not perform well. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils (the USKSAT and Florida Soil Characterization databases). We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included clay percentage, bulk density, organic carbon percentage, coefficient of uniformity, and values derived from water retention characteristics. Model performances were consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine-learning-based PTFs to estimate Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
An improved Rosetta pedotransfer function and evaluation in earth system models
Zhang, Y.; Schaap, M. G.
2017-12-01
Soil hydraulic parameters are often difficult and expensive to measure, making pedotransfer functions (PTFs) an attractive alternative for predicting them. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is a widely used set of PTFs based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, allowing the estimation of the van Genuchten water retention parameters (van Genuchten, 1980; abbreviated here as VG) and the saturated hydraulic conductivity (Ks), as well as their uncertainties. We present improved hierarchical pedotransfer functions (Rosetta3) that unify the VG water retention and Ks submodels into one, thus allowing the estimation of univariate and bivariate probability distributions of the estimated parameters. Results show that the estimation bias of moisture content was reduced significantly. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code is available online. Based on different soil water retention equations, diverse PTFs are used in different disciplines of earth system modeling. PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, while van Genuchten [1980]-based PTFs are more widely used in hydrology and soil science. We use an independent global-scale soil database to evaluate the performance of the diverse PTFs used in different disciplines of earth system modeling. The PTFs are evaluated with respect to different soil and environmental characteristics, such as soil textural data, soil organic carbon, soil pH, as well as precipitation and soil temperature. This analysis provides more quantitative estimation-error information for PTF predictions in the different disciplines of earth system modeling.
Comparison of class and continuous pedotransfer functions to generate soil hydraulic characteristics
Wösten, J.H.M.; Finke, P.A.; Jansen, M.J.W.
1995-01-01
Class pedotransfer functions (PTF) and continuous PTFs were used to generate soil hydraulic characteristics. Both approaches were used to predict the soil physical input data to calculate five functional aspects of soil behaviour: number of workable days, number of days with adequate soil aeration,
Pedotransfer functions for isoproturon sorption on soils and vadose zone materials.
Moeys, Julien; Bergheaud, Valérie; Coquet, Yves
2011-10-01
Sorption coefficients (the linear K(D) or the non-linear K(F) and N(F)) are critical parameters in models of pesticide transport to groundwater or surface water. In this work, a dataset of isoproturon sorption coefficients and corresponding soil properties (264 K(D) and 55 K(F)) was compiled, and pedotransfer functions were built for predicting isoproturon sorption in soils and vadose zone materials. These were benchmarked against various other prediction methods. The results show that the organic carbon content (OC) and pH are the two main soil properties influencing isoproturon K(D). The pedotransfer function is K(D) = 1.7822 + 0.0162 OC(1.5) - 0.1958 pH (K(D) in L kg(-1) and OC in g kg(-1)). For low-OC soils (OC ...). Estimating isoproturon sorption in soils at unsampled locations should rely, whenever possible, and in order of preference, on (a) site- or soil-specific pedotransfer functions, (b) pedotransfer functions calibrated on a large dataset, (c) K(OC) values calculated on a large dataset or (d) K(OC) values taken from existing pesticide properties databases. Copyright © 2011 Society of Chemical Industry.
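The reported pedotransfer function for K(D) is a one-line calculation; a direct transcription (the input values here are illustrative, not from the compiled dataset):

```python
def isoproturon_kd(oc_g_per_kg, ph):
    """Pedotransfer function reported above for isoproturon sorption:
    K_D = 1.7822 + 0.0162 * OC**1.5 - 0.1958 * pH
    with K_D in L/kg and OC in g/kg."""
    return 1.7822 + 0.0162 * oc_g_per_kg ** 1.5 - 0.1958 * ph

# Example: a soil with 15 g/kg organic carbon at pH 6.5 (illustrative)
print(round(isoproturon_kd(15.0, 6.5), 3))
```

As the abstract notes, sorption increases with organic carbon (positive OC term) and decreases with pH (negative pH term).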
How accurate are pedotransfer functions for bulk density for Brazilian soils?
Directory of Open Access Journals (Sweden)
Raquel Stucchi Boschi
The aim of this study was to evaluate the performance of pedotransfer functions (PTFs) available in the literature to estimate soil bulk density (ρb) in different regions of Brazil, using different metrics. The predictive capacity of 25 PTFs was evaluated using the mean absolute error (MAE), mean error (ME), root mean squared error (RMSE), coefficient of determination (R²) and the regression error characteristic (REC) curve. The models performed differently when comparing observed and estimated ρb values. In general, the PTFs showed a performance close to the mean value of the bulk density data, considered the simplest possible estimation of an attribute and used as a parameter to compare the performance of existing models (null model). The models developed by Benites et al. (2007) (BEN-C) and by Manrique and Jones (1991) (M&J-B) presented the best results. The separation of the data into two layers according to depth (0-10 cm and 10-30 cm) showed better performance for the 10-30 cm layer. The REC curve allowed a simple and visual evaluation of the PTFs.
Risk predicting of macropore flow using pedotransfer functions, textural maps and modeling
DEFF Research Database (Denmark)
Iversen, Bo Vangsø; Børgesen, Christen Duus; Lægdsmand, Mette
2011-01-01
The objectives of this study were first to develop pedotransfer functions (PTFs) predicting near-saturated [k(−1)] and saturated (Ks) hydraulic conductivity using simple soil parameters as predictors, and second to use this information and a newly developed raster-based soil property map of Denmark to identify risk areas ... modeling were used to construct a new map dividing Denmark into risk categories for macropore flow. This map can be combined with other tools to identify areas where there is a high risk of contaminants leaching out of the root zone.
Microstructural strength of tidal soils – a rheometric approach to develop pedotransfer functions
Directory of Open Access Journals (Sweden)
Stoppe Nina
2018-03-01
Differences in soil stability, especially between visually comparable soils, can occur due to microstructural processes and interactions. By investigating these microstructural processes with rheological methods, it is possible to achieve a better understanding of soil behaviour from the mesoscale (soil aggregates) to the macroscale (bulk soil). In this paper, a rheological investigation of the factors influencing the microstructural stability of riparian soils was conducted. Homogenized samples of marshland soils from the riparian zone of the Elbe River (northern Germany) were analyzed with amplitude sweeps (AS) under controlled shear deformation in a modular compact rheometer MCR 300 (Anton Paar, Germany) at different matric potentials. A range of physicochemical parameters was determined (texture, pH, organic matter, CaCO3, etc.), and these factors were used to parameterize pedotransfer functions. The results indicate a clear dependence of microstructural elasticity on texture and water content. Although the influence of individual physicochemical factors varies depending on texture, the relevant features were identified taking combined effects into account. Thus, the stabilizing factors are organic matter, calcium ions, CaCO3 and pedogenic iron oxides, whereas sodium ions and water content represent structurally unfavorable factors. Based on the determined statistical relationships between rheological and physicochemical parameters, pedotransfer functions (PTF) have been developed.
Comparison Of Selected Pedotransfer Functions For The Determination Of Soil Water Retention Curves
Directory of Open Access Journals (Sweden)
Kupec Michal
2015-09-01
Soil water retention curves were measured using the sandbox and pressure plate extractor methods on undisturbed soil samples from the Borská Lowland. The basic soil properties (e.g., soil texture, dry bulk density) of the samples were determined. The soil water retention curve was described using the van Genuchten model (van Genuchten, 1980). The parameters of the model were obtained using the RETC program (van Genuchten et al., 1991). For the determination of the soil water retention curve parameters, two pedotransfer functions (PTF) were also used: one derived for this area by Skalová (2003) and the Rosetta computer program (Schaap et al., 2001). The performance of the PTFs was characterized using the mean difference and root mean square error.
Mandal, Krishna Gopal; Kundu, Dilip Kumar; Singh, Ravender; Kumar, Ashwani; Rout, Rajalaxmi; Padhi, Jyotiprakash; Majhi, Pradipta; Sahoo, Dillip Kumar
2013-01-01
Effects of cropping practices on soil properties, viz. particle size distribution, pH, bulk density (BD), field capacity (FC, -33 kPa), permanent wilting point (PWP, -1500 kPa), available water capacity (AWC) and soil organic carbon (SOC), were assessed. Pedotransfer functions (PTFs) were developed for saturated hydraulic conductivity (Ks) and water retention at FC and PWP of soils for different sites under major cropping systems in a canal-irrigated area. The results revealed that the soils ar...
Directory of Open Access Journals (Sweden)
Cinara Xavier de Almeida
2012-12-01
In this sense, several pedotransfer functions designed to predict soil resistance to penetration have been proposed in the literature. The purpose of this study was to compare the efficiency of five pedotransfer functions for the penetration resistance curve reported in the literature, by matching the data obtained from an impact penetrometer (field) and from an electronic penetrometer (laboratory) for a clayey Oxisol under different management systems (conventional and no-tillage). Soil was sampled between crop rows (layers 0-0.10, 0.10-0.20 and 0.20-0.30 m) soon after sowing, at flowering and at the end of the crop cycle to determine the physical-hydric soil properties as well as the resistance to penetration with the electronic penetrometer. For the impact penetrometer, resistance to penetration was determined according to the variation in soil water content during the crop cycle. The penetration resistance curves were fitted, and their precision and accuracy were tested by means of statistical parameters and compared by the F-test. By matching the curves, an overlap was observed between the estimated values, showing that the way soil penetration resistance was determined (in the field or in the laboratory) did not influence the relationship between penetration resistance and other soil properties. The equations RP = aUg^b, RP = a(1 - Ug)^b, RP = ae^(bUg) and RP = a + be^Ug did not differ and were the most precise and accurate in predicting soil resistance to penetration.
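Power-law curves such as RP = a·Ug^b are commonly fitted by ordinary least squares after a log-log transform, since ln(RP) = ln(a) + b·ln(Ug) is linear. A sketch of that approach (the fitting method is an assumption for illustration, not necessarily the authors' procedure, and the data are synthetic):

```python
import math

def fit_power_law(ug, rp):
    """Fit RP = a * Ug**b by linear least squares on log-transformed data."""
    xs = [math.log(u) for u in ug]
    ys = [math.log(r) for r in rp]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data generated from RP = 2 * Ug**-1.5 (resistance falls as
# gravimetric water content Ug rises); the fit should recover a=2, b=-1.5
ug = [0.10, 0.15, 0.20, 0.25, 0.30]
rp = [2.0 * u ** -1.5 for u in ug]
a, b = fit_power_law(ug, rp)
print(a, b)
```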
Directory of Open Access Journals (Sweden)
Ahmed M. Abdelbaki
2016-06-01
Full Text Available Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to auto-calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used for calibration. The results showed a substantial improvement in the performance of the functions compared to the originally published functions. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of the error ratio (GSDER) were modified from the ranges (1.27-6.09) and (5.2-7.01) to (0.91-1.15) and (4.88-5.85), respectively. For group 2 PTFs, the GMER and GSDER values were modified from (0.3-1.55) and (5.9-12.38) to (1.00-1.03) and (5.5-5.9), respectively. For group 3 PTFs, the GMER and GSDER values were modified from (0.11-2.06) and (5.55-16.42) to (0.82-1.01) and (5.1-6.17), respectively. The results showed that automatic calibration is an efficient and accurate method to enhance the performance of PTFs.
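The two evaluation statistics used here can be computed directly from predicted/measured Ksat pairs. A sketch following the usual definitions (geometric mean and geometric standard deviation of the prediction/measurement ratio); the sample values are hypothetical:

```python
import math

def gmer_gsder(predicted, measured):
    """Geometric mean error ratio (GMER) and geometric standard deviation of
    the error ratio (GSDER). GMER = 1 means no bias (GMER < 1 indicates
    underprediction); GSDER = 1 means a perfectly matched spread."""
    logs = [math.log(p / m) for p, m in zip(predicted, measured)]
    n = len(logs)
    mean_log = sum(logs) / n
    gmer = math.exp(mean_log)
    gsder = math.exp(math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1)))
    return gmer, gsder

# Hypothetical Ksat values (cm/day): predicted by a PTF vs measured
gmer, gsder = gmer_gsder([12.0, 55.0, 130.0], [10.0, 60.0, 100.0])
```

Because Ksat spans orders of magnitude, these log-ratio statistics are preferred over plain RMSE for comparing PTFs.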
Directory of Open Access Journals (Sweden)
J. Moeys
2012-07-01
Moeys, J.; Larsbo, M.; Bergström, L.; Brown, C. D.; Coquet, Y.; Jarvis, N. J.
2012-07-01
Estimating pesticide leaching risks at the regional scale requires the ability to completely parameterise a pesticide fate model using only survey data, such as soil and land-use maps. Such parameterisations usually rely on a set of lookup tables and (pedo)transfer functions, relating elementary soil and site properties to model parameters. The aim of this paper is to describe and test a complete set of parameter estimation algorithms developed for the pesticide fate model MACRO, which accounts for preferential flow in soil macropores. We used tracer monitoring data from 16 lysimeter studies, carried out in three European countries, to evaluate the ability of MACRO and this "blind parameterisation" scheme to reproduce measured solute leaching at the base of each lysimeter. We focused on the prediction of early tracer breakthrough due to preferential flow, because this is critical for pesticide leaching. We then calibrated a selected number of parameters in order to assess to what extent the prediction of water and solute leaching could be improved. Our results show that water flow was generally reasonably well predicted (median model efficiency, ME, of 0.42). Although the general pattern of solute leaching was reproduced well by the model, the overall model efficiency was low (median ME = -0.26) due to errors in the timing and magnitude of some peaks. Preferential solute leaching at early pore volumes was also systematically underestimated. Nonetheless, the ranking of soils according to solute loads at early pore volumes was reasonably well estimated (concordance correlation coefficient, CCC, between 0.54 and 0.72). Moreover, we also found that ignoring macropore flow leads to a significant deterioration in the ability of the model to reproduce the observed leaching pattern, and especially the early breakthrough in some soils. Finally, the calibration procedure showed that improving the estimation of solute transport parameters is probably more important than the
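The two goodness-of-fit measures reported here, model efficiency (ME, the Nash-Sutcliffe criterion) and the concordance correlation coefficient (CCC), can be sketched in a few lines; the observed/simulated series below are hypothetical:

```python
def model_efficiency(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 is perfect, 0 matches the mean of
    the observations, negative values are worse than the observed mean."""
    mo = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mo) ** 2 for o in obs)
    return 1.0 - sse / sst

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient (population form):
    measures both correlation and agreement with the 1:1 line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical solute loads: observed vs simulated
obs = [0.12, 0.25, 0.31, 0.18, 0.22]
sim = [0.10, 0.27, 0.29, 0.21, 0.20]
me = model_efficiency(obs, sim)
ccc = concordance_cc(obs, sim)
```

A negative ME, as reported for solute leaching above, thus means the observed mean would have been a better predictor than the model at those points.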
Directory of Open Access Journals (Sweden)
M. Tombul
2004-01-01
Full Text Available Spatial and temporal variations in soil hydraulic properties, such as soil moisture θ(h) and hydraulic conductivity K(θ) or K(h), may affect the performance of hydrological models. Moreover, the cost of determining soil hydraulic properties by field or laboratory methods makes alternative, indirect methods desirable. In this paper, various pedotransfer functions (PTFs) are used to estimate soil hydraulic properties for a small semi-arid basin (Kurukavak) in the north-west of Turkey. The field measurements were a good fit with the retention curve derived using Rosetta SSC-BD for a loamy soil. To predict the parameters describing the soil hydraulic characteristics, continuous PTFs such as Rosetta SSC-BD (Model H3) and SSC-BD-θ33-θ1500 (Model H5) were applied. Using soil hydraulic properties that vary in time and space, characteristic curves were developed for three soil types: loam, sandy clay loam and sandy loam. Spatial and temporal variations in soil moisture were demonstrated at plot and catchment scale for the loamy soil. It is concluded that accurate site-specific measurements of the soil hydraulic characteristics remain the only, and probably the most promising, way to make progress in the future. Keywords: soil hydraulic properties, soil characteristic curves, PTFs
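Rosetta predicts the parameters of the van Genuchten (1980) closed-form retention model, so evaluating the retention curve from PTF output is straightforward. A sketch; the parameter values are illustrative loam-like numbers, not output from this study:

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) water retention model.
    h: suction head (positive, cm); alpha: 1/cm; n > 1.
    Returns volumetric water content theta(h)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative loam-like parameters (assumed, not from the Kurukavak basin)
theta_r, theta_s, alpha, n = 0.078, 0.43, 0.036, 1.56
curve = [(h, van_genuchten(h, theta_r, theta_s, alpha, n))
         for h in (0.0, 10.0, 100.0, 1000.0, 15000.0)]
```

The points at h = 330 cm and h = 15000 cm (roughly -33 and -1500 kPa) of such a curve are the FC and PWP values that point-estimation PTFs target directly.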
von Götz, N; Richter, O
1999-03-01
The degradation behaviour of bentazone in 14 different soils was examined at constant temperature and moisture conditions. Two soils were examined at different temperatures. On the basis of these data the influence of soil properties and temperature on degradation was assessed and modelled. Pedo-transfer functions (PTF) in combination with a linear and a non-linear model were found suitable to describe the bentazone degradation in the laboratory as related to soil properties. The linear PTF can be combined with a rate related to the temperature to account for both soil property and temperature influence at the same time.
Pineda, M C; Viloria, J; Martínez-Casasnovas, J A; Valera, A; Lobo, D; Timm, L C; Pires, L F; Gabriels, D
2018-02-22
Soil water content is a key property in the study of plant-available water, infiltration, drainage, hydraulic conductivity, irrigation, plant water stress and solute movement. However, its measurement is time-consuming and, in stony soils, the presence of stones makes the water content difficult to determine. An alternative is the use of pedotransfer functions (PTFs), models that predict these properties from readily available data. The present work compares different widely used PTFs for estimating water content at -33 kPa (WR-33kPa) in soils of high stoniness. The work was carried out in the Caramacate River basin, an area of high interest because frequent landslides worsen the quality of drinking water. The performance of all evaluated PTFs was compared with a PTF generated for the study area. Results showed that Urach's PTF presented the best performance relative to the others and could be used to estimate WR-33kPa in soils of the Caramacate River basin. The PTF calculated for the area had an R² of 0.65, slightly higher than that of Urach's PTF; the inclusion of rock fragment volume could have contributed to this better result. The weak performance of the other PTFs may be related to the fact that the mountain soils of the basin are rich in 2:1 clays and high in stoniness, properties that were not used as independent variables in those PTFs to estimate WR-33kPa.
Directory of Open Access Journals (Sweden)
Yameli Guadalupe Aguilar Duarte
2011-04-01
Full Text Available The aim of this study was the spatial identification of the suitability of soils as reactors in the treatment of swine wastewater in the Mexican state of Yucatan, as well as the development of a map with validation procedures. Pedotransfer functions were applied to the existing soils database. A methodological approach was adopted that allowed the spatialization of pedotransfer function data points. A map of the suitability of soil associations as reactors was produced, as well as a map of the level of accuracy of the associations, using a numerical classification technique (discriminant analysis). Soils with the highest suitability indices were found to be Vertisols, Stagnosols, Nitisols and Luvisols. Some 83.9% of the area of Yucatan is marginally suitable for the reception of swine wastewater, 6.5% is moderately suitable, and 6% is suitable. The spatial accuracy of the pedotransfer functions ranges from 62% to 95%, with an overall value of 71.5%. The methodological approach proved to be practical, accurate and inexpensive.
Estimation of water retention and availability in soils of Rio Grande do Sul
Reichert,José Miguel; Albuquerque,Jackson Adriano; Kaiser,Douglas Rodrigo; Reinert,Dalvan José; Urach,Felipe Lavarda; Carlesso,Reimar
2009-01-01
Dispersed information on water retention and availability in soils may be compiled in databases to generate pedotransfer functions. The objectives of this study were: to generate pedotransfer functions to estimate soil water retention based on easily measurable soil properties; to evaluate the efficiency of existing pedotransfer functions for different geographical regions for the estimation of water retention in soils of Rio Grande do Sul (RS); and to estimate plant-available water capacity ...
DEFF Research Database (Denmark)
Chrysodonta, Zampela Pittaki; Møldrup, Per; Knadel, Maria
2018-01-01
The soil water retention curve (SWRC) is essential for the modeling of water flow and chemical transport in the vadose zone. The Campbell function and its b (pore-size distribution index) parameter fitted to measured data is a simple method to quantify retention under relatively moist conditions...
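The Campbell function and its b parameter referred to above can be written and fitted in a few lines. A sketch; psi is matric suction, and the parameter and data values are illustrative, not from the study:

```python
import math

def campbell_theta(psi, theta_s, psi_e, b):
    """Campbell (1974) retention model: theta = theta_s * (psi_e/psi)**(1/b)
    for psi > psi_e (air-entry suction); saturated below air entry."""
    if psi <= psi_e:
        return theta_s
    return theta_s * (psi_e / psi) ** (1.0 / b)

def fit_campbell_b(psi, theta, theta_s, psi_e):
    """Estimate the pore-size distribution index b by regression through the
    origin in log-log space: log(theta/theta_s) = -(1/b) * log(psi/psi_e)."""
    x = [math.log(p / psi_e) for p in psi]
    y = [math.log(t / theta_s) for t in theta]
    slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return -1.0 / slope

# Hypothetical retention measurements (suction in kPa, volumetric theta)
psi_obs = [10.0, 33.0, 100.0, 500.0, 1500.0]
theta_obs = [0.38, 0.33, 0.28, 0.23, 0.19]
b_hat = fit_campbell_b(psi_obs, theta_obs, theta_s=0.45, psi_e=3.0)
```

Larger b corresponds to finer-textured soils that release water more gradually with increasing suction.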
George, N. J.; Obiora, D. N.; Ekanem, A. M.; Akpan, A. E.
2016-10-01
A central task in the interpretation of Vertical Electrical Sounding (VES) data is obtaining unique results when borehole information is absent or limited, since such information applies only to the spot where it was collected. Geological and geochemical mapping of electrical properties is usually limited to direct observations at the surface, so conclusions and extrapolations about the electrical characteristics of the system and possible underlying structures may be masked as the geology changes with position. In this study, pedotransfer functions (PTFs) from electrical resistivity were linked with electromagnetic (EM) resolved PTFs, at frequencies chosen so that the skin/penetration depth corresponds to the VES-resolved investigation depth, in order to determine the local geological attributes of a hydrogeological repository in a coastal formation dominated by fine sand. The illustrative application shows that the effective skin depth is directly related to the EM response of the local source over the layered earth and can thus be linked to the direct-current earth response functions, as an aid for estimating the optimum depth and electrical parameters through comparative analysis. Although the VES- and EM-resolved depths of investigation at the appropriate effective and theoretical frequencies differ widely, diagnostic relations characterising the subsurface depth of interest have been established. The determining factors of the skin effect include frequency/period, resistivity/conductivity, absorption/attenuation coefficient and the energy loss factor. The novel diagnostic relations, and their corresponding constants, between 1-D resistivity data and EM skin depth are robust PTFs for checking the accuracy of the non-unique interpretations that characterise 1-D resistivity data, especially when lithostratigraphic data are not available.
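The skin (penetration) depth that links the EM-resolved and DC-resistivity PTFs follows from the standard quasi-static plane-wave formula; a sketch:

```python
import math

def skin_depth(resistivity_ohm_m, frequency_hz):
    """Quasi-static EM skin depth in metres for a uniform half-space:
    delta = sqrt(2*rho / (omega*mu0)) ~= 503.3 * sqrt(rho / f),
    the depth at which the field amplitude falls to 1/e."""
    mu0 = 4.0e-7 * math.pi            # permeability of free space (H/m)
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 * resistivity_ohm_m / (omega * mu0))

# Example: fine-sand aquifer resistivity of ~100 ohm-m sounded at 1 kHz
delta = skin_depth(100.0, 1000.0)     # on the order of 160 m
```

Inverting this relation for frequency is how one chooses the EM operating frequency whose penetration depth matches a VES-resolved investigation depth.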
Directory of Open Access Journals (Sweden)
A. M. Paz
2009-01-01
Full Text Available Pedotransfer functions (PTFs) allow prediction of soil hydraulic characteristics from basic soil data. In this study, PTFs were developed by multiple linear regression analysis to estimate three specific points of the soil water retention curve: total porosity (maximum water capacity), field capacity and wilting point, taken as the water contents retained against suctions of 0.25 kPa, 9.8 kPa and 1554 kPa, respectively. The basic soil properties used were particle size distribution, organic matter content, bulk density, mean depth of the layer, and the geometric mean particle diameter and its standard deviation. A soil properties database with 304 observations of horizons or layers from several soil families across various regions of mainland Portugal was used. The PTFs obtained had coefficients of determination above 0.84. For the statistical validation of the PTFs, an independent data set of 55 observations from the soil units of the Aproveitamento Hidroagrícola do Lucefécit was used. The simple correlation coefficients between measured and estimated water contents at 0.25 kPa, 9.8 kPa and 1554 kPa were 0.90, 0.73 and 0.85, respectively, significant at the 0.1% probability level.
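A multiple linear regression PTF of this kind reduces to solving the least-squares normal equations. A self-contained sketch; the predictor names (clay, organic matter, bulk density) and the training values are hypothetical, not the Portuguese data set:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_mlr(X, y):
    """Ordinary least squares for y = b0 + b1*x1 + ... via normal equations."""
    rows = [[1.0] + list(x) for x in X]
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    return solve(xtx, xty)

# Hypothetical training data: theta at 9.8 kPa vs clay (%), OM (%), BD (Mg/m3)
X = [(12, 1.1, 1.55), (25, 2.0, 1.40), (33, 1.5, 1.35), (41, 3.2, 1.25), (18, 0.8, 1.50)]
y = [0.18, 0.26, 0.30, 0.36, 0.21]
coefs = fit_mlr(X, y)   # [intercept, b_clay, b_om, b_bd]
```

A real PTF would of course be fitted on hundreds of horizons and validated on an independent set, as the abstract describes.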
Directory of Open Access Journals (Sweden)
Ana M Landini
2007-12-01
The knowledge of the process of water infiltration into soil is important in the design of irrigation systems and in predicting the vulnerability of soil and groundwater to contamination. It is also important for evaluating the efficiency of hydrological models that predict the movement of water in soil. The objective of this study was to evaluate and compare the goodness of fit of the Kostiakov-Lewis (K-L) and Philip (Ph) infiltration models to experimental data obtained from three soils: two in the Province of Buenos Aires, and the third at the School of Agronomy campus of the University of Buenos Aires (Argentina). The efficiency of the Saxton and Rawls (SyR) pedotransfer functions (FPT) in determining the input hydraulic parameters of the Green and Ampt (GA) model and in predicting the soil-moisture release curve was also analyzed. The K-L and Ph models fitted the data with R² coefficients greater than 0.6, so it was concluded that these models accurately describe the infiltration process in the studied soils. The highest basic infiltration rate (fo), 0.42 cm min-1, corresponded to a silty clay soil with organic amendment; for the other two soils (silt loam and clay loam) it was 0.03 and 0.07 cm min-1, respectively. For two of the studied soils, the GA model, with input parameters determined by the FPT, predicted the infiltration process with an efficiency coefficient (CE) greater than 0.8, although in some cases the fit was not as good for depths greater than 20 cm. For the silt loam soil, the FPT predicted the soil-moisture release curve with a CE close to 0.9. It is suggested to carry out a small number of preliminary infiltration tests on any soil under study and to analyze the goodness of fit of the FPT and the GA model; in this way, the convenience of using these models can be evaluated.
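Philip's two-term model used above, I(t) = S*sqrt(t) + A*t, is linear in its parameters, so sorptivity S and the steady term A have a closed-form least-squares solution. A sketch with hypothetical cumulative infiltration data:

```python
import math

def fit_philip(times, cum_infiltration):
    """Least-squares fit of Philip's two-term model I(t) = S*sqrt(t) + A*t
    (S: sorptivity, A: steady term) by solving the 2x2 normal equations."""
    u = [math.sqrt(t) for t in times]              # basis 1: sqrt(t)
    v = list(times)                                 # basis 2: t
    suu = sum(x * x for x in u)
    svv = sum(x * x for x in v)
    suv = sum(x * y for x, y in zip(u, v))
    suy = sum(x * y for x, y in zip(u, cum_infiltration))
    svy = sum(x * y for x, y in zip(v, cum_infiltration))
    det = suu * svv - suv * suv
    S = (suy * svv - suv * svy) / det
    A = (suu * svy - suv * suy) / det
    return S, A

# Hypothetical cumulative infiltration (cm) at times (min)
S, A = fit_philip([1.0, 4.0, 9.0, 16.0, 25.0], [0.53, 1.12, 1.77, 2.48, 3.25])
```

The fitted A approximates the basic (near-steady) infiltration rate fo that the abstract reports for each soil.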
Santra, P.; Kumar, M.; Kumawat, R.N.; Painuli, D.K.; Hati, K.M.; Heuvelink, G.B.M.; Batjes, N.H.
2018-01-01
Characterization of soil water retention, e.g., water content at field capacity (FC) and permanent wilting point (PWP) over a landscape plays a key role in efficient utilization of available scarce water resources in dry land agriculture; however, direct measurement thereof for multiple locations in
On Functional Calculus Estimates
Schwenninger, F.L.
2015-01-01
This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm
DEFF Research Database (Denmark)
Andersen, C K; Andersen, K; Kragh-Sørensen, P
2000-01-01
on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...
Variance Function Estimation. Revision.
1987-03-01
Directory of Open Access Journals (Sweden)
José Miguel Reichert
2009-12-01
Full Text Available Dispersed information on water retention and availability in soils may be compiled in databases to generate pedotransfer functions. The objectives of this study were: to generate pedotransfer functions to estimate soil water retention based on easily measurable soil properties; to evaluate the efficiency of existing pedotransfer functions developed for different geographical regions in estimating water retention in soils of Rio Grande do Sul (RS); and to estimate plant-available water capacity based on soil particle-size distribution. Two databases of soil properties, including water retention, were set up: one based on literature data (725 entries) and the other with soil data from an irrigation scheduling and management system (239 entries). From the literature database, pedotransfer functions were generated, nine pedofunctions available in the literature were evaluated, and the plant-available water capacity was calculated. The coefficients of determination of some pedotransfer functions ranged from 0.56 to 0.66. Pedotransfer functions generated from soils of other regions were not appropriate for estimating water retention in RS soils. The plant-available water content varied with soil texture class, from 0.089 kg kg-1 for the sand class to 0.191 kg kg-1 for the silty clay class. These variations were more closely related to sand and silt than to clay content. The soils with a greater silt/clay ratio, which were less weathered and contained a greater quantity of smectite clay minerals, had high water retention and plant-available water capacity.
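Gravimetric retention values like those above convert to plant-available water over a root zone once bulk density is known. A sketch; the layer depth and bulk density are illustrative assumptions:

```python
def plant_available_water_mm(theta_g_fc, theta_g_pwp, bulk_density, depth_mm):
    """Plant-available water (mm) in a soil layer from gravimetric water
    contents (kg/kg) at field capacity and permanent wilting point:
    theta_v = theta_g * BD / rho_w, PAW = (theta_v_fc - theta_v_pwp) * depth."""
    rho_w = 1.0                               # density of water, Mg/m3
    delta_theta_v = (theta_g_fc - theta_g_pwp) * bulk_density / rho_w
    return delta_theta_v * depth_mm

# Hypothetical silty clay layer: FC 0.30, PWP 0.15 kg/kg, BD 1.2 Mg/m3, 1 m deep
paw = plant_available_water_mm(0.30, 0.15, 1.2, 1000.0)
```

Expressed this way, the texture-class differences reported above translate directly into millimetres of water available for irrigation scheduling.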
A neural network model for estimating soil phosphorus using terrain analysis
Directory of Open Access Journals (Sweden)
Ali Keshavarzi
2015-12-01
Full Text Available An artificial neural network (ANN) model was developed and tested for estimating soil phosphorus (P) in the Kouhin watershed area (1000 ha, Qazvin province, Iran) using terrain analysis. Based on the correlation between soil distribution and the vegetation growth pattern across the topographically heterogeneous landscape, topographic and vegetation attributes were used in addition to pedologic information to develop the ANN model for estimating soil phosphorus in the area. In total, 85 samples were collected and tested for phosphorus content, and the corresponding attributes were derived from a digital elevation model (DEM). To develop the pedo-transfer functions, data linearity and correlations were checked; 80% of the data were used for modeling and the ANN was tested on the remaining 20%. Results indicate that 68% of the variation in soil phosphorus could be explained by elevation and Band 1 data, and a significant correlation was observed between the input variables and phosphorus content. The significant correlation between soil P and terrain attributes can be used to derive a pedo-transfer function for soil P estimation and for managing nutrient deficiency. The results showed that P values can be calculated more accurately with the ANN-based pedo-transfer function using the topographic variables together with Band 1 as inputs.
PHAZE, Parametric Hazard Function Estimation
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogeneous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
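For the time-censored case with a constant (exponential-model) hazard, the maximum likelihood estimator and a standard test for an increasing failure rate take only a few lines. A sketch of that textbook approach, not PHAZE's actual code; the failure times are hypothetical:

```python
import math

def exponential_hazard_mle(failure_times, observation_end):
    """MLE of a constant failure rate for a time-censored homogeneous
    Poisson process: lambda_hat = n / T."""
    return len(failure_times) / observation_end

def laplace_trend_statistic(failure_times, observation_end):
    """Laplace test for trend in a time-censored point process; approximately
    standard normal when the hazard is constant. Large positive values
    suggest an increasing failure rate (failures clustering late)."""
    n = len(failure_times)
    t = observation_end
    return (sum(failure_times) / n - t / 2.0) / (t * math.sqrt(1.0 / (12.0 * n)))

# Hypothetical failure times (hours) observed over a fixed 100-hour window
times = [12.0, 35.0, 51.0, 80.0, 95.0]
rate = exponential_hazard_mle(times, 100.0)        # failures per hour
u = laplace_trend_statistic(times, 100.0)          # compare to N(0, 1)
```

A significantly positive u would motivate moving from the exponential model to the linear or Weibull hazard models that PHAZE also supports.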
Multi-scale hydraulic pedotransfer functions for Hungarian soils
Nemes, A.
2003-01-01
Water and nutrient balance are among the main concerns about the sustainability of our soils. Numerous computer models have been developed to simulate soil water and solute transport and plant growth. However, use of these models has often been limited by lack of accurate input parameters. Often,
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
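The within-set variability fit described here can be sketched as a regression of log variance on log mean across the replicate sets. Note this sketch uses a power-law variance function fitted by simple least squares, an illustrative stand-in for the program's modified-likelihood estimation of its exponential model; the data are hypothetical:

```python
import math
from statistics import mean, variance

def fit_variance_function(replicate_sets):
    """Fit var = a * mean**b across small sets of repeated measurements by
    least squares on log(sample variance) vs log(sample mean) per set."""
    xs, ys = [], []
    for s in replicate_sets:
        m, v = mean(s), variance(s)        # sample variance (ddof = 1)
        if m > 0 and v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical replicate response sets from an immunoassay plate
sets = [[9.8, 10.3, 10.0], [48.0, 52.5, 50.1], [198.0, 205.0, 210.0]]
a, b = fit_variance_function(sets)
```

The fitted function then supplies the weights 1/var(mean) when the dose-response curve itself is fitted, which is the purpose the abstract describes.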
Estimation of Correlation Functions by Random Decrement
DEFF Research Database (Denmark)
Asmussen, J. C.; Brincker, Rune
This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique, estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement...... and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique....
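A minimal level-crossing variant of the random decrement technique: the segments following each upward crossing of a trigger level are averaged, and under zero-mean Gaussian assumptions the averaged signature is proportional to the correlation function. A sketch on a synthetic signal (the triggering condition shown is one of the several formulations the paper compares):

```python
import math

def random_decrement(signal, trigger_level, seg_len):
    """Average the seg_len-sample segments that start wherever the signal
    crosses the trigger level upward (level-crossing triggering)."""
    segments = []
    for i in range(1, len(signal) - seg_len):
        if signal[i - 1] < trigger_level <= signal[i]:
            segments.append(signal[i:i + seg_len])
    if not segments:
        return []
    n = len(segments)
    return [sum(seg[k] for seg in segments) / n for k in range(seg_len)]

# Synthetic "response" signal: a pure tone stands in for measured vibration data
x = [math.sin(2.0 * math.pi * k / 50.0) for k in range(1000)]
signature = random_decrement(x, 0.5, 50)
```

On real response data the signature would then be fed to a modal identification routine, such as the polyreference time domain technique mentioned above.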
Non-Parametric Estimation of Correlation Functions
DEFF Research Database (Denmark)
Brincker, Rune; Rytter, Anders; Krenk, Steen
In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...
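The direct method reviewed here is a plain lag-product average; both the unbiased and the lower-variance biased normalizations are in common use. A sketch for a zero-mean series:

```python
def autocorrelation_direct(x, max_lag, unbiased=True):
    """Direct estimator of the autocovariance function of a zero-mean series.
    The unbiased form divides each lag sum by (N - lag); the biased form
    divides by N, trading a small bias for lower variance at large lags."""
    n = len(x)
    r = []
    for lag in range(max_lag + 1):
        s = sum(x[i] * x[i + lag] for i in range(n - lag))
        r.append(s / ((n - lag) if unbiased else n))
    return r

# Toy zero-mean-ish series; real use would demean a long measured record first
x = [0.8, -0.3, 0.5, -0.7, 0.2, 0.4, -0.6, 0.1]
r = autocorrelation_direct(x, 3)
```

The FFT-based estimator the paper also reviews computes the same quantity in O(N log N) by squaring the spectrum; the direct form above is O(N·max_lag) but simpler to reason about.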
Mohammadi, Mohammad Hossein; Vanclooster, Marnik
2012-05-01
Solute transport in partially saturated soils is largely affected by the fluid velocity distribution and the pore size distribution within the solute transport domain. Hence, it is possible to describe the solute transport process in terms of the pore size distribution of the soil and, indirectly, in terms of the soil hydraulic properties. In this paper, we present a conceptual approach that allows predicting the parameters of the Convective Lognormal Transfer (CLT) model from knowledge of soil moisture and the Soil Moisture Characteristic (SMC), parameterized by means of the closed-form model of Kosugi (1996). It is assumed that, in partially saturated conditions, the air-filled pore volume acts as an inert solid phase, allowing the use of the pragmatic approach of Arya et al. (1999) to estimate solute travel time statistics from the saturation degree and the SMC parameters. The approach is evaluated using the set of partially saturated transport experiments presented by Mohammadi and Vanclooster (2011). Experimental results showed that the mean solute travel time, μ(t), increases proportionally with depth (travel distance) and decreases with flow rate. The variance of solute travel time, σ²(t), first decreases with flow rate up to 0.4-0.6 Ks and subsequently increases. For all tested BTCs, predicted solute transport with μ(t) estimated from the conceptual model performed much better than predictions with μ(t) and σ²(t) estimated from calibration of solute transport at shallow soil depths. The use of μ(t) estimated from the conceptual model therefore increases the robustness of the CLT model in predicting solute transport in heterogeneous soils at greater depths. Given that reasonable indirect estimates of the SMC can be made from basic soil properties using pedotransfer functions, the presented approach may be useful for predicting solute transport at field or watershed scales. Copyright © 2012 Elsevier B.V. All rights reserved.
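In the CLT model, travel times at a given depth are lognormal with parameters μ(t) and σ²(t); the reported observation that mean travel time grows in proportion to depth corresponds to shifting the log-mean by ln(z/z_ref). A sketch; the parameter values are illustrative:

```python
import math

def lognormal_travel_time_pdf(t, mu, sigma):
    """Lognormal density of solute travel time, as assumed by the CLT model."""
    return math.exp(-(math.log(t) - mu) ** 2 / (2.0 * sigma ** 2)) / \
        (t * sigma * math.sqrt(2.0 * math.pi))

def clt_mu(mu_ref, z, z_ref):
    """Depth-scaled log-mean: this shift makes the mean travel time
    E[t] = exp(mu + sigma**2 / 2) proportional to travel distance z."""
    return mu_ref + math.log(z / z_ref)

def mean_travel_time(mu, sigma):
    return math.exp(mu + sigma ** 2 / 2.0)

# Illustrative parameters at a 0.3 m reference depth
mu_ref, sigma = 1.0, 0.5
mt_60cm = mean_travel_time(clt_mu(mu_ref, 0.6, 0.3), sigma)
```

In the paper's approach, μ and σ themselves are derived from the Kosugi SMC parameters and the saturation degree rather than calibrated.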
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from......In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
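The Toeplitz system and Levinson recursion mentioned above can be sketched in a few lines; the reflection coefficient computed at each step is exactly the quantity whose magnitude stays below 1 for a valid autocorrelation sequence, which is what keeps the recursion stable:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solves the Toeplitz normal equations for a
    prediction-error filter from autocorrelations r[0..order].
    Returns (filter coefficients a, with a[0] = 1, and the prediction error
    power). Each step computes a reflection coefficient k; |k| < 1 for a
    valid autocorrelation sequence."""
    a = [1.0]
    err = r[0]
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / err                      # reflection coefficient
        a_new = [0.0] * (m + 1)
        a_new[0] = 1.0
        for i in range(1, m):
            a_new[i] = a[i] + k * a[m - i]
        a_new[m] = k
        a = a_new
        err *= (1.0 - k * k)
    return a, err

# Autocorrelations of a unit-variance AR(1) process with coefficient 0.5
a, err = levinson_durbin([1.0, 0.5, 0.25, 0.125], 2)
```

For an AR(1) input the recursion recovers the single nonzero coefficient and drives the higher-order terms to zero, which is an easy sanity check on any implementation.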
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimationfor solutions of stochastic differential equations, sampled at high frequency. Focuslies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...
Thresholding projection estimators in functional linear models
Cardot, Hervé; Johannes, Jan
2010-01-01
We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows to get consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...
Estimating Function Approaches for Spatial Point Processes
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach to fitting
Piecewise Geometric Estimation of a Survival Function.
1985-04-01
Langberg (1982). One of the by-products of the estimation process is an estimate of the failure rate function; here, another issue is raised. It is evident...envisaged as the infinite product probability space that may be constructed in the usual way from the sequence of probability spaces corresponding to the...received 6-MP (mercaptopurine, used in the treatment of leukemia). The ordered remission times in weeks are: 6, 6, 6, 6+, 7, 9+, 10, 10+, 11+, 13, 16
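The record concerns a piecewise geometric estimator, but the classical baseline on such censored data is the product-limit (Kaplan-Meier) estimate. A minimal sketch applied to the remission times quoted (and truncated) in the abstract, with "+" read as censoring; this is our illustration of the standard estimator, not the paper's method:

```python
def product_limit(data):
    """Product-limit (Kaplan-Meier) estimate of the survival function.
    data: list of (time, event) pairs, event=1 for an observed failure,
    event=0 for a censored observation ('+' in the abstract's notation)."""
    failure_times = sorted({t for t, e in data if e == 1})
    s, curve = 1.0, []
    for t in failure_times:
        deaths = sum(1 for u, e in data if u == t and e == 1)
        at_risk = sum(1 for u, _ in data if u >= t)
        s *= 1.0 - deaths / at_risk          # multiply in the step at t
        curve.append((t, s))
    return curve

# remission times (weeks) as quoted in the abstract; the list is truncated there
remissions = [(6, 1), (6, 1), (6, 1), (6, 0), (7, 1), (9, 0),
              (10, 1), (10, 0), (11, 0), (13, 1), (16, 1)]
km = product_limit(remissions)
```

On these eleven values the curve steps to 8/11 at week 6 and to 192/385 at week 10.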
Evaluation of Regression and Neuro_Fuzzy Models in Estimating Saturated Hydraulic Conductivity
Directory of Open Access Journals (Sweden)
J. Behmanesh
2015-06-01
Study of soil hydraulic properties such as saturated and unsaturated hydraulic conductivity is required in environmental investigations. Despite numerous studies, measuring saturated hydraulic conductivity by direct methods is still costly, time consuming and demands expertise. Therefore, estimating saturated hydraulic conductivity by rapid, low-cost methods of acceptable accuracy, such as pedotransfer functions, has been developed. The purpose of this research was to compare and evaluate 11 pedotransfer functions and an Adaptive Neuro-Fuzzy Inference System (ANFIS) for estimating the saturated hydraulic conductivity of soil. To this end, saturated hydraulic conductivity and physical properties were determined at 40 points in Urmia. The excavated soil was used in the lab to determine its easily accessible parameters. The results showed that among the existing models, the Aimrun et al. model gave the best estimate of soil saturated hydraulic conductivity. For that model, the Root Mean Square Error and Mean Absolute Error were 0.174 and 0.028 m/day, respectively. The results of the present research emphasise the importance of effective porosity as an important, accessible parameter for the accuracy of pedotransfer functions. Sand and silt percent, bulk density and soil particle density were selected for use in 561 ANFIS models. In the training phase of the best ANFIS model, the R2 and RMSE were 1 and 1.2×10-7, respectively. In the test phase these were 0.98 and 0.0006, respectively. Comparison of the regression and ANFIS models showed that the ANFIS model gave better results than the regression functions. The Neuro-Fuzzy Inference System was also capable of estimating with high accuracy in various soil textures.
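The models in this record are ranked by RMSE and MAE. For reference, these criteria are (a minimal sketch; the observation/prediction arrays in the usage line are hypothetical, not the study's data):

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

# hypothetical example: perfect predictions give zero error
assert rmse([0.2, 0.5], [0.2, 0.5]) == 0.0
```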
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as an empirical study of data arising in financial econometrics.
Estimating state-contingent production functions
DEFF Research Database (Denmark)
Rasmussen, Svend; Karantininis, Kostas
The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated from samples of different sizes using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may...
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating...
Wang, Ji-Peng; Hu, Nian; François, Bertrand; Lambert, Pierre
2017-07-01
This study proposed two pedotransfer functions (PTFs) to estimate sandy soil water retention curves. It is based on van Genuchten's water retention model and a semiphysical, semistatistical approach. Basic gradation parameters, d60 as the particle size at 60% passing and the coefficient of uniformity Cu, are employed in the PTFs, with two idealized conditions, the monosized scenario and the extremely polydisperse condition, satisfied. Water retention tests were carried out on eight granular materials with narrow particle size distributions as supplementary data to the UNSODA database. The air entry value is expressed as inversely proportional to d60, and the parameter n, which is related to the slope of the water retention curve, is a function of Cu. The proposed PTFs, although they have fewer parameters, fit better than previous PTFs for sandy soils. Furthermore, by incorporating the suction stress definition, the proposed pedotransfer functions are embedded in shear strength equations, which provides a way to estimate capillary-induced tensile strength or cohesion at a given suction or degree of saturation from basic soil gradation parameters. The estimation shows quantitative agreement with experimental data in the literature, and it also explains why the capillary-induced cohesion is generally higher for materials with finer mean particle size or higher polydispersity.
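The PTFs described here map (d60, Cu) to the parameters of the van Genuchten retention model. A minimal sketch of that underlying model (the parameter values used below are purely illustrative; the paper's fitted mappings from d60 and Cu to alpha and n are not reproduced here):

```python
import numpy as np

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve:
    theta(psi) = theta_r + (theta_s - theta_r) / (1 + (alpha*psi)^n)^(1 - 1/n).
    psi: suction; alpha: inverse of the air-entry value (per the abstract,
    proportional to 1/d60... i.e. alpha scales with d60); n: shape parameter
    (per the abstract, a function of Cu)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m
```

At zero suction the curve returns the saturated water content and it decreases monotonically toward the residual content as suction grows, which is the shape the PTFs parameterize.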
Directory of Open Access Journals (Sweden)
Gholam Reza Sheykhzadeh
2017-02-01
Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time consuming and difficult because of its high temporal and spatial variability. Therefore, many different regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are one of the main potato-producing regions of Iran, so obtaining the soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions, using regression and artificial neural networks, to predict penetration resistance from some soil variables in the agricultural soils of the Ardabil plain, and to compare the performance of the artificial neural networks with the regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were systematically taken from 0-10 cm soil depth at roughly 3000 m spacing in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density (Dp; pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), and saturated (θs) and field (θf) soil water content (gravimetric method) were measured in the laboratory. The mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed from the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model) at 10
Comparison of density estimators. [Estimation of probability density functions
Energy Technology Data Exchange (ETDEWEB)
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
Estimating functions for inhomogeneous Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
Estimation methods are reviewed for inhomogeneous Cox processes with tractable first and second order properties. We illustrate the various suggestions by means of data examples.
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...
Slope Estimation in Noisy Piecewise Linear Functions.
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2015-03-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
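The hidden-Markov/dynamic-programming idea summarized above can be sketched in simplified form: discretize the slope values, treat first differences of the data as noisy emissions of the current slope, and run a Viterbi-style recursion with a penalty for slope switches. This is our own reduced illustration of the approach, not the published MAPSlope algorithm (which also estimates the model parameters):

```python
import numpy as np

def map_slopes(y, slope_grid, sigma, switch_cost):
    """MAP slope sequence for noisy piecewise linear data on a unit grid.
    Works on differences d[i] = y[i+1] - y[i] ~ N(slope, 2*sigma^2);
    a change of slope between consecutive samples costs `switch_cost`."""
    d = np.diff(y)
    grid = np.asarray(slope_grid, dtype=float)
    K, T = len(grid), len(d)
    # negative log-likelihood of each difference under each slope state
    emis = (d[:, None] - grid[None, :]) ** 2 / (4 * sigma ** 2)
    cost = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    cost[0] = emis[0]
    for t in range(1, T):
        trans = cost[t - 1][:, None] + switch_cost * (1 - np.eye(K))
        back[t] = np.argmin(trans, axis=0)            # best predecessor state
        cost[t] = trans[back[t], np.arange(K)] + emis[t]
    states = np.empty(T, dtype=int)
    states[-1] = int(np.argmin(cost[-1]))
    for t in range(T - 1, 0, -1):                      # backtrack the MAP path
        states[t - 1] = back[t, states[t]]
    return grid[states]
```

On a clean ramp-up/ramp-down signal the recovered slope sequence switches exactly once, at the breakpoint.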
Container Surface Evaluation by Function Estimation
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-03
Container images are analyzed for specific surface features such as pits, cracks, and corrosion. The detection of these features is confounded by complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized but demonstrates the feasibility of a method to determine the kind and size of the features present.
Using subjective percentiles and test data for estimating fragility functions
International Nuclear Information System (INIS)
George, L.L.; Mensing, R.W.
1981-01-01
Fragility functions are cumulative distribution functions (cdfs) of strengths at failure. They are needed for reliability analyses of systems such as power generation and transmission systems. Subjective opinions supplement sparse test data for estimating fragility functions. Often the opinions are opinions on the percentiles of the fragility function. Subjective percentiles are likely to be less biased than opinions on parameters of cdfs. Solutions to several problems in the estimation of fragility functions are found for subjective percentiles and test data. How subjective percentiles should be used to estimate subjective fragility functions, how subjective percentiles should be combined with test data, how fragility functions for several failure modes should be combined into a composite fragility function, and how inherent randomness and uncertainty due to lack of knowledge should be represented are considered. Subjective percentiles are treated as independent estimates of percentiles. The following are derived: least-squares parameter estimators for normal and lognormal cdfs, based on subjective percentiles (the method is applicable to any invertible cdf); a composite fragility function for combining several failure modes; estimators of variation within and between groups of experts for nonidentically distributed subjective percentiles; weighted least-squares estimators when subjective percentiles have higher variation at higher percents; and weighted least-squares and Bayes parameter estimators based on combining subjective percentiles and test data. 4 figures, 2 tables
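The least-squares idea in this record, fitting a lognormal cdf to subjective percentiles, linearises to ln x_p = mu + sigma * Phi^{-1}(p). A minimal sketch under that assumption (our illustration of the generic technique; the report's weighted and Bayes variants are not reproduced):

```python
import math
from statistics import NormalDist

def fit_lognormal_from_percentiles(percentiles):
    """Least-squares fit of lognormal parameters (mu, sigma) from
    percentile pairs (p, x_p), via ln x_p = mu + sigma * Phi^{-1}(p).
    The same recipe works for any invertible cdf by swapping the
    quantile map Phi^{-1}."""
    z = [NormalDist().inv_cdf(p) for p, _ in percentiles]   # normal scores
    y = [math.log(x) for _, x in percentiles]               # log quantiles
    n = len(z)
    zbar, ybar = sum(z) / n, sum(y) / n
    sigma = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
             / sum((zi - zbar) ** 2 for zi in z))
    mu = ybar - sigma * zbar
    return mu, sigma
```

Fed exact lognormal percentiles, the fit recovers (mu, sigma) to machine precision; with noisy subjective percentiles it returns the least-squares compromise.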
Bias-corrected estimation of stable tail dependence function
DEFF Research Database (Denmark)
Beirlant, Jan; Escobar-Bach, Mikael; Goegebeur, Yuri
2016-01-01
We consider the estimation of the stable tail dependence function. We propose a bias-corrected estimator and we establish its asymptotic behaviour under suitable assumptions. The finite sample performance of the proposed estimator is evaluated by means of an extensive simulation study where...
On estimation of the intensity function of a point process
Lieshout, van M.N.M.
2010-01-01
Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and
On a family of Bessel type functions: Estimations, series, overconvergence
Paneva-Konovska, Jordanka
2017-12-01
A family of Bessel-Maitland functions is considered in this paper and some useful estimations are obtained for them. Series defined by means of these functions are considered and their behaviour on the boundaries of the convergence domains is discussed. Using the obtained estimations, necessary and sufficient conditions for the overconvergence of the series, as well as a Hadamard-type theorem, are proposed.
Malware Function Estimation Using API in Initial Behavior
KAWAGUCHI, Naoto; OMOTE, Kazumasa
2017-01-01
Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct efficient analysis of malware, estimating their functions in advance is effective when prioritizing which malware to analyze. However, estimating malware functions has been difficult due to the increasing sophistication of malware. Actually, previous research does not estimate the...
Coefficient Estimate Problem for a New Subclass of Biunivalent Functions
N. Magesh; T. Rosy; S. Varma
2013-01-01
We introduce a unified subclass of the function class Σ of biunivalent functions defined in the open unit disc. Furthermore, we find estimates on the coefficients |a2| and |a3| for functions in this subclass. In addition, many relevant connections with known or new results are pointed out.
Variance computations for functional of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Unstable volatility functions: the break preserving local linear estimator
DEFF Research Database (Denmark)
Casas, Isabel; Gijbels, Irene
The objective of this paper is to introduce the break preserving local linear (BPLL) estimator for the estimation of unstable volatility functions. Breaks in the structure of the conditional mean and/or the volatility function are common in finance. Markov switching models (Hamilton, 1989) and threshold models (Lin and Terasvirta, 1994) are amongst the most popular models for describing the behaviour of data with structural breaks. The local linear (LL) estimator is not consistent at points where the volatility function has a break, and it may even report negative values for finite samples...
Estimating Functions with Prior Knowledge, (EFPK) for diffusions
DEFF Research Database (Denmark)
Nolsøe, Kim; Kessler, Mathieu; Madsen, Henrik
2003-01-01
In this paper a method is formulated in an estimating function setting for parameter estimation, which allows the use of prior information. The main idea is to use prior knowledge of the parameters, either specified as moment restrictions or as a distribution, and use it in the construction of an estimating function. It may be useful when the full Bayesian analysis is difficult to carry out for computational reasons. This is almost always the case for diffusions, which is the focus of this paper, though the method applies in other settings.
Lipschitz estimates for convex functions with respect to vector fields
Directory of Open Access Journals (Sweden)
Valentino Magnani
2012-12-01
We present Lipschitz continuity estimates for a class of convex functions with respect to Hörmander vector fields. These results have been recently obtained in collaboration with M. Scienza, [22].
Unbiased estimators for spatial distribution functions of classical fluids
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
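For contrast with the unbiased estimators this record proposes, the traditional histogram-based estimate of the pair correlation g(r) that it improves upon can be sketched as follows (a generic illustration for a periodic cubic box, not the authors' code):

```python
import numpy as np

def pair_correlation(points, box, dr, r_max):
    """Histogram estimate of g(r) for N points in a periodic cubic box.
    Counts pair separations (minimum-image convention) into radial bins
    and normalizes by the ideal-gas expectation rho * shell volume."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    rho = n / box ** 3
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n):
        d = pts[i + 1:] - pts[i]
        d -= box * np.round(d / box)                 # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=1))
        counts += np.histogram(r[r < r_max], bins=edges)[0]
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # each unordered pair was counted once; factor 2 restores ordered pairs
    return edges[:-1] + dr / 2, 2.0 * counts / (n * rho * shell)
```

For an ideal gas (uniform random points) the estimate fluctuates around g(r) = 1, which is a convenient sanity check; r_max should stay below half the box length for the minimum-image convention to be valid.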
An improved method for estimating the frequency correlation function
Chelli, Ali; Pätzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. Accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. Exact knowledge of the coherence bandwidth is beneficial in both the design and optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
Bayesian error estimation in density-functional theory
DEFF Research Database (Denmark)
Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund
2005-01-01
We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L2-norm.
On approximation and energy estimates for delta 6-convex functions
Directory of Open Access Journals (Sweden)
Muhammad Shoaib Saleem
2018-02-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted $L^{2}$-norm.
Investigation of MLE in nonparametric estimation methods of reliability function
International Nuclear Information System (INIS)
Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo
2001-01-01
There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limitations, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by differentiation. It is well known that the three methods generally used to estimate a reliability function nonparametrically have maximum likelihood estimators that exist uniquely. So, the MLE of the new method is derived in this study. The procedure for calculating the MLE is similar to that of the PL estimator. The difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not
Development on electromagnetic impedance function modeling and its estimation
Energy Technology Data Exchange (ETDEWEB)
Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)
2015-09-30
Today electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration is forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems obviously remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts were focused on addressing the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, while the full-solution modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition
On Improving Density Estimators which are not Bona Fide Functions
Gajek, Leslaw
1986-01-01
In order to improve the rate of decrease of the IMSE for nonparametric kernel density estimators with nonrandom bandwidth beyond $O(n^{-4/5})$ all current methods must relax the constraint that the density estimate be a bona fide function, that is, be nonnegative and integrate to one. In this paper we show how to achieve similar improvement without relaxing any of these constraints. The method can also be applied for orthogonal series, adaptive orthogonal series, spline, jackknife, and other ...
Optimal Bandwidth Selection for Kernel Density Functionals Estimation
Directory of Open Access Journals (Sweden)
Su Chen
2015-01-01
The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel based regression. Various bandwidth selection methods for KDE and local least squares regression have been developed in the past decade. It has been known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore that equality-of-scale and location tests can be transformed into comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, the “rule of thumb”) and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
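The object being estimated here, ∫γ(x)f²(x)dx, reduces for γ(x) = 1 to the density roughness ∫f²(x)dx, which has a simple kernel double-sum estimator. A minimal sketch with a Gaussian kernel and a normal-scale ("rule of thumb") bandwidth; the constants follow the usual normal-reference rule, not this paper's MSE-optimal selector:

```python
import numpy as np

def roughness_estimate(x, h=None):
    """Kernel estimate of the density functional ∫ f(x)^2 dx (the γ(x)=1
    case) via the leave-one-out double sum over the convolved Gaussian
    kernel K_h * K_h, which is the N(0, 2h^2) density."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if h is None:
        # normal-scale "rule of thumb" bandwidth
        h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
    diffs = x[:, None] - x[None, :]
    K = np.exp(-diffs ** 2 / (4 * h ** 2)) / np.sqrt(4 * np.pi * h ** 2)
    # drop the diagonal i == j terms to keep the estimator leave-one-out
    return (K.sum() - np.trace(K)) / (n * (n - 1))
```

For a standard normal sample the target value is ∫φ² = 1/(2√π) ≈ 0.282, which the estimate approaches as the sample grows.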
A comparison of dependence function estimators in multivariate extremes
Vettori, Sabrina; Huser, Raphaë l; Genton, Marc G.
2017-01-01
Various nonparametric and parametric estimators of extremal dependence have been proposed in the literature. Nonparametric methods commonly suffer from the curse of dimensionality and have been mostly implemented in extreme-value studies up to three dimensions, whereas parametric models can tackle higher-dimensional settings. In this paper, we assess, through a vast and systematic simulation study, the performance of classical and recently proposed estimators in multivariate settings. In particular, we first investigate the performance of nonparametric methods and then compare them with classical parametric approaches under symmetric and asymmetric dependence structures within the commonly used logistic family. We also explore two different ways to make nonparametric estimators satisfy the necessary dependence function shape constraints, finding a general improvement in estimator performance either (i) by substituting the estimator with its greatest convex minorant, developing a computational tool to implement this method for dimensions D ≥ 2, or (ii) by projecting the estimator onto a subspace of dependence functions satisfying such constraints and taking advantage of Bernstein-Bézier polynomials. Implementing the convex minorant method leads to better estimator performance as the dimensionality increases.
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used to estimate consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, current approaches often implicitly assume that the transfer functions themselves are known. In fact, in most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this: the first generates different possible transfer functions; the second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the concept of context-free grammars. Chomsky first introduced context-free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a
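A toy version of the grammar-based generation step can be sketched as follows. The grammar symbols, the terminal predictors (slope, clay) and the genome-to-production mapping are illustrative assumptions in the spirit of grammatical evolution, not the grammar used by the authors:

```python
import math
import random

# Toy context-free grammar for transfer functions mapping spatial
# predictors to a model parameter (illustrative, not the authors' grammar).
GRAMMAR = {
    "<expr>": [
        ["<expr>", "<op>", "<expr>"],
        ["<func>", "(", "<var>", ")"],
        ["<var>"],
        ["<const>"],
    ],
    "<op>": [["+"], ["-"], ["*"]],
    "<func>": [["exp"], ["log1p"]],
    "<var>": [["slope"], ["clay"]],
    "<const>": [["0.5"], ["2.0"]],
}

def derive(genome, symbol="<expr>", max_depth=8):
    """Grammatical-evolution style derivation: each gene selects one
    production rule via gene modulo number-of-options."""
    if symbol not in GRAMMAR:
        return symbol                       # terminal symbol: emit as-is
    if max_depth <= 0:
        production = GRAMMAR[symbol][-1]    # fall back to the terminal-leaning rule
    else:
        gene = genome.pop(0) if genome else 0
        production = GRAMMAR[symbol][gene % len(GRAMMAR[symbol])]
    return "".join(derive(genome, s, max_depth - 1) for s in production)

random.seed(1)
genome = [random.randrange(100) for _ in range(30)]
tf = derive(list(genome))                   # a string such as "exp(slope)+clay"
# the generated transfer function can be evaluated on predictor values
value = eval(tf, {"exp": math.exp, "log1p": math.log1p, "slope": 0.1, "clay": 0.3})
```

In an evolutionary run, the genome would be mutated and recombined, and each derived expression scored by running the rainfall-runoff model with the resulting distributed parameters.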
Local gradient estimate for harmonic functions on Finsler manifolds
Xia, Chao
2013-01-01
In this paper, we prove the local gradient estimate for harmonic functions on complete, noncompact Finsler measure spaces under the condition that the weighted Ricci curvature has a lower bound. As applications, we obtain Liouville type theorem on Finsler manifolds with nonnegative Ricci curvature.
Estimating variability in functional images using a synthetic resampling approach
International Nuclear Information System (INIS)
Maitra, R.; O'Sullivan, F.
1996-01-01
Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods
Estimating Aggregate Import-Demand Function In Nigeria: A Co ...
African Journals Online (AJOL)
This paper investigates the behaviour of Nigeria's aggregate imports over the period 1980-2005. In the empirical analysis of the aggregate import demand function for Nigeria, cointegration and error correction modelling approaches have been used. Our econometric estimates suggest that real GDP largely explains ...
School District Inputs and Biased Estimation of Educational Production Functions.
Watts, Michael
1985-01-01
In 1979, Eric Hanushek pointed out a potential problem in estimating educational production functions, particularly at the precollege level. He observed that it is frequently inappropriate to include school-system variables in equations using the individual student as the unit of observation. This study offers limited evidence supporting this…
On the robust nonparametric regression estimation for a functional regressor
Azzedine , Nadjia; Laksaci , Ali; Ould-Saïd , Elias
2009-01-01
On the robust nonparametric regression estimation for a functional regressor. Corresponding author: Ould-Said, Elias. Affiliation: Departement de Mathematiques, Univ. Djillali Liabes, BP 89, 22000 Sidi Bel Abbes, Algeria.
Estimating an Aggregate Import Demand Function for Ghana
African Journals Online (AJOL)
Administrator
we estimate an import demand function for Ghana for the period 1970 to ... results also indicate that economic growth (real GDP) and depreciation in the ... 80% of shocks to real exchange rates, merchandise imports and GDP ... imports; capital goods, 43 percent; intermediate ... merchandise imports (World Bank, 2004) ...
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
Impact of Base Functional Component Types on Software Functional Size based Effort Estimation
Gencel, Cigdem; Buglione, Luigi
2008-01-01
Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by the software organizations, the relationship between functional size and development effort still needs further investigation. Most of the studies focus on the project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...
On Estimation of the CES Production Function - Revisited
DEFF Research Database (Denmark)
Henningsen, Arne; Henningsen, Geraldine
2012-01-01
Estimation of the non-linear Constant Elasticity of Substitution (CES) function is generally considered problematic due to convergence problems and unstable and/or meaningless results. These problems often arise from a non-smooth objective function with large flat areas, the discontinuity of the CES function where the elasticity of substitution is one, and possibly significant rounding errors where the elasticity of substitution is close to one. We suggest three (combinable) solutions that alleviate these problems and improve the reliability and stability of the results.
A note on reliability estimation of functionally diverse systems
International Nuclear Information System (INIS)
Littlewood, B.; Popov, P.; Strigini, L.
1999-01-01
It has been argued that functional diversity might be a plausible means of claiming independence of failures between two versions of a system. We present a model of functional diversity, in the spirit of earlier models of diversity such as those of Eckhardt and Lee, and Hughes. In terms of the model, we show that the claims for independence between functionally diverse systems seem rather unrealistic. Instead, it seems likely that functionally diverse systems will exhibit positively correlated failures, and thus will be less reliable than an assumption of independence would suggest. The result does not, of course, suggest that functional diversity is not worthwhile; instead, it places upon the evaluator of such a system the onus to estimate the degree of dependence so as to evaluate the reliability of the system
A single model procedure for tank calibration function estimation
International Nuclear Information System (INIS)
York, J.C.; Liebetrau, A.M.
1995-01-01
Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
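The single-model idea above can be illustrated with one design matrix whose columns are zeroed outside their segment, so that all segment coefficients are estimated in one least-squares operation. The tank data, segment boundary and polynomial degrees below are illustrative assumptions, and the run-to-run terms of the actual Thomas-Liebetrau extension are omitted:

```python
import numpy as np

# Synthetic level/volume calibration data with two structural segments
# (illustrative geometry, not real tank data).
rng = np.random.default_rng(42)
level = np.linspace(0.0, 10.0, 60)       # liquid level readings
boundary = 4.0                           # segment boundary (a structural feature)
true_volume = np.where(
    level < boundary,
    2.0 * level,                                                  # segment 1: linear
    8.0 + 1.5 * (level - boundary) + 0.1 * (level - boundary) ** 2,  # segment 2: quadratic
)
volume = true_volume + rng.normal(0.0, 0.05, level.size)

# One design matrix covering both segments: each column is zeroed outside
# its own segment, so a single lstsq call estimates all coefficients together.
in1 = level < boundary
in2 = ~in1
X = np.column_stack([
    in1 * 1.0, in1 * level,                                           # segment 1
    in2 * 1.0, in2 * (level - boundary), in2 * (level - boundary) ** 2,  # segment 2
])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
fitted = X @ coef
```

Because the fit is a single model, variance estimates for any fitted volume follow from the one combined covariance matrix rather than from per-segment fits.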
Estimations for the Schwinger functions of relativistic quantum field theories
International Nuclear Information System (INIS)
Mayer, C.D.
1981-01-01
Schwinger functions of a relativistic neutral scalar field whose underlying test function space is S or D are estimated by methods of analytic continuation. Concerning the behaviour at coincident points it is shown: the two-point singularity of the n-point Schwinger function of a field theory is dominated by an inverse power of the distance of both points modulo a multiplicative constant, if the other n-2 points are sufficiently distant and remain fixed. The power thereby depends only on n. Using additional conditions on the field, independence of the power from n may be proved. Concerning the behaviour at infinity it is shown: the n-point Schwinger functions of a field theory are globally bounded if the minimal distance of the arguments is positive. The bound depends only on n and the minimal distance of the arguments. (orig.) [de]
Joint brain connectivity estimation from diffusion and functional MRI data
Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.
2015-03-01
Estimating brain wiring patterns is critical to better understanding brain organization and function. Anatomical brain connectivity models axonal pathways, while functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of the blood-oxygen-level-dependent (BOLD) signal from functional MRI (fMRI), and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained, for instance, by independent component analysis (ICA) of fMRI data. The concept of information
Some aspects of the translog production function estimation
Directory of Open Access Journals (Sweden)
Florin-Marius PAVELESCU
2011-06-01
Full Text Available In a translog production function, the number of parameters practically "explodes" as the number of considered production factors increases. Consequently, a shortcoming in the estimation of the respective production function is the occurrence of collinearity. Theoretically, the collinearity impact is minimal if a single production factor is taken into account. In this case, we can determine not only the output elasticity but also the elasticity of scale related to the respective production factor. In the present paper, we demonstrate that the relationship between the output elasticity and the estimated average elasticity of scale depends on the dynamic trajectory of the production factor (underexponential or overexponential, respectively). At the end, a practical example is offered, dealing with the computation of the Gross Domestic Product elasticity and the average elasticity of scale related to the employed population in the United Kingdom and France during 1999-2009.
Estimation of Correlation Functions by the Random Decrement Technique
DEFF Research Database (Denmark)
Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard
The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique are compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, only additions; therefore, the technique is very fast.
Estimation of Correlation Functions by the Random Decrement Technique
DEFF Research Database (Denmark)
Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard
1991-01-01
The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique are compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, only additions; therefore, the technique is very fast.
Estimation of Correlation Functions by the Random Decrement Technique
DEFF Research Database (Denmark)
Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard
1992-01-01
The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique are compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, only additions; therefore, the technique is very fast.
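The RDD idea in the abstracts above can be sketched in a few lines of numpy: segments of a zero-mean random response are collected whenever the signal passes close to a trigger level, and their average (additions only, no multiplications) estimates a scaled auto-correlation function. The AR(1) test signal, trigger level and triggering window below are illustrative choices, not the ARMA models of the papers:

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi, lag_max = 100_000, 0.95, 50
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):                     # AR(1) response to white noise
    x[t] = phi * x[t - 1] + e[t]
x /= x.std()                              # work in units of the signal's std

level = 1.0                               # trigger level (in std units)
# level-crossing trigger: start a segment wherever x is close to the level
triggers = np.where(np.abs(x[: n - lag_max] - level) < 0.05)[0]
segments = np.stack([x[i : i + lag_max + 1] for i in triggers])
rdd = segments.mean(axis=0)               # RDD signature: averaging only

# for a zero-mean Gaussian process with level triggering, the signature is
# approximately level * rho(tau); normalizing gives the auto-correlation
rho_hat = rdd / rdd[0]
```

For the AR(1) signal the normalized signature should track the theoretical auto-correlation phi**tau, illustrating why the technique is fast: no FFTs or products are needed at estimation time.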
Estimating unsaturated hydraulic conductivity from soil moisture-time function
International Nuclear Information System (INIS)
El Gendy, R.W.
2002-01-01
The unsaturated hydraulic conductivity of a soil can be estimated from the θ(t) function and the dimensionless soil water content parameter Se = (θ − θr)/(θs − θr), where θ is the soil water content at any time (from the soil moisture depletion curve), θr is the residual water content and θs is the total soil porosity (equal to the saturation point). Se can be represented as a time function (Se = a·t^b), where t is the measurement time and a and b are the regression constants. The recommended equation in this method is given by
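The regression step for Se = a·t^b can be sketched by ordinary least squares on log-transformed data. The θr and θs values, the measurement times and the "true" a and b below are illustrative, standing in for real depletion-curve measurements:

```python
import numpy as np

# Illustrative retention parameters and noise-free synthetic data;
# real depletion-curve readings would replace these.
theta_r, theta_s = 0.05, 0.45          # residual water content, total porosity
a_true, b_true = 0.9, -0.15
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])    # measurement times
theta = theta_r + (theta_s - theta_r) * a_true * t ** b_true

# dimensionless water content: Se = (theta - theta_r) / (theta_s - theta_r)
se = (theta - theta_r) / (theta_s - theta_r)

# log(Se) = log(a) + b*log(t) is linear, so a straight-line fit recovers a and b
b_hat, log_a_hat = np.polyfit(np.log(t), np.log(se), 1)
a_hat = np.exp(log_a_hat)
```

With noisy field data the same fit applies; the recovered a and b then feed the conductivity equation referred to in the abstract.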
Asymptotic normality of kernel estimator of $\\psi$-regression function for functional ergodic data
Laksaci ALI; Benziadi Fatima; Gheriballak Abdelkader
2016-01-01
In this paper we consider the problem of the estimation of the $\\psi$-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.
Klement, Aleš; Kodešová, Radka; Bauerová, Martina; Golovko, Oksana; Kočárek, Martin; Fér, Miroslav; Koba, Olga; Nikodem, Antonín; Grabic, Roman
2018-03-01
The sorption of 3 pharmaceuticals, which may exist in 4 different forms depending on the solution pH (irbesartan in cationic, neutral and anionic forms, fexofenadine in cationic, zwitter-ionic and anionic forms, and citalopram in cationic and neutral forms), was studied in seven different soils. The measured sorption isotherms were described by Freundlich equations, and the sorption coefficients, K_F (for a fixed exponent n for each compound), were related to the soil properties to derive relationships for estimating the sorption coefficients from the soil properties (i.e., pedotransfer rules). The largest sorption was obtained for citalopram (average K_F value for n = 1 was 1838 cm^3 g^-1), followed by fexofenadine (K_F = 35.1 cm^(3/n) μg^(1-1/n) g^-1, n = 1.19) and irbesartan (K_F = 3.96 cm^(3/n) μg^(1-1/n) g^-1, n = 1.10). The behavior of citalopram (CIT) in soils was different from the behaviors of irbesartan (IRB) and fexofenadine (FEX). Different trends were documented according to the correlation coefficients between the K_F values for different compounds (R_IRB,FEX = 0.895, p-value ...) ... soil properties in the pedotransfer functions. While the K_F value for citalopram was positively related to base cation saturation (BCS) or sorption complex saturation (SCS) and negatively correlated with the organic carbon content (Cox), the K_F values of irbesartan and fexofenadine were negatively related to BCS, SCS or the clay content and positively related to Cox. The best estimates were obtained by combining BCS and Cox for citalopram (R^2 = 93.4), SCS and Cox for irbesartan (R^2 = 96.3), and clay content and Cox for fexofenadine (R^2 = 82.9). Copyright © 2017 Elsevier Ltd. All rights reserved.
Machine Learning Estimation of Atom Condensed Fukui Functions.
Zhang, Qingyou; Zheng, Fangfang; Zhao, Tanfeng; Qu, Xiaohui; Aires-de-Sousa, João
2016-02-01
To enable the fast estimation of atom condensed Fukui functions, machine learning algorithms were trained with databases of DFT pre-calculated values for ca. 23,000 atoms in organic molecules. The problem was approached as the ranking of atom types with the Bradley-Terry (BT) model, and as the regression of the Fukui function. Random Forests (RF) were trained to predict the condensed Fukui function, to rank atoms in a molecule, and to classify atoms as high/low Fukui function. Atomic descriptors were based on counts of atom types in spheres around the kernel atom. The BT coefficients assigned to atom types enabled the identification (93-94 % accuracy) of the atom with the highest Fukui function in pairs of atoms in the same molecule with differences ≥0.1. In whole molecules, the atom with the top Fukui function could be recognized in ca. 50 % of the cases and, on average, about 3 of the top 4 atoms could be recognized in a shortlist of 4. Regression RF yielded predictions for test sets with R^2 = 0.68-0.69, improving the ability of BT coefficients to rank atoms in a molecule. Atom classification (as high/low Fukui function) was obtained with RF with a sensitivity of 55-61 % and a specificity of 94-95 %. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
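The Bradley-Terry ranking step mentioned above can be sketched with the classical minorization-maximization (MM) updates. The three "atom types" and their pairwise comparison counts below are synthetic placeholders, not data from the paper:

```python
import numpy as np

# wins[i, j] = number of times type i outranked type j in pairwise
# comparisons (synthetic counts for three hypothetical atom types).
wins = np.array([
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
], dtype=float)

p = np.ones(3)                            # Bradley-Terry strengths, start equal
for _ in range(200):                      # classical MM updates
    total = wins + wins.T                 # comparisons played between each pair
    denom = total / (p[:, None] + p[None, :])
    np.fill_diagonal(denom, 0.0)
    p = wins.sum(axis=1) / denom.sum(axis=1)
    p /= p.sum()                          # fix the scale (BT is scale-invariant)

ranking = np.argsort(-p)                  # strongest type first
```

Given strengths p, the model's predicted probability that type i outranks type j is p[i] / (p[i] + p[j]), which is how such coefficients identify the atom with the higher Fukui function in a pair.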
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2016-04-01
This contribution presents a framework which enables the use of an Evolutionary Algorithm (EA) for the calibration and regionalization of the hydrological model COSEROreg. COSEROreg uses an updated version of the HBV-type model COSERO (Kling et al., 2014) for the modelling of hydrological processes and is embedded in a parameter regionalization scheme based on Samaniego et al. (2010). The latter uses subscale information to estimate model parameters via a priori chosen transfer functions (often derived from pedotransfer functions). However, the transferability of the regionalization scheme to different model concepts and the integration of new forms of subscale information is not straightforward. (i) The usefulness of (new) single subscale information layers is unknown beforehand. (ii) Additionally, the establishment of functional relationships between these (possibly meaningless) subscale information layers and the distributed model parameters remains a central challenge in the implementation of a regionalization procedure. The proposed method theoretically provides a framework to overcome this challenge. The implementation of the EA encompasses the following procedure: First, a formal grammar is specified (Ryan et al., 1998). The construction of the grammar thereby defines the set of possible transfer functions and also allows hydrological domain knowledge to be incorporated into the search itself. The EA iterates over the given space by combining parameterized basic functions (e.g. linear or exponential functions) and subscale information layers into transfer functions, which are then used in COSEROreg. However, a pre-selection model is applied beforehand to sort out infeasible proposals by the EA and to reduce the necessary model runs. A second optimization routine is used to optimize the parameters of the transfer functions proposed by the EA. This concept, namely using two nested optimization loops, is inspired by the ideas of Lamarckian Evolution and the Baldwin Effect
Conical square function estimates in UMD Banach spaces and applications to H∞-functional calculi
Hytönen, T.; Van Neerven, J.; Portal, P.
2008-01-01
We study conical square function estimates for Banach-valued functions and introduce a vector-valued analogue of the Coifman-Meyer-Stein tent spaces. Following recent work of Auscher-McIntosh-Russ, the tent spaces in turn are used to construct a scale of vector-valued Hardy spaces associated with
Method for estimating modulation transfer function from sample images.
Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta
2018-02-01
The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
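The procedure described above can be sketched on a synthetic test image built directly in the Fourier domain: a white-noise image is blurred with a Gaussian OTF, the log of the squared norm of its Fourier transform is regressed against the squared frequency, and the Gaussian width (and hence the MTF) is recovered from the slope. Image size, PSF width and the low-frequency fitting band are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_true = 256, 2.0

fx = np.fft.fftfreq(n)                      # frequencies in cycles/pixel
f2 = fx[:, None] ** 2 + fx[None, :] ** 2    # squared frequency distance from origin

# blur a white-noise image with a Gaussian PSF, applied exactly in Fourier space
otf = np.exp(-2 * np.pi ** 2 * sigma_true ** 2 * f2)
image = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * otf).real

# for a Gaussian PSF, log |F|^2 is linear in f^2 with slope -4*pi^2*sigma^2
power = np.abs(np.fft.fft2(image)) ** 2
mask = (f2 > 0) & (f2 < 0.03)               # low-frequency band where signal dominates
slope, _ = np.polyfit(f2[mask], np.log(power[mask]), 1)
sigma_hat = np.sqrt(-slope / (4 * np.pi ** 2))

f = np.linspace(0.0, 0.5, 51)
mtf = np.exp(-2 * np.pi ** 2 * sigma_hat ** 2 * f ** 2)   # estimated MTF curve
```

The white-noise input makes the expected log-power spectrum flat before blurring, so the slope of the linear fit isolates the Gaussian-approximated PSF, mirroring the linear correlation in the logarithmic plots reported in the abstract.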
Directory of Open Access Journals (Sweden)
Sanjay Kumar Singh
2011-06-01
Full Text Available In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and reliability functions under the General Entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the Squared Error loss function and with maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
Power estimation on functional level for programmable processors
Directory of Open Access Journals (Sweden)
M. Schneider
2004-01-01
Full Text Available In this contribution, different approaches for power estimation for programmable processors are presented and evaluated concerning their capability to be applied to modern digital signal processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is laid on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on the separation of the processor architecture into functional blocks such as the processing unit, clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. By applying a parser-based automated analysis of assembler codes of the systems to be estimated, input parameters such as the achieved degree of parallelism or the type of memory access can be obtained. This approach is evaluated on two modern digital signal processors using a variety of basic digital signal processing algorithms. The resulting estimates for the individual algorithms are compared with physically measured values, yielding a very small maximum estimation error of 3%.
Power estimation on functional level for programmable processors
Schneider, M.; Blume, H.; Noll, T. G.
2004-05-01
In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their transferability to modern processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on partitioning the processor architecture into functional blocks such as the processing unit, clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. Input parameters, such as the achieved degree of parallelism or the type of memory access, are obtained by automated, parser-based analysis of the assembler code of the system to be estimated. The approach is evaluated on two modern digital signal processors using a large number of basic algorithms of digital signal processing. The estimates obtained for the individual algorithms are compared with physically measured values, yielding a very small maximum estimation error of 3%.
Optimal estimation of the intensity function of a spatial point process
DEFF Research Database (Denmark)
Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus
easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in case of a Poisson process. We discuss...
Influence function method for fast estimation of BWR core performance
International Nuclear Information System (INIS)
Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.
1993-01-01
The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)
Spectral velocity estimation using autocorrelation functions for sparse data sets
DEFF Research Database (Denmark)
2006-01-01
The distribution of velocities of blood or tissue is displayed by ultrasound scanners by finding the power spectrum of the received signal. This is currently done by taking a Fourier transform of the received signal and then showing the spectra in an M-mode display. It is desired to show a B-mode image for orientation, and data for this has to be acquired interleaved with the flow data. The power spectrum can be calculated from the Fourier transform of the autocorrelation function Ry (k), where its span of lags k is given by the number of emissions N in the data segment for velocity estimation...
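The computation described here, obtaining the power spectrum from the autocorrelation function Ry(k) over a limited span of lags, is an instance of the Wiener-Khinchin relation. The sketch below illustrates it on a synthetic sinusoid; the sampling rate, tone frequency and lag span are assumed values for illustration, not the scanner processing from the study.

```python
import numpy as np

def spectrum_from_autocorrelation(x, max_lag):
    """Power spectrum estimated as the Fourier transform of the
    (biased) autocorrelation R_y(k), with lags k = -max_lag..max_lag."""
    n = len(x)
    # Biased autocorrelation estimate for non-negative lags
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    # Symmetry R(-k) = R(k) for a real-valued signal
    r_full = np.concatenate([r[::-1], r[1:]])
    # Magnitude of the DFT of the autocorrelation sequence
    return np.abs(np.fft.fft(r_full))

# Synthetic example: a 50 Hz tone sampled at 1 kHz (assumed values)
fs, f0 = 1000.0, 50.0
x = np.sin(2 * np.pi * f0 * np.arange(2048) / fs)
spec = spectrum_from_autocorrelation(x, max_lag=128)
```

The spectrum length is 2·max_lag + 1, so the frequency resolution is set by the lag span rather than the data length, which is the trade-off relevant to the sparse data sets in the title.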
Bayesian Parameter Estimation via Filtering and Functional Approximations
Matthies, Hermann G.
2016-11-25
The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
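Since the abstract contrasts the straightforward Ensemble Kalman Filter variant with functional representations, a minimal sketch of the EnKF analysis step for a scalar parameter may help. The forward model, prior and observation below are illustrative assumptions, not the paper's computational model.

```python
import numpy as np

def enkf_update(theta, h, y_obs, obs_var, rng):
    """One Ensemble Kalman Filter analysis step for a scalar parameter.
    theta: ensemble of parameter samples; h: forward-model output per member."""
    c_th = np.cov(theta, h)[0, 1]                  # cross-covariance Cov(theta, h)
    gain = c_th / (np.var(h, ddof=1) + obs_var)    # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=theta.shape)
    return theta + gain * (y_pert - h)

# Toy inverse problem: recover theta from one noisy observation of 2*theta
rng = np.random.default_rng(0)
theta_true, obs_var = 3.0, 0.01
theta = rng.normal(0.0, 2.0, size=500)      # prior parameter ensemble
theta = enkf_update(theta, theta * 2.0, y_obs=theta_true * 2.0,
                    obs_var=obs_var, rng=rng)
```

After the update the ensemble mean moves from the prior mean (0) toward the true parameter, with the spread contracted according to the gain.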
Bayesian Parameter Estimation via Filtering and Functional Approximations
Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar
2016-01-01
The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation
Sun, Ying
2015-09-01
Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.
Estimation of cost function in the natural gas industry
Energy Technology Data Exchange (ETDEWEB)
Kim, Young Duk [Korea Energy Economics Institute, Euiwang (Korea)
1999-02-01
The natural gas industry in Korea has a dual industrial structure of wholesale and retail, with a regional monopoly for each city gas company. Recently there have been discussions on the restructuring of the gas industry and the problems arising from such an industrial organization. In this study, the labor and capital costs of KOGAS, the wholesaler, were analyzed to assess its efficiency, and a cost function focusing on distribution was estimated to find out the scale effects for the city gas companies, the retailers. The results show that KOGAS needs to enhance its competitiveness by improving labor productivity through stabilization of its labor structure and by maximizing value added through a stable capital mix. The estimated cost function of the city gas companies indicates that the existing regional monopoly yields scale effects only when the area of operation and the number of end users increase while the amount used per end user stays the same. (author). 31 refs., 10 figs., 43 tabs.
Development of fragility functions to estimate homelessness after an earthquake
Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann
2014-05-01
Immediately after an earthquake, many stakeholders need to make decisions about their response. These decisions often need to be made in a data-poor environment, as accurate information on the impact can take months or even years to be collected and publicized. Social fragility functions have been developed and applied to provide estimates of the impact in terms of building damage, deaths and injuries in near real time. These rough estimates can help governments and response agencies determine what aid may be required, which can improve their emergency response and facilitate planning for longer term response. Due to building damage, lifeline outages, fear of aftershocks, or other causes, people may become displaced or homeless after an earthquake. Especially in cold and dangerous locations, the rapid provision of safe emergency shelter can be a lifesaving necessity. However, immediately after an event there is little information available about the number of homeless, their locations and whether they require public shelter to aid the response agencies in decision making. In this research, we analyze homelessness after historic earthquakes using the CATDAT Damaging Earthquakes Database. CATDAT includes information on the hazard as well as the physical and social impact of over 7200 damaging earthquakes from 1900-2013 (Daniell et al. 2011). We explore the relationship of both earthquake characteristics and area characteristics with homelessness after the earthquake. We consider modelled variables such as population density, HDI, year, measures of ground motion intensity developed in Daniell (2014) over the time period from 1900-2013, as well as temperature. Starting from the methodology used for the PAGER fatality fragility curves developed by Jaiswal and Wald (2010), but applying regression through time with the socioeconomic parameters developed in Daniell et al. (2012) for "socioeconomic fragility functions", we develop a set of fragility curves that can be
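A common functional form for fragility curves of this kind (including the PAGER approach cited in the abstract) is a lognormal CDF in ground-motion intensity. The sketch below fits one to synthetic event data by grid-search least squares; all numbers, the intensity measure and the per-event sample size are illustrative assumptions, not CATDAT values or the authors' regression.

```python
import numpy as np
from scipy.special import erf

def fragility(intensity, theta, beta):
    """Lognormal fragility curve: probability of the outcome (here,
    becoming homeless) at a given ground-motion intensity."""
    return 0.5 * (1.0 + erf(np.log(intensity / theta) / (beta * np.sqrt(2.0))))

# Synthetic "historic event" data for illustration only
rng = np.random.default_rng(3)
intensity = rng.uniform(0.05, 1.5, 300)      # e.g. PGA in g (assumed)
theta_t, beta_t = 0.4, 0.6                   # assumed true median and log-std
p = fragility(intensity, theta_t, beta_t)
observed = rng.binomial(50, p) / 50.0        # observed fraction per event

# Fit (theta, beta) by grid-search least squares
grid_t = np.linspace(0.1, 1.0, 91)
grid_b = np.linspace(0.2, 1.5, 66)
sse = np.array([[np.sum((fragility(intensity, th, b) - observed) ** 2)
                 for b in grid_b] for th in grid_t])
i, j = np.unravel_index(np.argmin(sse), sse.shape)
theta_hat, beta_hat = grid_t[i], grid_b[j]
```

In practice maximum likelihood on binomial counts would be preferred over least squares; the grid search is used here only to keep the sketch dependency-free beyond SciPy's `erf`.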
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
Kassab, M.; Daneva, Maia; Ormanjieva, Olga; Abran, A.; Braungarten, R.; Dumke, R.; Cuadrado-Gallego, J.; Brunekreef, J.
2009-01-01
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient
Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.
2016-03-01
The saturated hydraulic conductivity (Ks) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of Ks using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in the north of Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of particle and micro-aggregate size distributions. The PTFs were developed by an artificial neural network (ANN) ensemble to estimate Ks using available soil data and fractal parameters. Significant correlations were found between Ks and the fractal parameters of particles and micro-aggregates. The estimation of Ks was improved significantly by using fractal parameters of soil micro-aggregates as predictors, but using the geometric mean and geometric standard deviation of particle diameter did not improve Ks estimates significantly. Using fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of Ks. Generally, fractal parameters can be successfully used as input parameters to improve the estimation of Ks in PTFs for smectitic soils. As a result, the ANN ensemble successfully correlated the fractal parameters of particles and micro-aggregates to Ks.
Estimation of Cumulative Absolute Velocity using Empirical Green's Function Method
International Nuclear Information System (INIS)
Park, Dong Hee; Yun, Kwan Hee; Chang, Chun Joong; Park, Se Moon
2009-01-01
In recognition of the need to develop a new criterion for determining when the OBE (Operating Basis Earthquake) has been exceeded at nuclear power plants, Cumulative Absolute Velocity (CAV) was introduced by EPRI. CAV accumulates the area under the absolute acceleration record, counted for every one-second window in which the acceleration exceeds 0.025 g: CAV = ∫0^tmax |a(t)| dt (1), where tmax = duration of record and a(t) = acceleration (> 0.025 g). Currently, the OBE exceedance criterion in Korea is Peak Ground Acceleration (PGA > 0.1 g). When the Odesan earthquake (ML = 4.8, January 20th, 2007) and the Gyeongju earthquake (ML = 3.4, June 2nd, 1999) occurred, PGA values greater than 0.1 g were experienced that did not cause any damage even to the poorly-designed structures nearby. These moderate earthquakes have motivated Korea to begin using CAV as the OBE exceedance criterion for NPPs, because the present OBE level has proved to be a poor indicator for small-to-moderate earthquakes, for which the low OBE level can cause an inappropriate shutdown of the plant. A more serious possibility is that this scenario will become a reality at a very high level. The Empirical Green's Function method, a simulation technique which can estimate the CAV value, is hereby introduced
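Equation (1) with the 0.025 g check applied per one-second window can be sketched directly. The implementation below is a simplified reading of the EPRI definition on a synthetic record, not plant monitoring code; the window rule and the rectangle-rule integration are stated assumptions.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cav(accel, dt, threshold_g=0.025):
    """Cumulative Absolute Velocity: integral of |a(t)| accumulated over
    each one-second window whose peak acceleration exceeds threshold_g."""
    accel = np.asarray(accel, dtype=float)
    n_win = int(round(1.0 / dt))            # samples per one-second window
    total = 0.0
    for start in range(0, len(accel), n_win):
        win = accel[start:start + n_win]
        if np.max(np.abs(win)) / G > threshold_g:   # threshold in g units
            total += np.sum(np.abs(win)) * dt       # rectangle-rule integral
    return total                             # units: m/s

# Synthetic record: 2 s of moderate shaking (~0.05 g), then 2 s of noise
dt = 0.01
t = np.arange(0.0, 4.0, dt)
accel = np.where(t < 2.0, 0.5, 1e-4) * np.sin(2 * np.pi * 2.0 * t)
print(cav(accel, dt))  # ≈ 0.64 m/s: only the two strong windows count
```

The windowing is what makes CAV insensitive to long stretches of weak motion, which is exactly why it discriminates better than PGA for small-to-moderate events.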
Arreola, José Luis Preciado; Johnson, Andrew L.
2016-01-01
Organizations like census bureaus rely on non-exhaustive surveys to estimate industry population-level production functions. In this paper we propose selecting an estimator based on a weighting of its in-sample and predictive performance on actual application datasets. We compare Cobb-Douglas functional assumptions to existing nonparametric shape-constrained estimators and a newly proposed estimator presented in this paper. For simulated data, we find that our proposed estimator has the lowest...
Topological estimation of aerodynamic controlled airplane system functionality of quality
Directory of Open Access Journals (Sweden)
С.В. Павлова
2005-01-01
It is suggested to use topological methods for the estimation of aerodynamic airplane control quality over a wide range of flight conditions. The estimation is based on calculating the normalized virtual non-isotropy of configurational airplane systems.
Estimation of field capacity from ring infiltrometer-drainage data
Directory of Open Access Journals (Sweden)
Theophilo Benedicto Ottoni Filho
2014-12-01
Field capacity (FC) is a parameter widely used in applied soil science. However, its in situ method of determination may be difficult to apply, generally because of the need for large supplies of water at the test sites. Ottoni Filho et al. (2014) proposed a standardized procedure for field determination of FC and showed that such in situ FC can be estimated by a linear pedotransfer function (PTF) based on the volumetric soil water content at the matric potential of -6 kPa [θ(6)] for the same soils used in the present study. The objective of this study was to use soil moisture data below a double ring infiltrometer, measured 48 h after the end of the infiltration test, in order to develop PTFs for standard in situ FC. We found that such ring FC data were on average 0.03 m³ m⁻³ greater than standard FC values. The linear PTF that was developed for the ring FC data based only on θ(6) was nearly as accurate as the equivalent PTF reported by Ottoni Filho et al. (2014), which was developed for the standard FC data. The root mean squared residues of FC determined from both PTFs were about 0.02 m³ m⁻³. The proposed method has the advantage of estimating the soil in situ FC using the water applied in the infiltration test.
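The kind of one-predictor linear PTF described above, FC estimated from θ(6), can be sketched on synthetic data. The coefficients, sample size and noise level below are assumptions for illustration, not the study's fitted values; the noise was set to 0.02 m³ m⁻³ only to mirror the accuracy scale mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic calibration data (assumed, not the study's measurements):
# volumetric water content at -6 kPa, theta6, in m3/m3
theta6 = rng.uniform(0.15, 0.45, size=60)
fc_obs = 0.05 + 0.85 * theta6 + rng.normal(0.0, 0.02, size=60)

# Fit the linear pedotransfer function  FC = a + b * theta(6)
b, a = np.polyfit(theta6, fc_obs, 1)

# Root mean squared residue of the fitted PTF
rmse = np.sqrt(np.mean((a + b * theta6 - fc_obs) ** 2))
print(round(a, 3), round(b, 3), round(rmse, 3))
```

Once calibrated, applying the PTF is a single affine evaluation per sample, which is what makes this class of functions attractive when FC is needed at many field locations.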
Estimating functions for inhomogeneous spatial point processes with incomplete covariate data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
and this leads to parameter estimation error which is difficult to quantify. In this paper we introduce a Monte Carlo version of the estimating function used in "spatstat" for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function it is feasible...
Estimating functions for inhomogeneous spatial point processes with incomplete covariate data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2008-01-01
and this leads to parameter estimation error which is difficult to quantify. In this paper, we introduce a Monte Carlo version of the estimating function used in spatstat for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function, it is feasible...
Estimation of functional preparedness of young handballers in setup time
Directory of Open Access Journals (Sweden)
Favoritоv V.N.
2012-11-01
The dynamics of the level of functional preparedness of young handballers during the setup period is shown. Alterations to the educational-training process were planned with the purpose of optimizing their functional preparedness. Eleven youths of calendar age 14-15 years took part in the research, and the computer program "SVSM" was applied to determine their level of functional preparedness. At the beginning of the setup period, the functional preparedness of 18.18% of all respondents was characterized as "middle" level, 27.27% as "below average" and 54.54% as "above average". At the end of the setup period, sportsmen with functional preparedness "above average" prevailed (63.63%), 27.27% reached a "high" level, and no sportsmen remained at the "below average" level. The efficiency of the proposed system of training sessions for optimizing the functional preparedness of young handballers is demonstrated.
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem
2017-01-01
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating
DEFF Research Database (Denmark)
Demirel, Mehmet C.; Mai, Juliane; Mendiguren Gonzalez, Gorka
2018-01-01
selected due to its soil parameter distribution approach based on pedo-transfer functions and the built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient...
Estimate of K-functionals and modulus of smoothness constructed ...
Indian Academy of Sciences (India)
... and K-functionals. The main result of the paper is the proof of the equivalence theorem for a K-functional and a modulus of smoothness for the Dunkl transform on Rd. Author Affiliations. M El Hamma1 R Daher1. Department of Mathematics, Faculty of Sciences Aïn Chock, University of Hassan II, Casablanca, Morocco ...
Micro-Economic Estimation On The Demand Function For ...
African Journals Online (AJOL)
The article focused on the estimation of prostitution demand behaviour in Adamawa State. An econometric model was specified based on economic theory and confronted with both primary and secondary data. Ordinary least squares multiple regression techniques were adopted, and the linear model was chosen as a ...
Multivariable Frequency Response Functions Estimation for Industrial Robots
Hardeman, T.; Aarts, Ronald G.K.M.; Jonker, Jan B.
2005-01-01
The limited accuracy of industrial robots restricts their applicability to highly demanding processes, like robotised laser welding. We are working on a nonlinear flexible model of the robot manipulator to predict these inaccuracies. This poster presents the experimental results on estimating the Multivariable
Estimating Functions of Distributions Defined over Spaces of Unknown Size
Directory of Open Access Journals (Sweden)
David H. Wolpert
2013-10-01
We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior's concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple "Irrelevance of Unseen Variables" (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With all hyperpriors (implicitly used in earlier work), different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations, like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
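For the entropy comparison mentioned at the end, the posterior expected entropy under a symmetric Dirichlet prior has a closed form (due to Wolpert and Wolf) that can be compared against the naive plug-in estimator. The counts and concentration below are toy values, and the symmetric Dirichlet(c/m, ..., c/m) parameterization is an assumption of this sketch, not the paper's full hyperprior treatment of (c, m).

```python
import numpy as np
from scipy.special import digamma

def posterior_mean_entropy(counts, c):
    """Posterior expected Shannon entropy (nats) under a symmetric
    Dirichlet(c/m, ..., c/m) prior on an event space of size m."""
    counts = np.asarray(counts, dtype=float)
    a = counts + c / len(counts)      # posterior Dirichlet parameters
    a0 = a.sum()
    return digamma(a0 + 1.0) - np.sum((a / a0) * digamma(a + 1.0))

def plugin_entropy(counts):
    """Naive maximum-likelihood (plug-in) entropy estimate (nats)."""
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Undersampled case: the Bayesian estimate counters the plug-in's
# well-known downward bias by keeping mass on unseen events
counts = [2, 1, 1, 0, 0, 0, 0, 0]
print(plugin_entropy(counts), posterior_mean_entropy(counts, c=1.0))
```

With abundant, uniform counts both estimators converge to ln m, while for sparse counts the posterior mean stays above the plug-in value.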
On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods
Gallegos, A. C.; Xie, J.; Suarez Salas, L.
2017-12-01
The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration, so that the retrieved RSTF is interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
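The time-domain matrix deconvolution with Tikhonov regularization described above can be sketched on noise-free synthetic data. The Green's function, the triangular source time function and the weighting λ below are assumptions for illustration, not the study's waveforms.

```python
import numpy as np

def tikhonov_deconvolve(big, small, n_rstf, lam):
    """Recover the relative source time function r from big ≈ small * r
    by Tikhonov-regularized least squares on the convolution matrix."""
    n = len(big)
    A = np.zeros((n, n_rstf))
    for j in range(n_rstf):            # column j: small delayed by j samples
        m = min(len(small), n - j)
        A[j:j + m, j] = small[:m]
    # minimize ||A r - big||^2 + lam * ||r||^2
    return np.linalg.solve(A.T @ A + lam * np.eye(n_rstf), A.T @ big)

# Synthetic Green's function (damped oscillation) and a triangular STF
t = np.arange(200)
g = np.exp(-t / 20.0) * np.sin(t / 3.0)
rstf_true = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0, 10)])
big = np.convolve(g, rstf_true)[:len(g)]

rstf_est = tikhonov_deconvolve(big, g, n_rstf=40, lam=1e-6)
```

In this idealized noise-free setting the triangle is recovered almost exactly; as the abstract stresses, once noise or a finite small-event STF enters, mainly the duration difference and the moment (time integral) of the RSTF remain trustworthy.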
Econometric estimation of the “Constant Elasticity of Substitution" function in R
DEFF Research Database (Denmark)
Henningsen, Arne; Henningsen, Geraldine
for estimating the traditional CES function with two inputs as well as nested CES functions with three and four inputs. Furthermore, we demonstrate how these approaches can be applied in R using the add-on package micEconCES, and we describe how the various estimation approaches are implemented in the micEconCES package. Finally, we illustrate the usage of this package by replicating some estimations of CES functions that are reported in the literature.
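A bare-bones version of nonlinear least-squares estimation of a two-input CES function can be sketched as follows. This assumes constant returns to scale, synthetic data, and a grid search with the scale parameter γ profiled out analytically; these are illustrative choices, not the micEconCES implementation.

```python
import numpy as np

def ces(x1, x2, gamma, delta, rho):
    """Two-input CES production function with constant returns to scale."""
    return gamma * (delta * x1 ** (-rho) + (1 - delta) * x2 ** (-rho)) ** (-1 / rho)

rng = np.random.default_rng(1)
x1 = rng.uniform(1, 10, 200)
x2 = rng.uniform(1, 10, 200)
gamma_t, delta_t, rho_t = 2.0, 0.6, 0.5        # assumed "true" parameters
y = ces(x1, x2, gamma_t, delta_t, rho_t) * np.exp(rng.normal(0, 0.01, 200))

# Nonlinear least squares by grid search over (delta, rho); gamma is
# profiled out since the model is linear in gamma given the other two.
best = None
for delta in np.linspace(0.05, 0.95, 91):
    for rho in np.linspace(0.05, 2.0, 40):
        f = ces(x1, x2, 1.0, delta, rho)
        gamma = (y @ f) / (f @ f)              # least-squares gamma given f
        sse = np.sum((y - gamma * f) ** 2)
        if best is None or sse < best[0]:
            best = (sse, gamma, delta, rho)
_, gamma_hat, delta_hat, rho_hat = best
```

The elasticity of substitution follows as 1/(1 + ρ); the grid search stands in for the gradient-based and grid-refinement strategies that dedicated packages use.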
Asiri, Sharefa M.
2017-10-19
In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through some numerical simulations. The robustness of the method in the presence of noise is also studied.
Estimate of K-functionals and modulus of smoothness constructed ...
Indian Academy of Sciences (India)
2016-08-26
functional and a modulus of smoothness for the Dunkl transform on Rd. Author Affiliations. M El Hamma1 R Daher1. Department of Mathematics, Faculty of Sciences Aïn Chock, University of Hassan II, Casablanca, Morocco. Dates.
Argument estimates of certain multivalent functions involving a linear operator
Directory of Open Access Journals (Sweden)
Nak Eun Cho
2002-01-01
The purpose of this paper is to derive some argument properties of certain multivalent functions in the open unit disk involving a linear operator. We also investigate their integral preserving property in a sector.
Directory of Open Access Journals (Sweden)
Feng Qi
2014-10-01
The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimating the lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
HEDONIC PRICE FUNCTION ESTIMATION FOR MOBILE PHONE IN IRAN
Directory of Open Access Journals (Sweden)
Sayed Mahdi Mostafavi
2013-01-01
The aim of this paper is to survey the determinants of mobile phone prices using a hedonic model. We have applied the hedonic price model to the mobile phone market in Iran for the year 2008. The brands include NOKIA, QTEK, HTC, MOTOROLA, SONY ERICSSON and SAMSUNG, comprising 193 types of handset mobile phone. The results show that the largest parameters of the hedonic price function are related to the following variables, respectively: touch screen, hands-free and connectivity tools, while the smallest belong to clarity of monitor images, phone volume and phone memory. Moreover, except for the Motorola brand, the type of brand does not have a significant parameter in the hedonic price function.
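A hedonic price function of this type is typically estimated by regressing price on characteristic variables, so that the coefficients are the implicit prices of the characteristics. The sketch below uses fabricated data with assumed implicit prices; only the sample size of 193 echoes the abstract, everything else (characteristics, coefficients, currency-free prices) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 193  # matches the number of handset types in the study (data are fabricated)

# Hypothetical characteristics: 0/1 dummies plus a continuous memory variable
touch = rng.integers(0, 2, n)
handsfree = rng.integers(0, 2, n)
memory_mb = rng.uniform(16, 512, n)

# Assumed hedonic relation: price as the sum of implicit characteristic prices
price = 100 + 80 * touch + 40 * handsfree + 0.05 * memory_mb + rng.normal(0, 5, n)

# OLS estimate of the implicit (hedonic) price of each characteristic
X = np.column_stack([np.ones(n), touch, handsfree, memory_mb])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
```

Reading off `beta`, the coefficient on each dummy is the estimated price premium for that feature, which is exactly the quantity the abstract ranks across features.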
Saturated hydraulic conductivity of US soils grouped according to textural class and bulk density
Importance of the saturated hydraulic conductivity as soil hydraulic property led to the development of multiple pedotransfer functions for estimating it. One approach to estimating Ksat was using textural classes rather than specific textural fraction contents as pedotransfer inputs. The objective...
Verhave, JC; Gansevoort, RT; Hillege, HL; De Zeeuw, D; Curhan, GC; De Jong, PE
Many epidemiologic studies presently aim to evaluate the effect of risk factors on renal function. As direct measurement of renal function is cumbersome to perform, epidemiologic studies generally use an indirect estimate of renal function. The consequences of using different methods of renal
Estimate of K-functionals and modulus of smoothness constructed ...
Indian Academy of Sciences (India)
Casablanca, Morocco. E-mail: m_elhamma@yahoo.fr. MS received 17 January 2013. Abstract. Using a generalized spherical mean operator, we define a generalized modulus of smoothness in the space L^2_k(R^d). Based on the Dunkl operator we define a Sobolev-type space and K-functionals. The main result of the paper ...
Estimation of acoustic resonances for room transfer function equalization
DEFF Research Database (Denmark)
Gil-Cacho, Pepe; van Waterschoot, Toon; Moonen, Marc
2010-01-01
Strong acoustic resonances create long room impulse responses (RIRs) which may harm the speech transmission in an acoustic space and hence reduce speech intelligibility. Equalization is performed by cancelling the main acoustic resonances common to multiple room transfer functions (RTFs), i...
Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R
Directory of Open Access Journals (Sweden)
Terrance D. Savitsky
2016-08-01
We present growfunctions for R, which offers Bayesian nonparametric estimation models for the analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as are reported in the Current Population Study (CPS). The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time- and domain-indexed dependence structure for a collection of time series: (1) A Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains. (2) An intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot
INCREASING OF PRECISE ESTIMATION OF OPTIMAL CRITERIA BOILER FUNCTIONING
Directory of Open Access Journals (Sweden)
Y. M. Skakovsk
2016-08-01
Results of laboratory and industrial research allowed us to offer a way to improve the accuracy of estimating the optimal criterion of boiler operation depending on fuel quality. The criterion is calculated continuously during boiler operation as the ratio of the heat transmitted to production with superheated steam to the thermal energy obtained by combusting fuel (natural gas) in the boiler's furnace. The non-linear dependence of steam enthalpy on its temperature and pressure is taken into account in the calculation, as well as changes in the calorific value of the natural gas depending on variations in its nitrogen content. A control algorithm and a program for the Ukrainian PLC MIC-52 are offered. The program implements two search modes for the criterion maximum: automated and automatic. The results are going to be used for upgrading the existing control system at a sugar factory.
$L^{p}$-square function estimates on spaces of homogeneous type and on uniformly rectifiable sets
Hofmann, Steve; Mitrea, Marius; Morris, Andrew J
2017-01-01
The authors establish square function estimates for integral operators on uniformly rectifiable sets by proving a local T(b) theorem and applying it to show that such estimates are stable under the so-called big pieces functor. More generally, they consider integral operators associated with Ahlfors-David regular sets of arbitrary codimension in ambient quasi-metric spaces. The local T(b) theorem is then used to establish an inductive scheme in which square function estimates on so-called big pieces of an Ahlfors-David regular set are proved to be sufficient for square function estimates to hold on the entire set. Extrapolation results for L^p and Hardy space versions of these estimates are also established. Moreover, the authors prove square function estimates for integral operators associated with variable coefficient kernels, including the Schwartz kernels of pseudodifferential operators acting between vector bundles on subdomains with uniformly rectifiable boundaries on manifolds.
ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS
Directory of Open Access Journals (Sweden)
Dietrich Stoyan
2011-05-01
This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
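The minus-sampling (border-correction) idea can be sketched numerically: when estimating D(r), use only points whose full r-neighbourhood lies inside the observation window, and compare against the known Poisson benchmark D(r) = 1 - exp(-λπr²). The intensity and evaluation distance below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, r = 100.0, 0.05   # illustrative intensity and evaluation distance

# homogeneous Poisson process on the unit square
n = rng.poisson(lam)
pts = rng.random((n, 2))

# nearest-neighbour distance of every point
diff = pts[:, None, :] - pts[None, :, :]
d = np.sqrt((diff ** 2).sum(-1))
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)

# distance of every point to the boundary of the window
border = np.minimum(pts, 1 - pts).min(axis=1)

# minus-sampling estimator of D(r): restrict to points whose full
# r-neighbourhood lies inside the observation window
mask = border > r
D_hat = (nn[mask] <= r).mean()
D_true = 1 - np.exp(-lam * np.pi * r ** 2)   # Poisson benchmark
print(D_hat, D_true)
```

Hanisch's estimator refines this by reweighting each retained point (Horvitz-Thompson style) instead of discarding border points outright.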
Smoothed Conditional Scale Function Estimation in AR(1)-ARCH(1) Processes
Directory of Open Access Journals (Sweden)
Lema Logamou Seknewna
2018-01-01
The estimation of the smoothed conditional scale function for time series was carried out under conditional heteroscedastic innovations by imitating kernel smoothing in the nonparametric QAR-QARCH scheme. The estimation was based on the quantile regression methodology proposed by Koenker and Bassett. The asymptotic properties of the conditional scale function estimator for this type of process are proved and its consistency is shown.
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
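For the two-parameter power function distribution with F(x) = (x/β)^γ on (0, β), the maximum likelihood and percentile estimators mentioned above can be compared by Monte Carlo simulation. The sketch below uses the standard (unmodified) estimators and invented parameter values; the paper's modified estimators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma_true, beta_true, n, reps = 2.0, 5.0, 200, 500   # illustrative values

def mle(x):
    """ML estimates for F(x) = (x/beta)**gamma on (0, beta)."""
    b = x.max()
    g = len(x) / np.log(b / x).sum()
    return g, b

def percentile_est(x, p1=0.25, p2=0.75):
    """Estimates from two sample quantiles: log q_p = log beta + log(p)/gamma."""
    q1, q2 = np.quantile(x, p1), np.quantile(x, p2)
    g = (np.log(p1) - np.log(p2)) / (np.log(q1) - np.log(q2))
    return g, q1 / p1 ** (1.0 / g)

est = {"mle": [], "pct": []}
for _ in range(reps):
    # inverse-CDF sampling: X = beta * U**(1/gamma)
    x = beta_true * rng.random(n) ** (1.0 / gamma_true)
    est["mle"].append(mle(x))
    est["pct"].append(percentile_est(x))

for name, vals in est.items():
    g, b = np.array(vals).T
    print(name, "bias(gamma)=%+.3f  MSE(gamma)=%.4f"
          % (g.mean() - gamma_true, ((g - gamma_true) ** 2).mean()))
```

Bias, MSE, and total deviation computed this way are the comparison criteria the abstract refers to.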
Cochlear function tests in estimation of speech dynamic range.
Han, Jung Ju; Park, So Young; Park, Shi Nae; Na, Mi Sun; Lee, Philip; Han, Jae Sang
2016-10-01
The loss of active cochlear mechanics causes elevated thresholds, loudness recruitment, and reduced frequency selectivity. The problems faced by hearing-impaired listeners are largely related to reduced dynamic range (DR). The aim of this study was to determine which index of the cochlear function tests correlates best with the DR to speech stimuli. Audiological data on 516 ears with pure tone average (PTA) of ≤55 dB and word recognition score of ≥70% were analyzed. PTA, speech recognition threshold (SRT), uncomfortable loudness (UCL), and distortion product otoacoustic emission (DPOAE) were explored as the indices of cochlear function. Audiometric configurations were classified. Correlation between each index and the DR was assessed and multiple regression analysis was done. PTA and SRT demonstrated strong negative correlations with the DR (r = -0.788 and -0.860, respectively), while DPOAE sum was moderately correlated (r = 0.587). UCLs remained quite constant over the total range of the DR. The regression equation was Y (DR) = 75.238 - 0.719 × SRT (R² = 0.721); the DR to speech stimuli can thus be estimated from the SRT using this equation.
Directory of Open Access Journals (Sweden)
SANKU DEY
2010-11-01
The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risk based on a class of non-informative priors under the assumption of three loss functions, namely the quadratic loss function (QLF), the squared log-error loss function (SLELF) and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among these three. The performances of the estimators are, therefore, compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.
Selection of the wavelet function for the frequencies estimation
International Nuclear Information System (INIS)
Garcia R, A.
2007-01-01
Signals are currently used to diagnose the state of systems by extracting their most important characteristics, such as frequencies, tendencies, changes and temporal evolutions. These characteristics are detected by means of diverse analysis techniques, such as autoregressive methods, the Fourier transform, the short-time Fourier transform and the wavelet transform, among others. The present work uses the wavelet transform because it allows the analysis of stationary, quasi-stationary and transitory signals in the time-frequency plane. It also describes a methodology to select the scales and the wavelet function to be applied in the wavelet transform, with the objective of detecting the dominant system frequencies. (Author)
Estimation of a monotone percentile residual life function under random censorship.
Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo
2013-01-01
In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units, which degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
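A decreasing percentile residual life estimate can be obtained from an unrestricted one by a simple monotone projection; the running-minimum device below is one elementary way to enforce the constraint and is not necessarily the estimator studied in the paper (the age grid and values are made up).

```python
import numpy as np

# ages (grid) and a hypothetical unrestricted estimate of the median
# residual life at each age -- values invented for illustration
t = np.linspace(0, 10, 11)
q_hat = np.array([8.0, 7.5, 7.9, 6.0, 6.2, 5.0, 4.1, 4.3, 3.0, 2.5, 2.6])

# running minimum projects the estimate onto decreasing functions:
# each value is replaced by the smallest value seen so far
q_mon = np.minimum.accumulate(q_hat)
print(np.c_[t, q_mon])
```

Because the monotone version pools information across ages, it can be substantially less variable than the unrestricted estimate, which matches the efficiency gain the abstract reports.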
mBEEF-vdW: Robust fitting of error estimation density functionals
DEFF Research Database (Denmark)
Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes
2016-01-01
The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012); J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014)]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function ... catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show ...
Directory of Open Access Journals (Sweden)
Farhad Yahgmaei
2013-01-01
This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter of the IWD is introduced. We then derive the Bayes estimators for the scale parameter of the IWD by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the different proposed estimators are compared by extensive simulation studies in terms of the mean square errors and the evolution of the risk functions.
Clinical use of estimated glomerular filtration rate for evaluation of kidney function
DEFF Research Database (Denmark)
Broberg, Bo; Lindhardt, Morten; Rossing, Peter
2013-01-01
Estimating glomerular filtration rate by the Modification of Diet in Renal Disease or Chronic Kidney Disease Epidemiology Collaboration formulas gives a reasonable estimate of kidney function, e.g. for classification of chronic kidney disease. Additionally, the estimated glomerular filtration rate ... is a significant predictor for cardiovascular disease and may, along with classical cardiovascular risk factors, add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated ...
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root- n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Modulating functions method for parameters estimation in the fifth order KdV equation
Asiri, Sharefa M.; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem
2017-01-01
In this work, the modulating functions method is proposed for estimating coefficients in higher-order nonlinear partial differential equation which is the fifth order Kortewegde Vries (KdV) equation. The proposed method transforms the problem into a
Voynelenko Natalya Vaselyevna
2012-01-01
The article discusses the content of the activity of the head of a special (correctional) educational institution in organizing the estimation of the quality of the educational system. A model of the joint activity of participants in the educational process for the estimation of educational objects, as a component of the quality management system in an educational institution, is presented. Functions of the estimation of the educational system in the activity of the head of the educational institution are formulated.
Müller, Benjamin; Bernhardt, Matthias; Jackisch, Conrad; Schulz, Karsten
2016-09-01
For understanding water and solute transport processes, knowledge about the respective hydraulic properties is necessary. Commonly, hydraulic parameters are estimated via pedo-transfer functions using soil texture data to avoid cost-intensive measurements of hydraulic parameters in the laboratory. However, current soil texture information is only available at a coarse spatial resolution of 250 to 1000 m. Here, a method is presented to derive high-resolution (15 m) spatial topsoil texture patterns for the meso-scale Attert catchment (Luxembourg, 288 km²) from 28 images of ASTER (advanced spaceborne thermal emission and reflection radiometer) thermal remote sensing. A principal component analysis of the images reveals the most dominant thermal patterns (principal components, PCs), which are related to 212 fractional soil texture samples. Within a multiple linear regression framework, distributed soil texture information is estimated and the related uncertainties are assessed. An overall root mean squared error (RMSE) of 12.7 percentage points (pp) lies well within and even below the range of recent studies on soil texture estimation, while requiring sparser sample setups and a less diverse set of basic spatial input. This approach will improve the generation of spatially distributed topsoil maps, particularly for hydrologic modeling purposes, and will expand the usage of thermal remote sensing products.
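The pattern-to-texture workflow described above (principal components of an image stack, then multiple linear regression against point samples) can be sketched with synthetic data; all array sizes, coefficients, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_images, n_samples = 500, 28, 50   # invented sizes

# synthetic stack of thermal images (pixels x acquisition dates)
X = rng.normal(size=(n_pixels, n_images))

# principal components of the centred stack via SVD
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = (U * s)[:, :3]          # per-pixel scores of the dominant PCs

# synthetic texture observations at a subset of pixels (coefficients invented)
idx = rng.choice(n_pixels, n_samples, replace=False)
texture = 30 + 2.0 * pcs[idx, 0] - 1.5 * pcs[idx, 1] + rng.normal(0, 1, n_samples)

# multiple linear regression: texture ~ dominant PCs, then map to all pixels
A = np.column_stack([np.ones(n_samples), pcs[idx]])
coef, *_ = np.linalg.lstsq(A, texture, rcond=None)
pred = np.column_stack([np.ones(n_pixels), pcs]) @ coef
rmse = np.sqrt(((pred[idx] - texture) ** 2).mean())
print(coef.round(2), "RMSE =", rmse.round(2))
```

The regression fitted at the sampled pixels is applied to all pixels, which is exactly how sparse point samples are upscaled to a full-resolution map.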
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
An estimating function approach to inference for inhomogeneous Neyman-Scott processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2007-01-01
This article is concerned with inference for a certain class of inhomogeneous Neyman-Scott point processes depending on spatial covariates. Regression parameter estimates obtained from a simple estimating function are shown to be asymptotically normal when the "mother" intensity for the Neyman-Sc...
The risk function approach to profit maximizing estimation in direct mailing
Muus, Lars; Scheer, Hiek van der; Wansbeek, Tom
1999-01-01
When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually, these parameters are unknown and have to be estimated. Standard estimation methods are based on a quadratic loss function. In the present
Linear estimates of structure functions from deep inelastic lepton-nucleon scattering data. Part 1
International Nuclear Information System (INIS)
Anikeev, V.B.; Zhigunov, V.P.
1991-01-01
This paper concerns the linear estimation of structure functions from muon (electron)-nucleon scattering. The expressions obtained for the structure function estimates provide a correct analysis of the random error and the bias. The bias arises because of the finite number of experimental data and the finite resolution of the experiment. The approach suggested may become useful for data handling from experiments at HERA. 9 refs
Quasi-Newton methods for parameter estimation in functional differential equations
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
Headphone-To-Ear Transfer Function Estimation Using Measured Acoustic Parameters
Directory of Open Access Journals (Sweden)
Jinlin Liu
2018-06-01
This paper proposes to use an optimal five-microphone array method to measure the headphone acoustic reflectance and the equivalent sound sources needed in the estimation of headphone-to-ear transfer functions (HpTFs). The performance of this method is theoretically analyzed and experimentally investigated. With the measured acoustic parameters, HpTFs for different headphones and ear canal area functions are estimated based on a computational acoustic model. The estimation results show that HpTFs vary considerably with headphones and ear canals, which suggests that individualized compensation for HpTFs is necessary for headphones to reproduce the desired sounds for different listeners.
An estimator of the survival function based on the semi-Markov model under dependent censorship.
Lee, Seung-Yeoun; Tsai, Wei-Yann
2005-06-01
Lee and Wolfe (Biometrics vol. 54, pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example of a lung cancer clinical trial, and simulation results are reported on the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models.
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, score function estimation from samples of the observation signals (combinations of speech and music) is needed. The accuracy and the speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian mixture based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared to a separation algorithm based on the minimum mean square error estimator, indicate that it achieves better performance and less processing time.
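The score function ψ(x) = -p'(x)/p(x) needed by such separation algorithms can be estimated directly from samples via a kernel density estimate. The sketch below uses a plain Gaussian KDE with a Silverman-type bandwidth rather than the paper's Gaussian-mixture-based variant, and standard normal data so that the true score ψ(x) = x is known for checking.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=2000)                # stand-in for observed samples
h = 1.06 * x.std() * x.size ** (-0.2)    # Silverman-type bandwidth

def score(t, data=x, h=h):
    """psi(t) = -p'(t)/p(t) from a Gaussian kernel density estimate."""
    u = (t - data) / h
    w = np.exp(-0.5 * u ** 2)
    return (u * w).sum() / (h * w.sum())

# for N(0,1) data the true score function is psi(t) = t
print(score(0.5), score(-1.0))
```

The ratio form follows from differentiating the KDE: both p and p' are sums over the same Gaussian kernel weights, so the normalizing constants cancel.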
On the a priori estimation of collocation error covariance functions: a feasibility study
DEFF Research Database (Denmark)
Arabelos, D.N.; Forsberg, René; Tscherning, C.C.
2007-01-01
and the associated error covariance functions were conducted in the Arctic region north of 64 degrees latitude. The correlation between the known features of the data and the parameters variance and correlation length of the computed error covariance functions was estimated using multiple regression analysis...
International Nuclear Information System (INIS)
Telyakovskii, S A
2002-01-01
The functions under consideration are those satisfying the condition Δa_i = Δb_i = 0 for all i ≠ n_j, where {n_j} is a lacunary sequence. An asymptotic estimate of the rate of decrease of the modulus of continuity in the L-metric of such functions in terms of their Fourier coefficients is obtained
Nonparametric estimation of the stationary M/G/1 workload distribution function
DEFF Research Database (Denmark)
Hansen, Martin Bøgsted
2005-01-01
In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1-queue can be obtained by systematic sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ...
Using step and path selection functions for estimating resistance to movement: Pumas as a case study
Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce
2015-01-01
GPS telemetry collars and their ability to acquire accurate and consistently frequent locations have increased the use of step selection functions (SSFs) and path selection functions (PathSFs) for studying animal movement and estimating resistance. However, previously published SSFs and PathSFs often do not accommodate multiple scales or multiscale modeling....
Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function
Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.
2017-06-01
This paper elaborates a study of cancer patients after receiving a treatment, using censored data and Bayesian estimation under the LINEX loss function for a survival model assumed to follow an exponential distribution. With a gamma prior, the likelihood function produces a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL using the LINEX approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of maximum likelihood estimation (MLE) and the LINEX approximation to find the better method for this observation by finding the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under Bayesian LINEX they are 2.8727E-07 and 0.000304131, respectively. It is concluded that Bayesian LINEX is better than MLE.
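Under LINEX loss L(d, λ) = exp(a(d - λ)) - a(d - λ) - 1, the Bayes estimator is d* = -(1/a) ln E[exp(-aλ) | data], which has a closed form for a gamma posterior: with posterior Gamma(α, β) in the rate parameterization, λ̂_BL = (α/a) ln(1 + a/β). The data, prior, and asymmetry parameter below are invented for illustration and are not the paper's dataset.

```python
import numpy as np

# hypothetical censored exponential lifetimes (1 = event, 0 = censored)
t = np.array([2.3, 1.1, 4.0, 0.7, 3.2, 5.5, 2.8])
delta = np.array([1, 1, 0, 1, 1, 0, 1])
a0, b0 = 1.0, 1.0            # gamma prior (shape, rate), conjugate here

# posterior of the exponential rate is Gamma(alpha, beta)
alpha = a0 + delta.sum()
beta = b0 + t.sum()

lam_mle = delta.sum() / t.sum()              # maximum likelihood estimate

# Bayes estimator under LINEX loss: -(1/a) * log E[exp(-a*rate) | data]
a = 0.5                                      # LINEX asymmetry parameter
lam_linex = (alpha / a) * np.log(1 + a / beta)
print(lam_mle, lam_linex)
```

With a > 0 the LINEX estimator penalizes overestimation asymmetrically and lies below the posterior mean α/β; the hazard and survival estimators follow by plugging λ̂_BL into h(t) = λ and S(t) = exp(-λt).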
LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions
Directory of Open Access Journals (Sweden)
Weihua An
2016-07-01
LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
A Scale Elasticity Measure for Directional Distance Function and its Dual: Theory and DEA Estimation
Valentin Zelenyuk
2012-01-01
In this paper we focus on scale elasticity measure based on directional distance function for multi-output-multi-input technologies, explore its fundamental properties and show its equivalence with the input oriented and output oriented scale elasticity measures. We also establish duality relationship between the scale elasticity measure based on the directional distance function with scale elasticity measure based on the profit function. Finally, we discuss the estimation issues of the scale...
Directory of Open Access Journals (Sweden)
Anupam Pathak
2014-11-01
Abstract: Problem statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased estimators (UMVUEs) and maximum likelihood estimators (MLEs) of the reliability function R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution are found to be superior to the MLEs of R(t) and P.
Estimating the Partition Function Zeros by Using the Wang-Landau Monte Carlo Algorithm
Energy Technology Data Exchange (ETDEWEB)
Kim, Seung-Yeon [Korea National University of Transportation, Chungju (Korea, Republic of)
2017-03-15
The concept of the partition function zeros is one of the most efficient methods for investigating the phase transitions and the critical phenomena in various physical systems. Estimating the partition function zeros requires information on the density of states Ω(E) as a function of the energy E. Currently, the Wang-Landau Monte Carlo algorithm is one of the best methods for calculating Ω(E). The partition function zeros in the complex temperature plane of the Ising model on an L × L square lattice (L = 10 ∼ 80) with a periodic boundary condition have been estimated by using the Wang-Landau Monte Carlo algorithm. The efficiency of the Wang-Landau Monte Carlo algorithm and the accuracies of the partition function zeros have been evaluated for three different, 5%, 10%, and 20%, flatness criteria for the histogram H(E).
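Given a density of states Ω(E) on a uniform energy grid, Z(β) = Σ_E Ω(E) e^(-βE) becomes, up to a nonzero factor, a polynomial in u = e^(-ΔE·β), so the partition function zeros are polynomial roots. The sketch below uses the exactly known Ω(E) of the 2×2 periodic Ising model in place of a Wang-Landau estimate.

```python
import numpy as np

# exact density of states Omega(E) of the 2x2 periodic Ising model;
# a Wang-Landau run would estimate this histogram for larger lattices
energies = np.array([-8, 0, 8])    # E/J, uniform spacing of 8
omega = np.array([2, 12, 2])       # degeneracies, sum = 2**4 states

# with u = exp(-8*beta*J):  Z = exp(8*beta*J) * (2 + 12*u + 2*u**2),
# so the partition function zeros in u are roots of that polynomial
zeros_u = np.roots(omega)          # coefficients are symmetric here
print(np.sort(zeros_u))            # u = -3 ± 2*sqrt(2), negative real axis
```

For larger lattices the same `np.roots` call is applied to the Wang-Landau histogram, and the accuracy of the zeros directly reflects how flat the histogram H(E) was required to be.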
Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes
Kappus, Johanna
2012-01-01
For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = x ν(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data-driven choice of the smoothing parameter.
Optimal replacement time estimation for machines and equipment based on cost function
J. Šebo; J. Buša; P. Demeč; J. Svetlík
2013-01-01
The article deals with a multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines considered, for which the optimization method is usable, are those of metallurgical and engineering production. Different models of the cost function are considered (both with one and two variables). Parameters of the models were calculated through the least squares method. Testing of the models shows that all are good enough, so for estimation of optimal replacement time is ...
Directory of Open Access Journals (Sweden)
Il Young Song
2015-01-01
This paper focuses on the estimation of a nonlinear function of the state vector (NFS) in discrete-time linear systems with time-delays and model uncertainties. The NFS represents a multivariate nonlinear function of the state variables, which can indicate useful information about a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) is a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter is the standard Kalman filter with time-delays and special initial horizon conditions described by Lyapunov-like equations. In the general case, to calculate an optimal estimator of an NFS we propose using the unscented transformation. The important class of polynomial NFS is considered in detail. In the case of a polynomial NFS, the optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time-delays and uncertainties demonstrates their effectiveness.
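The unscented transformation approximates moments of a nonlinear function of a Gaussian state by propagating deterministically chosen sigma points. A minimal sketch follows (basic sigma-point set with scaling parameter κ; the filter itself is not reproduced). For a quadratic NFS the transform is exact, which provides a built-in check.

```python
import numpy as np

def unscented_estimate(f, mean, cov, kappa=0.0):
    """Approximate E[f(x)] for x ~ N(mean, cov) using 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.r_[kappa / (n + kappa), np.full(2 * n, 0.5 / (n + kappa))]
    return sum(wi * f(p) for wi, p in zip(w, sigma))

# quadratic NFS of a 2-state system: f(x) = x0^2 + x1^2 (illustrative)
mean = np.array([1.0, 2.0])
cov = np.array([[0.5, 0.1], [0.1, 0.3]])
est = unscented_estimate(lambda x: x[0] ** 2 + x[1] ** 2, mean, cov, kappa=1.0)
exact = mean @ mean + np.trace(cov)   # exact value for a quadratic function
print(est, exact)
```

In the paper's setting, `mean` and `cov` would be the receding horizon estimate and its error covariance rather than fixed numbers.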
A method of moments to estimate bivariate survival functions: the copula approach
Directory of Open Access Journals (Sweden)
Silvia Angela Osmetti
2013-05-01
In this paper we discuss the problem of parametric and nonparametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model through the copula and different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on the moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions in which it is difficult to find an explicit expression for the mixed moments. Moreover, it is preferred to the maximum likelihood approach for its simpler mathematical form, in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.
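The two-step idea can be sketched for the symmetric Marshall-Olkin case: marginal rates from sample means, then the copula parameter from Kendall's tau via the standard relation τ = α/(2 - α) for the symmetric MO copula. This is a moments-style illustration with invented simulation settings, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 800
l1 = l2 = 1.0
l3 = 1.0   # shared-shock rate; true alpha = l3 / (l1 + l3) = 0.5

# Marshall-Olkin bivariate exponential via a common shock Z3
z1 = rng.exponential(1 / l1, n)
z2 = rng.exponential(1 / l2, n)
z3 = rng.exponential(1 / l3, n)
x, y = np.minimum(z1, z3), np.minimum(z2, z3)

# step 1: marginal parameters by moments (exponential rate = 1/mean);
# the true marginal rates are l1 + l3 = l2 + l3 = 2
rate_x, rate_y = 1 / x.mean(), 1 / y.mean()

# step 2: copula parameter from Kendall's tau; for the symmetric
# Marshall-Olkin copula tau = alpha/(2 - alpha), hence alpha = 2*tau/(1+tau)
dx = np.sign(x[:, None] - x[None, :])
dy = np.sign(y[:, None] - y[None, :])
tau = (dx * dy).sum() / (n * (n - 1))
alpha_hat = 2 * tau / (1 + tau)
print(rate_x, rate_y, alpha_hat)
```

Matching a rank-based moment like Kendall's tau sidesteps the singular component of the MO distribution, which is what makes likelihood-based estimation awkward here.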
International Nuclear Information System (INIS)
Lythgoe, M.F.; Gordon, I.; Khader, Z.; Smith, T.; Anderson, P.J.
1999-01-01
Differential renal function (DRF) is an important parameter that should be assessed from virtually every dynamic renogram. With the introduction of technetium-99m mercaptoacetyltriglycine (99mTc-MAG3), a tracer with a high renal extraction, the estimation of DRF might hopefully become accurate and reproducible both between observers in the same institution and also between institutions. The aim of this study was to assess the effect of different parameters on the estimation of DRF. To this end we investigated two groups of children: group A, comprising 35 children with a single kidney (27 of whom had poor renal function), and group B, comprising 20 children with two kidneys and normal global function who also had an associated 99mTc-dimercaptosuccinic acid scan (99mTc-DMSA). The variables assessed for their effect on the estimation of DRF were: different operators, the choice of renal regions of interest (ROIs), the applied background subtraction, and six different techniques for analysis of the renogram. The six techniques were based on: linear regression of the slopes in the Rutland-Patlak plot, matrix deconvolution, differential method, integral method, linear regression of the slope of the renograms, and the area under the curve of the renogram. The estimation of DRF was less dependent upon both observer and method in patients with two normally functioning kidneys than in patients with a single kidney. The inter-observer comparison among children in either group was not dependent on either ROI or background subtraction. However, in patients with poor renal function the method of choice for the estimation of DRF was dependent on background subtraction, though not ROI. In children with two kidneys and normal renal function, the estimation of DRF from the 24 techniques gave similar results. Methods that produced DRF values closest to expected results, from either group of children, were the Rutland-Patlak plot and matrix deconvolution methods. (orig.)
Shen, Yi
2013-05-01
A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
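The three-parameter psychometric function targeted by such procedures is commonly written as a guess rate plus a cumulative Gaussian whose ceiling is lowered by the lapse rate; a minimal sketch (parameter values are illustrative, not the study's):

```python
import math

def psychometric(x, threshold, slope, lapse, guess=0.5):
    """Three-parameter psychometric function for a 2AFC-style task:
    performance rises from the guess rate toward (1 - lapse) as the
    stimulus level x exceeds the threshold; `slope` scales the rise."""
    phi = 0.5 * (1.0 + math.erf(slope * (x - threshold) / math.sqrt(2.0)))
    return guess + (1.0 - guess - lapse) * phi

p_at_threshold = psychometric(0.0, threshold=0.0, slope=2.0, lapse=0.02)
# at threshold phi = 0.5, so p = 0.5 + (1 - 0.5 - 0.02) * 0.5 = 0.74
```

Adaptive procedures such as the up-down staircase and the UML procedure differ in how they place trials along x to estimate `threshold`, `slope`, and `lapse`, not in the form of this function.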
Estimation and Application of Ecological Memory Functions in Time and Space
Itter, M.; Finley, A. O.; Dawson, A.
2017-12-01
A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and associated covariates vary in time, space, or both. Theory indicates many ecological processes exhibit memory to local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework to estimate so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework—no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from posterior distributions for model parameters allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory to past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological
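The weight-generating idea can be sketched in a few lines: represent the memory function over lags as a linear combination of basis functions, so that only a reduced set of basis coefficients needs to be estimated. The basis choice, centres, and coefficients below are illustrative assumptions, not the authors' specification:

```python
import numpy as np

lags = np.arange(0, 9)                      # years before the growth year
# a small set of Gaussian basis functions over the lag axis (assumed form)
centers = [1.0, 3.0, 6.0]
basis = np.exp(-0.5 * ((lags[:, None] - centers) / 1.5) ** 2)
coef = np.array([1.0, 0.4, 0.05])           # reduced parameter set to estimate
weights = basis @ coef
weights /= weights.sum()                    # normalized memory function

# the memory-weighted covariate: past water availability driving growth
water = np.random.default_rng(0).random(len(lags))
effective_water = weights @ water
```

With these coefficients the memory function peaks at lag 1 and decays toward zero by lag 6-8, mimicking the boreal-growth pattern described in the abstract; in the Bayesian framework the `coef` vector would carry a posterior distribution rather than fixed values.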
An estimation of the structure function xF3 in neutrino-proton scattering
International Nuclear Information System (INIS)
Aoki, Kenzaburo; Arimoto, Shinsuke; Hoshino, Shigetoshi; Itoh, Nobuhisa; Konno, Toshiharu.
1981-01-01
The structure function xF3(x, Q²) in deep-inelastic neutrino-proton scattering was estimated without differentiating with respect to Q² in the evolution function. First, the moment of the non-singlet structure function xF3(x, Q²) is defined. Then the kernel function f(z, Q²) is presented. Finally, the expression for the structure function xF3 is given. The values of the structure function for various Q² are shown in five figures. A peak is seen in each figure, and the highest peak is at about Q² = 14 GeV². The analysis suggests a very small value of xF3 in the small-Q² region. The kernel function f(x/y, Q²) may be interpreted, in quantum chromodynamics, as the probability of finding a quark with momentum fraction x arising from one with momentum fraction y. (Kato, T.)
Operational production of Geodetic Excitation Functions from EOP estimated values at ASI-CGS
Sciarretta, C.; Luceri, V.; Bianco, G.
2009-04-01
ASI-CGS routinely provides geodetic excitation functions from its own estimated EOP values (at present SLR and VLBI; use of GPS EOPs is also planned as soon as that product is fully operational) on the ASI geodetic web site (http://geodaf.mt.asi.it). This product was generated and monitored (for ASI internal use only) in a long pre-operational phase (more than two years), including validation and testing. The daily geodetic excitation functions are now updated weekly along with the operational ASI SLR and VLBI EOP solutions and compared, whenever possible, with the atmospheric excitation functions available at the IERS SBAAM, under the IB and non-IB assumptions, including the "wind" term. The work will present the available estimated geodetic excitation function time series and its comparison with the relevant atmospheric excitation functions, deriving quantitative indicators of the quality of the estimates. The similarities as well as the discrepancies between the atmospheric and geodetic series will be analysed and discussed, evaluating in particular the degree of correlation between the two estimated time series and the likelihood of a linear dependence hypothesis.
Estimation of parameters of constant elasticity of substitution production functional model
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi
2017-11-01
Nonlinear model building has become an increasingly important and powerful tool in mathematical economics, and in recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present study gives a distinct method of estimation for a more complicated and highly nonlinear model, the Constant Elasticity of Substitution (CES) production function. Henningsen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivatives; (ii) circumventing large rounding errors by local linear approximations; and (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using a constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
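As an illustration of direct nonlinear estimation of a CES function (not the paper's specific method), the parameters can be recovered by nonlinear least squares; the data below are simulated under assumed parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def ces(X, A, delta, rho):
    """Two-input CES production function with constant returns to scale:
    Q = A * (delta*K^(-rho) + (1-delta)*L^(-rho))^(-1/rho)."""
    K, L = X
    return A * (delta * K**(-rho) + (1.0 - delta) * L**(-rho)) ** (-1.0 / rho)

rng = np.random.default_rng(1)
K = rng.uniform(1.0, 10.0, 200)
L = rng.uniform(1.0, 10.0, 200)
Q = ces((K, L), 2.0, 0.4, 0.5) * np.exp(rng.normal(0.0, 0.01, 200))

params, _ = curve_fit(ces, (K, L), Q, p0=[1.0, 0.5, 0.3])
A_hat, delta_hat, rho_hat = params
sigma_hat = 1.0 / (1.0 + rho_hat)   # implied elasticity of substitution
```

The ill-behaved objective functions mentioned in solution (iii) show up here as sensitivity to the starting point `p0`, which is why grid-search initialization is recommended for CES fits.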
Bias Errors due to Leakage Effects When Estimating Frequency Response Functions
Directory of Open Access Journals (Sweden)
Andreas Josefsson
2012-01-01
Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data are contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data are segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias error in the H1 and H2 estimates is studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations, and very good agreement is found between the proposed bias expressions and the empirical results.
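The H1 estimate discussed above is the ratio of the averaged cross-spectrum to the averaged input auto-spectrum. The sketch below simulates a lightly damped single-degree-of-freedom system under random excitation (all parameter values assumed) and recovers its resonance with Welch averaging and a Hann window:

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
fs = 1024.0
x = rng.normal(size=2**16)                 # stochastic excitation

# single-degree-of-freedom system via its impulse response (toy example)
f0, zeta = 100.0, 0.02
wn = 2 * np.pi * f0
t = np.arange(0, 1.0, 1 / fs)
wd = wn * np.sqrt(1 - zeta**2)
h = (1.0 / wd) * np.exp(-zeta * wn * t) * np.sin(wd * t) / fs
y = np.convolve(x, h)[: len(x)]            # simulated response

nperseg = 2048
f, Pxx = welch(x, fs, window="hann", nperseg=nperseg)
_, Pxy = csd(x, y, fs, window="hann", nperseg=nperseg)
H1 = Pxy / Pxx                             # H1 estimator: Sxy / Sxx
peak_freq = f[np.argmax(np.abs(H1))]
```

The leakage bias analysed in the paper appears exactly here: the finite `nperseg` blocks smear energy around the sharp resonance, so the estimated peak magnitude is biased even though the peak location is recovered well.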
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main
Estimation of Soil Moisture Under Vegetation Cover at Multiple Frequencies
Jagdhuber, Thomas; Hajnsek, Irena; Weiß, Thomas; Papathanassiou, Konstantinos P.
2015-04-01
Soil moisture under vegetation cover was estimated by a polarimetric, iterative, generalized, hybrid decomposition and inversion approach at multiple frequencies (X-, C- and L-band). To this end, the algorithm, originally designed for the longer wavelength (L-band), was adapted to deal with the short-wavelength scattering scenarios of X- and C-band. The Integral Equation Method (IEM) was incorporated together with the pedo-transfer function of Dobson et al. to account for the peculiarities of short-wavelength scattering at X- and C-band. DLR's F-SAR system acquired fully polarimetric SAR data in X-, C- and L-band over the Wallerfing test site in Lower Bavaria, Germany in 2014. Simultaneously, soil and vegetation measurements were conducted on different agricultural test fields. The results indicate a spatially continuous inversion of soil moisture at all three frequencies (inversion rates >92%), mainly due to the careful adaptation of the vegetation volume removal, including a physical constraining of the decomposition algorithm. However, at X- and C-band the inversion results reveal moisture-pattern inconsistencies and in some cases an incorrectly high soil moisture inversion at X-band. Validation with in situ measurements shows a stable performance of 2.1-7.6 vol.% at L-band for the entire growing period. At C- and X-band a reliable performance of 3.7-13.4 vol.% in RMSE can only be achieved after distinct filtering (X-band), leading to a loss of almost 60% in spatial inversion rate. Hence, a robust inversion for soil moisture estimation under vegetation cover can only be conducted at L-band, due to the constant availability of the soil signal in contrast to the higher frequencies (X- and C-band).
Suto, Noriko; Harada, Makoto; Izutsu, Jun; Nagao, Toshiyasu
2006-07-01
In order to accurately estimate the geomagnetic transfer functions in the area of the volcano Mt. Iwate (IWT), we applied the interstation transfer function (ISTF) method to the three-component geomagnetic field data observed at Mt. Iwate station (IWT), using the Kakioka Magnetic Observatory, JMA (KAK) as remote reference station. Instead of the conventional Fourier transform, in which temporary transient noises badly degrade the accuracy of long term properties, continuous wavelet transform has been used. The accuracy of the results was as high as that of robust estimations of transfer functions obtained by the Fourier transform method. This would provide us with possibilities for routinely monitoring the transfer functions, without sophisticated statistical procedures, to detect changes in the underground electrical conductivity structure.
Directory of Open Access Journals (Sweden)
Roman Urban
2004-12-01
We consider the Green functions for second-order left-invariant differential operators on homogeneous manifolds of negative curvature, being a semi-direct product of a nilpotent Lie group $N$ and $A=\mathbb{R}^+$. We obtain estimates for mixed derivatives of the Green functions both in the coercive and non-coercive case. The current paper completes the previous results obtained by the author in a series of papers [14,15,16,19].
Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2018-02-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method for monitoring cerebral hemodynamics through optical changes measured at the scalp surface. It has played an increasingly important role in the psychology and medical imaging communities, and real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions previously unexplored. A Kalman estimator has frequently been used in combination with modified Beer-Lambert law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application to more complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) and a Kalman estimator, much improving the spatial resolution. Instead of presenting only one spatially distributed image of the changes in the absorption coefficients at each time point of the recording process, a single image is updated in real time with the Kalman estimator; each of its voxels represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method in simulation experiments, demonstrating that it obtains images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
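The Kalman recursion at the heart of such real-time estimators reduces, per voxel, to a scalar predict/update step on the HRF amplitude. A minimal sketch with a random-walk state model (all values hypothetical, not the paper's tomographic formulation):

```python
import numpy as np

def kalman_update(x, P, z, H, R, Q):
    """One predict/update step for a scalar state x (e.g. the HRF amplitude
    of a voxel) observed through a measurement z = H*x + noise."""
    P = P + Q                      # random-walk prediction of the state
    K = P * H / (H * H * P + R)    # Kalman gain
    x = x + K * (z - H * x)        # correct the state with the innovation
    P = (1.0 - K * H) * P          # updated state uncertainty
    return x, P

rng = np.random.default_rng(2)
true_amp = 1.5
x, P = 0.0, 1.0
for _ in range(200):
    z = true_amp + rng.normal(0.0, 0.1)    # noisy measurement, H = 1
    x, P = kalman_update(x, P, z, H=1.0, R=0.01, Q=1e-5)
```

In the DOT setting the same recursion runs with a vector state (one amplitude per voxel) and H given by the forward sensitivity matrix; the per-voxel behaviour, converging on the underlying amplitude as measurements stream in, is the same.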
Asiri, Sharefa M.
2017-10-08
Partial Differential Equations (PDEs) are commonly used to model complex systems that arise, for example, in biology, engineering, and chemistry. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs; they can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and the stopping condition, and they lack robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with a lower computational burden has been proposed in engineering fields, based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), which includes: its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters
DEFF Research Database (Denmark)
Effraimidis, Georgios; Dahl, Christian Møller
In this paper, we develop a fully nonparametric approach for the estimation of the cumulative incidence function with Missing At Random right-censored competing risks data. We obtain results on the pointwise asymptotic normality as well as the uniform convergence rate of the proposed nonparametric...
Clinical use of estimated glomerular filtration rate for evaluation of kidney function
DEFF Research Database (Denmark)
Broberg, Bo; Lindhardt, Morten; Rossing, Peter
2013-01-01
is a significant predictor for cardiovascular disease and may along with classical cardiovascular risk factors add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated...
Groeneboom, P.; Jongbloed, G.; Wellner, J.A.
2001-01-01
A process associated with integrated Brownian motion is introduced that characterizes the limit behavior of nonparametric least squares and maximum likelihood estimators of convex functions and convex densities, respectively. We call this process “the invelope” and show that it is an almost surely
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
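A plain (not bias-reduced) parametric maximum-likelihood fit of a cumulative Gaussian, of the kind the paper starts from, can be sketched as follows; the constant-stimuli data here are simulated rather than collected adaptively, so the spread bias the paper addresses does not arise:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, x, n_correct, n_total):
    """Binomial negative log-likelihood of a cumulative-Gaussian fit."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    p = np.clip(norm.cdf(x, loc=mu, scale=sigma), 1e-9, 1 - 1e-9)
    return -np.sum(n_correct * np.log(p) + (n_total - n_correct) * np.log(1 - p))

# simulated yes/no data at fixed stimulus levels (assumed, not adaptive)
rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 13)
n_total = np.full(len(x), 50)
p_true = norm.cdf(x, loc=0.5, scale=1.2)
n_correct = rng.binomial(50, p_true)

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0],
               args=(x, n_correct, n_total), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
```

The Nelder-Mead simplex used here is one of the numeric maximum-likelihood techniques the paper mentions; the bias-reduction adjustment itself is not reproduced in this sketch.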
Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M
2018-05-07
A Bayesian model for sparse, hierarchical inverse covariance estimation is presented and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and of thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
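The Bayesian hierarchical model itself is not reproduced here, but the underlying task, estimating a precision (inverse covariance) matrix whose off-diagonal entries encode conditional dependence between regions, can be sketched with a simple shrinkage estimator (a stand-in for the Bayesian model, not the paper's method):

```python
import numpy as np

def shrinkage_precision(X, alpha=0.1):
    """Estimate a precision matrix by shrinking the sample covariance
    toward the identity before inverting; a simple, well-conditioned
    stand-in for the sparse Bayesian estimators discussed above."""
    S = np.cov(X, rowvar=False)
    S_shrunk = (1 - alpha) * S + alpha * np.eye(S.shape[0])
    return np.linalg.inv(S_shrunk)

# two truly connected "regions" plus one independent region
rng = np.random.default_rng(4)
a = rng.normal(size=500)
b = 0.8 * a + 0.6 * rng.normal(size=500)
c = rng.normal(size=500)
X = np.column_stack([a, b, c])

Theta = shrinkage_precision(X)
# the a-b connection shows up as a large off-diagonal entry |Theta[0, 1]|
```

The Bayesian priors in the paper play the role that `alpha` plays here, regularizing the estimate, but additionally share strength across subjects and can drive irrelevant entries exactly toward zero.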
Estimation of demand function on natural gas and study of demand analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Y.D. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)
1998-04-01
Demand functions for natural gas are estimated with several methods and the demand is analyzed by use. Since demand for natural gas, much of which is for heating, is closely related to temperature, inter-season price and income elasticities are estimated taking temperature and economic conditions into account. The response of natural gas demand to changes in price and income is also estimated per use. The response of gas demand to price and income changes was found to occur in the long run through changes in the number of users. As for unit consumption, only industrial use shows a long-term response to price. Since the gas price barely responds to changes in the exchange rate, the price-setting mechanism appears not to reflect import conditions such as the exchange rate in a timely manner. 16 refs., 12 figs., 13 tabs.
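Price and income elasticities of the kind discussed above are commonly estimated from a log-log demand equation, where the regression coefficients are the elasticities directly. A minimal sketch on simulated data (all values assumed, not from the study):

```python
import numpy as np

# synthetic annual data: gas demand with price elasticity -0.3 and income
# elasticity 0.9, i.e. ln Q = c + e_p * ln P + e_y * ln Y + noise
rng = np.random.default_rng(5)
lnP = rng.normal(0.0, 0.3, 120)
lnY = rng.normal(0.0, 0.3, 120)
lnQ = 1.0 - 0.3 * lnP + 0.9 * lnY + rng.normal(0.0, 0.05, 120)

# ordinary least squares: the slope coefficients are the elasticities
X = np.column_stack([np.ones_like(lnP), lnP, lnY])
coef, *_ = np.linalg.lstsq(X, lnQ, rcond=None)
const, price_elasticity, income_elasticity = coef
```

A temperature regressor of the kind the study uses for heating demand would enter as an additional column of `X` in the same way.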
Modulating functions method for parameters estimation in the fifth order KdV equation
Asiri, Sharefa M.
2017-07-25
In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.
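The core trick, transferring derivatives from the unknown signal onto a known modulating function via integration by parts, can be shown on a first-order toy problem; the fifth-order KdV case follows the same pattern with repeated integrations by parts, and this sketch is not the paper's system:

```python
import numpy as np

def trap(f, t):
    """Trapezoidal rule on a sampled grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# estimate theta in y'(t) = theta * y(t) without differentiating the data:
# for a modulating function phi with phi(0) = phi(T) = 0, integration by
# parts gives  -∫ phi' y dt = theta * ∫ phi y dt
T = 1.0
t = np.linspace(0.0, T, 1001)
theta_true = -2.0
y = np.exp(theta_true * t)                  # sampled "measurements"

phi = np.sin(np.pi * t / T) ** 2            # vanishes at both endpoints
dphi = (np.pi / T) * np.sin(2 * np.pi * t / T)   # known derivative of phi

theta_hat = -trap(dphi * y, t) / trap(phi * y, t)
```

Because only `phi` is differentiated, and `phi` is known analytically, the noisy data `y` never needs numerical differentiation; this is what makes the method robust to measurement noise.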
Comparing performance level estimation of safety functions in three distributed structures
International Nuclear Information System (INIS)
Hietikko, Marita; Malm, Timo; Saha, Heikki
2015-01-01
The capability of a machine control system to perform a safety function is expressed using performance levels (PL). This paper presents the results of a study in which PL estimation was carried out for a safety function implemented using three different distributed control system structures. Challenges relating to the process of estimating PLs for safety-related distributed machine control functions are highlighted. One of these concerns the use of different cabling schemes in the implementation of a safety function and their effect on the PL evaluation. The safety function used as a generic example in the PL calculations relates to a mobile work machine. It is a safety stop function in which different technologies (electrical, hydraulic and pneumatic) can be utilized. It was found that replacing analogue cables with digital communication makes the system structure simpler, with fewer failing components, which can improve the PL of the safety function. - Highlights: • Integration in distributed systems enables systems with fewer components. • It offers high reliability and diagnostic properties. • Analogue signals create uncertainty in signal reliability and complicate diagnostics
DEFF Research Database (Denmark)
Kirwan, L; Connolly, J; Finn, J A
2009-01-01
We develop a modeling framework that estimates the effects of species identity and diversity on ecosystem function and permits prediction of the diversity-function relationship across different types of community composition. Rather than just measure an overall effect of diversity, we separately... to the roles of evenness, functional groups, and functional redundancy. These more parsimonious descriptions can be especially useful in identifying general diversity-function relationships in communities with large numbers of species. We provide an example of the application of the modeling framework... These models describe community-level performance and thus do not require separate measurement of the performance of individual species. This flexible modeling approach can be tailored to test many hypotheses in biodiversity research and can suggest the interaction mechanisms that may be acting...
Land-use change and carbon sinks: Econometric estimation of the carbon sequestration supply function
Energy Technology Data Exchange (ETDEWEB)
Lubowski, Ruben N.; Plantinga, Andrew J.; Stavins, Robert N.
2001-01-01
Increased attention by policy makers to the threat of global climate change has brought with it considerable interest in the possibility of encouraging the expansion of forest area as a means of sequestering carbon dioxide. The marginal costs of carbon sequestration or, equivalently, the carbon sequestration supply function will determine the ultimate effects and desirability of policies aimed at enhancing carbon uptake. In particular, marginal sequestration costs are the critical statistic for identifying a cost-effective policy mix to mitigate net carbon dioxide emissions. We develop a framework for conducting an econometric analysis of land use for the forty-eight contiguous United States and employing it to estimate the carbon sequestration supply function. By estimating the opportunity costs of land on the basis of econometric evidence of landowners' actual behavior, we aim to circumvent many of the shortcomings of previous sequestration cost assessments. By conducting the first nationwide econometric estimation of sequestration costs, endogenizing prices for land-based commodities, and estimating land-use transition probabilities in a framework that explicitly considers the range of land-use alternatives, we hope to provide better estimates eventually of the true costs of large-scale carbon sequestration efforts. In this way, we seek to add to understanding of the costs and potential of this strategy for addressing the threat of global climate change.
International Nuclear Information System (INIS)
Mariya, Yasushi; Saito, Fumio; Kimura, Tamaki
1999-01-01
Cerebral function was serially assessed with electroencephalography (EEG) in 12 patients with brain tumors managed by radiotherapy, and the results were compared with tumor responses, analyzed by magnetic resonance imaging (MRI), and with clinical courses. After radiotherapy, EEG findings improved in 7 patients, were unchanged in 3, and worsened in 1. Clinical courses generally correlated with the serial changes in EEG findings and with tumor responses; in 3 patients, however, the clinical course was explained better by the EEG findings than by the tumor response. It is suggested that the combination of EEG and image analysis is clinically useful for a comprehensive estimation of radiotherapeutic effects. (author)
Effect of large weight reductions on measured and estimated kidney function
DEFF Research Database (Denmark)
von Scholten, Bernt Johan; Persson, Frederik; Svane, Maria S
2017-01-01
GFR (creatinine-based equations), whereas measured GFR (mGFR) and cystatin C-based eGFR would be unaffected if adjusted for body surface area. METHODS: Prospective, intervention study including 19 patients. All attended a baseline visit before gastric bypass surgery followed by a visit six months post-surgery. m...... for body surface area was unchanged. Estimates of GFR based on creatinine overestimate renal function likely due to changes in muscle mass, whereas cystatin C based estimates are unaffected. TRIAL REGISTRATION: ClinicalTrials.gov, NCT02138565 . Date of registration: March 24, 2014....
Estimated conditional score function for missing mechanism model with nonignorable nonresponse
Institute of Scientific and Technical Information of China (English)
CUI Xia; ZHOU Yong
2017-01-01
Missing-data mechanisms often depend on the values of the responses, which leads to nonignorable nonresponse. In such a situation, inference based on approaches that ignore the missing-data mechanism may not be valid. A crucial step is to model the nature of the missingness. We specify a parametric model for the missingness mechanism and then propose a conditional score function approach for estimation. This approach imputes the score function by taking the conditional expectation of the score function for the missing data given the available information. Inference then proceeds by replacing unknown terms with the related nonparametric estimators based on the observed data. The proposed score function does not suffer from the non-identifiability problem, and the proposed estimator is shown to be consistent and asymptotically normal. We also construct a confidence region for the parameter of interest using the empirical likelihood method. Simulation studies demonstrate that the proposed inference procedure performs well in many settings. We apply the proposed method to a data set from a growth hormone and exercise intervention study.
Zhan, Hanyu; Voelz, David G.
2016-12-01
The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.
Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions
Directory of Open Access Journals (Sweden)
uma srivastava
2012-01-01
The paper deals with estimating the shift point which occurs in a sequence of independent observations from a Poisson model in statistical process control. This shift point occurs in the sequence when m life data are observed. The Bayes estimators of the shift point m and of the process means before and after the shift are derived for symmetric and asymmetric loss functions under informative and non-informative priors. A sensitivity analysis of the Bayes estimators is carried out by simulation, and numerical comparisons are made using R programming. The results show the effectiveness of the shift in a sequence from the Poisson distribution.
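A minimal sketch of this kind of Bayesian shift-point estimation for a Poisson sequence, assuming a Gamma(1, 1) prior on both the pre- and post-shift means and a uniform prior on the shift point m (the specific loss functions and priors studied in the paper are not reproduced):

```python
import math

def shift_point_posterior(x, a=1.0, b=1.0):
    """Posterior over the shift point m (number of observations before the
    shift) for a Poisson sequence, with independent Gamma(a, b) priors on
    the pre- and post-shift means and a uniform prior on m."""
    n = len(x)

    def log_marginal(seg):
        # Gamma-Poisson marginal likelihood of one segment; the x_i! terms
        # cancel across candidate values of m, so they are dropped.
        s, k = sum(seg), len(seg)
        return (math.lgamma(a + s) - math.lgamma(a)
                + a * math.log(b) - (a + s) * math.log(b + k))

    logp = [log_marginal(x[:m]) + log_marginal(x[m:]) for m in range(1, n)]
    mx = max(logp)
    w = [math.exp(v - mx) for v in logp]
    z = sum(w)
    return {m: w[m - 1] / z for m in range(1, n)}

# Illustrative counts: mean around 1 for the first five, around 8 afterwards.
data = [1, 1, 2, 0, 1, 8, 9, 7, 10, 8]
post = shift_point_posterior(data)
m_hat = max(post, key=post.get)  # posterior mode
```

With a clear jump in the mean, the posterior mode recovers the true shift point m = 5.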
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor
2017-06-28
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.
International Nuclear Information System (INIS)
Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun
2009-01-01
Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
Murphy, K. A.
1990-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
Bayesian estimation of dynamic matching function for U-V analysis in Japan
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using Kalman filter and fixed interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
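The state-space idea above can be illustrated with a heavily simplified sketch: a scalar local-level Kalman filter that tracks one time-varying parameter (say, matching efficiency) through a random-walk state. The paper's full model with smoothness priors and fixed-interval smoothing is not reproduced; `q` and `r` below are assumed variances:

```python
def kalman_local_level(y, q, r, a0=0.0, p0=10.0):
    """Kalman filter for a local-level model: the state (a time-varying
    parameter) follows a random walk with variance q and is observed with
    noise variance r."""
    a, p = a0, p0
    filtered = []
    for obs in y:
        p = p + q                 # predict: random-walk state
        k = p / (p + r)           # Kalman gain
        a = a + k * (obs - a)     # update with the innovation
        p = (1.0 - k) * p
        filtered.append(a)
    return filtered

# A series whose underlying level shifts from 1 to 3 halfway through.
eff = kalman_local_level([1.0] * 20 + [3.0] * 20, q=0.01, r=0.25)
```

The filtered estimate settles near 1 before the break and drifts to the new level afterwards, which is the behaviour exploited when treating elasticities as random-walk parameters.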
Zhong, M.; Zhan, Z.
2017-12-01
Receiver functions (RF) estimated on dense arrays have been widely used for studies of Earth structures at different scales. However, there are still challenges in estimating and interpreting RF images due to the non-uniqueness of deconvolution, noise in the data, and a lack of uncertainty quantification. Here, we develop a dense-array-based RF method towards robust and high-resolution RF images. We cast RF images as the models in a sparsity-promoted inverse problem, in which waveforms from multiple events recorded by neighboring stations are jointly inverted. We use the Neighborhood Algorithm to find the optimal model (i.e., RF image) as well as an ensemble of models for further uncertainty quantification. Synthetic tests and application to the IRIS Community Wavefield Experiment in Oklahoma demonstrate that the new method is able to deal with challenging datasets, retrieve reliable high-resolution RF images, and provide realistic uncertainty estimates.
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Directory of Open Access Journals (Sweden)
Michael A. Guthrie
2013-01-01
A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
International Nuclear Information System (INIS)
Skrable, K.W.; Chabot, G.E.; French, C.S.; La Bone, T.R.
1988-01-01
This paper describes a way of obtaining intake retention functions and gives applications of them. These functions give the fraction of an intake of radioactive material expected to be present in a specified bioassay compartment at any time after a single acute exposure or after onset of a continuous exposure. The intake retention functions are derived from a multicompartmental model and a recursive catenary kinetics equation that completely describe the metabolism of radioelements from intake to excretion, accounting for the delay in uptake from compartments in the respiratory and gastrointestinal tracts and the recycling of radioelements between systemic compartments. This approach, which treats excretion as the 'last' compartment of all catenary metabolic pathways, avoids the use of convolution integrals and provides algebraic solutions that can be programmed on hand-held calculators or personal computers. The estimation of intakes and internal radiation doses and the use of intake retention functions in the design of bioassay programs are discussed, along with several examples.
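The catenary idea can be illustrated with the Bateman solution for a hypothetical two-compartment chain A → B → excretion, with excretion treated as the last compartment. The structure and rate constants here are illustrative only, not taken from any metabolic model in the paper:

```python
import math

def catenary_fractions(t, lam1, lam2):
    """Fractions of a unit acute intake in a two-compartment catenary chain
    A -> B -> excretion with first-order rate constants lam1, lam2
    (Bateman solution; lam1 != lam2 assumed). Returns (in A, in B, excreted);
    the three fractions sum to 1 at all times."""
    qa = math.exp(-lam1 * t)
    qb = lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    excreted = 1.0 - qa - qb          # excretion closes the mass balance
    return qa, qb, excreted
```

Evaluating this at the time of a bioassay measurement gives the intake retention fraction for that compartment, so a measured activity divided by it estimates the intake.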
International Nuclear Information System (INIS)
Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing
2012-01-01
In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
An Estimation of the Gamma-Ray Burst Afterglow Apparent Optical Brightness Distribution Function
Akerlof, Carl W.; Swan, Heather F.
2007-12-01
By using recent publicly available observational data obtained in conjunction with the NASA Swift gamma-ray burst (GRB) mission and a novel data analysis technique, we have been able to make some rough estimates of the GRB afterglow apparent optical brightness distribution function. The results suggest that 71% of all burst afterglows have optical magnitudes mR brighter than a limiting value, and give a strong indication that the apparent optical magnitude distribution function peaks at mR ~ 19.5. Such estimates may prove useful in guiding future plans to improve GRB counterpart observation programs. The employed numerical techniques might find application in a variety of other data analysis problems in which the intrinsic distributions must be inferred from a heterogeneous sample.
Estimation of functional failure probability of passive systems based on subset simulation method
International Nuclear Information System (INIS)
Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing
2012-01-01
In order to address the multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm, subset simulation based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. The probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
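The product-of-conditional-probabilities idea can be sketched on a toy problem: estimating P(X > 4) for standard normal X (true value ≈ 3.17e-5). Because the conditional distributions are known in closed form here, each level is sampled exactly through the inverse CDF; a real passive-system analysis would generate the conditional samples with Markov chain Monte Carlo, as the abstract describes:

```python
import random
from statistics import NormalDist

def subset_simulation_toy(levels, n=10000, seed=1):
    """Estimate the small probability P(X > levels[-1]) for X ~ N(0, 1) as a
    product of larger conditional probabilities over intermediate failure
    events X > b (the core idea of subset simulation)."""
    rng = random.Random(seed)
    nd = NormalDist()
    p, prev = 1.0, float("-inf")
    for b in levels:
        lo = nd.cdf(prev) if prev != float("-inf") else 0.0
        hits = 0
        for _ in range(n):
            # Exact draw from X | X > prev via the inverse CDF.
            x = nd.inv_cdf(lo + (1.0 - lo) * rng.random())
            if x > b:
                hits += 1
        p *= hits / n                  # conditional failure probability
        prev = b
    return p

p_fail = subset_simulation_toy([1.0, 2.0, 3.0, 4.0])
```

Each conditional probability is of order 0.02-0.16 and is therefore cheap to estimate, while a direct Monte Carlo estimate of 3e-5 would need millions of samples.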
Correlation Function Approach for Estimating Thermal Conductivity in Highly Porous Fibrous Materials
Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.
2011-01-01
Heat transport in highly porous fiber networks is analyzed via two-point correlation functions. Fibers are assumed to be long and thin to allow a large number of crossing points per fiber. The network is characterized by three parameters: the fiber aspect ratio, the porosity and the anisotropy of the structure. We show that the effective thermal conductivity of the system can be estimated from knowledge of the porosity and the correlation lengths of the correlation functions obtained from a fiber structure image. As an application, the effects of the fiber aspect ratio and the network anisotropy on the thermal conductivity are studied.
International Nuclear Information System (INIS)
Sotiropoulou, P; Koukou, V; Martini, N; Nikiforidis, G; Michail, C; Kandarakis, I; Fountos, G; Kounadi, E
2015-01-01
In this study an analytical approximation of dual-energy inverse functions is presented for the estimation of the calcium-to-phosphorus (Ca/P) mass ratio, which is a crucial parameter in bone health. Bone quality could be examined by the X-ray dual-energy method (XDEM), in terms of bone tissue material properties. Low- and high-energy log-intensity measurements were combined by using a nonlinear function to cancel out the soft tissue structures and generate the dual-energy bone Ca/P mass ratio. The dual-energy simulated data were obtained using variable Ca and PO4 thicknesses on a fixed total tissue thickness. The XDEM simulations were based on a bone phantom. Inverse fitting functions with least-squares estimation were used to obtain the fitting coefficients and to calculate the thickness of each material. The examined inverse mapping functions were linear, quadratic, and cubic. For every thickness, the nonlinear quadratic function provided the optimal fitting accuracy while requiring relatively few terms. The dual-energy method simulated in this work could be used to quantify the bone Ca/P mass ratio with photon-counting detectors. (paper)
Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study
Directory of Open Access Journals (Sweden)
Javier Eduardo Diaz Zamboni
2017-01-01
The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least square (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. Methods’ performance was evaluated under different conditions and noise levels using synthetic images and considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level and higher still to be close to the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference was found between noise sources for the same method for all methods studied.
On Estimation Of The Orientation Of Mobile Robots Using Turning Functions And SONAR Information
Directory of Open Access Journals (Sweden)
Dorel AIORDACHIOAIE
2003-12-01
SONAR systems are widely used by some artificial objects, e.g. robots, and by animals, e.g. bats, for navigation and pattern recognition. The objective of this paper is to present a solution for the estimation of the orientation of mobile robots in their environment, in the context of navigation, using the turning function approach. The results are shown to be accurate and can be used further in the design of navigation strategies of mobile robots.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B.
2006-01-01
In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cov...
Milanesi, P; Holderegger, R; Bollmann, K; Gugerli, F; Zellweger, F
2017-02-01
Estimating connectivity among fragmented habitat patches is crucial for evaluating the functionality of ecological networks. However, current estimates of landscape resistance to animal movement and dispersal lack landscape-level data on local habitat structure. Here, we used a landscape genetics approach to show that high-fidelity habitat structure maps derived from Light Detection and Ranging (LiDAR) data critically improve functional connectivity estimates compared to conventional land cover data. We related pairwise genetic distances of 128 Capercaillie (Tetrao urogallus) genotypes to least-cost path distances at multiple scales derived from land cover data. Resulting β values of linear mixed effects models ranged from 0.372 to 0.495, while those derived from LiDAR ranged from 0.558 to 0.758. The identification and conservation of functional ecological networks suffering from habitat fragmentation and homogenization will thus benefit from the growing availability of detailed and contiguous data on three-dimensional habitat structure and associated habitat quality. © 2016 by the Ecological Society of America.
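Least-cost path distances of the kind related to genetic distances above are commonly computed with Dijkstra's algorithm over a resistance grid. A minimal sketch, assuming 4-neighbour moves and a step cost equal to the mean resistance of the two cells it joins (one common convention; the study's actual resistance surfaces come from land cover or LiDAR data):

```python
import heapq

def least_cost_distance(resistance, start, goal):
    """Dijkstra least-cost path distance across a resistance grid, with each
    4-neighbour step costed as the mean resistance of the two cells."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + (resistance[r][c] + resistance[nr][nc]) / 2.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Hypothetical resistance map: the high middle row acts as a barrier,
# so the least-cost path detours around it.
grid = [[1, 1, 1],
        [8, 8, 1],
        [1, 1, 1]]
```

Pairwise distances computed this way across many individuals are what the linear mixed effects models then relate to genetic distances.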
A recursive Monte Carlo method for estimating importance functions in deep penetration problems
International Nuclear Information System (INIS)
Goldstein, M.
1980-04-01
A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with Sn results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems
Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology
Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat
2014-07-01
The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of the actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy inference system. The back-propagation learning algorithm is used for training this network. This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
Smith, Andrew; LaVerde, Bruce; Hunt, Ron; Fulcher, Clay; Towner, Robert; McDonald, Emmett
2012-01-01
The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specification to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs; the first set representing vibration response of a bare panel, designated as H(sup s), and the second set representing the response of the free-free component equipment by itself, designated as H(sup c). For a particular configuration undergoing analysis, the appropriate H(sup s) and H(sup c) are selected and coupled to generate an integrated TF, designated as H(sup s +c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H(sup s) and H(sup c) sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine H(sup s) and H(sup c) sets into an integrated H(sup s + c). An experimental validation of the approach is also presented.
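The tool's final step, multiplying transfer functions by input excitations, can be sketched bin by bin as G_out(f) = |H(f)|² G_in(f). This is an illustrative fragment only; the coupling of the bare-panel and component TF sets into an integrated TF is not reproduced:

```python
def response_psd(tf_magnitude, input_psd):
    """Response PSD from a transfer-function magnitude and an input
    excitation PSD, frequency bin by frequency bin: G_out = |H|^2 * G_in."""
    return [abs(h) ** 2 * g for h, g in zip(tf_magnitude, input_psd)]

# Hypothetical 3-bin example: flat input PSD, a TF with a resonant peak
# in the middle bin; the response PSD inherits the peak.
out = response_psd([1.0, 2.0, 0.5], [0.04, 0.04, 0.04])
```

The peak magnitude of 2 quadruples the input PSD in that bin, which is why resonances dominate vibration environment specifications derived this way.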
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
Liu, Fushun; Liu, Chengcheng; Chen, Jiefeng; Wang, Bin
2017-08-01
The key concept of spectrum response estimation with commercial software, such as the SESAM software tool, typically includes two main steps: finding a suitable loading spectrum and computing the response amplitude operators (RAOs) subjected to a frequency-specified wave component. In this paper, we propose a nontraditional spectrum response estimation method that uses a numerical representation of the retardation functions. Based on estimated added mass and damping matrices of the structure, we decompose and replace the convolution terms with a series of poles and corresponding residues in the Laplace domain. Then, we estimate the power density corresponding to each frequency component using the improved periodogram method. The advantage of this approach is that the frequency-dependent motion equations in the time domain can be transformed into the Laplace domain without requiring Laplace-domain expressions for the added mass and damping. To validate the proposed method, we use a numerical semi-submerged pontoon from the SESAM. The numerical results show that the responses of the proposed method match well with those obtained from the traditional method. Furthermore, the estimated spectrum also matches well, which indicates its potential application to deep-water floating structures.
Lee, Yu; Yu, Chanki; Lee, Sang Wook
2018-01-10
We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model. Nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.
Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima
2014-01-01
We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint
DEFF Research Database (Denmark)
Jepsen, Morten Løve; Dau, Torsten
To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker...... is transformed more linearly, the ratio between the slopes of growth of masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O-function between linear processing at low levels and compressive...... processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O-function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly...
Dosing of cytotoxic chemotherapy: impact of renal function estimates on dose.
Dooley, M J; Poole, S G; Rischin, D
2013-11-01
Oncology clinicians are now routinely provided with an estimated glomerular filtration rate on pathology reports whenever serum creatinine is requested. An assessment of its utility for dose determination of renally excreted drugs, compared with other existing methods, is needed to inform practice. Renal function was determined by [Tc(99m)]DTPA clearance in adult patients presenting for chemotherapy. Renal function was calculated using the 4-variable Modification of Diet in Renal Disease (4v-MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), Cockcroft and Gault (CG), Wright and Martin formulae. Doses for renally excreted cytotoxic drugs, including carboplatin, were calculated. The concordance of the renal function estimates according to the CKD classification with measured [Tc(99m)]DTPA clearance in 455 adults (median age 64.0 years; range 17-87 years) for the 4v-MDRD, CKD-EPI, CG, Martin and Wright formulae was 47.7%, 56.3%, 46.2%, 56.5% and 60.2%, respectively. Concordance for chemotherapy dose for these formulae was 89.0%, 89.5%, 85.1%, 89.9% and 89.9%, respectively. Concordance for carboplatin dose specifically was 66.4%, 71.4%, 64.0%, 73.8% and 73.2%. All bedside formulae provided similar levels of concordance in dosage selection for renally excreted chemotherapy drugs when compared with the use of a direct measure of renal function.
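Two of the formulae involved, the Cockcroft-Gault bedside estimate and the standard Calvert formula for carboplatin dosing, are simple enough to sketch directly. Units are assumed (years, kg, mg/dL, mL/min), and this is an illustration of the arithmetic only, not dosing guidance:

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Cockcroft-Gault creatinine clearance estimate (mL/min):
    CrCl = (140 - age) * weight / (72 * SCr), times 0.85 for females."""
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def calvert_carboplatin_dose(target_auc, gfr_ml_min):
    """Calvert formula: carboplatin dose (mg) = target AUC x (GFR + 25)."""
    return target_auc * (gfr_ml_min + 25.0)
```

For a 60-year-old, 72 kg male with serum creatinine 1.0 mg/dL, Cockcroft-Gault gives 80 mL/min, and a target AUC of 5 then yields a carboplatin dose of 525 mg; concordance studies such as this one ask how often such doses change when a different renal function estimate is substituted.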
International Nuclear Information System (INIS)
Hwang, Eui-Hyo
1999-01-01
The aim of this study is to assess the physiological implications of the estimated parameters and the clinical value of this analysis method for estimating hepatic functional reserve. After venous injection of 185 MBq of GSA, fifteen sequential sets of SPECT data were acquired over 15 minutes. The first 5 sets of SPECT images were analyzed by Patlak plot and hepatic GSA clearance was obtained in each matrix. The sum of hepatic GSA clearance over all matrices (total hepatic GSA clearance) was calculated as an index of whole-liver functional reserve. Total hepatic GSA clearance was compared with the receptor index or effective hepatic blood flow (EHBF) of the whole liver, analyzed by the Direct Integral Linear Least Square Regression (DILS) method, to assess the physiological implications of hepatic GSA clearance. The clinical value of total hepatic GSA clearance was assessed in comparison with conventional hepatic function tests. Very good correlations were observed between total hepatic GSA clearance and the receptor index, whereas the correlations between total hepatic GSA clearance and EHBF were not significant. Significant correlations were also observed between total hepatic GSA clearance and the conventional hepatic function tests, such as choline esterase, albumin, hepaplastin test, and ICG R15. (K.H.)
Functional soil microbial diversity across Europe estimated by EEA, MicroResp and BIOLOG
DEFF Research Database (Denmark)
Winding, Anne; Rutgers, Michiel; Creamer, Rachel
Soil microorganisms are abundant and essential for the bio-geochemical processes of soil, soil quality and soil ecosystem services. All this depends on the actual functions the microbial communities perform in the soil. Measuring soil respiration has for many years been the basis of estimating soil microbial activity. However, today several techniques are in use for determining microbial functional diversity and assessing soil biodiversity: methods based on CO2 development by the microbes, such as substrate induced respiration (SIR) on specific substrates, have led to the development ... The techniques were compared on a set consisting of 81 soil samples covering five Biogeographical Zones and three land-uses, in order to test the sensitivity, ease and cost of performance, and the biological significance of the data output. The techniques vary in how close they are to in situ functions and in their dependency on growth during incubation ...
Directory of Open Access Journals (Sweden)
Cheng Liu
2010-01-01
Full Text Available Time-varying coherence is a powerful tool for revealing functional dynamics between different regions in the brain. In this paper, we address ways of estimating evolutionary spectrum and coherence using the general Cohen's class distributions. We show that the intimate connection between the Cohen's class-based spectra and the evolutionary spectra defined on the locally stationary time series can be linked by the kernel functions of the Cohen's class distributions. The time-varying spectra and coherence are further generalized with the Stockwell transform, a multiscale time-frequency representation. The Stockwell measures can be studied in the framework of the Cohen's class distributions with a generalized frequency-dependent kernel function. A magnetoencephalography study using the Stockwell coherence reveals an interesting temporal interaction between contralateral and ipsilateral motor cortices under the multisource interference task.
Quantitative pre-surgical lung function estimation with SPECT/CT
International Nuclear Information System (INIS)
Bailey, D. L.; Willowson, K. P.; Timmins, S.; Harris, B. E.; Bailey, E. A.; Roach, P. J.
2009-01-01
Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. Methods: This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Eight patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO, and cardio-pulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung. Results: Post-operative FEV1 most closely matched that predicted by the planar quantification method, with SPECT V/Q over-estimating the loss of function by 8% (range −7% to +23%). However, post-operative DLCO and VO2 peak were both accurately predicted by SPECT V/Q (average error of 0% and 2%, respectively) compared with planar. Conclusions: More accurate anatomical definition of lobar anatomy provides better estimates of post-operative loss of function for DLCO and VO2 peak than traditional planar methods. SPECT/CT provides the tools for accurate anatomical definition of the surgical target as well as being useful in producing quantitative 3D functional images for ventilation and perfusion.
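The counts-based prediction described here follows the standard scaling of a pre-operative value by the fractional contribution of the resected lobes; the lobar counts below are hypothetical stand-ins for the SPECT ROI totals.

```python
def predicted_postop(preop_value, lobe_counts, resected):
    """Predict a post-operative functional value by scaling the
    pre-operative value by the fraction of counts that remain."""
    total = sum(lobe_counts.values())
    lost = sum(lobe_counts[lobe] for lobe in resected)
    return preop_value * (1.0 - lost / total)

# Hypothetical per-lobe SPECT counts (% of total perfusion or ventilation).
counts = {"RUL": 20, "RML": 10, "RLL": 25, "LUL": 22, "LLL": 23}
ppo_fev1 = predicted_postop(2.4, counts, ["RUL"])  # pre-op FEV1 of 2.4 L
```

The same scaling applies to DLCO and VO2 peak; the planar method differs only in how the regional fractions are derived (rectangular lung-third ROIs instead of CT-defined lobes).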
Bayesian switching factor analysis for estimating time-varying functional connectivity in fMRI.
Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod
2017-07-15
There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time-series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA) as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, brain dynamic functional networks are represented by latent states which are learnt from the data. Crucially, BSFA is a generative model which estimates the temporal evolution of brain states and transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks-three core neurocognitive systems with central role in cognitive and affective information processing (Menon, 2011). Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal
Lefort-Besnard, Jérémy; Bassett, Danielle S; Smallwood, Jonathan; Margulies, Daniel S; Derntl, Birgit; Gruber, Oliver; Aleman, Andre; Jardri, Renaud; Varoquaux, Gaël; Thirion, Bertrand; Eickhoff, Simon B; Bzdok, Danilo
2018-02-01
Schizophrenia is a devastating mental disease with an apparent disruption in the highly associative default mode network (DMN). Interplay between this canonical network and others probably contributes to goal-directed behavior so its disturbance is a candidate neural fingerprint underlying schizophrenia psychopathology. Previous research has reported both hyperconnectivity and hypoconnectivity within the DMN, and both increased and decreased DMN coupling with the multimodal saliency network (SN) and dorsal attention network (DAN). This study systematically revisited network disruption in patients with schizophrenia using data-derived network atlases and multivariate pattern-learning algorithms in a multisite dataset (n = 325). Resting-state fluctuations in unconstrained brain states were used to estimate functional connectivity, and local volume differences between individuals were used to estimate structural co-occurrence within and between the DMN, SN, and DAN. In brain structure and function, sparse inverse covariance estimates of network coupling were used to characterize healthy participants and patients with schizophrenia, and to identify statistically significant group differences. Evidence did not confirm that the backbone of the DMN was the primary driver of brain dysfunction in schizophrenia. Instead, functional and structural aberrations were frequently located outside of the DMN core, such as in the anterior temporoparietal junction and precuneus. Additionally, functional covariation analyses highlighted dysfunctional DMN-DAN coupling, while structural covariation results highlighted aberrant DMN-SN coupling. Our findings reframe the role of the DMN core and its relation to canonical networks in schizophrenia. We thus underline the importance of large-scale neural interactions as effective biomarkers and indicators of how to tailor psychiatric care to single patients. © 2017 Wiley Periodicals, Inc.
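Sparse inverse covariance estimation of the kind used in this study is available off the shelf; below is a minimal sketch with scikit-learn's GraphicalLasso on synthetic "regional" time series (the data, region count and alpha value are all assumptions for illustration, not the study's pipeline).

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Synthetic stand-in for fMRI fluctuations: 200 time points, 5 regions.
X = rng.standard_normal((200, 5))
X[:, 1] += 0.6 * X[:, 0]          # induce coupling between regions 0 and 1

model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_       # sparse inverse covariance (network coupling)
```

A nonzero off-diagonal precision entry indicates conditional dependence between two regions given all others; the L1 penalty (alpha) drives weak entries to zero, which is what makes group comparisons of coupling patterns tractable.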
International Nuclear Information System (INIS)
Jouvie, Camille
2013-01-01
Positron Emission Tomography (PET) is a method of functional imaging, used in particular for drug development and tumor imaging. In PET, estimation of the arterial plasma activity concentration of the non-metabolized compound (the 'input function') is necessary for the extraction of the pharmacokinetic parameters. These parameters enable quantification of the compound dynamics in the tissues. This PhD thesis contributes to the study of the input function through the development of a minimally invasive method to estimate it, using the PET image and a few blood samples. In this work, the example of the FDG tracer is chosen. The proposed method relies on compartmental modeling: it deconvolves the three-compartment model. The originality of the method consists in using a large number of regions of interest (ROIs), a large number of sets of three ROIs, and an iterative process. To validate the method, simulations of PET images of increasing complexity have been performed, from a simple image simulated with an analytic simulator to a complex image simulated with a Monte-Carlo simulator. After simulation of the acquisition, reconstruction and corrections, the images were segmented (through segmentation of an MRI image and registration between the PET and MRI images) and corrected for partial volume effect by a variant of Rousset's method, to obtain the kinetics in the ROIs, which are the input data of the estimation method. The evaluation of the method on simulated and real data is presented, as well as a study of the method's robustness to different error sources, for example in the segmentation, in the registration, or in the activity of the blood samples used. (author)
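A minimal forward sketch of the irreversible two-tissue (three-compartment) FDG model that such a method deconvolves, with hypothetical rate constants and an assumed mono-exponential input function; the thesis estimates the input function from tissue curves like these, i.e. it runs this relationship in reverse.

```python
import numpy as np

def fdg_tissue_tac(t, cp, K1, k2, k3, vb=0.0):
    """Forward-simulate the irreversible two-tissue FDG model by Euler
    integration: dC1/dt = K1*Cp - (k2+k3)*C1 ;  dC2/dt = k3*C1."""
    c1 = np.zeros_like(t)
    c2 = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        c1[i] = c1[i - 1] + dt * (K1 * cp[i - 1] - (k2 + k3) * c1[i - 1])
        c2[i] = c2[i - 1] + dt * (k3 * c1[i - 1])
    return (1 - vb) * (c1 + c2) + vb * cp   # measured tissue activity

t = np.linspace(0, 60, 3601)               # minutes
cp = 10.0 * np.exp(-0.1 * t)               # hypothetical input function
tac = fdg_tissue_tac(t, cp, K1=0.1, k2=0.15, k3=0.05)
```

Given many such tissue curves with different (unknown) rate constants, the shared input function cp is the common factor the estimation method tries to recover.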
Forester, James D; Im, Hae Kyung; Rathouz, Paul J
2009-12-01
Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP and field capacity (FC, is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate - PP and water activity meter - WAM and indirect estimation methods (pedotransfer functions - PTFs. The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices and more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
International Nuclear Information System (INIS)
Jorgensen, E.J.
1987-01-01
This study is an application of production-cost duality theory. Duality theory is reviewed for the competitive and rate-of-return regulated firm. The cost function is developed for the nuclear electric-power-generating industry of the United States using capital, fuel, and labor factor inputs. A comparison is made between the Generalized Box-Cox (GBC) and Fourier Flexible (FF) functional forms. The GBC functional form nests the Generalized Leontief, Generalized Square Root Quadratic and Translog functional forms, and is based upon a second-order Taylor-series expansion. The FF form follows from a Fourier-series expansion in sine and cosine terms using the Sobolev norm as the goodness-of-fit measure. The Sobolev norm takes into account first and second derivatives. The cost function and two factor shares are estimated as a system of equations using maximum-likelihood techniques, with Additive Standard Normal and Logistic Normal error distributions. In summary, none of the special cases of the GBC functional form is accepted. Homotheticity of the underlying production technology can be rejected for both GBC and FF forms, leaving only the unrestricted versions supported by the data. Residual analysis indicates a slight improvement in skewness and kurtosis for univariate and multivariate cases when the Logistic Normal distribution is used.
Directory of Open Access Journals (Sweden)
Delaram Houshmand Kouchi
2017-05-01
Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each could be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters while having the same performance; this makes the interpretation of dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results, however with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters and the outputs that they produce in a calibrated model are “conditioned” on the choices of the optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
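Three of the objective functions listed (NSE, KGE, PBIAS) have standard closed forms, sketched below with toy observed/simulated series; note that the sign convention for PBIAS differs between implementations.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation, variability and bias ratios."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def pbias(obs, sim):
    """Percent bias; positive here means underestimation (conventions vary)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # toy monthly discharges
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.1])   # toy model output
scores = nse(obs, sim), kge(obs, sim), pbias(obs, sim)
```

Because each metric weights errors differently (squared errors, moment ratios, or mass balance), optimizing different ones steers the calibration toward different parameter regions, which is exactly the equifinality problem the paper examines.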
Estimation of gas and tissue lung volumes by MRI: functional approach of lung imaging.
Qanadli, S D; Orvoen-Frija, E; Lacombe, P; Di Paola, R; Bittoun, J; Frija, G
1999-01-01
The purpose of this work was to assess the accuracy of MRI for the determination of lung gas and tissue volumes. Fifteen healthy subjects underwent MRI of the thorax and pulmonary function tests [vital capacity (VC) and total lung capacity (TLC)] in the supine position. MR examinations were performed at inspiration and expiration. Lung volumes were measured by a previously validated technique on phantoms. Both individual and total lung volumes and capacities were calculated. MRI total vital capacity (VC(MRI)) was compared with spirometric vital capacity (VC(SP)). Capacities were correlated to lung volumes. Tissue volume (V(T)) was estimated as the difference between the total lung volume at full inspiration and the TLC. No significant difference was seen between VC(MRI) and VC(SP). Individual capacities were well correlated (r = 0.9) to static volume at full inspiration. The V(T) was estimated to be 836+/-393 ml. This preliminary study demonstrates that MRI can accurately estimate lung gas and tissue volumes. The proposed approach appears well suited for functional imaging of the lung.
Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.
Gutiérrez, David; Ramírez-Moreno, Mauricio A
2016-04-01
We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated with the progression of the learning experience through the functional analysis-of-variance (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Hence, our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures the high engagement of the volunteer in the training. Based on this, a series of statistical tests is performed in order to determine the personalized frequencies and sensors at which changes in PSD occur; the FANOVA estimates are then computed and analyzed. Our experimental results showed a significant decrease in the power of [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and such decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated with feature binding and memory encoding.
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. When dealing with censored data, interpolation is one important approach. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation interval. In order to solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest-neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, greatly improving the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method to compare the performance of clinical treatments based on estimates from the survival data of the patients, offering some help to medical survival data analysis.
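As a baseline for the right-censored case this paper builds on, the standard Kaplan-Meier product-limit estimator can be sketched as follows (the data are toy values; the paper's interval-censored estimator is more involved).

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of S(t) for right-censored data.
    events[i] = 1 if the i-th time is an observed event, 0 if censored."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):   # distinct event times
        at_risk = np.sum(times >= t)          # subjects still at risk at t
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

# Toy data: events at t=2, 3, 5; censored observations at t=3 and t=8.
km = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

Censored observations leave the risk set without forcing a drop in S(t), which is exactly the information that naive exact-value imputation destroys.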
See food diet? Cultural differences in estimating fullness and intake as a function of plate size.
Peng, Mei; Adam, Sarah; Hautus, Michael J; Shin, Myoungju; Duizer, Lisa M; Yan, Huiquan
2017-10-01
Previous research has suggested that manipulations of plate size can have a direct impact on perception of food intake, measured by estimated fullness and intake. The present study, involving 570 individuals across Canada, China, Korea, and New Zealand, is the first empirical study to investigate cultural influences on perception of food portion as a function of plate size. The respondents viewed photographs of ten culturally diverse dishes presented on large (27 cm) and small (23 cm) plates, and then rated their estimated usual intake and expected fullness after consuming the dish, using 100-point visual analog scales. The data were analysed with a mixed-model ANCOVA controlling for individual BMI, liking and familiarity of the presented food. The results showed clear cultural differences: (1) manipulations of the plate size had no effect on the expected fullness or the estimated intake of the Chinese and Korean respondents, as opposed to significant effects in Canadians and New Zealanders (p Asian respondents. Overall, these findings, from a cultural perspective, support the notion that estimation of fullness and intake are learned through dining experiences, and highlight the importance of considering eating environments and contexts when assessing individual behaviours relating to food intake. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal replacement time estimation for machines and equipment based on cost function
Directory of Open Access Journals (Sweden)
J. Šebo
2013-01-01
Full Text Available The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines considered, for which the optimization method is usable, come from metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). Parameters of the models were calculated through the least squares method. Model testing shows that all are good enough, so for estimating the optimal replacement time it is sufficient to use the simpler models. In addition to testing the models, we developed a method (tested on a selected simple model) which enables us, in actual real time (with a limited data set), to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
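A minimal sketch of the classic economic-life calculation behind such cost-function models: assuming a hypothetical purchase price and linearly growing maintenance cost (a stand-in for the article's fitted models), the optimal replacement age minimizes the average cost per unit time.

```python
def average_cost(t, purchase_price, maint_rate):
    """Average cost per year when replacing at age t, assuming a linearly
    growing maintenance cost m(tau) = maint_rate * tau (hypothetical model)."""
    cumulative_maintenance = maint_rate * t * t / 2.0
    return (purchase_price + cumulative_maintenance) / t

# Grid search for the replacement age t* that minimizes average cost.
candidates = [x / 10.0 for x in range(1, 301)]   # 0.1 .. 30.0 years
costs = [average_cost(t, purchase_price=10000.0, maint_rate=500.0)
         for t in candidates]
t_star = candidates[costs.index(min(costs))]
```

For this cost model the analytic optimum is t* = sqrt(2P/a) ≈ 6.32 years, which the 0.1-year grid search approximates as 6.3.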
Estimation of the POD function and the LOD of a qualitative microbiological measurement method.
Wilrich, Cordula; Wilrich, Peter-Theodor
2009-01-01
Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
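With the complementary log-log model's slope fixed to 1 (an assumption for illustration; the paper estimates both intercept and slope), POD reduces to 1 − exp(−λc) and LOD(p) can be inverted in closed form.

```python
import math

def pod(c, lam):
    """Probability of detection at contamination c (CFU/g) under the
    complementary log-log model with slope fixed to 1: POD = 1 - exp(-lam*c)."""
    return 1.0 - math.exp(-lam * c)

def lod(p, lam):
    """Contamination detected with probability p: invert the POD curve."""
    return -math.log(1.0 - p) / lam

lam = 0.5            # hypothetical sensitivity parameter
lod50 = lod(0.5, lam)   # equals ln(2)/lam for the slope-1 model
```

With a freely estimated slope b, the same inversion is applied on the log-contamination scale; the slope-1 special case corresponds to ideal single-hit detection of Poisson-distributed organisms.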
Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.
2017-11-01
Many occasions related to batteries demand to know how much continuous and instantaneous power can batteries provide such as the rapidly developing electric vehicles. As the large-scale applications of lithium-ion batteries, lithium-ion batteries are used to be our research object. Many experiments are designed to get the lithium-ion battery parameters to ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery called state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on battery state-of-charge(SOC), state-of-health(SOH) and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems
Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad
2015-02-01
As we know, many biological systems such as neurons or the heart can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model on the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
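The proposed cost can be sketched with scikit-learn's GaussianMixture: fit a GMM to points from the observed attractor, then use the negated mean log-likelihood of a candidate trajectory as the cost (all data here are synthetic stand-ins, not the neuron or pacemaker models).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in for points sampled from a system attractor in 2-D state space.
attractor = np.concatenate([rng.normal(0, 0.3, (300, 2)),
                            rng.normal(3, 0.3, (300, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(attractor)

# Likelihood-based cost: trajectories resembling the attractor score a
# higher mean log-likelihood, so negating it yields a lower cost.
near = rng.normal(0, 0.3, (100, 2))    # resembles one attractor lobe
far = rng.normal(10, 0.3, (100, 2))    # far from the attractor
cost_near = -gmm.score(near)
cost_far = -gmm.score(far)
```

Because the cost depends only on where trajectory points fall in state space, not on their time alignment, it sidesteps the sensitivity to initial conditions that defeats pointwise error costs on chaotic systems.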
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon the new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
ESTIMATING THE PRODUCTION FUNCTION IN THE CASE OF ROMANIA: METHODOLOGY AND RESULTS
Directory of Open Access Journals (Sweden)
Simuț Ramona Marinela
2015-07-01
Full Text Available The problem of economic growth is a headline concern among economists, mathematicians and politicians because of the major impact of economic growth on the entire population of a country, which has made achieving or maintaining a sustained growth rate the major objective of the macroeconomic policy of any country. Thus, in order to identify present sources of economic growth for Romania, in our study we used a Cobb-Douglas type production function. The basic variables of this model are the labour factor, the capital stock, and the part of economic growth determined by technical progress, i.e. the Solow residual or total factor productivity. To estimate this production function for Romania, we used quarterly statistical data from the first quarter of 2000 to the fourth quarter of 2014; the source of the data was Eurostat. The Cobb-Douglas production function with the variables labour and capital is valid in Romania's case because the parameters of the exogenous variables are significantly different from zero. This model became valid after we eliminated the autocorrelation of errors; removing the autocorrelation of errors does not alter the structure of the production function. The adjusted R2 determination coefficient, as well as the α and β coefficients, have values close to those from the first estimated equation. The regression of the GDP is characterized by decreasing marginal efficiency of the capital stock (α < 1) and decreasing efficiency of labour (β < 1). In our case the sum of the α and β coefficients is below 1 (it is 0.75), as in the case of the second model (0.89), which corresponds to decreasing returns of the production function. Concerning the working population of Romania, it registered a growing trend starting with 2000 until 2005, a period that coincided with sustained economic growth.
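The estimation itself is a log-linear least-squares fit. The sketch below uses synthetic data generated from a known Cobb-Douglas technology (hypothetical values, not the Romanian series) to show that OLS on logs recovers ln A, α and β.

```python
import numpy as np

K = np.array([50., 55., 62., 70., 78., 88.])   # hypothetical capital stock
L = np.array([20., 21., 22., 23., 24., 25.])   # hypothetical labour input
Y = 2.0 * K**0.4 * L**0.6                      # synthetic Cobb-Douglas output

# Taking logs linearises the model: ln Y = ln A + alpha*ln K + beta*ln L,
# so ordinary least squares on the logged series estimates all three.
X = np.column_stack([np.ones_like(K), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
lnA, alpha, beta = coef
```

On real data the residuals of this regression would also be checked for autocorrelation, as the study describes, before interpreting α + β as the returns to scale.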
International Nuclear Information System (INIS)
Stenmark, Matthew H.; Cao, Yue; Wang, Hesheng; Jackson, Andrew; Ben-Josef, Edgar; Ten Haken, Randall K.; Lawrence, Theodore S.; Feng, Mary
2014-01-01
Purpose: To estimate the limit of functional liver reserve for safe application of hepatic irradiation using changes in indocyanine green, an established assay of liver function. Materials and methods: From 2005 to 2011, 60 patients undergoing hepatic irradiation were enrolled in a prospective study assessing the plasma retention fraction of indocyanine green at 15 min (ICG-R15) prior to, during (at 60% of planned dose), and after radiotherapy (RT). The limit of functional liver reserve was estimated from the damage fraction of functional liver (DFL) post-RT, [1 − (ICG-R15(pre-RT) / ICG-R15(post-RT))], where no toxicity was observed, using a beta distribution function. Results: Of 48 evaluable patients, 3 (6%) developed RILD, all within 2.5 months of completing RT. The mean ICG-R15 for non-RILD patients pre-RT, during RT and 1 month post-RT was 20.3% (SE 2.6), 22.0% (3.0), and 27.5% (2.8), and for RILD patients was 6.3% (4.3), 10.8% (2.7), and 47.6% (8.8). RILD was observed at post-RT damage fractions of ⩾78%. Both DFL assessed by during-RT ICG and MLD predicted DFL post-RT (p < 0.0001). Limiting the post-RT DFL to 50% predicted a 99% probability of a true complication rate <15%. Conclusion: The DFL as assessed by changes in ICG during treatment serves as an early indicator of a patient’s tolerance to hepatic irradiation
Estimation of the pulmonary input function in dynamic whole body PET
International Nuclear Information System (INIS)
Ho-Shon, K.; Buchen, P.; Meikle, S.R.; Fulham, M.J.; University of Sydney, Sydney, NSW
1998-01-01
Dynamic data acquisition in Whole Body PET (WB-PET) has the potential to measure the metabolic rate of glucose (MRGlc) in tissue in vivo. Estimation of changes in tumoral MRGlc may be a valuable tool in cancer care by providing a quantitative index of response to treatment. A necessary requirement is an input function (IF), which can be obtained from arterial, 'arterialised' venous or, in the case of lung tumours, pulmonary arterial blood. Our aim was to extract the pulmonary input function from dynamic WB-PET data using Principal Component Analysis (PCA), Factor Analysis (FA) and Maximum Entropy (ME), for the evaluation of patients undergoing induction chemotherapy for non-small cell lung cancer. PCA is first used for dimension reduction, yielding a signal space defined by an optimal metric and a set of vectors. FA is then used together with an ME constraint to rotate these vectors into 'physiological' factors. A form of entropy function that does not require normalised data was used; this enabled the introduction of a penalty function based on the blood concentration at the last time point, which provides an additional constraint. Tissue functions from 10 planes through normal lung were simulated. The model was a linear combination of an IF and a tissue time-activity curve (TAC). The proportion of IF to TAC was varied over the planes to simulate the apical-to-basal gradient in lung vascularity, and pseudo-Poisson noise was added. The method accurately extracted the IF at noise levels spanning the expected range for dynamic ROI data acquired with the interplane septa extended. Our method is minimally invasive because it requires only one late venous blood sample, and it is applicable to a wide range of tracers since it does not assume a particular compartmental model. Pilot data from 2 patients have been collected, enabling comparison of the estimated IF with direct blood sampling from the pulmonary artery.
International Nuclear Information System (INIS)
Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana
2013-01-01
The quality of electricity distribution is increasingly scrutinized by regulatory authorities, with explicit reward and penalty schemes based on quality targets having been introduced in many countries. It is then of prime importance to know the cost of improving quality for a distribution system operator. In this paper, we focus on one dimension of quality, the continuity of supply, and estimate the cost of preventing power outages. For that, we make use of the parametric distance function approach, assuming that outages enter the firm's production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Electricité de France - Réseau Distribution) in the 2003–2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is equal to 10.7 € for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to be more expensive as quality itself improves. - Highlights: ► We estimate the implicit cost of outages for the main distribution company in France. ► For this purpose, we make use of a parametric distance function approach. ► Marginal quality improvements tend to be more expensive as quality itself improves. ► The cost of preventing one interruption varies from 1.8 € to 69.2 € (2005 prices). ► We estimate that, on average, it lies 33% above the regulated price of quality.
Grid occupancy estimation for environment perception based on belief functions and PCR6
Moras, Julien; Dezert, Jean; Pannetier, Benjamin
2015-05-01
In this contribution, we propose to improve the grid map occupancy estimation method developed so far, based on belief function modeling and the classical Dempster's rule of combination. The grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the area surrounding the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then classified into different classes in order to get a complete and precise perception of the dynamic environment in which the robot moves. So far, estimation and grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework through an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of available information is low and the sources of information appear conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination with the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in Dezert-Smarandache Theory (DSmT). As an illustrating scenario, we consider a platform moving in a dynamic area and compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and classical belief-based approaches.
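A minimal sketch of the conflict redistribution idea on a two-element occupancy frame {occupied, empty} with ignorance; for two sources, PCR6 coincides with PCR5, so each partial conflict m1(X)·m2(Y) (X and Y disjoint) is redistributed to X and Y proportionally to m1(X) and m2(Y). The mass values are illustrative, not taken from the paper's LIDAR scenario:

```python
def pcr6_two_sources(m1, m2):
    """Combine two basic belief assignments on frame {O (occupied), E (empty)}
    with ignorance T = {O, E}, using PCR6 (= PCR5 for two sources)."""
    # Conjunctive consensus (ignorance T acts as the neutral element)
    m = {
        "O": m1["O"]*m2["O"] + m1["O"]*m2["T"] + m1["T"]*m2["O"],
        "E": m1["E"]*m2["E"] + m1["E"]*m2["T"] + m1["T"]*m2["E"],
        "T": m1["T"]*m2["T"],
    }
    # Redistribute the two partial conflicts (O vs E) proportionally to the
    # masses that produced them, instead of normalizing as Dempster's rule does
    for x, y in (("O", "E"), ("E", "O")):
        a, b = m1[x], m2[y]
        if a + b > 0:
            m[x] += a * a * b / (a + b)
            m[y] += a * b * b / (a + b)
    return m

m1 = {"O": 0.6, "E": 0.1, "T": 0.3}   # e.g. evidence from one inverse sensor model
m2 = {"O": 0.2, "E": 0.5, "T": 0.3}   # conflicting evidence from the next scan
fused = pcr6_two_sources(m1, m2)
```

Because the conflicting mass is redistributed rather than discarded, the fused masses still sum to one while avoiding the counter-intuitive behaviour of Dempster's rule under high conflict.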
Cook, Ellyn J.; van der Kaars, Sander
2006-10-01
We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania, using the existing southeastern Australian pollen database, and for northern Australia, using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter temperature and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indices are the strongest. The validation of all functions means that all can be applied with confidence to Quaternary pollen records from these three areas.
Volume-assisted estimation of liver function based on Gd-EOB-DTPA-enhanced MR relaxometry
Energy Technology Data Exchange (ETDEWEB)
Haimerl, Michael; Schlabeck, Mona; Verloh, Niklas; Fellner, Claudia; Stroszczynski, Christian; Wiggermann, Philipp [University Hospital Regensburg, Department of Radiology, Regensburg (Germany); Zeman, Florian [University Hospital Regensburg, Center for Clinical Trials, Regensburg (Germany); Nickel, Dominik [MR Applications Development, Siemens AG, Healthcare Sector, Erlangen (Germany); Barreiros, Ana Paula [University Hospital Regensburg, Department of Internal Medicine I, Regensburg (Germany); Loss, Martin [University Hospital Regensburg, Department of Surgery, Regensburg (Germany)
2016-04-15
To determine whether liver function as determined by indocyanine green (ICG) clearance can be estimated quantitatively from hepatic magnetic resonance (MR) relaxometry with gadoxetic acid (Gd-EOB-DTPA). One hundred and seven patients underwent an ICG clearance test and Gd-EOB-DTPA-enhanced MRI, including MR relaxometry at 3 Tesla. A transverse 3D VIBE sequence with an inline T1 calculation was acquired prior to and 20 minutes after Gd-EOB-DTPA administration. The reduction rate of the T1 relaxation time (rrT1) between pre- and post-contrast images and the liver-volume-assisted index of the T1 reduction rate (LVrrT1) were evaluated. The plasma disappearance rate of ICG (ICG-PDR) was correlated with the liver volume (LV), rrT1 and LVrrT1, providing an MRI-based estimated ICG-PDR value (ICG-PDRest). A simple linear regression model showed a significant correlation of ICG-PDR with LV (r = 0.32; p = 0.001), T1post (r = 0.65; p < 0.001) and rrT1 (r = 0.86; p < 0.001). Assessment of LV and subsequent evaluation of a multiple linear regression model revealed a stronger correlation of ICG-PDR with LVrrT1 (r = 0.92; p < 0.001), allowing the calculation of ICG-PDRest. Liver function as determined using ICG-PDR can thus be estimated quantitatively from Gd-EOB-DTPA-enhanced MR relaxometry, and volume-assisted MR relaxometry correlates more strongly with liver function than MR relaxometry alone.
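The rrT1 index and the volume-assisted linear model can be sketched as follows. The abstract reports only correlations, not the fitted equation, so the liver volumes, rrT1 values, and regression coefficients below are synthetic placeholders:

```python
import numpy as np

def rrT1(t1_pre, t1_post):
    """Reduction rate of the T1 relaxation time between pre- and post-contrast images."""
    return (t1_pre - t1_post) / t1_pre

# Synthetic cohort: liver volume (mL) and rrT1 per patient (placeholders)
rng = np.random.default_rng(1)
lv = rng.uniform(1000, 2000, 40)
rr = rng.uniform(0.3, 0.8, 40)
lv_rr = lv * rr                                     # volume-assisted index LVrrT1

# Synthetic "measured" ICG-PDR with an assumed linear dependence on LVrrT1
icg_pdr = 2.0 + 0.015 * lv_rr + rng.normal(0, 1.5, 40)

# Fit the linear calibration ICG-PDR_est = b0 + b1 * LVrrT1 by least squares
X = np.column_stack([np.ones_like(lv_rr), lv_rr])
(b0, b1), *_ = np.linalg.lstsq(X, icg_pdr, rcond=None)
icg_pdr_est = b0 + b1 * lv_rr                       # MRI-based estimate
```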
Saupe, Florian; Knoblach, Andreas
2015-02-01
Two different approaches for the determination of frequency response functions (FRFs) are used for the non-parametric closed loop identification of a flexible joint industrial manipulator with serial kinematics. The two applied experiment designs are based on low power multisine and high power chirp excitations. The main challenge is to eliminate disturbances of the FRF estimates caused by the numerous nonlinearities of the robot. For the experiment design based on chirp excitations, a simple iterative procedure is proposed which allows exploiting the good crest factor of chirp signals in a closed loop setup. An interesting synergy of the two approaches, beyond validation purposes, is pointed out.
DEFF Research Database (Denmark)
Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas
2012-01-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding overfitting. The training data span the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles the compromise between the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW). Validation against independent data sets supports the applicability of BEEF-vdW to studies in chemistry and condensed matter physics, as do applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems.
Hahor, Waraporn; Thongprajukaew, Karun; Yoonram, Krueawan; Rodjaroen, Somrak
2016-11-01
Postmortem changes have been previously studied in some terrestrial animal models, but no prior information is available for aquatic species. Gastrointestinal functionality was investigated in terms of indices, protein concentration, digestive enzyme activity, and scavenging activity in an aquatic animal model, the Nile tilapia, to assess postmortem changes. Dead fish were floated indoors, and samples were collected within 48 h after death. The stomasomatic index decreased with postmortem time and correlated positively with protein, pepsin-specific activity, and stomach scavenging activity. The intestosomatic index also decreased significantly and correlated positively with protein, the specific activities of trypsin, chymotrypsin, amylase and lipase, and intestinal scavenging activity. Among the postmortem changes, the digestive enzymes exhibited earlier degradation of lipid digestion than of carbohydrate or protein digestion, and the intestine changed more rapidly than the stomach. The findings suggest that postmortem changes in gastrointestinal functionality can serve as primary data for estimating the time of death of an aquatic animal. © 2016 American Academy of Forensic Sciences.
Estimation of leaf area index in the sunflower as a function of thermal time
Directory of Open Access Journals (Sweden)
Dioneia Daiane Pitol Lucas
The aim of this study was to obtain a mathematical model for estimating the leaf area index (LAI) of a sunflower crop as a function of accumulated thermal time. The models were generated and their coefficients tested using data from experiments carried out on different sowing dates in the crop years 2007/08, 2008/09, 2009/10 and 2010/11 with two sunflower hybrids, Aguará 03 and Hélio 358. Linear leaf dimensions were used for the non-destructive measurement of leaf area, and thermal time was used to quantify biological time. With the accumulated thermal time (TTa) and LAI known for any day after emergence, mathematical models were generated for estimating the LAI. The following models presented the best fit (lowest root-mean-square error, RMSE): Gaussian peak, cubic polynomial, sigmoidal, and an adjusted compound model, the modified sigmoidal. The modified sigmoidal model had the best fit to the generation data and the highest coefficient of determination (R²). In testing the models, the lowest root-mean-square error and the highest R² between observed and estimated values were obtained with the modified sigmoidal model.
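A minimal sketch of fitting a sigmoidal LAI-versus-thermal-time model by minimising RMSE, the paper's model-selection criterion. The exact form of the paper's "modified sigmoidal" is not given in the abstract, so a plain logistic curve and synthetic observations are used here:

```python
import numpy as np

def lai_sigmoid(tt, lai_max, a, b):
    """Sigmoidal LAI response to accumulated thermal time TTa (degree-days)."""
    return lai_max / (1.0 + np.exp(-(tt - a) / b))

# Synthetic "observed" LAI curve (the paper fitted field data for two hybrids)
rng = np.random.default_rng(2)
tt = np.linspace(0, 1500, 40)                      # accumulated thermal time
obs = lai_sigmoid(tt, 3.5, 700.0, 120.0) + rng.normal(0, 0.05, tt.size)

# Coarse grid search minimising RMSE over the three parameters
best = (np.inf, None)
for lai_max in np.linspace(2.5, 4.5, 21):
    for a in np.linspace(500, 900, 21):
        for b in np.linspace(60, 200, 15):
            rmse = np.sqrt(np.mean((obs - lai_sigmoid(tt, lai_max, a, b))**2))
            if rmse < best[0]:
                best = (rmse, (lai_max, a, b))
rmse, (lai_max_hat, a_hat, b_hat) = best
```

In practice a nonlinear least-squares routine would replace the grid search; the grid keeps the sketch dependency-free.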
Directory of Open Access Journals (Sweden)
Enric Vilar
Residual kidney function (RKF) is associated with survival benefits in haemodialysis (HD) but is difficult to measure without urine collection. Middle molecules such as cystatin C and β2-microglobulin accumulate in renal disease, and plasma levels have been used to estimate kidney function early in this condition. We investigated their use to estimate RKF in patients on HD. Cystatin C, β2-microglobulin, urea and creatinine levels were studied in patients on incremental high-flux HD or hemodiafiltration (HDF). Over sequential HD sessions, blood was sampled pre- and post-session 1 and pre-session 2 for estimation of these parameters. Urine was collected during the whole interdialytic interval for estimation of residual GFR (GFRResidual = mean of urea and creatinine clearance). The relationships of plasma cystatin C and β2-microglobulin levels to GFRResidual and urea clearance were determined. Of the 341 patients studied, 64% had urine output >100 ml/day; 32.6% were on high-flux HD and 67.4% on HDF. The parameters most closely correlated with GFRResidual were 1/β2-microglobulin (r² = 0.67) and 1/cystatin C (r² = 0.50). Both relationships were weaker at low GFRResidual. The best regression model for GFRResidual, explaining 67% of the variation, was GFRResidual = 160.3 · (1/β2m) − 4.2, where β2m is the pre-dialysis β2-microglobulin concentration (mg/L). This model was validated in a separate cohort of 50 patients using Bland-Altman analysis. The area under the curve in receiver operating characteristic analysis aimed at identifying subjects with urea clearance ≥2 ml/min/1.73 m² was 0.91 for β2-microglobulin and 0.86 for cystatin C. A plasma β2-microglobulin cut-off of ≤19.2 mg/L identified patients with urea clearance ≥2 ml/min/1.73 m² with 90% specificity and 65% sensitivity. Plasma pre-dialysis β2-microglobulin levels can provide estimates of RKF which may have clinical utility and appear superior to cystatin C. Use of cut-off levels
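The reported regression and cut-off translate directly into code (units as in the text; the estimate is only as valid as the underlying model, which the authors note weakens at low residual GFR):

```python
def gfr_residual_from_b2m(b2m_pre_mg_per_l):
    """Residual GFR (mL/min/1.73 m2) from pre-dialysis beta2-microglobulin (mg/L),
    using the regression reported in the study: GFR = 160.3 * (1/b2m) - 4.2."""
    return 160.3 / b2m_pre_mg_per_l - 4.2

def likely_urea_clearance_ge_2(b2m_pre_mg_per_l, cutoff=19.2):
    """Reported cut-off: b2m <= 19.2 mg/L flags urea clearance >= 2 mL/min/1.73 m2
    with 90% specificity and 65% sensitivity."""
    return b2m_pre_mg_per_l <= cutoff
```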
Sisson, James B.; van Genuchten, Martinus Th.
1991-04-01
The unsaturated hydraulic properties are important parameters in any quantitative description of water and solute transport in partially saturated soils. Currently, most in situ methods for estimating the unsaturated hydraulic conductivity (K) are based on analyses that require estimates of the soil water flux and the pressure head gradient. These analyses typically involve differencing of field-measured pressure head (h) and volumetric water content (θ) data, a process that can significantly amplify instrumental and measurement errors. More reliable methods result when differencing of field data can be avoided. One such method is based on estimates of the gravity drainage curve K'(θ) = dK/dθ which may be computed from observations of θ and/or h during the drainage phase of infiltration drainage experiments assuming unit gradient hydraulic conditions. The purpose of this study was to compare estimates of the unsaturated soil hydraulic functions on the basis of different combinations of field data θ, h, K, and K'. Five different data sets were used for the analysis: (1) θ-h, (2) K-θ, (3) K'-θ (4) K-θ-h, and (5) K'-θ-h. The analysis was applied to previously published data for the Norfolk, Troup, and Bethany soils. The K-θ-h and K'-θ-h data sets consistently produced nearly identical estimates of the hydraulic functions. The K-θ and K'-θ data also resulted in similar curves, although results in this case were less consistent than those produced by the K-θ-h and K'-θ-h data sets. We conclude from this study that differencing of field data can be avoided and hence that there is no need to calculate soil water fluxes and pressure head gradients from inherently noisy field-measured θ and h data. The gravity drainage analysis also provides results over a much broader range of hydraulic conductivity values than is possible with the more standard instantaneous profile analysis, especially when augmented with independently measured soil water retention data.
International Nuclear Information System (INIS)
Ali Akkemik, K.
2009-01-01
The Turkish electricity sector has undergone significant institutional changes since 1984. The developments since 2001, including the establishment of a regulatory agency for the sector and the increasing participation of private investors in electricity generation, are of special interest. This paper estimates cost functions and investigates the degree of scale economies, overinvestment, and technological progress in the Turkish electricity generation sector for the period 1984-2006, using long-run and short-run translog cost functions. Estimates were obtained for six groups of firms, public and private. The results indicate the existence of scale economies throughout the period of analysis, and hence declining long-run average costs. The paper finds empirical support for the Averch-Johnson effect until 2001, i.e., firms overinvested in an environment with excess returns to capital, but this effect was largely reduced after 2002. Technological progress deteriorated slightly from 1984-1993 to 1994-2001 but improved after 2002. Overall, the paper finds that regulation of the market under the newly established regulatory agency after 2002 was effective and that there are potential gains from such regulation.
Directory of Open Access Journals (Sweden)
Betsie le Roux
2016-10-01
Water footprint (WF) accounting as proposed by the Water Footprint Network (WFN) can potentially provide important information for water resource management, especially in water-scarce countries relying on irrigation to help meet their food requirements. However, calculating accurate WFs of short-season vegetable crops such as carrots, cabbage, beetroot, broccoli and lettuce presented some challenges. Planting dates and inter-annual weather conditions affect WF results. Joining weather datasets containing just rainfall and minimum and maximum temperature with ones that also include solar radiation and wind speed affected crop model estimates and WF results. The functional unit selected can also have a major impact on results. For example, WFs according to the WFN approach do not account for crop residues used for other purposes, like composting and animal feed. Using yields in dry matter rather than fresh mass also affects WF metrics, making comparisons difficult. To overcome this, using the nutritional value of crops as a functional unit can connect water use more directly to the potential benefits derived from different crops and allow more straightforward comparisons. Grey WFs based on nitrogen alone disregard water pollution caused by phosphates, pesticides and salinization. Poor understanding of the fate of nitrogen complicates estimation of nitrogen loads into the aquifer.
Complex mode indication function and its applications to spatial domain parameter estimation
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The CMIF is computed by performing a singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line: it is defined as the eigenvalues, which are the squares of the singular values, of the normal matrix [H(jω)]^H [H(jω)] formed from the FRF matrix at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system: it identifies modes by showing the physical magnitude of each mode and the damped natural frequency of each root. Since multiple-reference data are used in the CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since the CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
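The CMIF computation can be sketched with a batched SVD; the two-mode, two-input/two-output FRF matrix below is synthetic, built from rank-one modal residues:

```python
import numpy as np

def cmif(frf):
    """Complex Mode Indication Function.
    frf: array of shape (n_freq, n_outputs, n_inputs) holding H(jw) per spectral line.
    Returns an (n_freq, min(n_outputs, n_inputs)) array of squared singular values,
    i.e. the eigenvalues of H(jw)^H H(jw) at each spectral line."""
    s = np.linalg.svd(frf, compute_uv=False)    # batched SVD over spectral lines
    return s**2

# Synthetic 2x2 FRF: sum of two lightly damped modes with rank-one residues
w = np.linspace(0.1, 10, 500)
modes = [(2.0, 0.02), (6.0, 0.02)]              # (natural frequency, damping ratio)
rng = np.random.default_rng(3)
H = np.zeros((w.size, 2, 2), dtype=complex)
for wn, z in modes:
    residue = rng.normal(size=(2, 1)) @ rng.normal(size=(1, 2))   # rank-1 per mode
    H += residue[None, :, :] / (wn**2 - w[:, None, None]**2
                                + 2j * z * wn * w[:, None, None])
curves = cmif(H)   # peaks of curves[:, 0] indicate the modes
```

Plotting the columns of `curves` against `w` reproduces the familiar CMIF picture: one curve per reference, with peaks at the damped natural frequencies.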
Directory of Open Access Journals (Sweden)
Patrick McNamara
2010-01-01
Results. Patients' estimates of their own social functioning were not significantly different from examiners' estimates. Analysis of the impact of clinical variables on social functioning in PD revealed depression to be the strongest correlate of social functioning on both the patient and the examiner version of the Social Adaptation Self-Evaluation Scale. Conclusions. PD patients appear to be well aware of their social strengths and weaknesses. Depression and motor symptom severity are significant predictors of both self- and examiner-reported social functioning in patients with PD. Assessment and treatment of depression in patients with PD may improve social functioning and overall quality of life.
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted ways. To reduce the computational cost, data streams are often studied through condensed representations, e.g., the probability density function (PDF). This thesis aims at developing an online density estimator, KDE-Track, that characterizes the dynamic density of a data stream. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (smaller error values) and more computationally efficient (shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. The first application in this thesis is visualizing the taxi traffic volume in New York City: KDE-Track allows visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is detecting outliers in data streams from sensor networks based on the estimated PDF; the method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data.
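The resampling-plus-interpolation idea behind KDE-Track can be illustrated with a static Gaussian KDE (the online updating and adaptive resampling are omitted; the grid, bandwidth and data are illustrative):

```python
import numpy as np

def kde_on_grid(samples, grid, bandwidth):
    """Gaussian kernel density estimate evaluated at a fixed set of resampling points."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (samples.size * bandwidth
                                              * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
stream = rng.normal(0.0, 1.0, 5000)        # one "window" of an arriving stream
grid = np.linspace(-4, 4, 81)              # resampling points
pdf_grid = kde_on_grid(stream, grid, bandwidth=0.3)

# Density at an arbitrary query point via linear interpolation between grid values
def pdf_at(x):
    return np.interp(x, grid, pdf_grid)
```

KDE-Track's contribution is to update `pdf_grid` incrementally as new points arrive and to place more resampling points where the PDF curves sharply; the interpolation step is the same.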
Kerimov, M. K.
2018-01-01
This paper is the fourth in a series of survey articles concerning zeros of Bessel functions and methods for their computation. Various inequalities, estimates, expansions, etc. for positive zeros are analyzed, and some results are described in detail with proofs.
Wang, Christina Hao; Rubinsky, Anna D; Minichiello, Tracy; Shlipak, Michael G; Price, Erika Leemann
2018-05-31
Current practice in anticoagulation dosing relies on kidney function estimated from serum creatinine using the Cockcroft-Gault equation. However, creatinine can be unreliable in patients with low or high muscle mass. Cystatin C provides an alternative estimate of glomerular filtration rate (eGFR) that is independent of muscle. We compared cystatin C-based eGFR (eGFRcys) with multiple creatinine-based estimates of kidney function in hospitalized patients receiving anticoagulants, to assess for discordant results that could impact medication dosing. Retrospective chart review of hospitalized patients over 1 year who received non-vitamin K antagonist anticoagulation and who had same-day measurements of cystatin C and creatinine. Seventy-five inpatient veterans (median age 68) at the San Francisco VA Medical Center (SFVAMC). We compared the median difference between eGFR by the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation using cystatin C (eGFRcys) and eGFRs from three creatinine-based equations: CKD-EPI (eGFREPI), Modification of Diet in Renal Disease (eGFRMDRD), and Cockcroft-Gault (eGFRCG). We categorized patients into standard KDIGO kidney stages and into drug-dosing categories based on each creatinine equation, and calculated the proportions of patients reclassified across these categories based on cystatin C. Cystatin C predicted an overall lower eGFR than the creatinine-based equations, with a median difference of −7.1 (IQR −17.2, 2.6) mL/min/1.73 m² versus eGFREPI, −21.2 (IQR −43.7, −8.1) mL/min/1.73 m² versus eGFRMDRD, and −25.9 (IQR −46.8, −8.7) mL/min/1.73 m² versus eGFRCG. Thirty-one to 52% of patients were reclassified into lower drug-dosing categories using cystatin C compared to creatinine-based estimates. We found substantial discordance in eGFR comparing cystatin C with creatinine in this group of anticoagulated inpatients. Our sample size was limited and included few women.
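For reference, the Cockcroft-Gault estimate that anticoagulant drug labels rely on is straightforward to compute (the CKD-EPI creatinine and cystatin C equations involve additional piecewise terms and coefficients and are not reproduced here):

```python
def cockcroft_gault_crcl(age_years, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance in mL/min:
    CrCl = (140 - age) * weight / (72 * serum creatinine), x 0.85 if female."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# A 68-year-old (the cohort's median age), 80 kg man with serum creatinine 1.0 mg/dL
crcl = cockcroft_gault_crcl(68, 80, 1.0)   # 80.0 mL/min
```

The study's point is that in patients with atypical muscle mass this creatinine-based number can sit a full dosing category above the cystatin C-based estimate.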
Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo
2015-12-01
Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem
2016-01-01
In this paper, a modulating functions-based method is proposed for estimating space- and time-dependent unknowns in one-dimensional partial differential equations. The proposed method reduces the problem to a linear system of algebraic equations.
Directory of Open Access Journals (Sweden)
Rafael de Ávila Rodrigues
2012-01-01
In recent years, crop models have increasingly been used to simulate agricultural features. The DSSAT (Decision Support System for Agrotechnology Transfer) is an important tool in growth modeling; however, one of its limitations is that it does not account for the effect of diseases. Therefore, the goals of this study were to calibrate and validate the CSM CROPGRO-Soybean model for the soybean cultivars M-SOY 6101 and MG/BR 46 (Conquista) and to analyze the performance and the effect of Asian soybean rust on these cultivars under the environmental conditions of Viçosa, Minas Gerais, Brazil. The experimental data for the evaluation, testing and adjustment of the genetic coefficients of the cultivars M-SOY 6101 and MG/BR 46 (Conquista) were obtained during the 2006/2007, 2007/2008 and 2009/2010 growing seasons. GLUE (Generalized Likelihood Uncertainty Estimation) was used to estimate the genetic coefficients, and pedotransfer functions were used to estimate the physical characteristics of the soil. For all sowing dates, the early-season cultivar M-SOY 6101 exhibited a lower variance in yield, representing more stability with regard to interannual climate variability; i.e., in 50% of the crop years analyzed, farmers using this cultivar obtained a higher yield than with a late-season cultivar. The MG/BR 46 (Conquista) cultivar showed a greater probability of higher yield in years with favorable weather conditions; however, in the presence of Asian soybean rust, its yield is heavily affected. The early cultivar M-SOY 6101 showed a lower risk of being affected by the rust and consequently less yield loss under scenario D90 (condensation on the leaf surface occurs when the relative humidity is greater than or equal to 90%) for a sowing date of November 14.
International Nuclear Information System (INIS)
AGUIAR, Pablo; RUIBAL, Álvaro; CORTÉS, Julia; PÉREZ-FENTES, Daniel; GARCÍA, Camilo; GARRIDO, Miguel
2016-01-01
The aim of this study was to develop a method for estimating DMSA SPECT renal function in each renal pole, in order to evaluate the effect of percutaneous nephrolithotripsy (PCNL) by focusing the measurements on the region through which the percutaneous approach is performed. Twenty patients undergoing PCNL between November 2010 and June 2012 were included in this study. Both planar and SPECT DMSA studies were carried out before and after nephrolithotripsy. The effect of PCNL was evaluated by estimating the total renal function and the regional renal function of each renal pole. Although PCNL has previously been reported to be a minimally invasive technique, our results showed that regional renal function decreased in the treated pole in most patients, affecting total renal function in a few of them. A quantification method was used to estimate the SPECT DMSA renal function of the upper, interpolar and lower renal poles. Our results confirmed that total renal function was preserved after nephrolithotripsy. Nevertheless, the proposed method showed that the regional renal function of the treated pole decreased in most patients (15 of 20), allowing us to find differences in patients who had shown no changes in the total renal function obtained from conventional quantification methods. In conclusion, a method for estimating SPECT DMSA renal function focused on the treated pole enabled us to show, for the first time, that nephrolithotripsy can lead to renal parenchymal damage restricted to the treated pole.
Directory of Open Access Journals (Sweden)
Meyer Karin
2001-11-01
Full Text Available Abstract A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood via an "average information" algorithm is outlined. An application to mature weight records of beef cows is given, and the results are contrasted with those from analyses fitting sets of random regression coefficients for permanent environmental effects.
Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce
2014-01-01
Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...
Tøndel, Camilla; Ramaswami, Uma; Aakre, Kristin Moberg; Wijburg, Frits; Bouwman, Machtelt; Svarstad, Einar
2010-01-01
Studies on renal function in children with Fabry disease have mainly been done using estimated creatinine-based glomerular filtration rate (GFR). The aim of this study was to compare estimated creatinine-based GFR (eGFR) with measured GFR (mGFR) in children with Fabry disease and normal renal
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
International Nuclear Information System (INIS)
Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko; Toyama, Hinako; Ishii, Kenji; Senda, Michio
2001-01-01
The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial positron emission tomography (PET) and magnetic resonance imaging (MRI) images were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between the 2D projection images of MRI and of cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET, and the sulci were extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulci on the 2D-projection MRI, and cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding sulci on the 2D-projection PET image; the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young group. The mean value and the variance of the cerebral blood flow in the young group were greater than those of the aged group. The sulci were extracted similarly using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorders of the cerebral cortex. (author)
Smooth semi-nonparametric (SNP) estimation of the cumulative incidence function.
Duc, Anh Nguyen; Wolbers, Marcel
2017-08-15
This paper presents a novel approach to estimation of the cumulative incidence function in the presence of competing risks. The underlying statistical model is specified via a mixture factorization of the joint distribution of the event type and the time to the event. The time-to-event distributions conditional on the event type are modeled using smooth semi-nonparametric densities. One strength of this approach is that it can handle arbitrary censoring and truncation while relying on mild parametric assumptions. A stepwise forward algorithm for model estimation and adaptive selection of smooth semi-nonparametric polynomial degrees is presented, implemented in the statistical software R, evaluated in a sequence of simulation studies, and applied to data from a clinical trial in cryptococcal meningitis. The simulations demonstrate that the proposed method frequently outperforms both parametric and nonparametric alternatives. They also support the use of 'ad hoc' asymptotic inference to derive confidence intervals. An extension to regression modeling is also presented, and its potential and challenges are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
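As a point of reference for the nonparametric alternative mentioned in the abstract, the standard Aalen-Johansen estimator of the cumulative incidence function can be sketched in a few lines. This is an illustrative comparator with made-up data, not the authors' semi-nonparametric method:

```python
def cumulative_incidence(times, events, cause=1):
    """Nonparametric (Aalen-Johansen) CIF for one cause.
    events: 0 = censored; 1, 2, ... = competing event types."""
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0       # overall Kaplan-Meier survival just before t
    cif = 0.0
    at_risk = n
    i = 0
    curve = []
    while i < n:
        t = data[i][0]
        d_cause = d_any = censored = 0
        while i < n and data[i][0] == t:   # group ties at time t
            if data[i][1] == 0:
                censored += 1
            else:
                d_any += 1
                if data[i][1] == cause:
                    d_cause += 1
            i += 1
        if at_risk > 0:
            cif += surv * d_cause / at_risk
            surv *= 1.0 - d_any / at_risk
        at_risk -= d_any + censored
        curve.append((t, cif))
    return curve

# toy data: two events of cause 1, one of cause 2, one censoring
curve = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], cause=1)
```

On this toy sample the cause-1 incidence rises to 0.5 by the last event time; the SNP approach of the paper replaces the step function with smooth conditional densities.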
International Nuclear Information System (INIS)
Azadeh, A.; Saberi, M.; Ghaderi, S.F.; Gitiforouz, A.; Ebrahimipour, V.
2008-01-01
This study presents an integrated fuzzy-system, data-mining and time-series framework to estimate and predict electricity demand under seasonal and monthly changes in electricity consumption, especially in developing countries such as China and Iran with non-stationary data. It is difficult to model the uncertain behavior of energy consumption with a conventional fuzzy system or time series alone, and the integrated algorithm is an ideal substitute in such cases. Constructing a fuzzy system requires a rule base. Because no rule base is available for the demand function, a look-up table, one of the rule-extraction methods, is used to build it; this system is called FLT. A decision-tree method, a data-mining approach, is likewise used to extract a rule base; this system is called FDM. The preferred time-series model is selected from linear (ARMA) and nonlinear candidates: after the preferred ARMA model is selected, the McLeod-Li test is applied to check for nonlinearity. If the nonlinearity condition is satisfied, the preferred nonlinear model is selected, compared with the preferred ARMA model, and one of the two is chosen as the time-series model. Finally, ANOVA is used to select the preferred model from among the fuzzy models and the time-series model. The algorithm also considers the impact of data preprocessing and postprocessing on fuzzy-system performance. In addition, another unique feature of the proposed algorithm is the use of the autocorrelation function (ACF) to define input variables, whereas conventional methods rely on trial and error. Monthly electricity consumption of Iran from 1995 to 2005 is used as the case study. The MAPE of a genetic algorithm (GA) and an artificial neural network (ANN) versus the proposed algorithm shows the appropriateness of the proposed algorithm.
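The ACF-based input selection mentioned above can be illustrated with a minimal sample autocorrelation function; lags whose correlation magnitude exceeds some cutoff become candidate model inputs. The series and cutoff here are made up for illustration:

```python
def acf(x, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    n = len(x)
    m = sum(x) / n
    denom = sum((v - m) ** 2 for v in x)
    out = []
    for k in range(1, max_lag + 1):
        num = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
        out.append(num / denom)
    return out

series = [1.0, -1.0] * 6          # strongly alternating toy series
r = acf(series, 2)
selected_lags = [k + 1 for k, v in enumerate(r) if abs(v) > 0.5]
```

For the alternating toy series both lags are strongly correlated (negatively at lag 1, positively at lag 2), so both would be selected as inputs.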
Tetegan, Marion; Richer de Forges, Anne; Verbeque, Bernard; Nicoullaud, Bernard; Desbourdes, Caroline; Bouthier, Alain; Arrouays, Dominique
2015-01-01
Estimation of the water retention capacity of a heterogeneous soil requires knowledge of the hydric properties of each soil phase. Nevertheless, for stony soils, the rock fragments have often been neglected. The objective of this work was therefore to propose a methodology to improve the calculation of the available water content (AWC) of stony soils at a regional scale. Over a 36,200 ha area of Beauce, in the Région Centre of France, the AWC was calculated by coupling pedotransfer cl...
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series through instantaneous frequency and instantaneous amplitude. It thus provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function is estimated in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data-processing method based on the Fourier transform. All results show that the apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined by simple Fourier analysis. The proposed method overcomes the drawbacks of traditional Fourier methods, and the resulting estimates minimise the bias caused by the non-stationary characteristics of the MT data.
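The instantaneous-frequency idea at the core of the HHT approach can be illustrated for a pure tone whose quadrature component is known analytically. In real MT processing the quadrature would come from the Hilbert transform of the measured signal; this is only a conceptual sketch:

```python
import math

def instantaneous_freq(i_part, q_part, dt):
    """Instantaneous frequency from an analytic signal (in-phase, quadrature):
    the unwrapped phase derivative divided by 2*pi."""
    phase = [math.atan2(q, i) for i, q in zip(i_part, q_part)]
    unwrapped = [phase[0]]
    for p in phase[1:]:
        d = p - unwrapped[-1]
        while d > math.pi:        # unwrap jumps larger than pi
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        unwrapped.append(unwrapped[-1] + d)
    return [(unwrapped[k + 1] - unwrapped[k]) / (2 * math.pi * dt)
            for k in range(len(unwrapped) - 1)]

f, dt = 5.0, 0.001                       # 5 Hz tone, 1 kHz sampling
t = [k * dt for k in range(200)]
i_part = [math.cos(2 * math.pi * f * tk) for tk in t]
q_part = [math.sin(2 * math.pi * f * tk) for tk in t]
freqs = instantaneous_freq(i_part, q_part, dt)
```

For this stationary tone every instantaneous-frequency sample recovers 5 Hz; for a non-stationary signal the same quantity traces the frequency drift over time, which is what the HHT spectrum exploits.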
Motion estimation for cardiac functional analysis using two x-ray computed tomography scans.
Fung, George S K; Ciuffo, Luisa; Ashikaga, Hiroshi; Taguchi, Katsuyuki
2017-09-01
This work concerns computed tomography (CT)-based cardiac functional analysis (CFA) with a reduced radiation dose. As CT-CFA requires images over the entire heartbeat, the scans are often performed at 10-20% of the tube current settings typically used for coronary CT angiography. The resulting large image noise degrades the accuracy of motion estimation. Moreover, even if the scan is performed during sinus rhythm, the cardiac motion observed in CT images may not be cyclic in patients with atrial fibrillation. In this study, we propose to use two CT scans, one for CT angiography at a quiescent phase at a standard dose and the other for CFA over the entire heartbeat at a lower dose. We made the following four modifications to an image-based cardiac motion estimation method we previously developed for full-dose retrospectively gated coronary CT angiography: (a) a full-dose prospectively gated coronary CT angiography image acquired at the least-motion phase was used as the reference image; (b) a three-dimensional median filter was applied to lower-dose retrospectively gated cardiac images acquired at 20 phases over one heartbeat in order to reduce image noise; (c) the strength of the temporal regularization term was made adaptive; and (d) a one-dimensional temporal filter was applied to the estimated motion vector field in order to reduce jagged motion patterns. We describe the conventional method iME1 and the proposed method iME2 in this article. Five observers assessed the accuracy of the estimated motion vector fields of iME2 and iME1 using a 4-point scale. The observers repeated the assessment with the data presented in a new random order 1 week after the first session. The study confirmed that the proposed iME2 was robust against mismatches in noise levels, contrast enhancement levels, and chamber shapes. There was a statistically significant difference between iME2 and iME1 (accuracy score, 2.08 ± 0.81 versus 2.77
[Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].
Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie
At present, there are no accurate, quantitative methods for determining cardiac mechanical synchrony, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses a whole-heart ultrasound image sequence and segments the left and right atria and left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area-change curves of the four cavities are then extracted, yielding the synchrony information of the four cavities. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to constrain the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly from the area changes. The synchrony of the four cardiac cavities is estimated from the area changes and the volume changes.
International Nuclear Information System (INIS)
Larsson, I.; Lindstedt, E.; Ohlin, P.; Strand, S.E.; White, T.
1975-01-01
A scintillation camera technique was used for measuring renal uptake of [131I]Hippuran 80-110 s after injection. Externally measured Hippuran uptake was markedly influenced by kidney depth, which was measured on a lateral-view image after injection of [99Tc]iron ascorbic acid complex or [197Hg]chlormerodrine. When one kidney was nearer to the dorsal surface of the body than the other, it was necessary to correct the externally measured Hippuran uptake for kidney depth to obtain reliable information on the true partition of Hippuran between the two kidneys. In some patients the glomerular filtration rate (GFR) was measured before and after nephrectomy. Measured postoperative GFR was compared with preoperative predicted GFR, which was calculated by multiplying the preoperative Hippuran uptake of the kidney to be left in situ, as a fraction of the preoperative Hippuran uptake of both kidneys, by the measured preoperative GFR. The measured postoperative GFR was usually moderately higher than the preoperatively predicted GFR. The difference could be explained by a postoperative compensatory increase in function of the remaining kidney. Thus, the present method offers a possibility of estimating separate kidney function without arterial or ureteric catheterization. (auth)
Directory of Open Access Journals (Sweden)
Luis Eduardo Akiyoshi Sanches Suzuki
2008-06-01
Full Text Available Knowledge of the relationships between physical and mechanical soil properties can contribute to the development of pedotransfer functions that allow the estimation of soil properties that are difficult to measure. The objective of this study was to evaluate the relationship of susceptibility to compaction and load support capacity with physical properties of soils from southern Brazil. Penetration resistance, moisture, bulk density, and compressibility of six soils were evaluated. Penetration resistance could be estimated by a model that considers soil moisture and bulk density. Soils with higher initial bulk density were less susceptible to compaction and deformed less when subjected to external pressures. The greater the soil penetration resistance, the smaller the deformation and the greater the load support capacity, although this does not indicate soils with physical quality adequate for crops; the greater the soil deformation, the greater the susceptibility to compaction and the lower the load support capacity. The susceptibility of a soil to compaction and its load support capacity can be estimated, respectively, by the initial bulk density and by the soil penetration resistance.
Directory of Open Access Journals (Sweden)
Chris Bambey Guure
2012-01-01
Full Text Available The Weibull distribution is regarded as one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Numerous studies in the literature have sought to determine the best method for estimating its parameters. Recently, much attention has been given to the Bayesian approach to parameter estimation, which competes with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and of Bayesian estimators using an extension of the Jeffreys prior with three loss functions, namely the linear exponential loss, the general entropy loss, and the squared error loss, for estimating the two-parameter Weibull failure-time distribution. These methods are compared by mean squared error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of the Jeffreys prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter α and the shape parameter β for the given values of the extension of the Jeffreys prior.
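The maximum-likelihood side of such a comparison can be sketched as follows: the Weibull shape equation is solved by bisection and the scale follows in closed form. This is a minimal illustration on simulated data; the Bayesian estimators under the extended Jeffreys prior need numerical integration and are omitted:

```python
import math
import random

def weibull_mle(x, lo=0.05, hi=50.0, iters=80):
    """Maximum-likelihood shape and scale for the two-parameter Weibull.
    The shape k solves sum(x^k ln x)/sum(x^k) - 1/k = mean(ln x), which is
    monotone increasing in k, so bisection converges."""
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)

    def g(k):
        xk = [v ** k for v in x]
        return sum(w * l for w, l in zip(xk, logs)) / sum(xk) - 1.0 / k - mean_log

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    shape = 0.5 * (lo + hi)
    scale = (sum(v ** shape for v in x) / len(x)) ** (1.0 / shape)
    return shape, scale

random.seed(1)
true_shape, true_scale = 1.5, 2.0
# inverse-transform sampling: X = scale * (-ln U)^(1/shape), U in (0, 1]
sample = [true_scale * (-math.log(1.0 - random.random())) ** (1.0 / true_shape)
          for _ in range(5000)]
shape_hat, scale_hat = weibull_mle(sample)
```

With 5000 observations both estimates land close to the true values; repeating this over many replications and sample sizes gives the mean-squared-error comparison described in the abstract.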
Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.
2014-01-01
In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between the ages of 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory…
Energy Technology Data Exchange (ETDEWEB)
Ramirez-Guinart, Oriol; Rigol, Anna; Vidal, Miquel [Analytical Chemistry department, Faculty of Chemistry, University of Barcelona, Mart i Franques 1-11, 08028, Barcelona (Spain)
2014-07-01
In the frame of the revision of the IAEA TRS 364 (Handbook of parameter values for the prediction of radionuclide transfer in temperate environments), a database of radionuclide solid-liquid distribution coefficients (K{sub d}) in soils was compiled with data coming from field and laboratory experiments, from references mostly from 1990 onwards, including data from reports, reviewed papers, and grey literature. The K{sub d} values were grouped for each radionuclide according to two criteria. The first criterion was based on the sand and clay mineral percentages referred to the mineral matter, and the organic matter (OM) content in the soil. This defined the 'texture/OM' criterion. The second criterion was to group soils regarding specific soil factors governing the radionuclide-soil interaction (the 'cofactor' criterion). The cofactors depended on the radionuclide considered. An advantage of using cofactors was that the variability of K{sub d} ranges for a given soil group decreased considerably compared with that observed when the classification was based solely on sand, clay and organic matter contents. The K{sub d} best estimates were defined as the calculated GM values, assuming that K{sub d} values were always log-normally distributed. Risk assessment models may require as input data for a given parameter either a single value (a best estimate) or a continuous function from which not only individual best estimates but also confidence ranges and data variability can be derived. In the case of the K{sub d} parameter, a suitable continuous function containing the statistical parameters (e.g. arithmetic/geometric mean, arithmetic/geometric standard deviation, mode, etc.) that best describe the distribution of the K{sub d} values of a dataset is the Cumulative Distribution Function (CDF). To our knowledge, appropriate CDFs have not yet been proposed for radionuclide K{sub d} in soils. Therefore, the aim of this work is to create CDFs for
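Under the log-normal assumption stated above, the best estimate (the geometric mean) and a CDF follow directly from the log-transformed data. A minimal sketch with hypothetical Kd values (the actual TRS 364 datasets are much larger):

```python
import math

def lognormal_summary(kd_values):
    """Geometric mean (GM) and geometric standard deviation (GSD)."""
    logs = [math.log(v) for v in kd_values]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / (len(logs) - 1)
    return math.exp(mu), math.exp(math.sqrt(var))

def lognormal_cdf(x, gm, gsd):
    """CDF of a log-normal distribution parameterised by GM and GSD."""
    z = (math.log(x) - math.log(gm)) / (math.sqrt(2) * math.log(gsd))
    return 0.5 * (1.0 + math.erf(z))

kd = [12.0, 45.0, 150.0, 30.0, 80.0]    # hypothetical Kd values (L/kg)
gm, gsd = lognormal_summary(kd)
p_at_gm = lognormal_cdf(gm, gm, gsd)    # by construction 0.5 at the GM
```

The CDF then yields any desired percentile (e.g. a 95% confidence range) for use as risk-assessment input, which is exactly the role the abstract assigns to it.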
Multiscale Bayesian neural networks for soil water content estimation
Jana, Raghavendra B.; Mohanty, Binayak P.; Springer, Everett P.
2008-08-01
Artificial neural networks (ANN) have been used for some time now to estimate soil hydraulic parameters from other available or more easily measurable soil properties. However, most such uses of ANNs as pedotransfer functions (PTFs) have been at matching spatial scales (1:1) of inputs and outputs. This approach assumes that the outputs are only required at the same scale as the input data. Unfortunately, this is rarely true. Different hydrologic, hydroclimatic, and contaminant transport models require soil hydraulic parameter data at different spatial scales, depending upon their grid sizes. While conventional (deterministic) ANNs have been traditionally used in these studies, the use of Bayesian training of ANNs is a more recent development. In this paper, we develop a Bayesian framework to derive soil water retention function including its uncertainty at the point or local scale using PTFs trained with coarser-scale Soil Survey Geographic (SSURGO)-based soil data. The approach includes an ANN trained with Bayesian techniques as a PTF tool with training and validation data collected across spatial extents (scales) in two different regions in the United States. The two study areas include the Las Cruces Trench site in the Rio Grande basin of New Mexico, and the Southern Great Plains 1997 (SGP97) hydrology experimental region in Oklahoma. Each region-specific Bayesian ANN is trained using soil texture and bulk density data from the SSURGO database (scale 1:24,000), and predictions of the soil water contents at different pressure heads with point scale data (1:1) inputs are made. The resulting outputs are corrected for bias using both linear and nonlinear correction techniques. The results show good agreement between the soil water content values measured at the point scale and those predicted by the Bayesian ANN-based PTFs for both the study sites. Overall, Bayesian ANNs coupled with nonlinear bias correction are found to be very suitable tools for deriving soil
Directory of Open Access Journals (Sweden)
Z. Meghnatisi
2009-06-01
Full Text Available Let Xi1, ..., Xini be a random sample from a gamma distribution with known shape parameter νi > 0 and unknown scale parameter βi > 0, i = 1, 2, satisfying 0 < β1 ≤ β2. We consider the class of mixed estimators for the estimation of β1 and β2 under the reflected gamma loss function. It is shown that the minimum risk equivariant estimator of βi, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a class of mixed estimators when the parameters are known to be ordered. The inadmissible estimators within the class of mixed estimators are also derived. Finally, the results are extended to a subclass of the exponential family.
Better estimation of protein-DNA interaction parameters improve prediction of functional sites
Directory of Open Access Journals (Sweden)
O'Flanagan Ruadhan A
2008-12-01
Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, many suboptimal binding sites are needed in the training dataset to obtain accurate estimates of the free energy penalties for deviating from the consensus DNA sequence. One procedure to do this involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low-stringency SELEX data for the E. coli Catabolite Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis we used a SELEX SAGE protocol developed by Roulet et al. The resulting sequences were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than previous analyses based on only the few known binding sites available in the literature. The consequences of this increase in accuracy for the prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods such as the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from the SELEX SAGE data. We also checked predicted genomic sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX SAGE data do better in terms of predicting physical binding energies as well as in detecting functional sites. Conclusion We think that training binding site detection
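The weight-matrix method referred to above can be sketched as a log-odds position weight matrix built from aligned sites and slid along a sequence. The sites and sequence here are toy examples, not the CAP data from the study:

```python
import math

BASES = "ACGT"

def weight_matrix(sites, pseudocount=0.5, background=0.25):
    """Log-odds position weight matrix from aligned binding sites."""
    length = len(sites[0])
    pwm = []
    for pos in range(length):
        col = [s[pos] for s in sites]
        row = {}
        for b in BASES:
            freq = (col.count(b) + pseudocount) / (len(sites) + 4 * pseudocount)
            row[b] = math.log2(freq / background)
        pwm.append(row)
    return pwm

def best_site(seq, pwm):
    """Highest-scoring window and its start position in the sequence."""
    w = len(pwm)
    scored = [(sum(pwm[i][seq[pos + i]] for i in range(w)), pos)
              for pos in range(len(seq) - w + 1)]
    return max(scored)

sites = ["TGTGA", "TGTGA", "TTTGA", "TGAGA", "TGTGC"]  # toy aligned sites
pwm = weight_matrix(sites)
score, pos = best_site("ACGTTGTGACGT", pwm)             # consensus at index 4
```

The PWM score is (up to an additive constant) an estimate of the binding free energy, which is why the abstract compares such scores against EMSA dissociation constants.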
Haskell, Craig A.; Beauchamp, David A.; Bollens, Stephen M.
2017-01-01
Juvenile salmon (Oncorhynchus spp.) use of reservoir food webs is understudied. We examined the feeding behavior of subyearling Chinook salmon (O. tshawytscha) and its relation to growth by estimating the functional response of juvenile salmon to changes in the density of Daphnia, an important component of reservoir food webs. We then estimated salmon growth across a broad range of water temperatures and daily rations of two primary prey, Daphnia and juvenile American shad (Alosa sapidissima), using a bioenergetics model. Laboratory feeding experiments yielded a Type-II functional response curve, C = 29.858P/(4.271 + P), indicating that salmon consumption (C) of Daphnia was not affected until Daphnia densities (P) were < 30·L⁻¹. Past field studies documented Daphnia densities in lower Columbia River reservoirs of < 3·L⁻¹ in July but as high as 40·L⁻¹ in August. Bioenergetics modeling indicated that subyearlings could not achieve positive growth above 22°C regardless of prey type or consumption rate. When feeding on Daphnia, subyearlings could not achieve positive growth above 20°C (water temperatures they commonly encounter in the lower Columbia River during summer). At 16-18°C, subyearlings had to consume about 27,000 Daphnia·day⁻¹ to achieve positive growth. However, when feeding on juvenile American shad, subyearlings had to consume 20 shad·day⁻¹ at 16-18°C, or at least 25 shad·day⁻¹ at 20°C, to achieve positive growth. Using empirical consumption rates and water temperatures from summer 2013, subyearlings exhibited negative growth during July (-0.23 to -0.29 g·d⁻¹) and August (-0.05 to -0.07 g·d⁻¹). By switching prey from Daphnia to juvenile shad, which have a higher energy density, subyearlings can partially compensate for the effects of the higher water temperatures they experience in the lower Columbia River during summer. However, achieving positive growth as piscivores requires subyearlings to feed at
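The fitted Type-II functional response can be evaluated directly; the densities below are chosen to match the July and August field observations cited in the abstract:

```python
def daphnia_consumption(p):
    """Type-II functional response fitted in the study:
    C = 29.858 * P / (4.271 + P), with P the Daphnia density (no. per litre).
    29.858 is the asymptotic maximum consumption; 4.271 the half-saturation density."""
    return 29.858 * p / (4.271 + p)

low  = daphnia_consumption(3.0)     # July-like density
high = daphnia_consumption(40.0)    # August-like density
half = daphnia_consumption(4.271)   # half-saturation: exactly half the maximum
```

At the July-like density consumption is well below half the maximum, while at the August-like density it is close to saturation, which is the saturating behaviour characteristic of a Type-II curve.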
Impact of regression methods on improved effects of soil structure on soil water retention estimates
Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim
2015-06-01
Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large-scale agro-hydrological modeling. Adding significant predictors (e.g., soil structure) and implementing more flexible regression algorithms are among the main strategies for PTF improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil water content at various matric potentials, which has been reported in the literature, could be consistently captured by regression techniques other than the commonly applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), which have recently been introduced as promising tools for PTF development, were used to test whether incorporating soil structure improves PTF accuracy in a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as a grouping criterion can improve the accuracy of PTFs derived by the SVM approach in the matric potential range of -6 to -33 kPa (average RMSE decreased by up to 0.005 m³ m⁻³ after grouping, depending on the matric potential). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with the kNN technique, at least not in our study, in which the data set became limited in size after grouping. Since the regression technique affects the improvement gained from incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.
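A minimal k-nearest-neighbours PTF of the kind tested above can be sketched in pure Python. The soil data are hypothetical, and real kNN-PTFs typically scale the features and weight neighbours by distance; this sketch uses a plain Euclidean distance and an unweighted mean:

```python
def knn_predict(train, query, k=3):
    """kNN pedotransfer sketch: features are soil properties
    (sand %, clay %, bulk density), the target is water content
    at one matric potential; prediction is the mean of the k
    nearest training targets."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), target)
        for feat, target in train
    )
    return sum(t for _, t in dists[:k]) / k

# hypothetical training data: (sand %, clay %, bulk density) -> theta at -33 kPa
train = [
    ((80.0, 5.0, 1.60), 0.10),
    ((60.0, 20.0, 1.40), 0.22),
    ((30.0, 40.0, 1.30), 0.33),
    ((25.0, 45.0, 1.25), 0.36),
    ((55.0, 25.0, 1.45), 0.24),
]
theta = knn_predict(train, (28.0, 42.0, 1.28), k=3)
```

Grouping by soil structure, as in the study, would simply mean maintaining one such training set per structure class and querying the matching one.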
Directory of Open Access Journals (Sweden)
Jungwook Kim
2018-05-01
Full Text Available An objective function is usually used to verify the optimization between observed and simulated flows when estimating the parameters of a rainfall–runoff model. However, it does not focus on peak flow or on a representative parameter for the various rain storm events of a basin; it simply estimates the optimal parameters by minimizing the overall error between observed and simulated flows. Therefore, the aim of this study is to suggest objective functions that can fit the peak flow of the hydrograph and estimate a representative parameter of the basin across events. The Streamflow Synthesis And Reservoir Regulation (SSARR) model was employed to perform flood runoff simulation for the Mihocheon stream basin in the Geum River, Korea. Optimization was conducted using three calibration methods: genetic algorithm, pattern search, and the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA). Two objective functions, the Sum of Squared Residuals (SSR) and the Weighted Sum of Squared Residuals (WSSR) proposed in this study for peak-flow optimization, were applied. Since parameters estimated from a single rain storm event do not represent the parameters for the various rain storms of the basin, we used a representative objective function that minimizes the sum of the objective functions over the events. Six rain storm events were used for the parameter estimation: four for calibration and the other two for validation; the results from SSR and WSSR were then compared. Flood runoff simulation was carried out with the proposed objective functions, and WSSR was found to be more useful than SSR for simulating peak flow runoff. Representative parameters that minimize the objective function for each of the four rain storm events were estimated. The calibrated observed and simulated flow runoff hydrographs obtained from applying the estimated representative
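The two objective functions can be contrasted on a toy hydrograph. The exact weighting used for WSSR is not given in the abstract, so weights proportional to observed flow (emphasizing errors near the peak) are assumed here purely for illustration:

```python
def ssr(obs, sim):
    """Sum of squared residuals between observed and simulated flows."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def wssr(obs, sim):
    """Weighted SSR; weights proportional to observed flow, so residuals
    near the hydrograph peak dominate (assumed weighting scheme)."""
    total = sum(obs)
    return sum((o / total) * (o - s) ** 2 for o, s in zip(obs, sim))

obs   = [5.0, 20.0, 120.0, 60.0, 15.0]   # hypothetical hydrograph (m3/s)
sim_a = [5.0, 20.0, 100.0, 60.0, 15.0]   # misses the peak by 20
sim_b = [15.0, 30.0, 120.0, 70.0, 5.0]   # matches the peak, misses elsewhere
```

Both simulations have identical SSR (400), yet the peak-matching simulation has the far smaller WSSR, which is exactly the behaviour the abstract attributes to the WSSR criterion.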
International Nuclear Information System (INIS)
Lee, Haw-Long; Chang, Win-Jin; Chen, Wen-Lih; Yang, Yu-Ching
2012-01-01
Highlights: ► Time-dependent base heat flux of a functionally graded fin is inversely estimated. ► An inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied. ► The distributions of temperature in the fin are determined as well. ► The influence of measurement error and measurement location upon the precision of the estimated results is also investigated. - Abstract: In this study, an inverse algorithm based on the conjugate gradient method and the discrepancy principle is applied to estimate the unknown time-dependent base heat flux of a functionally graded fin from the knowledge of temperature measurements taken within the fin. Subsequently, the distributions of temperature in the fin can be determined as well. It is assumed that no prior information is available on the functional form of the unknown base heat flux; hence the procedure is classified as the function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The influence of measurement errors and measurement location upon the precision of the estimated results is also investigated. Results show that an excellent estimation on the time-dependent base heat flux and temperature distributions can be obtained for the test case considered in this study.
Flood damage estimation of companies: A comparison of Stage-Damage-Functions and Random Forests
Sieg, Tobias; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2017-04-01
The development of appropriate flood damage models plays an important role not only for the damage assessment after an event but also for developing adaptation and risk mitigation strategies. So-called Stage-Damage-Functions (SDFs) are often applied as a standard approach to estimate flood damage. These functions assign a certain damage to the water depth depending on the use or other characteristics of the exposed objects. Recent studies apply machine learning algorithms like Random Forests (RFs) to model flood damage. These algorithms usually consider more influencing variables and promise a more detailed insight into the damage processes. In addition, they provide an inherent validation scheme. Our study focuses on direct, tangible damage of single companies. The objective is to model and validate the flood damage suffered by single companies with SDFs and RFs. The data sets used are taken from two surveys conducted after the floods in the Elbe and Danube catchments in the years 2002 and 2013 in Germany. Damage to buildings (n = 430), equipment (n = 651) as well as goods and stock (n = 530) are taken into account. The model outputs are validated via a comparison with the actual flood damage acquired by the surveys and subsequently compared with each other. This study investigates the gain in model performance with the use of additional data and the advantages and disadvantages of the RFs compared to SDFs. RFs show an increase in model performance with an increasing amount of data records over a comparatively large range, while the model performance of the SDFs already saturates for a small set of records. In addition, the RFs are able to identify damage-influencing variables, which improves the understanding of damage processes. Hence, RFs can slightly improve flood damage predictions and provide additional insight into the underlying mechanisms compared to SDFs.
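A minimal sketch of the comparison, using synthetic company records rather than the survey data: a depth-only polynomial stands in for the SDF, while the Random Forest also sees building area and a precaution score. The variable names and the damage process are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic company records (illustrative, not the survey data):
# water depth [m], building area [m^2], precaution score in {0, 1, 2}.
n = 400
depth = rng.uniform(0.0, 3.0, n)
area = rng.uniform(100.0, 2000.0, n)
precaution = rng.integers(0, 3, n)
# Assumed damage process: grows with depth and size, reduced by precaution.
damage = 50.0 * depth * np.sqrt(area) * (1.0 - 0.2 * precaution) \
         + rng.normal(0.0, 200.0, n)

# Stage-Damage-Function: damage as a function of water depth only
# (here a quadratic fitted by least squares).
sdf_coef = np.polyfit(depth, damage, 2)
sdf_pred = np.polyval(sdf_coef, depth)

# Random Forest: uses all three predictors and ranks their importance.
X = np.column_stack([depth, area, precaution])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, damage)
rf_pred = rf.predict(X)
```

In-sample, the forest fits far more closely than the depth-only curve; a fair comparison would of course use out-of-bag or hold-out error, as the study does.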
A note on the conditional density estimate in single functional index model
2010-01-01
Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...
International Nuclear Information System (INIS)
Alves, Carolina Moura; Horodecki, Pawel; Oi, Daniel K. L.; Kwek, L. C.; Ekert, Artur K.
2003-01-01
We present a method of direct estimation of important properties of a shared bipartite quantum state, within the ''distant laboratories'' paradigm, using only local operations and classical communication. We apply this procedure to spectrum estimation of shared states, and locally implementable structural physical approximations to incompletely positive maps. This procedure can also be applied to the estimation of channel capacity and measures of entanglement.
Estimates of azimuthal numbers associated with elementary elliptic cylinder wave functions
Kovalev, V. A.; Radaev, Yu. N.
2014-05-01
The paper deals with issues related to the construction of solutions, 2 π-periodic in the angular variable, of the Mathieu differential equation for the circular elliptic cylinder harmonics, the associated characteristic values, and the azimuthal numbers needed to form the elementary elliptic cylinder wave functions. A superposition of the latter is one possible form for representing the analytic solution of the thermoelastic wave propagation problem in long waveguides with elliptic cross-section contour. The classical Sturm-Liouville problem for the Mathieu equation is reduced to a spectral problem for a linear self-adjoint operator in the Hilbert space of infinite square summable two-sided sequences. An approach is proposed that permits one to derive rather simple algorithms for computing the characteristic values of the angular Mathieu equation with real parameters and the corresponding eigenfunctions. Priority is given to the application of the most symmetric forms and equations that have not yet been used in the theory of the Mathieu equation. These algorithms amount to constructing a matrix diagonalizing an infinite symmetric pentadiagonal matrix. The problem of generalizing the notion of azimuthal number of a wave propagating in a cylindrical waveguide to the case of elliptic geometry is considered. Two-sided mutually refining estimates are constructed for the spectral values of the Mathieu differential operator with periodic and half-periodic (antiperiodic) boundary conditions.
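The characteristic values a_m(q) and b_m(q) of the angular Mathieu equation y'' + (a − 2q cos 2v)y = 0, and their ordering, can be inspected with SciPy's Mathieu routines; the parameter q = 5 is an arbitrary illustrative value:

```python
from scipy.special import mathieu_a, mathieu_b

# Characteristic values for the even (a_m) and odd (b_m) 2*pi-periodic
# solutions of the angular Mathieu equation at an illustrative q.
q = 5.0
a = [mathieu_a(m, q) for m in range(4)]     # a_0, a_1, a_2, a_3
b = [mathieu_b(m, q) for m in range(1, 4)]  # b_1, b_2, b_3

# For q > 0 the characteristic values interlace, a_0 < b_1 < a_1 < b_2 < ...,
# and at q = 0 they all reduce to m^2.
```

These library values provide a convenient check for any custom algorithm based on diagonalizing the truncated (penta)diagonal matrix of the recurrence relations.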
Directory of Open Access Journals (Sweden)
Marina Solé
2014-01-01
Full Text Available Functional conformation and performance in Classic and Menorca Dressage are the main selection criteria in the Menorca Horse breeding program. Menorca Dressage is an alternative Classical Dressage discipline which is exclusive of the Menorca Island, but including a series of movements that the animals perform in the traditional festivities called “Jaleo Menorquín”. One of these movements involves the horse raising its forelimbs and standing or walking on its hindlimbs, which is called “el bot”. To make the Menorca horse breed more competitive in the equestrian market, it is necessary to understand the genetic background that characterizes the aptitude for Menorca Dressage and its relationship with conformation traits. The analysed data consisted of 15 conformation traits from 347 Menorca horses (200 males and 147 females), with 1,550 performance records in Menorca Dressage competitions. Genetic parameters were estimated using linear and threshold animal models. The heritabilities for heights and lengths were high (0.45-0.76), those for angulations and binary conformation traits were low to moderate (0.10-0.36), as were the scores for dressage performance (0.13-0.21). The results suggest that the analyzed traits could be used as an efficient tool for selecting breeding horses.
Estimation of bone perfusion as a function of intramedullary pressure in sheep
International Nuclear Information System (INIS)
Rosenthal, M.S.; Lehner, C.E.; Pearson, D.W.; Kanikula, T.M.; Adler, G.G.; Venci, R.; Lanphier, E.H.; De Luca, P.M.
1985-01-01
It has been reported previously that following decompression (i.e. diving ascents) the intramedullary pressure (IMP) in bone can rise dramatically, possibly by a mechanism which can induce dysbaric osteonecrosis, or the ''silent bends''. If the blood supply for the bone traverses the marrow compartment, then an increase in IMP could cause a temporary decrease in perfusion or hemostasis and hence ischemia leading to bone necrosis. To test this hypothesis, the authors measured the perfusion of bone in sheep as a function of IMP. The bone perfusion was estimated by measuring the perfusion-limited clearance of Ar-41 (Eγ = 1293 keV, T1/2 = 1.83 h) from the bone mineral matrix of the sheep's tibia. The argon gas was formed in vivo by the fast neutron activation of Ca-44 to Ar-41 via the Ca-44(n,α) reaction. Clearance of Ar-41 was measured by time-gated gamma-ray spectroscopy. These results indicate that an elevation of intramedullary pressure can decrease perfusion in bone and may cause bone necrosis.
Energy Technology Data Exchange (ETDEWEB)
Sole, M.; Cervantes, I.; Gutierrez, J. P.; Gomez, M. D.; Valera, M.
2014-06-01
Functional conformation and performance in Classic and Menorca Dressage are the main selection criteria in the Menorca Horse breeding program. Menorca Dressage is an alternative Classical Dressage discipline which is exclusive of the Menorca Island, but including a series of movements that the animals perform in the traditional festivities called Jaleo Menorquin. One of these movements involves the horse raising its forelimbs and standing or walking on its hindlimbs, which is called el bot. To make the Menorca horse breed more competitive in the equestrian market, it is necessary to understand the genetic background that characterizes the aptitude for Menorca Dressage and its relationship with conformation traits. The analysed data consisted of 15 conformation traits from 347 Menorca horses (200 males and 147 females), with 1,550 performance records in Menorca Dressage competitions. Genetic parameters were estimated using linear and threshold animal models. The heritabilities for heights and lengths were high (0.45-0.76), those for angulations and binary conformation traits were low to moderate (0.10-0.36) as were the scores for dressage performance (0.13-0.21). The results suggest that the analyzed traits could be used as an efficient tool for selecting breeding horses. (Author)
Comparison of volatility function technique for risk-neutral densities estimation
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
Volatility function technique by using interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches namely smoothing spline and fourth order polynomial in extracting the RND. The implied volatility of options with respect to strike prices/delta are interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of distribution. The difference between the first moment of distribution and the price of underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from the Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that the estimation of RND using a fourth order polynomial is more appropriate to be used compared to a smoothing spline in which the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
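A compact sketch of the fourth-order polynomial approach: interpolate the implied-volatility smile with a quartic in strike, price calls with Black-Scholes at the interpolated volatilities, and take the second derivative with respect to strike (Breeden-Litzenberger) to recover the RND. The spot, rate, and smile values below are invented for illustration, not DJIA data:

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, T, r = 100.0, 1.0 / 12.0, 0.01   # spot, 1-month maturity, rate (assumed)
# Observed implied-vol smile at a few strikes (illustrative numbers).
K_obs = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
iv_obs = np.array([0.25, 0.22, 0.20, 0.21, 0.24])

# Fourth-order polynomial interpolation of the smile in strike space.
coef = np.polyfit(K_obs, iv_obs, 4)

# Breeden-Litzenberger: RND = exp(rT) * d^2C/dK^2, here by finite differences.
K = np.linspace(85.0, 115.0, 301)
C = np.array([bs_call(S, k, T, r, np.polyval(coef, k)) for k in K])
dK = K[1] - K[0]
rnd = np.exp(r * T) * np.diff(C, 2) / dK**2   # density at strikes K[1:-1]
```

The first moment of `rnd` is the quantity the study compares against the realized index level at maturity when scoring forecast accuracy.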
Estimating dose rates to organs as a function of age following internal exposure to radionuclides
International Nuclear Information System (INIS)
Leggett, R.W.; Eckerman, K.F.; Dunning, D.E. Jr.; Cristy, M.; Crawford-Brown, D.J.; Williams, L.R.
1984-03-01
The AGEDOS methodology allows estimates of dose rates, as a function of age, to radiosensitive organs and tissues in the human body at arbitrary times during or after internal exposure to radioactive material. Presently there are few, if any, radionuclides for which sufficient metabolic information is available to allow full use of all features of the methodology. The intention has been to construct the methodology so that optimal information can be gained from a mixture of the limited amount of age-dependent, nuclide-specific data and the generally plentiful age-dependent physiological data now available. Moreover, an effort has been made to design the methodology so that constantly accumulating metabolic information can be incorporated with minimal alterations in the AGEDOS computer code. Some preliminary analyses performed by the authors, using the AGEDOS code in conjunction with age-dependent risk factors developed from the A-bomb survivor data and other studies, have indicated that the doses and subsequent risks of eventually experiencing radiogenic cancers may vary substantially with age for some exposure scenarios and may be relatively invariant with age for others. We believe that the AGEDOS methodology provides a convenient and efficient means for performing the internal dosimetry
State-space model with deep learning for functional dynamics estimation in resting-state fMRI.
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2016-04-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
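The classification step, scoring a feature sequence under group-level HMMs and picking the more likely label, can be sketched with a tiny discrete-observation forward algorithm. The paper embeds continuous DAE features; the two-state, two-symbol models here are hypothetical stand-ins:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probs (S,); A: transitions (S, S); B: emissions (S, V)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return float(loglik)

# Two hypothetical group-level HMMs over quantized functional-network
# features: a "control" model with persistent states and an "MCI" model
# with rapid switching (all numbers invented for illustration).
pi = np.array([0.6, 0.4])
A_hc = np.array([[0.9, 0.1], [0.1, 0.9]])
A_mci = np.array([[0.5, 0.5], [0.5, 0.5]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])

seq = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # slowly varying dynamics
ll_hc = forward_loglik(seq, pi, A_hc, B)
ll_mci = forward_loglik(seq, pi, A_mci, B)
label = "control" if ll_hc > ll_mci else "MCI"
```

A sequence that dwells in one functional configuration before switching is better explained by the sticky "control" chain, so the generative-likelihood comparison assigns it that label.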
Enhancing PTFs with remotely sensed data for multi-scale soil water retention estimation
Jana, Raghavendra B.; Mohanty, Binayak P.
2011-03-01
Use of remotely sensed data products in the earth science and water resources fields is growing due to increasingly easy availability of the data. Traditionally, pedotransfer functions (PTFs) employed for soil hydraulic parameter estimation from other easily available data have used basic soil texture and structure information as inputs. Inclusion of surrogate/supplementary data such as topography and vegetation information has shown some improvement in the PTF's ability to estimate more accurate soil hydraulic parameters. Artificial neural networks (ANNs) are a popular tool for PTF development, and are usually applied across matching spatial scales of inputs and outputs. However, different hydrologic, hydro-climatic, and contaminant transport models require input data at different scales, all of which may not be easily available from existing databases. In such a scenario, it becomes necessary to scale the soil hydraulic parameter values estimated by PTFs to suit the model requirements. Also, uncertainties in the predictions need to be quantified to enable users to gauge the suitability of a particular dataset in their applications. Bayesian Neural Networks (BNNs) inherently provide uncertainty estimates for their outputs due to their utilization of Markov Chain Monte Carlo (MCMC) techniques. In this paper, we present a PTF methodology to estimate soil water retention characteristics built on a Bayesian framework for training of neural networks and utilizing several in situ and remotely sensed datasets jointly. The BNN is also applied across spatial scales to provide fine scale outputs when trained with coarse scale data. Our training data inputs include ground/remotely sensed soil texture, bulk density, elevation, and Leaf Area Index (LAI) at 1 km resolutions, while similar properties measured at a point scale are used as fine scale inputs. The methodology was tested at two different hydro-climatic regions. We also tested the effect of varying the support
McDonald, A. David; Sandal, Leif Kristoffer
1998-01-01
Estimation of parameters in the drift and diffusion terms of stochastic differential equations involves simulation and generally requires substantial data sets. We examine a method that can be applied when available time series are limited to less than 20 observations per replication. We compare and contrast parameter estimation for linear and nonlinear first-order stochastic differential equations using two criterion functions: one based on a Chi-square statistic, put forward by Hurn and Lin...
Frolov, Maxim; Chistiakova, Olga
2017-06-01
This paper is devoted to a numerical justification of the recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides a reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.
International Nuclear Information System (INIS)
Bachoc, F.
2013-01-01
The parametric estimation of the covariance function of a Gaussian process is studied in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process belongs to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modelling of the FLICA 4 model error considerably improves its predictions. For a metamodelling problem of the GERMINAL thermal-mechanical code, the advantage of the Kriging model with Gaussian processes over neural network methods is shown. (author) [fr
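A minimal 1-D sketch of the two estimators for the length-scale of an exponential covariance: Maximum Likelihood minimizes the negative log-likelihood, while Cross Validation minimizes the leave-one-out squared error, which for a zero-mean Gaussian process has the closed form e_i = (K⁻¹y)_i / (K⁻¹)_{ii}. The sampling mimics the randomly perturbed regular grid; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_cov(x, ell):
    """Exponential covariance matrix for 1-D inputs with length-scale ell."""
    return np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

def neg_loglik(y, K):
    """Negative log-likelihood of y under N(0, K), up to a constant."""
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y))

def loo_score(y, K):
    """Leave-one-out mean squared prediction error in closed form:
    the LOO residual is (K^-1 y)_i / (K^-1)_ii."""
    Kinv = np.linalg.inv(K)
    resid = (Kinv @ y) / np.diag(Kinv)
    return np.mean(resid ** 2)

# Gaussian process with true length-scale 1.0 on a randomly perturbed grid.
n = 40
x = np.arange(float(n)) + rng.uniform(-0.3, 0.3, n)
y = np.linalg.cholesky(exp_cov(x, 1.0) + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# Both criteria evaluated on a grid of candidate length-scales.
grid = np.linspace(0.2, 3.0, 57)
ml_est = grid[np.argmin([neg_loglik(y, exp_cov(x, l)) for l in grid])]
cv_est = grid[np.argmin([loo_score(y, exp_cov(x, l)) for l in grid])]
```

The closed-form LOO residual avoids refitting n times, which is what makes Cross Validation practical for covariance-parameter selection.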
Directory of Open Access Journals (Sweden)
L. B. Oliveira
2002-06-01
Full Text Available Pedotransfer functions are equations used to estimate soil characteristics that are difficult to determine from others that are more easily obtained. Despite the good number of equations available for estimating the water retained at specific matric potentials, they should not be used indiscriminately, since most were developed for temperate-climate soils and from data generated by methods different from those used in Brazilian laboratories. The objectives of this work were: (a) to develop pedotransfer functions to estimate the water content at potentials of -33 and -1,500 kPa, and the available water, from particle-size and bulk density data, for soils of the state of Pernambuco; and (b) to compare the prediction efficiency of the proposed equations with that of similar equations available in the literature. A database composed of 98 soil profiles and 467 horizons was used to develop the equations. The profiles were grouped, according to the Brazilian Soil Classification System, into 27 classes at the third categorical level (great group). The equations developed showed good coefficients of determination and low prediction errors. Grouping the data by clay-fraction activity, approximate degree of development, or textural class did not improve the predictive capacity of the pedofunctions. The equations proposed in other studies showed high prediction errors, which restricts their use for soils of the state of Pernambuco.
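The shape of such a pedotransfer function, a least-squares regression of water content on particle-size fractions and bulk density, can be sketched as below. The synthetic data and the assumed retention model are illustrative only, not the Pernambuco database or the paper's coefficients:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic soil horizons (illustrative only, not the Pernambuco database).
n = 200
clay = rng.uniform(5.0, 60.0, n)          # clay content, %
sand = rng.uniform(5.0, 95.0 - clay)      # sand content, %, keeps silt >= 0
bd = rng.uniform(1.1, 1.7, n)             # bulk density, g/cm^3
silt = 100.0 - clay - sand
# Assumed retention process at -33 kPa (volumetric %, invented coefficients):
theta33 = 5.0 + 0.45 * clay + 0.05 * silt - 3.0 * bd + rng.normal(0.0, 1.0, n)

# Fit the pedotransfer function by ordinary least squares.
X = np.column_stack([np.ones(n), clay, sand, bd])
coef, *_ = np.linalg.lstsq(X, theta33, rcond=None)

def ptf_theta33(clay_pct, sand_pct, bulk_density):
    """Predict volumetric water content at -33 kPa from the fitted PTF."""
    return float(coef @ np.array([1.0, clay_pct, sand_pct, bulk_density]))
```

The same fit repeated for -1,500 kPa would give the wilting-point function, and the difference of the two predictions gives the available water.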
Z. Meghnatisi; N. Nematollahi
2009-01-01
Let X_{i1}, ..., X_{i n_i} be a random sample from a gamma distribution with known shape parameter νi > 0 and unknown scale parameter βi > 0, i = 1, 2, satisfying 0 < β1 ≤ β2. We consider the class of mixed estimators for estimation of β1 and β2 under the reflected gamma loss function. It has been shown that the minimum risk equivariant estimator of βi, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a cla...
International Nuclear Information System (INIS)
Hubert, X.
2009-12-01
This work deals with the estimation of the concentration of molecules in arterial blood which are labelled with positron-emitting radioelements. This concentration is called the 'β+ arterial input function'. It has to be estimated for a large number of pharmacokinetic analyses. Nowadays it is measured through a series of arterial samples, which is an accurate method but one requiring a stringent protocol. Complications might occur during arterial blood sampling because this method is invasive (hematomas, nosocomial infections). The objective of this work is to overcome this risk through a non-invasive estimation of the β+ input function with an external detector and a collimator. This allows the reconstruction of blood vessels and thus the discrimination of the arterial signal from signals in other tissues. Collimators in medical imaging are not adapted to estimating the β+ input function because their sensitivity is very low. In this work, they are replaced by coded-aperture collimators, originally developed for astronomy. New methods where coded apertures are used with statistical reconstruction algorithms are presented. Techniques for analytical ray-tracing and for the acceleration of reconstructions are proposed. A new method which decomposes reconstructions on temporal sets and on spatial sets is also developed to efficiently estimate the arterial input function from series of temporal acquisitions. This work demonstrates that the trade-off between sensitivity and spatial resolution in PET can be improved thanks to coded-aperture collimators and statistical reconstruction algorithms; it also provides new tools to implement such improvements. (author)
Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai
2018-05-01
The technique of teleseismic receiver function H-κ stacking is popular for estimating the crustal thickness and Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in the receiver function are difficult to identify. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio under the joint constraints of receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. According to the relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies, using a common maximum-likelihood estimation algorithm, to obtain the crustal thickness and Vp/Vs ratio, and then use them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters; the results demonstrated that the new technique can reduce the ambiguity and enhance the accuracy of the estimation. A real-data test at two stations in the NE margin of the Tibetan Plateau showed that the improved technique provides reliable estimates of crustal thickness and Vp/Vs ratio.
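A bare-bones H-κ stacking sketch (without the gravity constraint): receiver-function amplitudes are stacked at the arrival times of the Moho Ps conversion and its multiples predicted for each (H, κ) pair, and the grid point with the largest stack is taken as the estimate. The crustal velocity, weights, and synthetic pulse amplitudes are assumed values:

```python
import numpy as np

def ps_times(H, vp, kappa, p):
    """Predicted Ps, PpPs and PpSs+PsPs arrival times for Moho depth H (km),
    crustal Vp (km/s), kappa = Vp/Vs, and ray parameter p (s/km)."""
    qs = np.sqrt((kappa / vp) ** 2 - p ** 2)   # S vertical slowness
    qp = np.sqrt((1.0 / vp) ** 2 - p ** 2)     # P vertical slowness
    return H * (qs - qp), H * (qs + qp), 2.0 * H * qs

# Synthetic radial receiver function: Gaussian pulses at the true arrivals.
vp, p = 6.3, 0.06                 # assumed crustal Vp and ray parameter
H_true, k_true = 40.0, 1.75
t = np.arange(0.0, 30.0, 0.01)
amps = (1.0, 0.5, -0.4)           # the 2p1s multiple has negative polarity
rf = sum(a * np.exp(-((t - ta) / 0.3) ** 2)
         for a, ta in zip(amps, ps_times(H_true, vp, k_true, p)))

# H-kappa grid search: stack RF amplitudes at the predicted times.
w = (0.6, 0.3, -0.1)              # commonly used stacking weights (assumed)
best, H_est, k_est = -np.inf, None, None
for H in np.arange(30.0, 50.0, 0.1):
    for k in np.arange(1.6, 1.9, 0.005):
        s = sum(wi * np.interp(ti, t, rf)
                for wi, ti in zip(w, ps_times(H, vp, k, p)))
        if s > best:
            best, H_est, k_est = s, H, k
```

When the multiple pulses are weak or noisy, the maximum smears along the H-κ trade-off curve; this is the ambiguity that the gravity inversion is meant to break.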
Maadooliat, Mehdi
2015-10-21
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.
2015-01-01
This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
Directory of Open Access Journals (Sweden)
Z. Khodadadi
2008-03-01
Full Text Available Let S be the matrix of residual sums of squares in the linear model Y = Aβ + e, where the matrix e has an elliptically contoured distribution with unknown scale matrix Σ. In the present work, we consider the problem of estimating Σ with respect to the squared loss function L(Σ̂, Σ) = tr[(Σ̂Σ⁻¹ − I)²]. It is shown that the improvements of the estimators obtained by James and Stein [7] and by Dey and Srinivasan [1] under the normality assumption remain robust under an elliptically contoured distribution with respect to the squared loss function.
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a limited number of experimental data and expert judgments, we divide distribution-hypothesis-based reliability estimation into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the opening function of a cabin door affected by the imprecise judgment corresponding to the distribution hypothesis.
International Nuclear Information System (INIS)
Heys, D.W.; Stump, D.R.
1984-01-01
The variational principle is used to estimate the ground state of the Kogut-Susskind Hamiltonian of the SU(2) lattice gauge theory, with a trial wave function for which the magnetic fields on different plaquettes are uncorrelated. This trial function describes a disordered state. The energy expectation value is evaluated by a Monte Carlo method. The variational results are compared to similar results for a related Abelian gauge theory. Also, the expectation value of the Wilson loop operator is computed for the trial state, and the resulting estimate of the string tension is compared to the prediction of asymptotic freedom.
Estimation of the Lagrangian structure function constant C0 from surface-layer wind data
DEFF Research Database (Denmark)
Anfossi, D.; Degrazia, G.; Ferrero, E.
2000-01-01
Eulerian turbulence observations, made in the surface layer under unstable conditions (z/L < 0) by a sonic anemometer, were used to estimate the Lagrangian structure function constant C0. Two methods were considered. The first one makes use of a relationship, widely used in the Lagrangian...... stochastic dispersion models, relating C0 to the turbulent kinetic energy dissipation rate epsilon, the wind velocity variance and the Lagrangian decorrelation time. The second one employs a novel equation, connecting C0 to the constant of the second-order Eulerian structure function. Before estimating C0
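The first method reduces to a one-line relation once the Eulerian quantities are in hand: in Lagrangian stochastic models the decorrelation time is commonly written T_L = 2σ_w²/(C0·ε), which can be solved for C0. The numerical values below are assumed, not the paper's measurements:

```python
def c0_estimate(sigma_w2, eps, t_l):
    """Solve T_L = 2*sigma_w^2 / (C0*eps) for the structure function constant C0."""
    return 2.0 * sigma_w2 / (eps * t_l)

# Illustrative surface-layer values (assumed, not the paper's data):
sigma_w2 = 0.36  # vertical velocity variance, m^2 s^-2
eps = 0.01       # TKE dissipation rate, m^2 s^-3
t_l = 20.0       # Lagrangian decorrelation time, s
c0 = c0_estimate(sigma_w2, eps, t_l)
```

With these inputs C0 comes out near the middle of the range commonly reported in the literature, a useful sanity check on the Eulerian measurements.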
Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D; Macdougall, Iain C; Ponikowski, Piotr; Lainscak, Mitja
2015-12-01
To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/[min ×1.73 m2], P=0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P=0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number: NCT01829880.
Cvan Trobec, Katja; Kerec Kos, Mojca; von Haehling, Stephan; Anker, Stefan D.; Macdougall, Iain C.; Ponikowski, Piotr; Lainscak, Mitja
2015-01-01
Aim To compare the performance of iohexol plasma clearance and creatinine-based renal function estimating equations in monitoring longitudinal renal function changes in chronic heart failure (CHF) patients, and to assess the effects of body composition on the equation performance. Methods Iohexol plasma clearance was measured in 43 CHF patients at baseline and after at least 6 months. Simultaneously, renal function was estimated with five creatinine-based equations (four- and six-variable Modification of Diet in Renal Disease, Cockcroft-Gault, Cockcroft-Gault adjusted for lean body mass, Chronic Kidney Disease Epidemiology Collaboration equation) and body composition was assessed using bioimpedance and dual-energy x-ray absorptiometry. Results Over a median follow-up of 7.5 months (range 6-17 months), iohexol clearance significantly declined (52.8 vs 44.4 mL/[min ×1.73 m2], P = 0.001). This decline was significantly higher in patients receiving mineralocorticoid receptor antagonists at baseline (mean decline -22% of baseline value vs -3%, P = 0.037). Mean serum creatinine concentration did not change significantly during follow-up and no creatinine-based renal function estimating equation was able to detect the significant longitudinal decline of renal function determined by iohexol clearance. After accounting for body composition, the accuracy of the equations improved, but not their ability to detect renal function decline. Conclusions Renal function measured with iohexol plasma clearance showed relevant decline in CHF patients, particularly in those treated with mineralocorticoid receptor antagonists. None of the equations for renal function estimation was able to detect these changes. ClinicalTrials.gov registration number NCT01829880 PMID:26718759
Estimation of delays and other parameters in nonlinear functional differential equations
Banks, H. T.; Lamm, P. K. D.
1983-01-01
A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
Directory of Open Access Journals (Sweden)
Weikai Li
2017-08-01
Full Text Available Functional brain network (FBN) has become an increasingly important way to model the statistical dependence among neural time courses of the brain, and provides effective imaging biomarkers for the diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely-used method for constructing FBNs. Despite its advantages in statistical meaning and computational performance, PC tends to result in an FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections. However, such a scheme depends on a hard threshold without enough flexibility. Different from this traditional strategy, in this paper we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model to obtain a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorders (ASD) from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy, which outperforms the baseline and state-of-the-art methods.
Li, Weikai; Wang, Zhengxia; Zhang, Limei; Qiao, Lishan; Shen, Dinggang
2017-01-01
Functional brain network (FBN) has become an increasingly important way to model the statistical dependence among neural time courses of the brain, and provides effective imaging biomarkers for the diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely-used method for constructing FBNs. Despite its advantages in statistical meaning and computational performance, PC tends to result in an FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections. However, such a scheme depends on a hard threshold without enough flexibility. Different from this traditional strategy, in this paper we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model to obtain a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorders (ASD) from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy, which outperforms the baseline and state-of-the-art methods.
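The optimization this abstract describes has a simple closed form: with a Frobenius-norm data term, the L1-regularized problem min_W ||W - C||_F^2 + lam*||W||_1 is solved by elementwise soft-thresholding of the correlation matrix C. A minimal sketch (the function name and the value of `lam` are our illustrative assumptions, not values from the paper):

```python
import numpy as np

def sparse_fbn(time_series, lam=0.3):
    """Sparse FBN via soft-thresholding of Pearson correlations.

    Solves min_W ||W - C||_F^2 + lam*||W||_1 in closed form:
    W = sign(C) * max(|C| - lam/2, 0), so weak correlations shrink
    to exactly zero instead of being cut at a hard threshold.
    """
    C = np.corrcoef(time_series)      # Pearson correlations of regions
    np.fill_diagonal(C, 0.0)          # ignore self-connections
    t = lam / 2.0
    return np.sign(C) * np.maximum(np.abs(C) - t, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 200))    # 10 regions, 200 time points
W = sparse_fbn(X, lam=0.3)
print(W.shape, int((W != 0).sum()))   # symmetric, mostly-zero network
```

Unlike hard thresholding, the surviving edges are shrunk toward zero, which is the usual bias/variance trade-off of L1 regularization.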
The organization of the human cerebellum estimated by intrinsic functional connectivity
Krienen, Fenna M.; Castellanos, Angela; Diaz, Julio C.; Yeo, B. T. Thomas
2011-01-01
The cerebral cortex communicates with the cerebellum via polysynaptic circuits. Separate regions of the cerebellum are connected to distinct cerebral areas, forming a complex topography. In this study we explored the organization of cerebrocerebellar circuits in the human using resting-state functional connectivity MRI (fcMRI). Data from 1,000 subjects were registered using nonlinear deformation of the cerebellum in combination with surface-based alignment of the cerebral cortex. The foot, hand, and tongue representations were localized in subjects performing movements. fcMRI maps derived from seed regions placed in different parts of the motor body representation yielded the expected inverted map of somatomotor topography in the anterior lobe and the upright map in the posterior lobe. Next, we mapped the complete topography of the cerebellum by estimating the principal cerebral target for each point in the cerebellum in a discovery sample of 500 subjects and replicated the topography in 500 independent subjects. The majority of the human cerebellum maps to association areas. Quantitative analysis of 17 distinct cerebral networks revealed that the extent of the cerebellum dedicated to each network is proportional to the network's extent in the cerebrum with a few exceptions, including primary visual cortex, which is not represented in the cerebellum. Like somatomotor representations, cerebellar regions linked to association cortex have separate anterior and posterior representations that are oriented as mirror images of one another. The orderly topography of the representations suggests that the cerebellum possesses at least two large, homotopic maps of the full cerebrum and possibly a smaller third map. PMID:21795627
Directory of Open Access Journals (Sweden)
Hong Yao
2016-01-01
Full Text Available The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.
Yao, Hong; You, Zhen; Liu, Bo
2016-01-01
The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869
Yao, Hong; You, Zhen; Liu, Bo
2016-01-22
The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies' functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident's origin and other indirect losses. In the valuation of damage to people's life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water's recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.
DEFF Research Database (Denmark)
Jensen, H B; Mamoei, Sepehr; Ravnborg, M.
2016-01-01
OBJECTIVE: To provide distribution-based estimates of the minimal clinical important difference (MCID) after slow release fampridine treatment on cognition and functional capacity in people with MS (PwMS). METHOD: MCID values were determined after SR-Fampridine treatment in 105 PwMS. Testing...
DEFF Research Database (Denmark)
Jørgensen, Bent; Demétrio, Clarice G. B.; Kristensen, Erik
2011-01-01
Estimation of Taylor’s power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating...
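The uncorrected log-log regression that this abstract takes as its starting point is easy to sketch; the bias-corrected Pearson estimating equations the paper develops are not reproduced here, and the data below are simulated Poisson counts (an illustrative assumption, for which the true slope is 1):

```python
import numpy as np

def taylor_power_law(counts):
    """Naive Taylor's power law fit: log s^2 = log a + b * log m.

    `counts` is a (sites x samples) array of species abundances.
    Returns (a, b) from an ordinary log-log regression of the
    empirical variances on the empirical means.
    """
    m = counts.mean(axis=1)
    v = counts.var(axis=1, ddof=1)
    b, log_a = np.polyfit(np.log(m), np.log(v), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(1)
lam = rng.uniform(1, 50, size=40)                 # 40 sites
counts = rng.poisson(lam[:, None], size=(40, 200))
a, b = taylor_power_law(counts)
print(round(b, 2))   # slope near 1 for Poisson (variance = mean)
```

For sparse data (many zero or near-zero counts), this regression is exactly where the bias the abstract describes appears, motivating the corrected estimating equations.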
Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin
2017-12-01
Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique called state-dependent coefficient (SDC) estimation to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured with a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. Our preliminary experimental results show the new estimator's clear advantage over the EKF method but only a slight advantage over the rotation matrix method. However, the information from the dynamic model allows the SDC method to use only one IMU to measure the knee angle, compared with the rotation matrix method, which uses two IMUs to estimate the angle.
Estimating the small-x exponent of the structure function g1NS from the Bjorken sum rule
International Nuclear Information System (INIS)
Knauf, Anke; Meyer-Hermann, Michael; Soff, Gerhard
2002-01-01
We present a new estimate of the exponent governing the small-x behavior of the nonsinglet structure function g1^(p-n) derived under the assumption that the Bjorken sum rule is valid. We use the world average of α_s and the NNNLO QCD corrections to the Bjorken sum rule. The structure function g1^NS is found to be clearly divergent for small x.
Gitman, M.B.; Klyuev, A.V.; Stolbov, V.Y.; Gitman, I.M.
2017-01-01
The technique uses the grain-phase structure of a functional material to evaluate its performance, particularly its strength properties. It is based on the use of a linguistic variable in the process of comprehensive evaluation. An example is given of estimating the strength properties of steel reinforcement subjected to a special heat treatment to obtain the desired grain-phase structure.
International Nuclear Information System (INIS)
Dumonteil, E.; Diop, C. M.
2009-01-01
This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve the coupled Boltzmann/Bateman equations with Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)
Bilir, Mustafa Kuzey
2009-01-01
This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…
Narison, Stéphan
1994-01-01
We estimate the sum of the Υ B̄B couplings using QCD Spectral Sum Rules (QSSR). Our result implies the phenomenological bound ξ'(vv' = 1) ≥ -1.04 for the slope of the Isgur-Wise function. An analytic estimate of the (physical) slope to two loops within QSSR leads to the accurate value ξ'(vv' = 1) ≃ -(1.00 ± 0.02), due to the (almost) complete cancellation between the perturbative and non-perturbative corrections at the stability points. Then, we deduce from the present data the improved estimate |V_cb| ≃ (1.48 ps/τ_B)^{1/2} × (37.3 ± 1.2 ± 1.4) × 10^{-3}, where the first error comes from the data analysis and the second one from the different model parametrizations of the Isgur-Wise function.
Dalla Valle, Nicolas; Wutzler, Thomas; Meyer, Stefanie; Potthast, Karin; Michalzik, Beate
2017-04-01
Dual-permeability type models are widely used to simulate water fluxes and solute transport in structured soils. These models contain two spatially overlapping flow domains with different parameterizations or even entirely different conceptual descriptions of flow processes. They are usually able to capture preferential flow phenomena, but a large set of parameters is needed, which are very laborious to obtain or cannot be measured at all. Therefore, model inversions are often used to derive the necessary parameters. Although these require sufficient input data themselves, they can use measurements of state variables instead, which are often easier to obtain and can be monitored by automated measurement systems. In this work we show a method to estimate soil hydraulic parameters from high frequency soil moisture time series data gathered at two different measurement depths by inversion of a simple one dimensional dual-permeability model. The model uses an advection equation based on the kinematic wave theory to describe the flow in the fracture domain and a Richards equation for the flow in the matrix domain. The soil moisture time series data were measured in mesocosms during sprinkling experiments. The inversion consists of three consecutive steps: First, the parameters of the water retention function were assessed using vertical soil moisture profiles in hydraulic equilibrium. This was done using two different exponential retention functions and the Campbell function. Second, the soil sorptivity and diffusivity functions were estimated from Boltzmann-transformed soil moisture data, which allowed the calculation of the hydraulic conductivity function. Third, the parameters governing flow in the fracture domain were determined using the whole soil moisture time series. The resulting retention functions were within the range of values predicted by pedotransfer functions apart from very dry conditions, where all retention functions predicted lower matrix potentials
Directory of Open Access Journals (Sweden)
Rupnow Marcia FT
2005-09-01
Full Text Available Abstract Background Most tools for estimating utilities use clinical trial data from general health status models, such as the 36-Item Short-Form Health Survey (SF-36). A disease-specific model may be more appropriate. The objective of this study was to apply a disease-specific utility mapping function for schizophrenia to data from a large, 1-year, open-label study of long-acting risperidone and to compare its performance with an SF-36-based utility mapping function. Methods Patients with schizophrenia or schizoaffective disorder by DSM-IV criteria received 25, 50, or 75 mg long-acting risperidone every 2 weeks for 12 months. The Positive and Negative Syndrome Scale (PANSS) and SF-36 were used to assess efficacy and health-related quality of life. Movement disorder severity was measured using the Extrapyramidal Symptom Rating Scale (ESRS); data concerning other common adverse effects (orthostatic hypotension, weight gain) were collected. Transforms were applied to estimate utilities. Results A total of 474 patients completed the study. Long-acting risperidone treatment was associated with a utility gain of 0.051 using the disease-specific function. The estimated gain using an SF-36-based mapping function was smaller: 0.0285. Estimates of gains were only weakly correlated (r = 0.2). Because of differences in scaling and variance, the requisite sample size for a randomized trial to confirm the observed effects is much smaller for the disease-specific mapping function (156 versus 672 total subjects). Conclusion Application of a disease-specific mapping function was feasible. Differences in scaling and precision suggest the clinically based mapping function has greater power than the SF-36-based measure to detect differences in utility.
Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function
Clark, Joseph D.; Jared S. Laufenberg,; Maria Davidson,; Jennifer L. Murrow,
2015-01-01
Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. Successful dispersals for males were dependent on natural land
On the expected value and variance for an estimator of the spatio-temporal product density function
DEFF Research Database (Denmark)
Rodríguez-Corté, Francisco J.; Ghorbani, Mohammad; Mateu, Jorge
Second-order characteristics are used to analyse the spatio-temporal structure of the underlying point process, and thus these methods provide a natural starting point for the analysis of spatio-temporal point process data. We restrict our attention to the spatio-temporal product density function......, and develop a non-parametric edge-corrected kernel estimate of the product density under the second-order intensity-reweighted stationary hypothesis. The expectation and variance of the estimator are obtained, and closed form expressions derived under the Poisson case. A detailed simulation study is presented...... to compare our closed form expression for the variance with estimated ones for Poisson cases. The simulation experiments show that the theoretical form for the variance gives acceptable values, which can be used in practice. Finally, we apply the resulting estimator to data on the spatio-temporal distribution...
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali Ali; Zhang, Xiangliang; Wang, Suojin
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track, as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, in estimation accuracy of complex density structures in data streams, computing time, and memory usage. KDE-Track is also demonstrated to promptly capture the dynamic density of synthetic and real-world data. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
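The density-based outlier test underlying this approach can be sketched with a plain Gaussian KDE (not the interpolated kernel model that gives KDE-Track its linear complexity); the bandwidth rule and the density cutoff below are illustrative assumptions:

```python
import numpy as np

def gaussian_kde_pdf(x, data, h):
    """Plain O(n) per-query Gaussian kernel density estimate."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
stream = rng.normal(0.0, 1.0, size=2000)         # stand-in for a data stream
h = 1.06 * stream.std() * len(stream) ** -0.2    # Silverman's rule of thumb
query = np.array([0.0, 8.0])                     # a typical point and an outlier
pdf = gaussian_kde_pdf(query, stream, h)
is_outlier = pdf < 1e-3                          # flag points in low-density regions
print(is_outlier)
```

Points falling where the estimated density is very low are flagged as outliers; the contribution of KDE-Track is keeping such a density estimate cheap to update as the stream evolves.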
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim Ali Ali
2016-01-01
application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The third application
Endogenous markers for estimation of renal function in peritoneal dialysis patients
DEFF Research Database (Denmark)
Kjaergaard, Krista Dybtved; Jensen, Jens Dam; Rehling, Michael
2012-01-01
OBJECTIVE: This method comparison study, conducted at the peritoneal dialysis (PD) outpatient clinic of the Department of Renal Medicine, Aarhus University Hospital, Denmark, set out to evaluate the accuracy and reproducibility of methods for estimating glomerular filtration rate (GFR) based...
Cost function approach for estimating derived demand for composite wood products
T. C. Marcin
1991-01-01
A cost function approach was examined for using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and derived conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed and reveal that the commonly used Hanning window leads to smaller interpolation error, which can also be significantly eliminated by the cubic spline interpolation method when estimating the FRF from the step response data, and that a window with a smaller front-end value restrains more of the transient error. Thus, a new dual-cosine window, with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, a high-order, small-damping, non-minimum-phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans method and the LPM method, has the advantages of simple computation, low time consumption, and short data requirement; the actual data calculation result of the balance FRF is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
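The core of estimating an FRF from step-response data, differentiating the step response into an impulse response and taking its FFT, can be sketched on a simulated first-order lag. This is the unwindowed baseline only; the sampling rate, corner frequency, and record length are our illustrative choices, and the paper's dual-cosine window coefficients are not reproduced here:

```python
import numpy as np

# Simulate the step response of a known discrete first-order system,
# then recover its FRF via the impulse response.
fs = 1000.0                            # sampling rate, Hz (assumed)
n = np.arange(2048)
a = np.exp(-2 * np.pi * 50 / fs)       # pole for a ~50 Hz first-order lag
u = np.ones(len(n))                    # unit step input
y = np.zeros(len(n))
for k in range(1, len(n)):
    y[k] = a * y[k - 1] + (1 - a) * u[k]   # step response

h = np.diff(y)                         # impulse response (finite difference)
H = np.fft.rfft(h)                     # FRF estimate on [0, fs/2]
print(abs(H[0]))                       # DC gain of a unity-gain lag
```

Because the record is long enough for the step response to settle, the DC gain sum(h) = y[-1] - y[0] comes out at essentially 1; with short records or noise, the truncated tail is exactly where the window choice analyzed in the paper starts to matter.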
Directory of Open Access Journals (Sweden)
Elizabeth Hansen
2012-07-01
Full Text Available The annual response variable in an ecological monitoring study often relates linearly to the weighted cumulative effect of some daily covariate, after adjusting for other annual covariates. Here we consider the problem of non-parametrically estimating the weights involved in computing the aforementioned cumulative effect, with a panel of short and contemporaneously correlated time series whose responses share the common cumulative effect of a daily covariate. The sequence of (unknown) daily weights constitutes the so-called transfer function. Specifically, we consider the problem of estimating a smooth common transfer function shared by a panel of short time series that are contemporaneously correlated. We propose an estimation scheme using a likelihood approach that penalizes the roughness of the common transfer function. We illustrate the proposed method with a simulation study and a biological example of indirectly estimating the spawning date distribution of North Sea cod.
Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna
2017-07-01
Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value, at the diastolic BP level, is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
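The calibration step described above can be illustrated with a deliberately simplified model. Here PTT is assumed to decay exponentially with BP, which is an assumption for illustration only (not the paper's tube-load model): fit the curve from baseline PTT-BP pairs, then invert it to read BP off a measured PTT.

```python
import numpy as np

# Illustrative calibration only: assume PTT = a * exp(-b * BP),
# fit (a, b) from a baseline PTT-BP curve, then invert to get BP from PTT.
bp = np.linspace(60, 140, 30)        # mmHg, baseline BP levels
ptt = 250 * np.exp(-0.005 * bp)      # ms, synthetic "true" PTT-BP curve

# Fit log(PTT) = log(a) - b * BP by least squares.
slope, log_a = np.polyfit(bp, np.log(ptt), 1)
a_hat, b_hat = np.exp(log_a), -slope

def bp_from_ptt(ptt_ms):
    """Invert the fitted calibration curve: BP for a measured PTT (ms)."""
    return -np.log(ptt_ms / a_hat) / b_hat

print(round(bp_from_ptt(250 * np.exp(-0.005 * 100)), 1))  # 100.0
```

The paper's contribution is precisely that the subject-specific PTT-BP function is estimated from waveform fitting rather than assumed; this sketch only shows how such a function, once obtained, calibrates PTT to BP.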
On the growth estimates of entire functions of double complex variables
Directory of Open Access Journals (Sweden)
Sanjib Datta
2017-08-01
Full Text Available Recently, Datta et al. (2016) introduced the ideas of relative type and relative weak type of entire functions of two complex variables with respect to another entire function of two complex variables, and proved some related growth properties. In this paper, we further study growth properties of entire functions of two complex variables on the basis of their relative types and relative weak types as introduced by Datta et al. (2016).
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
A Complex Estimation Function based on Community Reputation for On-line Transaction Systems
Directory of Open Access Journals (Sweden)
Yu Yang
2012-09-01
Full Text Available A reputation management system is crucial in online transaction systems, and a reputation function is its central component. We propose a generalized set-theoretic reputation function in this paper, which can be configured to meet the various assessment requirements of a wide range of reputation scenarios encountered in online transactions nowadays. We analyze and verify the tolerance of this reputation function against various socio-communal reputation attacks. We find the function to be dynamic, customizable, and tolerant of different attacks. As such it can serve well in many online transaction systems such as e-commerce websites, online group activities, and P2P systems.
DEFF Research Database (Denmark)
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving...... average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...
John S. Hogland; Nathaniel M. Anderson
2015-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space, often limiting the types of analyses that can be performed. To address this issue, we have developed Function Modeling. Function Modeling is a new modeling framework that streamlines the...
Borovkova, Svetlana; Burton, Robert; Dehling, Herold
2001-01-01
In this paper we develop a general approach for investigating the asymptotic distribution of functionals X_n = f((Z_{n+k})_{k∈Z}) of absolutely regular stochastic processes (Z_n)_{n∈Z}. Such functionals occur naturally as orbits of chaotic dynamical systems, and thus our results can be used to study
DEFF Research Database (Denmark)
Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John
2015-01-01
A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...
Peng, Shitao; Zhou, Ran; Qin, Xuebo; Shi, Honghua; Ding, Dewen
2013-09-15
In this study, the functional group concept was first applied to evaluate the ecosystem health of Bohai Bay. Macrobenthos functional groups were defined according to feeding types and divided into five groups: a carnivorous group (CA), omnivorous group (OM), planktivorous group (PL), herbivorous group (HE), and detritivorous group (DE). Groups CA, DE, OM, and PL were identified, but the HE group was absent from Bohai Bay. Group DE was dominant during the study periods. The ecosystem health was assessed using a functional group evenness index. The functional group evenness values of most sampling stations were less than 0.40, indicating that ecosystem health had deteriorated in Bohai Bay. Such deterioration could be attributed to land reclamation, industrial and sewage effluents, oil pollution, and hypersaline water discharge. This study demonstrates that the functional group concept can be applied to ecosystem health assessment in a semi-enclosed bay. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bipp, T.; Steinmayr, R.; Spinath, B.
2012-01-01
Building on the notion that motivation energizes and directs resources in achievement situations, we argue that goal orientations affect perceptions of own intelligence and that the effect of goals on performance is partly mediated by self-estimates of intelligence. Studies 1 (n = 89) and 2 (n =
Simultaneous Estimation of Regression Functions for Marine Corps Technical Training Specialties.
Dunbar, Stephen B.; And Others
This paper considers the application of Bayesian techniques for simultaneous estimation to the specification of regression weights for selection tests used in various technical training courses in the Marine Corps. Results of a method for m-group regression developed by Molenaar and Lewis (1979) suggest that common weights for training courses…
Adding a Parameter Increases the Variance of an Estimated Regression Function
Withers, Christopher S.; Nadarajah, Saralees
2011-01-01
The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…
The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models
GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.
2008-01-01
In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.
Panel data estimates of the production function and product and labor market imperfections
Dobbelaere, S.; Mairesse, J.
2013-01-01
Consistent with two models of imperfect competition in the labor market-the efficient bargaining model and the monopsony model-we provide two extensions of a microeconomic version of Hall's framework for estimating price-cost margins. We show that both product and labor market imperfections generate
Directory of Open Access Journals (Sweden)
Iris Gorny
2018-03-01
Full Text Available Objectives: The German socio-demographic estimation scale was developed by Jahn et al. (1) to quickly predict premorbid global cognitive functioning in patients. So far, it has been validated in healthy adults and has shown a good correlation with the full-scale and verbal IQ of the Wechsler Adult Intelligence Scale (WAIS) in this group. However, there are no data regarding its use as a bedside test in epilepsy patients. Methods: Forty native German-speaking adult patients with refractory epilepsy were included. They completed a neuropsychological assessment, including a nine-scale short form of the German version of the WAIS-III and the German socio-demographic estimation scale by Jahn et al. (1), during their presurgical diagnostic stay in our center. We calculated means, correlations, and the rate of concordance (range ±5 and ±7.5 IQ score points) between these two measures for the whole group and for a subsample of 19 patients with a global cognitive functioning level within 1 SD of the mean (IQ score range 85-115) who had completed their formal education before epilepsy onset. Results: The German socio-demographic estimation scale by Jahn et al. (1) showed a significant mean overestimation of the global cognitive functioning level of eight points in the epilepsy patient sample compared with the short-form WAIS-III score. The accuracy within a range of ±5 or ±7.5 IQ score points for each patient was similar to that of the healthy controls reported by Jahn et al. (1) in our subsample, but not in our whole sample. Conclusion: Our results show that the socio-demographic scale by Jahn et al. (1) is not sufficiently reliable as an estimation tool of global cognitive functioning in epilepsy patients. It can be used to estimate global cognitive functioning in a subset of patients with a normal global cognitive functioning level who completed their formal education before epilepsy onset, but it does not reliably predict global cognitive functioning in epilepsy patients.
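The concordance rate used in this study (the share of patients whose estimated score lands within ±5 or ±7.5 IQ points of the measured WAIS-III score) is straightforward to compute; the scores below are synthetic illustration data, not the study's.

```python
import numpy as np

# Concordance rate: fraction of patients whose estimated IQ is within
# +/- tol points of the measured score. Scores are synthetic examples.
def concordance_rate(estimated, measured, tol):
    estimated, measured = np.asarray(estimated), np.asarray(measured)
    return float(np.mean(np.abs(estimated - measured) <= tol))

wais = np.array([92, 105, 88, 110, 97, 101])    # measured short-form scores
scale = np.array([98, 104, 95, 112, 96, 113])   # socio-demographic estimates

print(concordance_rate(scale, wais, 5.0))   # 0.5
print(concordance_rate(scale, wais, 7.5))   # 5/6
```

A systematic overestimation like the eight-point bias reported above lowers this rate even when the estimates correlate well with the measured scores.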
Directory of Open Access Journals (Sweden)
Dan Li
2017-11-01
Full Text Available Background: Epidemiologic surveillance of lung function is key to clinical care of individuals with cystic fibrosis, but lung function decline is nonlinear and often impacted by acute respiratory events known as pulmonary exacerbations. Statistical models are needed to simultaneously estimate lung function decline while providing risk estimates for the onset of pulmonary exacerbations, in order to identify relevant predictors of declining lung function and understand how these associations could be used to predict the onset of pulmonary exacerbations. Methods: Using longitudinal lung function (FEV1) measurements and time-to-event data on pulmonary exacerbations from individuals in the United States Cystic Fibrosis Registry, we implemented a flexible semiparametric joint model consisting of a mixed-effects submodel with regression splines to fit repeated FEV1 measurements and a time-to-event submodel for possibly censored data on pulmonary exacerbations. We contrasted this approach with methods currently used in epidemiological studies and highlight clinical implications. Results: The semiparametric joint model had the best fit of all models examined based on the deviance information criterion. Higher starting FEV1 implied more rapid lung function decline in both separate and joint models; however, individualized risk estimates for pulmonary exacerbation differed depending upon model type. Based on shared parameter estimates from the joint model, which accounts for the nonlinear FEV1 trajectory, patients with more positive rates of change were less likely to experience a pulmonary exacerbation (HR per one standard deviation increase in FEV1 rate of change = 0.566, 95% CI 0.516-0.619), and having higher absolute FEV1 also corresponded to lower risk of having a pulmonary exacerbation (HR per one standard deviation increase in FEV1 = 0.856, 95% CI 0.781-0.937). At the population level, both submodels indicated significant effects of birth
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully.
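The shrinkage step described above can be sketched in a few lines. This is a minimal moment-based sketch assuming a known noise variance; the paper instead derives the variance components from its measurement error model, without repeated measures.

```python
import numpy as np

# Empirical-Bayes-style shrinkage of subject-level FC toward the group
# mean. Noise variance is assumed known here for illustration only.
rng = np.random.default_rng(1)
n_subj, n_conn = 50, 10
true_fc = rng.normal(0.3, 0.1, size=(n_subj, n_conn))   # latent subject FC
noise_sd = 0.1                                          # assumed known
observed = true_fc + rng.normal(0, noise_sd, true_fc.shape)

group_mean = observed.mean(axis=0)           # per-connection group average
total_var = observed.var(axis=0, ddof=1)     # signal + noise variance
lam = np.clip(noise_sd**2 / total_var, 0, 1) # shrinkage weight per connection

shrunk = lam * group_mean + (1 - lam) * observed

# Shrinkage trades a little bias for lower variance: MSE improves.
mse_raw = np.mean((observed - true_fc) ** 2)
mse_shrunk = np.mean((shrunk - true_fc) ** 2)
assert mse_shrunk < mse_raw
```

The resulting estimates are biased toward the group mean, which is exactly why the paper replaces the traditional ICC with the ICC_MSE when assessing their reliability.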
Renal parenchyma thickness: a rapid estimation of renal function on computed tomography
International Nuclear Information System (INIS)
Kaplon, Daniel M.; Lasser, Michael S.; Sigman, Mark; Haleblian, George E.; Pareek, Gyan
2009-01-01
Purpose: To define the relationship between renal parenchyma thickness (RPT) on computed tomography and renal function on nuclear renography in chronically obstructed renal units (ORUs), and to define a minimal thickness ratio associated with adequate function. Materials and Methods: Twenty-eight consecutive patients undergoing both nuclear renography and CT during a six-month period between 2004 and 2006 were included. All patients with a diagnosis of unilateral obstruction were included for analysis. RPT was measured in the following manner: the parenchyma thickness at three discrete levels of each kidney was measured using calipers on a CT workstation, and the mean of these three measurements was defined as RPT. The RPT ratio of the ORU to the non-obstructed renal unit (NORU) was calculated, and this was compared to the observed function on MAG-3 Lasix renogram. Results: A total of 28 patients were evaluated. Mean parenchyma thickness was 1.82 cm and 2.25 cm in the ORUs and NORUs, respectively. The mean relative renal function of ORUs was 39%. Linear regression analysis comparing renogram function to the RPT ratio revealed a correlation coefficient of 0.48. A thickness ratio of 0.68 correlated with 20% renal function. Conclusion: RPT on computed tomography appears to be a powerful predictor of relative renal function in ORUs. Assessment of RPT is a useful and readily available clinical tool for surgical decision making (renal salvage therapy versus nephrectomy) in patients with ORUs. (author)
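The RPT-ratio computation in the methods above reduces to a few lines; the thickness values here are illustrative, not patient data from the study.

```python
# RPT per kidney = mean of three parenchyma thickness measurements (cm),
# then the obstructed / non-obstructed ratio. Values are illustrative.
obstructed = [1.9, 1.8, 1.75]        # three CT levels, obstructed unit
non_obstructed = [2.3, 2.2, 2.25]    # three CT levels, non-obstructed unit

rpt_oru = sum(obstructed) / len(obstructed)
rpt_noru = sum(non_obstructed) / len(non_obstructed)
ratio = rpt_oru / rpt_noru
print(round(ratio, 2))  # 0.81
```

Per the study's results, a ratio around 0.68 corresponded to 20% relative function, so a ratio like the one above would sit on the favorable side of that threshold.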
Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem
2017-01-01
In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
Faizullah, Faiz
2016-01-01
The aim of the current paper is to present path-wise and moment estimates for solutions to stochastic functional differential equations (SFDEs) with a non-linear growth condition in the framework of G-expectation and G-Brownian motion. Under the non-linear growth condition, the pth moment estimates for solutions to SFDEs driven by G-Brownian motion are proved. The properties of G-expectations, Hölder's inequality, Bihari's inequality, Gronwall's inequality, and the Burkholder-Davis-Gundy inequalities are used to develop the above-mentioned theory. In addition, the path-wise asymptotic estimates and continuity of the pth moment for solutions to SFDEs in the G-framework with the non-linear growth condition are shown.
Chai, Rui; Xu, Li-Sheng; Yao, Yang; Hao, Li-Ling; Qi, Lin
2017-01-01
This study analyzed the ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO), and peripheral resistance (RS) of central pulse waves measured invasively and non-invasively. Invasively measured parameters were compared with parameters estimated from brachial pulse waves by a regression model and by a transfer function model, and the accuracy of the two models was compared. Findings showed that the k value and the invasively measured central and brachial pulse wave parameters correlated positively. Regression model parameters, including A_slope, DBP, and SEVR, and transfer function model parameters both showed good consistency with the invasively measured parameters, to the same degree. SBP, PP, SV, and CO could be calculated with the regression model, but less accurately than with the transfer function model.
Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.
2015-01-01
In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting as indicated by higher scores on widely used measures of parenting at both earlier and later time points predicted more positive gain in executive function at 60 months. Latent change score models in which parenting and executive function over time were held to standards of longitudinal measurement invariance provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience. PMID:23834294
Directory of Open Access Journals (Sweden)
Eyad K Almaita
2017-03-01
Keywords: Energy efficiency, Power quality, Radial basis function, neural networks, adaptive, harmonic. Article History: Received Dec 15, 2016; Received in revised form Feb 2nd 2017; Accepted 13rd 2017; Available online How to Cite This Article: Almaita, E.K and Shawawreh J.Al (2017 Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application. International Journal of Renewable Energy Develeopment, 6(1, 9-17. http://dx.doi.org/10.14710/ijred.6.1.9-17
Asiri, Sharefa M.
2016-10-20
In this paper, a modulating functions-based method is proposed for estimating space-time-dependent unknowns in one-dimensional partial differential equations. The proposed method reduces the problem to a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
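The core modulating-function idea behind these estimators can be sketched for a first-order transport equation, a simplified stand-in for the models treated above (the specific equation and boundary setup here are assumptions for illustration). Multiplying the PDE by a modulating function that vanishes, together with enough derivatives, on the boundary of the domain, and integrating by parts, moves every derivative off the measured state:

```latex
% Transport equation with unknown source f(t):  u_t + \lambda u_x = f(t).
% \varphi(x,t) vanishes (with its derivatives) on the boundary of
% [0,L] \times [0,T], so all boundary terms drop out.
\int_0^T \!\! \int_0^L \varphi\,(u_t + \lambda u_x)\,dx\,dt
  \;=\; -\int_0^T \!\! \int_0^L u\,(\varphi_t + \lambda \varphi_x)\,dx\,dt
  \;=\; \int_0^T \!\! \int_0^L \varphi\, f(t)\,dx\,dt .
```

Expanding f in known basis functions with unknown coefficients and choosing several modulating functions φ_i turns the middle integral, computable directly from measurements of u, into a linear algebraic system for those coefficients, with no numerical differentiation of noisy data. This is why both abstracts describe the problem as reduced to algebraic equations linear in the unknown parameters.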
Estimate of the influence of muzzle smoke on function range of infrared system
Luo, Yan-ling; Wang, Jun; Wu, Jiang-hui; Wu, Jun; Gao, Meng; Gao, Fei; Zhao, Yu-jie; Zhang, Lei
2013-09-01
Muzzle smoke produced by weapon firing has an important influence on an infrared (IR) system while it is detecting targets. Based on theoretical models of IR-system detection of spot targets and surface targets in the presence of muzzle smoke, the function ranges for detecting spot targets and surface targets are derived separately from the definitions of noise equivalent temperature difference (NETD) and minimum resolvable temperature difference (MRTD). The parameters of muzzle smoke affecting the function range of an IR system are also analyzed. Based on measured muzzle smoke data for a single shot, the function ranges of an IR system for detecting typical targets in the 8-12 micron waveband are calculated with and without muzzle smoke. For our IR system, the function range for detecting a tank is reduced by over 10% when muzzle smoke is present. The results provide evidence for evaluating the influence of muzzle smoke on IR systems and will help researchers improve ammunition manufacture.
Directory of Open Access Journals (Sweden)
Yuri B. Tebekin
2011-11-01
Full Text Available The article is devoted to the problem of quality management for multiphase processes on the basis of the probabilistic approach. A method with continuous response functions, based on the method of Lagrange multipliers, is proposed.
Bornkamp, Björn; Ickstadt, Katja
2009-03-01
In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited from both a computational and a mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and provides possibilities to elicit informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
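The building block above can be sketched as follows, assuming the standard two-sided power (TSP) distribution function of van Dorp and Kotz on [0, 1]; the mixture weights and parameters below are illustrative choices, not fitted values from the paper.

```python
import numpy as np

# Two-sided power (TSP) CDF on [0, 1] with mode theta and power n:
#   F(x) = theta * (x / theta)^n,                     0 <= x <= theta
#   F(x) = 1 - (1 - theta) * ((1 - x)/(1 - theta))^n, theta <  x <= 1
def tsp_cdf(x, theta, n):
    x = np.asarray(x, dtype=float)
    left = theta * (x / theta) ** n
    right = 1 - (1 - theta) * ((1 - x) / (1 - theta)) ** n
    return np.where(x <= theta, left, right)

x = np.linspace(0, 1, 201)
# Monotone "dose-response" curve: shift/scale of a mixture of TSP CDFs.
curve = 0.2 + 0.8 * (0.6 * tsp_cdf(x, 0.3, 2.0) + 0.4 * tsp_cdf(x, 0.7, 3.0))

assert np.all(np.diff(curve) >= -1e-12)  # mixture of CDFs is non-decreasing
```

Because every CDF is non-decreasing and the mixture weights are non-negative, monotonicity of the fitted curve holds by construction, which is the appeal of this parameterization.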
The Approach to an Estimation of a Local Area Network Functioning Efficiency
Directory of Open Access Journals (Sweden)
M. M. Taraskin
2010-09-01
Full Text Available In the article, the authors call attention to the choice of a system of metrics which permits a qualitative assessment of local area network functioning efficiency under conditions of computer attacks.
Functional estimation of kidneys after extracorporeal shock wave therapy (ESWL) by clearance
International Nuclear Information System (INIS)
Sydow, K.; Kirschner, P.; Brien, G.; Buchali, K.; Frenzel, R.
1991-01-01
Thirty-five patients were scintiscanned with 99m-Tc-DTPA to determine the effects that extracorporeal shock waves, used to disintegrate renal concrements, may have on the patients' renal function. The therapy was conducted using a standard Lithostar unit (Siemens) (20 patients) or an additional overtable module (15 patients). Functional scintigraphy was performed using a gamma camera before lithotripsy and on the first day after it. Further control investigations were performed one or two weeks later and two to six months later. In both groups most of the patients developed temporary restrictions of renal function, some of them irreversible. Functional losses were less severe with the overtable module than with the standard Lithostar unit. (orig.) [de
Estimation of CN Parameter for Small Agricultural Watersheds Using Asymptotic Functions
Tomasz Kowalik; Andrzej Walega
2015-01-01
This paper investigates a possibility of using asymptotic functions to determine the value of curve number (CN) parameter as a function of rainfall in small agricultural watersheds. It also compares the actually calculated CN with its values provided in the Soil Conservation Service (SCS) National Engineering Handbook Section 4: Hydrology (NEH-4) and Technical Release 20 (TR-20). The analysis showed that empirical CN values presented in the National Engineering Handbook tables differed from t...
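A widely used asymptotic form for CN as a function of rainfall depth P is CN(P) = CN_inf + (100 - CN_inf)·exp(-k·P) (Hawkins' standard-behaviour model); whether this is the exact function family fitted in the paper is an assumption here. A sketch of fitting it to synthetic event data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Asymptotic CN model: CN approaches CN_inf as rainfall P grows.
# The rainfall/CN pairs below are synthetic, for illustration only.
def cn_asymptotic(P, cn_inf, k):
    return cn_inf + (100.0 - cn_inf) * np.exp(-k * P)

P = np.array([5, 10, 20, 40, 60, 80, 100.0])   # event rainfall, mm
CN = cn_asymptotic(P, 72.0, 0.05)              # synthetic observed CN values

params, _ = curve_fit(cn_asymptotic, P, CN, p0=[80.0, 0.1])
cn_inf_hat, k_hat = params
print(round(cn_inf_hat, 1), round(k_hat, 3))   # recovers 72.0 and 0.05
```

The fitted asymptote CN_inf is the watershed's limiting curve number for large storms, which is the value one would compare against the tabulated NEH-4 CN.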
International Nuclear Information System (INIS)
Cooke, D.J.
1983-01-01
A procedure has been developed for deriving functions which characterize the effect of geomagnetic cutoffs on the charged primary cosmic rays that give rise to neutrinos arriving in any given direction at specified points on or in the earth. These cutoff distribution functions, for use in atmospheric-neutrino flux calculations, have been determined for eight nucleon-decay experiment sites, by use of a technique which employs the Störmer cutoff expression, and which assumes collinear motion of neutrino and parent primary.
Using Empirical Data to Estimate Potential Functions in Commodity Markets: Some Initial Results
Shen, C.; Haven, E.
2017-12-01
This paper focuses on estimating real and quantum potentials from financial commodities. The log returns of six common commodities are considered. We find that some phenomena, such as the vertical potential walls and the time scale issue of the variation on returns, also exist in commodity markets. By comparing the quantum and classical potentials, we attempt to demonstrate that the information within these two types of potentials is different. We believe this empirical result is consistent with the theoretical assumption that quantum potentials (when embedded into social science contexts) may contain some social cognitive or market psychological information, while classical potentials mainly reflect 'hard' market conditions. We also compare the two potential forces and explore their relationship by simply estimating the Pearson correlation between them. The medium or weak interaction effect may indicate that the cognitive system among traders may be affected by those 'hard' market conditions.
Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.
2015-01-01
Background: The calibration to Isotope Dilution Mass Spectroscopy (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods: For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and re-measured using the Roche enzymatic method, traceable to IDMS, in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the re-measurement and 5 for outliers) were divided into three disjoint sets (training, validation, and test) to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate GFR and the prevalence of CKD. Results: The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and intercept of -0.0248 (95% CI, -0.0862 to 0.0366), with an R-squared of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation. The Deming regression model can thus be used to calibrate serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate. PMID:25806862
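Deming regression, the model type selected above, has a closed form when the two assays' error variances are assumed equal (variance ratio delta = 1). The sketch below runs it on synthetic creatinine-like data; the study's slope of 0.968 and intercept of -0.0248 come from its own training set, not from this code.

```python
import numpy as np

# Closed-form Deming regression (error-variance ratio delta), which
# accounts for measurement error in both x and y, unlike ordinary
# least squares. Data below are synthetic illustration values.
def deming(x, y, delta=1.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

rng = np.random.default_rng(2)
true = rng.uniform(0.6, 2.0, 100)                    # "true" creatinine, mg/dL
x = true + rng.normal(0, 0.03, 100)                  # assay A, with error
y = 0.97 * true - 0.02 + rng.normal(0, 0.03, 100)    # assay B, with error

slope, intercept = deming(x, y)
print(slope, intercept)   # close to the simulated slope 0.97, intercept -0.02
```

Because both the spectrophotometric and the IDMS-traceable assays carry measurement error, Deming regression is the natural calibration choice here; OLS would attenuate the slope.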
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV...... application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context....
Verification of functional a posteriori error estimates for obstacle problem in 1D
Czech Academy of Sciences Publication Activity Database
Harasim, P.; Valdman, Jan
2013-01-01
Roč. 49, č. 5 (2013), s. 738-754 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2014/MTR/valdman-0424082.pdf
Verification of functional a posteriori error estimates for obstacle problem in 2D
Czech Academy of Sciences Publication Activity Database
Harasim, P.; Valdman, Jan
2014-01-01
Roč. 50, č. 6 (2014), s. 978-1002 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/MTR/valdman-0441661.pdf
Wang, Wei; Young, Bessie A; Fülöp, Tibor; de Boer, Ian H; Boulware, L Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E
2015-05-01
The calibration to isotope dilution mass spectrometry-traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration equation to estimate the glomerular filtration rate. For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and remeasured using the Roche enzymatic method, traceable to isotope dilution mass spectrometry, in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the remeasurement and 5 for outliers) were divided into 3 disjoint sets-training, validation and test-to select a calibration model, estimate true errors and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate glomerular filtration rate and the prevalence of chronic kidney disease (CKD). The selected Deming regression model provided a slope of 0.968 (95% confidence interval [CI], 0.904-1.053) and intercept of -0.0248 (95% CI, -0.0862 to 0.0366) with an R² value of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the held-out test set (concordance correlation coefficient 0.934, 95% CI, 0.894-0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values compared with 8.29% using noncalibrated serum creatinine with the Chronic Kidney Disease Epidemiology Collaboration equation (P creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate.
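Deming regression, used here to calibrate one assay against the IDMS-traceable reference, fits a line while allowing measurement error in both variables. A minimal numpy sketch of the closed-form estimator (assuming an error-variance ratio of 1; an illustration only, not the study's exact implementation, which also used training/validation splits and confidence intervals):

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression: errors in both variables.
    lam is the assumed ratio of the y-error variance to the x-error variance."""
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    # closed-form slope for the errors-in-variables least-squares line
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept
```

With lam=1 this reduces to orthogonal regression; calibrated values are then obtained as slope*x + intercept.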
Efficacy of using data from angler-caught Burbot to estimate population rate functions
Brauer, Tucker A.; Rhea, Darren T.; Walrath, John D.; Quist, Michael C.
2018-01-01
The effective management of a fish population depends on the collection of accurate demographic data from that population. Since demographic data are often expensive and difficult to obtain, developing cost‐effective and efficient collection methods is a high priority. This research evaluates the efficacy of using angler‐supplied data to monitor a nonnative population of Burbot Lota lota. Age and growth estimates were compared between Burbot collected by anglers and those collected in trammel nets from two Wyoming reservoirs. Collection methods produced different length‐frequency distributions, but no difference was observed in age‐frequency distributions. Mean back‐calculated lengths at age revealed that netted Burbot grew faster than angled Burbot in Fontenelle Reservoir. In contrast, angled Burbot grew slightly faster than netted Burbot in Flaming Gorge Reservoir. Von Bertalanffy growth models differed between collection methods, but differences in parameter estimates were minor. Estimates of total annual mortality (A) of Burbot in Fontenelle Reservoir were comparable between angled (A = 35.4%) and netted fish (33.9%); similar results were observed in Flaming Gorge Reservoir for angled (29.3%) and netted fish (30.5%). Beverton–Holt yield‐per‐recruit models were fit using data from both collection methods. Estimated yield differed by less than 15% between data sources and reservoir. Spawning potential ratios indicated that an exploitation rate of 20% would be required to induce recruitment overfishing in either reservoir, regardless of data source. Results of this study suggest that angler‐supplied data are useful for monitoring Burbot population dynamics in Wyoming and may be an option to efficiently monitor other fish populations in North America.
Estimation of Import and Export Demand Functions Using Bilateral Trade Data: The Case of Pakistan
Jahanzaib Haider; Muhammad Afzal; Farah Riaz
2011-01-01
We estimated the import and export elasticities of Pakistan trade with traditional trade partners and some Asian countries to see the dynamics of Pakistan trade from 1973 to 2008. OLS results suggest that income is the principal determinant of exports and imports. Pakistan exports are cointegrated with Japan and USA while the imports are cointegrated with UAE and USA. Pakistan imports and exports are cointegrated with Bangladesh and Sri Lanka but not with India and China. Income and exchange ...
Hospital costs estimation and prediction as a function of patient and admission characteristics.
Ramiarina, Robert; Almeida, Renan Mvr; Pereira, Wagner Ca
2008-01-01
The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit costs method was used to estimate inpatient day costs associated to specific hospital clinics. With this aim, three "cost centers" were defined in order to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was developed for correlating cost units and their putative predictors (the patient's gender and age, the admission type (urgency/elective), ICU admission (yes/no), blood transfusion (yes/no), the admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index). Data were collected for 3100 patients, January 2001-January 2003. Average inpatient costs across clinics ranged from (US$) 1135 [Orthopedics] to 3101 [Cardiology]. Costs increased according to increases in the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated to costs, and age had no association with costs. The (adjusted) per cent of explained variance varied between 36.3% [Clinical medicine] and 55.1% [Thoracic surgery clinic]. The estimates are an important step towards the standardization of hospital costs calculation, especially for countries that lack formal hospital accounting systems.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.
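The equal-mixing assumption that the equilibrium capture probability P(e) is meant to satisfy underlies the classical mark-recapture estimate of population size. As an illustration only (the authors' own estimate is derived from P(e) and the pre-equilibrium regression parameters, which are not reproduced here), the Lincoln-Petersen form is:

```python
def lincoln_petersen(marked, captured, recaptured):
    """Classical equal-mixing mark-recapture estimate: N ~ M * C / R,
    where M marked animals are released, C are later captured,
    and R of those captures are marked."""
    return marked * captured / recaptured
```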
Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando
2018-05-01
The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. Networks are usually calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical-lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of individual networks estimated with existing methods such as the Fisher Z test, and the issue of JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of several key network metrics, and to compare it with JGMSS and the Fisher Z test, we applied all three methods to both simulated and in vivo data. As a method aiming for group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
Functional approach in estimation of cultural ecosystem services of recreational areas
Sautkin, I. S.; Rogova, T. V.
2018-01-01
The article is devoted to the identification and analysis of cultural ecosystem services of recreational areas from the different forest plant functional groups in the suburbs of Kazan. The study explored two cultural ecosystem services supplied by forest plants by linking these services to different plant functional traits. Information on the functional traits of 76 plants occurring in the forest ecosystems of the investigated area was collected from reference books on the biological characteristics of plant species. Analysis of these species and traits with the Ward clustering method yielded four functional groups with different potentials for delivering ecosystem services. The results show that the contribution of species diversity to services can be characterized through the functional traits of plants. This proves that there is a stable relationship between biodiversity and the quality and quantity of ecosystem services. The proposed method can be extended to other types of services (regulating and supporting). The analysis can be used in the socio-economic assessment of natural ecosystems for recreation and other uses.
Estimation of Hepatic Function Using 99mTc-DISIDA Plasma Clearance Rate
International Nuclear Information System (INIS)
Lee, M. S.; Yoo, H. S.; Lee, J. T.; Park, C. Y.
1983-01-01
Various methods to determine hepatic function have been studied; among these, the calculated maximal removal rate of ICG (ICG R-max) is a more accurate and sensitive index for quantification of hepatic function. However, calculation of ICG R-max is a time-consuming, invasive, and expensive study, and even the 1-day ICG R-max study is still complicated. We therefore evaluated a hepatic function test using the 99mTc-DISIDA plasma clearance rate. The authors studied 11 normal controls, 4 cases of acute hepatitis, 8 cases of chronic hepatitis, and 19 cases of liver cirrhosis. The results were as follows: 1. In normal controls, DISIDA-K was 0.70 min⁻¹; in liver cirrhosis, 0.25 min⁻¹; in acute hepatitis, 0.46 min⁻¹; and in chronic hepatitis, 0.14 min⁻¹. The most severely depressed DISIDA-K value was observed in liver cirrhosis. 2. Comparison of the DISIDA-K value to liver function indices revealed no correlation between the DISIDA-K value and serum albumin, prothrombin time, total bilirubin, SGOT, or alkaline phosphatase. 3. The DISIDA-K value in liver cirrhosis with complications such as ascites, splenomegaly, esophageal varices, and hepatic coma was lower than without complications. From the above results, calculation of the DISIDA-K value was found to be an easily available, accurate, and simple index for quantification of hepatic function.
2009-01-01
Background During the last part of the 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects. Both, the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, required survival functions may be different by age groups and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain). Then one may derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods Cubic splines were used to smooth data and obtain continuous hazard rate functions. After, we fitted a Poisson model to derive hazard ratios. The model included time as a covariate. Then the hazard ratios were applied to US survival functions detailed by age and stage to obtain Catalan estimations. Results We started estimating the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods, before cancer control interventions (USA 1975–79, Catalonia 1980–89) and after (USA and Catalonia 1990–2001). Survival in Catalonia in the 1980–89 period was worse than in the USA during 1975–79, but the differences disappeared in 1990–2001. Conclusion Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia. On
Unbiased determination of the proton structure function F2p with faithful uncertainty estimation
International Nuclear Information System (INIS)
Del Debbio, Luigi; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea
2005-01-01
We construct a parametrization of the deep-inelastic structure function of the proton F₂(x, Q²) based on all available experimental information from charged lepton deep-inelastic scattering experiments. The parametrization effectively provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. The result is obtained in the form of a Monte Carlo sample of neural networks trained on an ensemble of replicas of the experimental data. We discuss in detail the techniques required for the construction of bias-free parametrizations of large amounts of structure function data, in view of future applications to the determination of parton distributions based on the same method. (author)
Quantitative pre-surgical lung function estimation with SPECT/CT
International Nuclear Information System (INIS)
Bailey, Dale L.; Timmins, Sophi; Harris, Benjamin E.; Bailey, Elizabeth A.; Roach, Paul J.; Willowson, Kathy P.
2009-01-01
Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Six patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO, and cardiopulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO, and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung.
Estimation of the multidimensional transient functions of the human oculomotor system
Pavlenko, Vitaliy; Salata, Dmytro; Dombrovskyi, Mykola; Maksymenko, Yuri
2017-09-01
A new method is proposed for constructing nonparametric dynamic models of the oculomotor system (OMS) in the form of multidimensional transient functions, based on experimental "input-output" data. Bright points displayed for a long duration on a computer screen were used as test signals. The OMS response was measured using Eye-tracking information technology and recorded on video. Processing of the experimental data yields a "pupil coordinate versus time" function. Using the method of least squares (Ordinary Least Squares, OLS), transient functions of the first, second, and third order - integral transformations of Volterra kernels - were determined, representing a model of the OMS. Experimental studies using computer simulations confirm the adequacy of the constructed approximation model to the real system.
Danjon, Frédéric; Caplan, Joshua S; Fortin, Mathieu; Meredieu, Céline
2013-01-01
Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately.
International Nuclear Information System (INIS)
Pereira, A.B.; Vrisman, A.L.; Galvani, E.
2002-01-01
The solar radiation received at the surface of the earth, apart from its relevance to several daily human activities, plays an important role in the growth and development of plants. The aim of the current work was to develop and calibrate an estimation model for the evaluation of the global solar radiation flux density as a function of the solar energy potential at the soil surface. Radiometric data were collected at Ponta Grossa, PR, Brazil (latitude 25°13' S, longitude 50°03' W, altitude 880 m). Estimated values of solar energy potential, obtained as a function of only one measurement taken at solar noon, were compared with those measured by a Robitzsch bimetallic actinograph for days with insolation ratios higher than 0.85. This data set was submitted to a simple linear regression analysis, and a good fit between observed and calculated values was obtained. The coefficients a and b of Angström's equation were estimated for the site under study using the method based on the solar energy potential at the soil surface. The methodology was efficient for assessing the coefficients, allowing the global solar radiation flux density to be determined quickly and simply; it was also found that the criterion for the estimation of the solar energy potential is equivalent to that of the classical Angström methodology. Knowledge of the available solar energy potential and global solar radiation flux density is of great importance for the estimation of the maximum atmospheric evaporative demand and of water consumption by irrigated crops, and also for building solar engineering equipment such as driers, heaters, solar ovens, and refrigerators.
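Angström's equation relates the clearness index H/H0 linearly to the insolation ratio n/N, so its coefficients a and b can be estimated by simple linear regression, as in the study. A minimal numpy sketch (variable names are illustrative):

```python
import numpy as np

def angstrom_coefficients(insolation_ratio, clearness_index):
    """Fit the Angstrom relation H/H0 = a + b * (n/N) by ordinary
    least squares and return the coefficients (a, b)."""
    b, a = np.polyfit(insolation_ratio, clearness_index, 1)
    return a, b
```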
Jonge, de R.; Zanten, van J.H.
2012-01-01
We investigate posterior contraction rates for priors on multivariate functions that are constructed using tensor-product B-spline expansions. We prove that using a hierarchical prior with an appropriate prior distribution on the partition size and Gaussian prior weights on the B-spline
A non-parametric estimator for the doubly-periodic Poisson intensity function
R. Helmers (Roelof); I.W. Mangku (Wayan); R. Zitikis
2007-01-01
textabstractIn a series of papers, J. Garrido and Y. Lu have proposed and investigated a doubly-periodic Poisson model, and then applied it to analyze hurricane data. The authors have suggested several parametric models for the underlying intensity function. In the present paper we construct and
Optimization of the coherence function estimation for multi-core central processing unit
Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.
2017-02-01
The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural, and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution relative to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the constructed calculating functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
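The magnitude-squared coherence discussed above is estimated from segment-averaged spectra. A plain numpy sketch of the underlying estimator (rectangular window, no overlap; an illustration, not the authors' optimized parallel implementation):

```python
import numpy as np

def coherence(x, y, nperseg=256):
    """Magnitude-squared coherence C_xy(f) = |P_xy|^2 / (P_xx * P_yy),
    with spectra averaged over non-overlapping segments of length nperseg."""
    nseg = len(x) // nperseg
    pxx = pyy = pxy = 0.0
    for k in range(nseg):
        X = np.fft.rfft(x[k * nperseg:(k + 1) * nperseg])
        Y = np.fft.rfft(y[k * nperseg:(k + 1) * nperseg])
        pxx = pxx + np.abs(X) ** 2
        pyy = pyy + np.abs(Y) ** 2
        pxy = pxy + X * np.conj(Y)
    return np.abs(pxy) ** 2 / (pxx * pyy)
```

For identical inputs the estimate is 1 at every frequency; averaging over several segments is what makes the estimate meaningful for noisy signals.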
Drug Dosing and Estimated Renal Function-Any Step Forward from Effersoe?
DEFF Research Database (Denmark)
Hornum, Mads; Feldt-Rasmussen, Bo
2017-01-01
Drug dosing in accordance with the renal function is a long-standing challenge to clinicians. For many years it has been evident that in many clinical situations there is no easy way to correctly dose any drug that is mainly cleared by the kidneys. Despite the development of many formulas...
Asymptotic Estimates of Gerber-Shiu Functions in the Renewal Risk Model with Exponential Claims
Institute of Scientific and Technical Information of China (English)
Li WEI
2012-01-01
This paper continues the study of the asymptotic behavior of Gerber-Shiu expected discounted penalty functions in the renewal risk model as the initial capital becomes large. Under the assumption that the claim-size distribution is exponential, we establish an explicit asymptotic formula. Some straightforward consequences of this formula match existing results in the field.
Construction of New Electronic Density Functionals with Error Estimation Through Fitting
DEFF Research Database (Denmark)
Petzold, V.; Bligaard, T.; Jacobsen, K. W.
2012-01-01
We investigate the possibilities and limitations for the development of new electronic density functionals through large-scale fitting to databases of binding energies obtained experimentally or through high-quality calculations. We show that databases with up to a few hundred entries allow for u...
International Nuclear Information System (INIS)
Frid, I.A.; Berntstejn, M.I.; Evtyukhin, A.I.; Shul'ga, N.I.
1980-01-01
The functional state of the adrenal glands during surgical and combined treatment was examined in 38 radically operated patients with pulmonary cancer. Irradiation of lung cancer patients was found to stimulate adrenal gland activity, followed by a reduction of their potential, manifested in a less marked increase of the catecholamine level and a decreased 11-OCS level in blood during surgical treatment.
Modelling of migration from multi-layers and functional barriers: Estimation of parameters
Dole, P.; Voulzatis, Y.; Vitrac, O.; Reynier, A.; Hankemeier, T.; Aucejo, S.; Feigenbaum, A.
2006-01-01
Functional barriers form parts of multi-layer packaging materials, which are deemed to protect the food from migration of a broad range of contaminants, e.g. those associated with reused packaging. Often, neither the presence nor the identity of the contaminants is known, so that safety assessment
Re-estimation of renal function with 99mTc-DTPA by the Gates' method
International Nuclear Information System (INIS)
Itoh, Kazuo; Arakawa, Masanori
1987-01-01
We analyzed a regression equation between percent total renal uptake (%TRU) of 99mTc-DTPA and creatinine clearance (Ccr) by the Gates' method in 82 patients. 1) The following regression equations between measured renal depth on CT scan and (weight in kg)/(height in cm) in Japanese subjects were obtained: Right kidney = 13.6361 · (W/H)^0.6996 (n = 217, r = 0.86691, p 0.7554 (n = 224, r = 0.8822, p 0.8099 (n = 27, r = 0.9515, p 0.6997 (n = 21, r = 0.9213, p 2) = 13.15 · %TRU^0.787 (n = 86, r = 0.820, p 0.753 (n = 40, r = 0.754). The Gates' method is very convenient for an immediate estimation of glomerular filtration rate (GFR) after renal scintigraphy using 99mTc-DTPA. However, the correlation coefficient was not as high as in Gates' results. The equation reported by Gates is not necessarily applicable in routine studies. Each facility that uses the Gates' method for estimating GFR should obtain a corrected regression equation between %TRU and Ccr. (author)
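Power-law regressions of the form depth = a · (W/H)^b, as above, can be fit by ordinary least squares in log-log space. A numpy sketch (illustrative only; the abstract's lost p-values and left-hand sides are not reconstructed):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by linear least squares on log(y) = log(a) + b*log(x).
    Returns (a, b). Assumes strictly positive x and y."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b
```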
Estimation of the lower flammability limit of organic compounds as a function of temperature.
Rowley, J R; Rowley, R L; Wilding, W V
2011-02-15
A new method of estimating the lower flammability limit (LFL) of general organic compounds is presented. The LFL is predicted at 298 K for gases and the lower temperature limit for solids and liquids from structural contributions and the ideal gas heat of formation of the fuel. The average absolute deviation from more than 500 experimental data points is 10.7%. In a previous study, the widely used modified Burgess-Wheeler law was shown to underestimate the effect of temperature on the lower flammability limit when determined in a large-diameter vessel. An improved version of the modified Burgess-Wheeler law is presented that represents the temperature dependence of LFL data determined in large-diameter vessels more accurately. When the LFL is estimated at increased temperatures using a combination of this model and the proposed structural-contribution method, an average absolute deviation of 3.3% is returned when compared with 65 data points for 17 organic compounds determined in an ASHRAE-style apparatus.
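One common statement of the modified Burgess-Wheeler law (the Zabetakis form, assumed here; the paper's improved version is not reproduced) makes the LFL a linear, decreasing function of temperature. A hedged sketch:

```python
def lfl_at_temperature(lfl25_vol_pct, heat_of_combustion_kcal_mol, t_celsius):
    """Modified Burgess-Wheeler law, Zabetakis form (an assumption here):
    LFL(T) = LFL(25C) - 0.75 * (T - 25) / dHc,
    with dHc the net heat of combustion in kcal/mol and LFL in vol%."""
    return lfl25_vol_pct - 0.75 * (t_celsius - 25.0) / heat_of_combustion_kcal_mol
```

The linear decrease with temperature is the behavior the study reports the classic law underestimates for large-diameter vessels.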
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
Estimation of Net Radiation in Three Different Plant Functional Types in Korea
International Nuclear Information System (INIS)
Kwon, H.J.
2009-01-01
Net radiation (RN) is the major driving force for biophysical and biogeochemical processes in terrestrial ecosystems and is one of the most critical variables in both measurement and modeling. Despite its importance, there are only 10 weather stations conducting RN measurements among the 544 stations operated by the Korea Meteorological Administration (KMA; KMA, 2008). The measurement of incoming shortwave radiation (RS↓) is, however, conducted at 22 stations, while that of sunshine duration is conducted at all the manned stations. In this context, the recent research for estimating RN using RS↓ in the Korean peninsula by Kwon (2009) is of great worth. The author used a linear regression and the radiation balance methods. We generally agree with the author that, in terms of simplicity and practicality, both methods show reliable applicability for estimating RN. We noted, however, that the author's experimental method and analysis need some clarification and improvement, addressed from the following perspectives: (1) the use of daily integrated data for regression, (2) the use of measured albedo, (3) the use of linear coefficients for whole-year data, (4) methodological improvement, (5) the use of sunshine duration, and (6) the error assessment. (author)
Schneider, Hauke; Huynh, Thien J; Demchuk, Andrew M; Dowlatshahi, Dar; Rodriguez-Luna, David; Silva, Yolanda; Aviv, Richard; Dzialowski, Imanuel
2018-06-01
The intracerebral hemorrhage (ICH) score is the most commonly used grading scale for stratifying functional outcome in patients with acute ICH. We sought to determine whether a combination of the ICH score and the computed tomographic angiography spot sign may improve outcome prediction in the cohort of a prospective multicenter hemorrhage trial. Prospectively collected data from 241 patients from the observational PREDICT study (Prediction of Hematoma Growth and Outcome in Patients With Intracerebral Hemorrhage Using the CT-Angiography Spot Sign) were analyzed. Functional outcome at 3 months was dichotomized using the modified Rankin Scale (0-3 versus 4-6). Performance of (1) the ICH score and (2) the spot sign ICH score-a scoring scale combining ICH score and spot sign number-was tested. Multivariable analysis demonstrated that ICH score (odds ratio, 3.2; 95% confidence interval, 2.2-4.8) and spot sign number (n=1: odds ratio, 2.7; 95% confidence interval, 1.1-7.4; n>1: odds ratio, 3.8; 95% confidence interval, 1.2-17.1) were independently predictive of functional outcome at 3 months with similar odds ratios. Prediction of functional outcome was not significantly different using the spot sign ICH score compared with the ICH score alone (spot sign ICH score area under curve versus ICH score area under curve: P=0.14). In the PREDICT cohort, a prognostic score adding the computed tomographic angiography-based spot sign to the established ICH score did not improve functional outcome prediction compared with the ICH score. © 2018 American Heart Association, Inc.
Directory of Open Access Journals (Sweden)
Atta Ullah
2014-01-01
Full Text Available In practical utilization of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real-life situations, a linear cost function of the sample size nh is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical program. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
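The allocation problem above can be illustrated with a simple greedy sketch. This is not the paper's extended lexicographic goal programming procedure; it assumes a nonlinear cost c_h*n_h + t_h*sqrt(n_h) per stratum (the square-root travel term is an assumption) and adds units one at a time wherever the variance reduction per unit of marginal cost is largest:

```python
import math

def allocate(strata, budget):
    """Greedy integer allocation for stratified sampling under a nonlinear
    cost (a sketch; each stratum is a dict with N = size, S = std dev,
    c = per-unit cost, t = travel-cost coefficient)."""
    def cost(n):
        # assumed cost: per-unit cost plus a sqrt travel term per stratum
        return sum(s["c"] * nh + s["t"] * math.sqrt(nh) for s, nh in zip(strata, n))

    def variance(n):
        # variance of the stratified mean estimator (up to a constant factor)
        return sum(s["N"] ** 2 * s["S"] ** 2 * (1.0 / nh - 1.0 / s["N"])
                   for s, nh in zip(strata, n))

    n = [2] * len(strata)  # at least two units per stratum for variance estimation
    while True:
        best_h, best_gain = None, 0.0
        for h, s in enumerate(strata):
            if n[h] >= s["N"]:
                continue  # cannot sample more units than the stratum holds
            trial = list(n)
            trial[h] += 1
            if cost(trial) > budget:
                continue
            gain = (variance(n) - variance(trial)) / (cost(trial) - cost(n))
            if gain > best_gain:
                best_h, best_gain = h, gain
        if best_h is None:
            return n
        n[best_h] += 1
```

A higher-variance stratum naturally receives more units before the budget is exhausted.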
Spectral Velocity Estimation using the Autocorrelation Function and Sparse data Sequences
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2005-01-01
Ultrasound scanners can be used for displaying the distribution of velocities in blood vessels by finding the power spectrum of the received signal. It is desired to show a B-mode image for orientation and data for this has to be acquired interleaved with the flow data. Techniques for maintaining...... both the B-mode frame rate, and at the same time have the highest possible $f_{prf}$ only limited by the depth of investigation, are, thus, of great interest. The power spectrum can be calculated from the Fourier transform of the autocorrelation function $R_r(k)$. The lag $k$ corresponds...... of the sequence. The audio signal has also been synthesized from the autocorrelation data by passing white, Gaussian noise through a filter designed from the power spectrum of the autocorrelation function. The results show that both the full velocity range can be maintained at the same time as a B-mode image...
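The Wiener-Khinchin route from the autocorrelation function to the power spectrum mentioned above can be sketched as follows. This is a minimal dense-data version; Jensen's sparse-sequence estimator and the $f_{prf}$ interleaving scheme are not reproduced:

```python
import numpy as np

def power_spectrum_from_acf(x, max_lag):
    """Estimate a power spectrum via the autocorrelation function R(k),
    following the Wiener-Khinchin relation (a sketch of the general
    approach, not the scanner-specific estimator)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimate for lags k = 0 .. max_lag
    acf = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    # symmetric extension R(-k) = R(k); the FFT then gives the spectrum
    r = np.concatenate([acf, acf[-2:0:-1]])
    return np.real(np.fft.fft(r))
```

For a pure tone, the spectral peak lands at the tone's frequency bin.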
Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Peters, B.; Cohen, H.;
2015-01-01
Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). The goal of this project was to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection.
Estimating Diversifying Selection and Functional Constraint in the Presence of Recombination
Wilson, Daniel J.; McVean, Gilean
2006-01-01
Models of molecular evolution that incorporate the ratio of nonsynonymous to synonymous polymorphism (dN/dS ratio) as a parameter can be used to identify sites that are under diversifying selection or functional constraint in a sample of gene sequences. However, when there has been recombination in the evolutionary history of the sequences, reconstructing a single phylogenetic tree is not appropriate, and inference based on a single tree can give misleading results. In the presence of high le...
Estimation of Input Function from Dynamic PET Brain Data Using Bayesian Blind Source Separation
Czech Academy of Sciences Publication Activity Database
Tichý, Ondřej; Šmídl, Václav
2015-01-01
Roč. 12, č. 4 (2015), s. 1273-1287 ISSN 1820-0214 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind source separation * Variational Bayes method * dynamic PET * input function * deconvolution Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2015 http://library.utia.cas.cz/separaty/2015/AS/tichy-0450509.pdf
Estimation of placenta function using T2* measurements during hyper- and normoxia
DEFF Research Database (Denmark)
Peters, David Alberg; Sørensen, Anne Nødgård; Fründ, Ernst Torben
2012-01-01
MR imaging is becoming widely used for prenatal diagnosis1. Conventional prenatal imaging focuses on structural changes, but recently several groups have begun to investigate changes in the MR signal during oxygen breathing. The main focus has been changes in the BOLD signal2,3 in organs such a...... such as liver, brain, lungs and heart. In this study we investigate the feasibility of pre- and post-oxygen T2* measurements to evaluate the function of the placenta....
Directory of Open Access Journals (Sweden)
Bin Chen
Full Text Available To establish a simple two-compartment model for glomerular filtration rate (GFR) and renal plasma flow (RPF) estimations by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). A total of eight New Zealand white rabbits were included in DCE-MRI. The two-compartment model was modified with the impulse residue function in this study. First, the reliability of the GFR measurement of the proposed model was compared with other published models in Monte Carlo simulation at different noise levels. Then, functional parameters were estimated in six healthy rabbits to test the feasibility of the new model. Moreover, in order to investigate the validity of its GFR estimation, two rabbits underwent an acute ischemia surgical procedure in a unilateral kidney before DCE-MRI, and pixel-wise measurements were implemented to detect the cortical GFR alterations between normal and abnormal kidneys. The lowest variability of GFR and RPF measurements was found in the proposed model in the comparison. Mean GFR was 3.03±1.1 ml/min and mean RPF was 2.64±0.5 ml/g/min in normal animals, which were in good agreement with the published values. Moreover, a large GFR decline was found in dysfunctional kidneys compared to the contralateral control group. Results in our study demonstrate that measurement of renal kinetic parameters based on the proposed model is feasible and that it has the ability to discriminate GFR changes in healthy and diseased kidneys.
Yeo, B T Thomas; Krienen, Fenna M; Chee, Michael W L; Buckner, Randy L
2014-03-01
The organization of the human cerebral cortex has recently been explored using techniques for parcellating the cortex into distinct functionally coupled networks. The divergent and convergent nature of cortico-cortical anatomic connections suggests the need to consider the possibility of regions belonging to multiple networks and hierarchies among networks. Here we applied the Latent Dirichlet Allocation (LDA) model and spatial independent component analysis (ICA) to solve for functionally coupled cerebral networks without assuming that cortical regions belong to a single network. Data analyzed included 1000 subjects from the Brain Genomics Superstruct Project (GSP) and 12 high quality individual subjects from the Human Connectome Project (HCP). The organization of the cerebral cortex was similar regardless of whether a winner-take-all approach or the more relaxed constraints of LDA (or ICA) were imposed. This suggests that large-scale networks may function as partially isolated modules. Several notable interactions among networks were uncovered by the LDA analysis. Many association regions belong to at least two networks, while somatomotor and early visual cortices are especially isolated. As examples of interaction, the precuneus, lateral temporal cortex, medial prefrontal cortex and posterior parietal cortex participate in multiple paralimbic networks that together comprise subsystems of the default network. In addition, regions at or near the frontal eye field and human lateral intraparietal area homologue participate in multiple hierarchically organized networks. These observations were replicated in both datasets and could be detected (and replicated) in individual subjects from the HCP. © 2013.
Age-independent anti-Müllerian hormone (AMH) standard deviation scores to estimate ovarian function.
Helden, Josef van; Weiskirchen, Ralf
2017-06-01
To determine single-year age-specific anti-Müllerian hormone (AMH) standard deviation scores (SDS) for women, associated with normal ovarian function and with different ovarian disorders resulting in sub- or infertility. Determination of single-year median and mean AMH values with standard deviations (SD), and calculation of age-independent cut-off SDS for the discrimination between normal ovarian function and ovarian disorders. Single-year-specific median, mean, and SD values have been evaluated for the Beckman Access AMH immunoassay. While the decrease of both median and mean AMH values is strongly correlated with increasing age, the calculated SDS values have been shown to be age independent, differentiating normal ovarian function (measured as occurred ovulation with sufficient luteal activity) from hyperandrogenemic cycle disorders or anovulation associated with high AMH values, and from reduced ovarian activity or insufficiency associated with low AMH, respectively. These results will be helpful for the treatment of patients and the evaluation of the different reproductive options. Copyright © 2017 Elsevier B.V. All rights reserved.
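The age-independent standard deviation score itself is a simple transform of the age-specific reference values; a minimal sketch, where the norms table below is hypothetical illustrative data, not the Beckman Access reference values:

```python
def amh_sds(value, age, norms):
    """Age-independent standard deviation score: (AMH - mean) / SD for
    the single-year age group. The norms mapping {age: (mean, SD)} must
    come from assay-specific reference data (hypothetical here)."""
    mean, sd = norms[age]
    return (value - mean) / sd
```

A patient at the age-group mean scores 0; one SD above scores +1, regardless of age, which is what makes a single cut-off usable across ages.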
International Nuclear Information System (INIS)
Ding Shouguo; Xie Yu; Yang Ping; Weng Fuzhong; Liu Quanhua; Baum, Bryan; Hu Yongxiang
2009-01-01
The bulk-scattering properties of dust aerosols and clouds are computed for the community radiative transfer model (CRTM), which is a flagship effort of the Joint Center for Satellite Data Assimilation (JCSDA). The delta-fit method is employed to truncate the forward peaks of the scattering phase functions and to compute the Legendre expansion coefficients for reconstructing the truncated phase function. Use of more terms in the expansion gives a more accurate reconstruction of the phase function, but the issue remains as to how many terms are necessary for different applications. To explore this issue further, the bidirectional reflectances associated with dust aerosols, water clouds, and ice clouds are simulated with various numbers of Legendre expansion terms. To keep relative numerical errors smaller than 5%, the present analyses indicate that, in the visible spectrum, 16 Legendre polynomials should be used for dust aerosols, while 32 Legendre expansion terms should be used for both water and ice clouds. In the infrared spectrum, the brightness temperatures at the top of the atmosphere are computed by using the scattering properties of dust aerosols, water clouds and ice clouds. Although small differences in brightness temperatures are observed at large viewing angles for each layer when 4, 8, and 128 expansion terms are compared, it is shown that 4 terms of Legendre polynomials are sufficient in the radiative transfer computation at infrared wavelengths for practical applications.
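Reconstructing a truncated phase function from its Legendre expansion coefficients, as described above, can be sketched in one call. The coefficients in the test describe a simple Rayleigh-like analytic phase function P(mu) = 0.75(1 + mu^2), not delta-fit output for dust or cloud particles:

```python
import numpy as np
from numpy.polynomial import legendre

def reconstruct_phase_function(coeffs, mu):
    """Evaluate the truncated Legendre expansion
    P(mu) = sum_l c_l P_l(mu) at cosine-of-scattering-angle mu
    (a sketch of the reconstruction step only; computing the
    delta-fit coefficients themselves is not shown)."""
    return legendre.legval(mu, coeffs)
```

Keeping more coefficients sharpens the reproduced forward peak, which is exactly the accuracy/term-count trade-off the study quantifies.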
Estimated glomerular filtration rate function in patients with and without metabolic syndrome
Directory of Open Access Journals (Sweden)
María E Lizardo
2016-06-01
Full Text Available Introduction: Metabolic syndrome (MS) is an independent risk factor that affects the development of chronic kidney disease, so the glomerular filtration rate (GFR) was evaluated as an indicator of glomerular function in patients with and without MS who attended the outpatient clinic "Los Grillitos, sector Caña de Azúcar". Materials and Methods: A comparative, correlational, cross-sectional study was conducted in a non-probability convenience sample consisting of 60 patients with MS, diagnosed according to the ATP III Panel criteria, and 60 apparently healthy individuals, in whom the GFR was determined by the Cockcroft-Gault formula, along with the clinical and biochemical parameters for the diagnosis of MS. Results: Of the total patients evaluated, 37 (30.7%) showed alterations that placed them in grades G2 and G3 of the CKD risk stratification system; of these, 18 and 19 corresponded to patients with and without MS, respectively. Glomerular hyperfiltration (>120 ml/min) was found in both groups: 28 (46.7%) and 24 (40%) cases of patients with and without MS, respectively. Glomerular function was strongly correlated with abdominal obesity and high blood pressure. As for the number of criteria and its relationship to the level of kidney damage present, no firm trend of the latter increasing with the former was observed (p=0.385). Conclusion: The change in glomerular function is not directly related to MS itself but to its components, specifically abdominal obesity and hypertension.
Antarctic ice sheet thickness estimation based on P-receiver function and waveform inversion
Yan, P.; Li, F.; LI, Z.; Li, J.; Yang, Y.; Hao, W.
2016-12-01
Antarctic ice sheet thickness is a key parameter and boundary condition for ice sheet model construction, and is of great significance for glacial isostatic adjustment, ice sheet mass balance and global change studies. Ice thickness acquired using the seismological receiver function method can complement and cross-validate results obtained by the radar echo sounding method. In this paper, P-receiver functions (PRFs) are extracted for stations deployed on the Antarctic ice sheet, and the Vp/Vs ratio and ice thickness are obtained using H-Kappa stacking. Comparisons are made between the Bedmap2 dataset and the ice thickness from PRFs; most of the absolute differences are less than 200 meters, and only a few reach 600 meters. Taking into account the density of Bedmap2 survey lines and the uncertainty of radio echo sounding, as well as the inherent complexity of the internal ice structure beneath some stations, the ice thickness obtained from the receiver function method is reliable. However, limitations exist when using the H-Kappa stacking method for stations where sediment is squeezed between the ice and the bedrock layer. For better verification of the PRF results, a global optimization method, the Neighbourhood algorithm (NA), and spline interpolation are used to model PRFs, assuming an isotropic layered ice sheet with depth-varying densities and velocities beneath the stations. The velocity structure and ice sheet thickness are then obtained through nonlinear searching by optimally fitting the real and theoretical PRFs. The ice sheet thickness obtained beneath the stations agrees well with the earlier H-Kappa method, but further detailed study is needed to constrain the internal ice velocity structure.
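The forward relation underlying H-Kappa stacking for a single layer can be sketched as follows. The default velocities are illustrative values for glacial ice, not the study's inverted values:

```python
import math

def ice_thickness(t_ps, vp=3.87, vs=1.95, p=0.0):
    """Single-layer thickness from the P-to-S converted phase delay t_Ps (s),
    using the standard forward relation of H-kappa stacking:
        H = t_Ps / (sqrt(1/Vs^2 - p^2) - sqrt(1/Vp^2 - p^2))
    where p is the ray parameter (s/km) and Vp, Vs are layer velocities
    (km/s; the defaults are illustrative ice values, an assumption here)."""
    return t_ps / (math.sqrt(vs ** -2 - p * p) - math.sqrt(vp ** -2 - p * p))
```

H-Kappa stacking effectively grid-searches thickness H and the ratio kappa = Vp/Vs so that this predicted delay (and its multiples) lines up with the observed receiver-function phases.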
Energy Technology Data Exchange (ETDEWEB)
Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo [Div. of Mechanical Engineering, Ajou University, Suwon (Korea, Republic of)
2016-10-15
This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
International Nuclear Information System (INIS)
Lee, Jong Kyeom; Kim, Tae Yun; Kim, Hyun Su; Chai, Jang Bom; Lee, Jin Woo
2016-01-01
This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage
Directory of Open Access Journals (Sweden)
Jong Kyeom Lee
2016-10-01
Full Text Available This paper presents an advanced estimation method for obtaining the probability density functions of a damage parameter for valve leakage detection in a reciprocating pump. The estimation method is based on a comparison of model data which are simulated by using a mathematical model, and experimental data which are measured on the inside and outside of the reciprocating pump in operation. The mathematical model, which is simplified and extended on the basis of previous models, describes not only the normal state of the pump, but also its abnormal state caused by valve leakage. The pressure in the cylinder is expressed as a function of the crankshaft angle, and an additional volume flow rate due to the valve leakage is quantified by a damage parameter in the mathematical model. The change in the cylinder pressure profiles due to the suction valve leakage is noticeable in the compression and expansion modes of the pump. The damage parameter value over 300 cycles is calculated in two ways, considering advance or delay in the opening and closing angles of the discharge valves. The probability density functions of the damage parameter are compared for diagnosis and prognosis on the basis of the probabilistic features of valve leakage.
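The probability density functions of the per-cycle damage parameter compared above can be estimated generically with a kernel density estimate. This is a sketch of the density-estimation step only, applied to hypothetical per-cycle values, not the authors' model-based diagnosis procedure:

```python
import math

def gaussian_kde_pdf(samples, x, bandwidth=None):
    """Estimate the probability density of a scalar damage parameter at x
    from per-cycle samples, using a Gaussian kernel (a generic sketch)."""
    n = len(samples)
    if bandwidth is None:
        mean = sum(samples) / n
        sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / n)
        bandwidth = 1.06 * sd * n ** -0.2  # Silverman's rule of thumb
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
```

Comparing such PDFs estimated from healthy-state and leaking-state cycles is one plain way to separate the two populations of the damage parameter.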
International Nuclear Information System (INIS)
Belisheva, N.K.; Popov, A.N.; Petukhova, N.V.; Pavlova, L.P.; Osipov, K.S.; Tkachenko, S.Eh.; Baranova, T.I.
1995-01-01
The comparison of the functional dynamics of the human brain with reference to qualitative and quantitative characteristics of local geomagnetic field (GMF) variations was conducted. Steady and unsteady states of the human brain can be determined: by geomagnetic disturbances before the observation period; by the structure and doses of GMF variations; and by different combinations of qualitative and quantitative characteristics of GMF variations. A decrease in the optimal GMF activity level and the appearance of aperiodic GMF disturbances can be a reason for an unsteady brain state. 18 refs.; 3 figs
Estimation of cluster stability using the theory of electron density functional
International Nuclear Information System (INIS)
Borisov, Yu.A.
1985-01-01
Prospects of using simple versions of the electron density functional for studying the energy characteristics of cluster compounds were discussed. The following types of cluster compounds were considered: clusters of Cs, Be, B, Sr, Cd, Sc, In, V, Tl, and I elements as an intermediate form between molecule and solid body; metalloorganic Mo, W, Tc, Re, Rn clusters; and elementoorganic compounds of the nido-cluster type. The problem concerning changes in the binding energy of homoatomic clusters depending on their size and three-dimensional structure was analysed
6-Electron exchange function as a simple estimator of aromaticity in large polyaromatic hydrocarbons
Mandado, Marcos; Mosquera, Ricardo A.
2009-02-01
The 6-electron exchange function (6-EEF) is defined and calculated for a series of large polyaromatic hydrocarbons (PAHs). It is shown that the 6-EEF, computed at selected points in space, is able to reproduce in PAHs the same relative values as the multicenter electron delocalization indices with an affordable computational cost and without using any definition of the atom in the molecule. Calculations for a series of D6h PAHs ranging from C6H6 to C216H36 are performed. The results can be extrapolated to even larger PAHs and allow predicting the behaviour of a benzene ring in an infinite sheet of graphite.
Directory of Open Access Journals (Sweden)
Xiaoling Chen
2018-05-01
Full Text Available Recently, functional corticomuscular coupling (FCMC) between the cortex and the contralateral muscle has been used to evaluate motor function after stroke. As we know, the motor-control system is a closed-loop system that is regulated by complex self-regulating and interactive mechanisms which operate on multiple spatial and temporal scales. Multiscale analysis can represent this inherent complexity. However, previous studies of FCMC in stroke patients mainly focused on the coupling strength at a single time scale, without considering the inherently directional and multiscale properties of sensorimotor systems. In this paper, a multiscale causal model, named multiscale transfer entropy, was used to quantify the functional connection between the electroencephalogram over the scalp and the electromyogram from the flexor digitorum superficialis (FDS), recorded simultaneously during a steady-state grip task in eight stroke patients and eight healthy controls. Our results showed that healthy controls exhibited higher coupling when the scale reached up to about 12, and that the FCMC in the descending direction was stronger at certain scales (1, 7, 12, and 14) than that in the ascending direction. Further analysis showed that these multi-time-scale characteristics mainly focused on the beta1 band at scale 11 and the beta2 band at scales 9, 11, 13, and 15. Compared to controls, the multiscale properties of the FCMC for stroke were changed: the strengths in both directions were reduced, and the gaps between the descending and ascending directions disappeared over all scales. Further analysis in specific bands showed that the reduced FCMC mainly focused on the alpha2 band at higher scales and on the beta1 and beta2 bands across almost the entire range of scales. This multiscale study confirms that the FCMC between the brain and muscles exhibits complex and directional characteristics, and that these characteristics in the functional connection for stroke are destroyed by the structural lesion in the
Estimation of CN Parameter for Small Agricultural Watersheds Using Asymptotic Functions
Directory of Open Access Journals (Sweden)
Tomasz Kowalik
2015-03-01
Full Text Available This paper investigates a possibility of using asymptotic functions to determine the value of curve number (CN parameter as a function of rainfall in small agricultural watersheds. It also compares the actually calculated CN with its values provided in the Soil Conservation Service (SCS National Engineering Handbook Section 4: Hydrology (NEH-4 and Technical Release 20 (TR-20. The analysis showed that empirical CN values presented in the National Engineering Handbook tables differed from the actually observed values. Calculations revealed a strong correlation between the observed CN and precipitation (P. In three of the analyzed watersheds, a typical pattern of the observed CN stabilization during abundant precipitation was perceived. It was found that Model 2, based on a kinetics equation, most effectively described the P-CN relationship. In most cases, the observed CN in the investigated watersheds was similar to the empirical CN, corresponding to average moisture conditions set out by NEH-4. Model 2 also provided the greatest stability of CN at 90% sampled event rainfall.
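An asymptotic P-CN model of the kind described above can be fitted as follows. The functional form CN(P) = CN_inf + (100 - CN_inf)·exp(-kP), which stabilizes at CN_inf for abundant precipitation, is an assumed standard asymptotic form, not necessarily the paper's exact "Model 2" kinetics equation:

```python
import math

def fit_asymptotic_cn(P, CN, k_grid=None):
    """Fit CN(P) = CN_inf + (100 - CN_inf) * exp(-k * P) to observed
    rainfall/curve-number pairs by a grid search over k, with a
    closed-form least-squares CN_inf for each k (a sketch)."""
    if k_grid is None:
        k_grid = [i / 1000.0 for i in range(1, 201)]  # k in (0, 0.2]
    best = None
    for k in k_grid:
        e = [math.exp(-k * p) for p in P]
        # CN_i - 100*e_i = CN_inf * (1 - e_i): least-squares CN_inf
        num = sum((c - 100.0 * ei) * (1.0 - ei) for c, ei in zip(CN, e))
        den = sum((1.0 - ei) ** 2 for ei in e)
        cn_inf = num / den
        sse = sum((c - (cn_inf + (100.0 - cn_inf) * ei)) ** 2
                  for c, ei in zip(CN, e))
        if best is None or sse < best[2]:
            best = (cn_inf, k, sse)
    return best[0], best[1]
```

With data generated from the model itself, the fit recovers the asymptotic curve number exactly, illustrating the "CN stabilization during abundant precipitation" behaviour the paper observes.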
Radionuclide estimation of kidney function in patients with acute renal failure
International Nuclear Information System (INIS)
Ilic, S.; Bogicevic, M.; Stefanovic, V.
1989-01-01
In order to evaluate kidney function, radionuclide studies were made in 51 patients with different phases of acute renal failure within a period of six months from the beginning of the underlying disease. Low 99mTc-DTPA clearance values indicated a marked reduction of the glomerular filtration rate in the oligoanuric phase, with an improvement but not normalization during the diuretic and recovery phases. A decrease of the effective renal plasma flow was also found in 131I-hippurate studies. In the oligoanuric phase the glomerular filtration rate was more severely impaired than the renal plasma flow, while in the recovery phase this difference disappeared. In the oligoanuric phase of ARF the 99mTc-DTPA dynamic curves were flattened and those of 131I-hippurate showed an accumulation type; in the diuretic phase both radionuclides showed a hypofunction type; in the recovery phase a minority of them were completely normalized. It is suggested that radionuclide methods should be used to evaluate and follow up kidney function in patients with different phases of ARF. (orig.) [de
International Nuclear Information System (INIS)
Bundo-Morita, K.; Gibson, S.; Lenard, J.
1987-01-01
The target sizes associated with fusion and hemolysis carried out by Sendai virus envelope glycoproteins were determined by radiation inactivation analysis. The target size for influenza virus mediated fusion with erythrocyte ghosts at pH 5.0 was also determined for comparison. Sendai-mediated fusion with erythrocyte ghosts at pH 7.0 was likewise inactivated exponentially with increasing radiation dose, yielding a target size of 60 +/- 6 kDa, a value consistent with the molecular weight of a single F-protein molecule. The inactivation curve for Sendai-mediated fusion with cardiolipin liposomes at pH 7.0, however, was more complex. Assuming a multiple target-single hit model, the target consisted of 2-3 units of ca. 60 kDa each. A similar target was seen if the liposome contained 10% gangliosides or if the reaction was measured at pH 5.0, suggesting that fusion occurred by the same mechanism at high and low pH. A target size of 261 +/- 48 kDa was found for Sendai-induced hemolysis, in contrast with influenza, which had a more complex target size for this activity. Sendai virus fusion thus occurs by different mechanisms depending upon the nature of the target membrane, since it is mediated by different functional units. Hemolysis is mediated by a functional unit different from that associated with erythrocyte ghost fusion or with cardiolipin liposome fusion
Energy Technology Data Exchange (ETDEWEB)
Alekseychuk, O.
2006-07-01
A new algorithm for detection of longitudinal crack-like indications in radiographic images is developed in this work. Conventional local detection techniques give unsatisfactory results for this task due to the low signal-to-noise ratio (SNR ∝ 1) of crack-like indications in radiographic images. The usage of global features of crack-like indications provides the necessary noise resistance, but this is connected with prohibitive computational complexity of detection and difficulties in a formal description of the indication shape. Conventionally, the excessive computational complexity of the solution is reduced by the usage of heuristics. The heuristics to be used are selected on a trial-and-error basis, are problem dependent, and do not guarantee the optimal solution. Not following this way is a distinctive feature of the algorithm developed here. Instead, a global characteristic of a crack-like indication (the estimation function) is used, whose maximum in the space of all possible positions, lengths and shapes can be found exactly, i.e. without any heuristics. The proposed estimation function is defined as a sum of a posteriori information gains about the hypothesis of indication presence at each point along the whole hypothetical indication. The gain in the information about the hypothesis of indication presence results from the analysis of the underlying image in the local area. Such an estimation function is theoretically justified and exhibits a desirable behaviour on changing signals. The developed algorithm is implemented in the C++ programming language and tested on synthetic as well as on real images. It delivers good results (a high correct detection rate at a given false alarm rate) which are comparable to the performance of trained human inspectors.
International Nuclear Information System (INIS)
Alekseychuk, O.
2006-01-01
A new algorithm for detection of longitudinal crack-like indications in radiographic images is developed in this work. Conventional local detection techniques give unsatisfactory results for this task due to the low signal-to-noise ratio (SNR ∝ 1) of crack-like indications in radiographic images. The usage of global features of crack-like indications provides the necessary noise resistance, but this is connected with prohibitive computational complexity of detection and difficulties in a formal description of the indication shape. Conventionally, the excessive computational complexity of the solution is reduced by the usage of heuristics. The heuristics to be used are selected on a trial-and-error basis, are problem dependent, and do not guarantee the optimal solution. Not following this way is a distinctive feature of the algorithm developed here. Instead, a global characteristic of a crack-like indication (the estimation function) is used, whose maximum in the space of all possible positions, lengths and shapes can be found exactly, i.e. without any heuristics. The proposed estimation function is defined as a sum of a posteriori information gains about the hypothesis of indication presence at each point along the whole hypothetical indication. The gain in the information about the hypothesis of indication presence results from the analysis of the underlying image in the local area. Such an estimation function is theoretically justified and exhibits a desirable behaviour on changing signals. The developed algorithm is implemented in the C++ programming language and tested on synthetic as well as on real images. It delivers good results (a high correct detection rate at a given false alarm rate) which are comparable to the performance of trained human inspectors.
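The idea of maximizing a summed information gain along a hypothetical indication can be sketched with a small dynamic program. This finds the best near-vertical path exactly, which illustrates the heuristic-free global maximization, but it is a simplification, not the paper's full search over all positions, lengths and shapes:

```python
def crack_score(image, llr):
    """Maximum, over all near-vertical paths, of the summed per-pixel
    log-likelihood-ratio gain (crack vs background), found exactly by
    dynamic programming. image: 2D list of pixel values; llr: function
    mapping a pixel value to its information gain (a sketch)."""
    rows, cols = len(image), len(image[0])
    score = [llr(v) for v in image[0]]
    for r in range(1, rows):
        new = []
        for c in range(cols):
            # a longitudinal path may shift by at most one column per row
            best_prev = max(score[max(0, c - 1):min(cols, c + 2)])
            new.append(best_prev + llr(image[r][c]))
        score = new
    return max(score)
```

Because the recurrence enumerates every admissible path implicitly, the returned maximum is exact for this path family, with cost linear in the number of pixels.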
The effect of epoch length on estimated EEG functional connectivity and brain network organisation
Fraschini, Matteo; Demuru, Matteo; Crobe, Alessandra; Marrosu, Francesco; Stam, Cornelis J.; Hillebrand, Arjan
2016-06-01
Objective. Graph theory and network science tools have revealed fundamental mechanisms of functional brain organization in resting-state M/EEG analysis. Nevertheless, it is still not clearly understood how several methodological aspects may bias the topology of the reconstructed functional networks. In this context, the literature shows inconsistency in the chosen length of the selected epochs, impeding a meaningful comparison between results from different studies. Approach. The aim of this study was to provide a network approach insensitive to the effects that epoch length has on functional connectivity and network reconstruction. Two different measures, the phase lag index (PLI) and the amplitude envelope correlation (AEC), were applied to EEG resting-state recordings for a group of 18 healthy volunteers using non-overlapping epochs with variable length (1, 2, 4, 6, 8, 10, 12, 14 and 16 s). The weighted clustering coefficient (CCw), weighted characteristic path length (Lw) and minimum spanning tree (MST) parameters were computed to evaluate the network topology. The analysis was performed on both scalp and source-space data. Main results. Results from the scalp analysis show a decrease in both mean PLI and AEC values with an increase in epoch length, with a tendency to stabilize at a length of 12 s for PLI and 6 s for AEC. Moreover, CCw and Lw show very similar behaviour, with metrics based on AEC more reliable in terms of stability. In general, MST parameters stabilize at short epoch lengths, particularly for MSTs based on PLI (1-6 s versus 4-8 s for AEC). At the source level the results were even more reliable, with stability already at 1 s duration for PLI-based MSTs. Significance. The present work suggests that both PLI and AEC depend on epoch length and that this has an impact on the reconstructed network topology, particularly at the scalp level. Source-level MST topology is less sensitive to differences in epoch length, therefore enabling the comparison of brain
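The phase lag index used above can be sketched in a few lines. This is a minimal single-epoch implementation, computing the analytic phase via the FFT-based Hilbert transform; the epoching, AEC, and MST steps of the study are not reproduced:

```python
import numpy as np

def phase_lag_index(x, y):
    """Phase lag index between two signals:
    PLI = |mean over time of sign(sin(phase_x - phase_y))|,
    which is 1 for a consistent non-zero phase lag and 0 for
    zero-lag (volume-conduction-like) coupling."""
    def phase(s):
        # analytic-signal phase via the FFT-based Hilbert transform
        n = len(s)
        spec = np.fft.fft(s)
        h = np.zeros(n)
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            h[n // 2] = 1.0
        return np.angle(np.fft.ifft(spec * h))

    dphi = phase(np.asarray(x, float)) - np.asarray(phase(np.asarray(y, float)))
    return float(abs(np.mean(np.sign(np.sin(dphi)))))
```

Estimating this over epochs of different lengths is precisely where the study's epoch-length dependence enters.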
Estimation of similarity between functions extracted from x86 executable files
Directory of Open Access Journals (Sweden)
Berta Katarina
2015-01-01
Full Text Available Comparison of functions is required in various domains of software engineering. In most domains, comparison is done using source code, but in some domains, such as license violation or malware analysis, only binary code is available. The goal of this paper is to evaluate whether the existing solution meant for ARM architecture can be applied to x86 architecture. The existing solution encompasses multiple approaches, but for the purpose of this paper three representative approaches are implemented; two are based on machine learning, and the third does not require previous knowledge. Results show that the best recalls obtained for the first ten positions on both architectures are comparable and do not differ significantly. The results confirm that adaptation of all approaches of the existing solution is not only possible but also promising and represent adequate basis for future research. [Projekat Ministarstva nauke Republike Srbije, br. III44009 i br. TR32047
Tlalolini, David; Ritou, Mathieu; Rabréau, Clément; Le Loch, Sébastien; Furet, Benoit
2018-05-01
The paper presents an electromagnetic system that has been developed to measure the quasi-static and dynamic behavior of a machine-tool spindle at different spindle speeds. The system consists of four Pulse Width Modulation amplifiers and four electromagnets that produce magnetic forces of ±190 N in the static mode and ±80 N in the dynamic mode up to 5 kHz. In order to measure the Frequency Response Function (FRF) of the spindle, the applied force is required, which is a key issue. A dynamic force model is proposed in order to obtain the load from the measured current in the amplifiers. The model depends on the exciting frequency and on the magnetic characteristics of the system. The predicted force at high speed is validated with a specific experiment, and the performance limits of the experimental device are investigated. The FRF obtained with the electromagnetic system is compared to a classical tap test measurement.
DEFF Research Database (Denmark)
Løkkegaard, Annemette; Herz, Damian M; Haagensen, Brian Numelin
2016-01-01
Dystonia is characterized by sustained or intermittent muscle contractions causing abnormal, often repetitive, movements or postures. Functional neuroimaging studies have yielded abnormal task-related sensorimotor activation in dystonia, but the results appear to be rather variable across studies. Further, study size was usually small, including different types of dystonia. Here we performed an activation likelihood estimation (ALE) meta-analysis of functional neuroimaging studies in patients with primary dystonia to test for convergence of dystonia-related alterations in task-related activity … postcentral gyrus, right superior temporal gyrus and dorsal midbrain. Apart from the midbrain cluster, all between-group differences in task-related activity were retrieved in a sub-analysis including only the 14 studies on patients with focal dystonia. For focal dystonia, an additional cluster of increased
Voronkov, Yury; Skedina, Marina; Degterenkova, Natalia; Stepanova, Galina
Long stays of cosmonauts on the International Space Station demand increased medical control over their health during selection. Various parameters of the cardiovascular system (CVS) undergo significant changes during adaptation to space flight (orbital insertion), directly under conditions of weightlessness, and during readaptation to the terrestrial environment. The CVS is a sensitive indicator of the adaptive reaction of the whole organism. Therefore much attention is given to research on CVS regulation, its capacity to adapt to various stress conditions, and the detection of pre-nosological changes in the mechanisms of its regulation. One informative method for detecting problems in CVS regulation is the postural orthostatic test. This work was designed to study the regulation of hemodynamics during a passive orthostatic test. Twenty-one practically healthy people aged 18 to 36 years underwent the test. During the test the following parameters were registered: 12-lead ECG and BP, and myocardial parameters by means of the "CardioVisor-06" (CV) device; the condition of the microcirculatory bloodstream (MCB) was also estimated by means of the ultrasonic high-frequency dopplerograph "Minimax-Doppler-K" with a 20 MHz sensor. The impedance method of rheoencephalography (REG), using the "Encephalan-EEGR-13103" device, was applied to study cerebral blood circulation. All subjects had normal ECG parameters during the test. However, analysis of the CV, REG and MCB data showed high tolerance to the test in 14 subjects. In the other 7 subjects, the dynamics of the parameters during the test reflected problems in the mechanisms of CVS regulation in its separate parts. Changes in the REG and ultrasound parameters of 4 subjects reflected a hypotensive reaction. The arteriolar tone parameter in the carotid and vertebral artery systems decreased by 15.3% and 55.2%, respectively. The parameters of MCB: average speed, vascular tone and peripheric
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
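The two statistics advocated above are straightforward to compute from a sample of benchmark errors; a minimal sketch follows, with illustrative error values rather than data from the paper.

```python
import numpy as np

def prob_below(errors, threshold):
    """Empirical probability that a new |error| falls below `threshold`
    (read off the empirical cumulative distribution of unsigned errors)."""
    return np.mean(np.abs(errors) < threshold)

def max_expected_error(errors, confidence=0.95):
    """Error amplitude not exceeded with the chosen confidence level:
    an empirical quantile of the unsigned errors."""
    return np.quantile(np.abs(errors), confidence)

# Illustrative signed errors of a method against a reference dataset
errors = np.array([-0.8, 0.1, 2.3, -0.4, 0.6, 1.9, -0.2, 0.3, -1.1, 0.5])
p = prob_below(errors, 1.0)        # fraction of errors with |e| < 1
q95 = max_expected_error(errors)   # bound holding with ~95% confidence
```

Note that, as the abstract stresses, the standard error of both statistics shrinks with the size of the reference dataset, so small benchmarks give loose estimates.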
Estimating diversifying selection and functional constraint in the presence of recombination.
Wilson, Daniel J; McVean, Gilean
2006-03-01
Models of molecular evolution that incorporate the ratio of nonsynonymous to synonymous polymorphism (dN/dS ratio) as a parameter can be used to identify sites that are under diversifying selection or functional constraint in a sample of gene sequences. However, when there has been recombination in the evolutionary history of the sequences, reconstructing a single phylogenetic tree is not appropriate, and inference based on a single tree can give misleading results. In the presence of high levels of recombination, the identification of sites experiencing diversifying selection can suffer from a false-positive rate as high as 90%. We present a model that uses a population genetics approximation to the coalescent with recombination and use reversible-jump MCMC to perform Bayesian inference on both the dN/dS ratio and the recombination rate, allowing each to vary along the sequence. We demonstrate that the method has the power to detect variation in the dN/dS ratio and the recombination rate and does not suffer from a high false-positive rate. We use the method to analyze the porB gene of Neisseria meningitidis and verify the inferences using prior sensitivity analysis and model criticism techniques.
Estimation of residual MSW heating value as a function of waste component recycling
International Nuclear Information System (INIS)
Magrinho, Alexandre; Semiao, Viriato
2008-01-01
Recycling of packaging wastes may be compatible with incineration within integrated waste management systems. To study this, a mathematical model is presented to calculate the fraction composition of residual municipal solid waste (MSW) solely as a function of the MSW fraction composition at source and the recycling fractions of the different waste materials. Application of the model to the Lisbon region yielded results showing that the residual waste fraction composition depends both on the packaging waste fraction at source and on the ratio between that fraction and the fraction of the same material, packaging and non-packaging, at source. This behaviour determines the variation of the residual waste LHV. For 100% paper packaging recycling, the LHV decreases by 4.2%, whereas the reduction is 14.4% for 100% packaging plastics recycling. For 100% food waste recovery, the LHV increases by 36.8% due to the reduced moisture fraction of the residual waste. Additionally, the results show that the negative impact of recycling paper and plastic packaging on the LHV may be compensated by recycling food waste and glass and metal packaging. This makes packaging materials recycling and food waste recovery strategies compatible with incineration within integrated waste management systems.
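The underlying mass balance — residual composition as a function of source composition and per-material recycling fractions, renormalised after removal — can be sketched as below. The composition and heating-value numbers are illustrative placeholders, not the Lisbon data.

```python
# Illustrative source fractions (kg/kg) and lower heating values
# (MJ/kg, wet basis) -- placeholder values, not the paper's data.
source = {"paper": 0.20, "plastics": 0.11, "food": 0.35, "glass": 0.06,
          "metals": 0.03, "other": 0.25}
lhv = {"paper": 12.0, "plastics": 32.0, "food": 4.0, "glass": 0.0,
       "metals": 0.0, "other": 9.0}

def residual_lhv(recycling):
    """LHV of the residual waste after recycling removes a fraction of
    each component.  `recycling` maps component -> recycled fraction
    (0..1); unlisted components are assumed not recycled."""
    remaining = {c: f * (1.0 - recycling.get(c, 0.0))
                 for c, f in source.items()}
    total = sum(remaining.values())
    # Composition of the residual stream renormalises to 1
    return sum(remaining[c] / total * lhv[c] for c in remaining)

base = residual_lhv({})
no_plastics = residual_lhv({"plastics": 1.0})  # plastics are the richest fuel
```

With these numbers, removing all plastics lowers the residual LHV while removing wet food waste raises it, reproducing the qualitative trade-off the abstract describes.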
Functional size of vacuolar H+ pumps: Estimates from radiation inactivation studies
International Nuclear Information System (INIS)
Sarafian, V.; Poole, R.J.
1991-01-01
The PPase and the ATPase from red beet (Beta vulgaris) vacuolar membranes were subjected to radiation inactivation by a 60Co source in both the native tonoplast and detergent-solubilized states, in order to determine their target molecular sizes. Analysis of the residual phosphohydrolytic and proton transport activities, after exposure to varying doses of radiation, yielded exponential relationships between the activities and radiation doses. The deduced target molecular sizes for PPase activity in native and solubilized membranes were 125 kD and 259 kD respectively, and 327 kD for H+ transport. This suggests that the minimum number of 67 kD subunits for PPi hydrolysis is two in the native state and four after Triton X-100 solubilization. At least four subunits would be required for H+ translocation. Analysis of the ATPase inactivation patterns revealed target sizes of 384 kD and 495 kD for ATP hydrolysis in native and solubilized tonoplast respectively, and 430 kD for H+ transport. These results suggest that the minimum size for hydrolytic or transport functions is relatively constant for the ATPase.
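Target-size analysis of this kind rests on the exponential dose-inactivation relation A(D) = A0·exp(−D/D37). A sketch with invented dose-activity data follows; it uses the classical empirical relation M ≈ 6.4×10¹¹/D37 (D37 in rad), which is an assumption here rather than this paper's own calibration.

```python
import numpy as np

# Illustrative radiation inactivation data (not from the paper):
# residual activity decays exponentially with dose.
doses = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # Mrad
activity = np.array([1.00, 0.67, 0.45, 0.20, 0.04])   # relative activity

# Linear least-squares fit of ln(A) vs dose; the slope is -1/D37
slope, _ = np.polyfit(doses, np.log(activity), 1)
d37_rad = (-1.0 / slope) * 1e6          # Mrad -> rad
# Classical target-theory relation between D37 and target mass (Da)
target_mass = 6.4e11 / d37_rad
```

Larger targets inactivate at lower doses (smaller D37), which is why detergent solubilization, by coupling subunits, shifts the apparent size upward in the abstract's results.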
International Nuclear Information System (INIS)
Berryman, J.G.; Blair, S.C.
1986-01-01
Scanning electron microscope images of cross sections of several porous specimens have been digitized and analyzed using image processing techniques. The porosity and specific surface area may be estimated directly from measured two-point spatial correlation functions. The measured values of porosity and image specific surface were combined with known values of electrical formation factors to estimate fluid permeability using one version of the Kozeny-Carman empirical relation. For glass bead samples with measured permeability values in the range of a few darcies, our estimates agree well (±10-20%) with the measurements. For samples of Ironton-Galesville sandstone with a permeability in the range of hundreds of millidarcies, our best results agree with the laboratory measurements again within about 20%. For Berea sandstone with still lower permeability (tens of millidarcies), our predictions from the images agree within 10-30%. Best results for the sandstones were obtained by using the porosities obtained at magnifications of about 100× (since less resolution and better statistics are required) and the image specific surface obtained at magnifications of about 500× (since greater resolution is required).
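The first step — porosity and specific surface from the two-point correlation function of a binary pore/grain image — can be sketched as follows, assuming a toy uncorrelated image and Debye's relation s = −4·S2′(0) for isotropic media (a real analysis would use a segmented SEM image).

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy binary image: 1 = pore, 0 = grain (placeholder for a segmented
# SEM micrograph).
img = (rng.random((256, 256)) < 0.3).astype(float)

def s2(image, max_lag=10):
    """Two-point correlation S2(r) = <f(x) f(x+r)> along one axis."""
    vals = []
    for r in range(max_lag + 1):
        vals.append(np.mean(image[:, :image.shape[1] - r] * image[:, r:]))
    return np.array(vals)

corr = s2(img)
porosity = corr[0]                 # S2(0) equals the porosity
# Debye's relation for isotropic media: specific surface s = -4 * S2'(0);
# a one-pixel finite difference approximates the slope at the origin.
slope = corr[1] - corr[0]
specific_surface = -4.0 * slope    # in units of 1/pixel
```

Converting `specific_surface` to physical units requires the pixel size at the imaging magnification, which is why the abstract uses different magnifications for porosity and surface estimation.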
Directory of Open Access Journals (Sweden)
Nicola Koper
2012-03-01
Full Text Available Resource selection functions (RSF are often developed using satellite (ARGOS or Global Positioning System (GPS telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM and generalized estimating equations (GEE for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
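With an independence working correlation, a GEE for binary used/available data reduces to ordinary logistic regression plus cluster-robust ("empirical") standard errors, which is the adjustment the abstract describes. A plain NumPy sketch on synthetic telemetry-like data follows; the covariate and effect sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_animals, n_obs = 15, 40
animal = np.repeat(np.arange(n_animals), n_obs)   # grouping factor
x = rng.exponential(1.0, n_animals * n_obs)       # e.g. distance to road
X = np.column_stack([np.ones_like(x), x])
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * x))))  # used vs available

# Logistic regression by Newton-Raphson (independence working model)
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
mu = 1 / (1 + np.exp(-X @ beta))
W = mu * (1 - mu)

# Cluster-robust ("empirical") sandwich standard errors: score
# contributions are summed within each animal before squaring, which is
# what GEE's empirical covariance estimator does.
bread = np.linalg.inv(X.T @ (W[:, None] * X))
meat = np.zeros((2, 2))
for a in np.unique(animal):
    s = X[animal == a].T @ (y - mu)[animal == a]
    meat += np.outer(s, s)
robust_se = np.sqrt(np.diag(bread @ meat @ bread))
```

A GLMM would instead add a random intercept per animal, yielding subject-specific rather than population-averaged coefficients, which is the interpretive distinction the abstract draws.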
Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry
2016-01-01
Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types.
Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying
2016-01-01
Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant
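The partial-correlation step itself is a simple transform of the precision matrix, ρᵢⱼ = −Ωᵢⱼ/√(ΩᵢᵢΩⱼⱼ). The sketch below uses a plain inverse sample covariance where CLIME (or a graphical lasso) would supply a sparse regularised precision estimate; the chain example shows how partial correlation removes indirect edges that full correlation retains.

```python
import numpy as np

def partial_corr(ts):
    """Partial correlation matrix from (n_timepoints, n_nodes) data.

    Here the precision matrix is the plain inverse sample covariance;
    in large networks a regularised estimator such as CLIME would
    replace this step."""
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Chain network 0 -> 1 -> 2: full correlation links nodes 0 and 2
# through the shared neighbour, while partial correlation should
# suppress that indirect edge.
rng = np.random.default_rng(3)
n = 20000
a = rng.standard_normal(n)
b = a + 0.5 * rng.standard_normal(n)
c = b + 0.5 * rng.standard_normal(n)
pc = partial_corr(np.column_stack([a, b, c]))
```

This mirrors the abstract's finding that partial correlation removes marginal connections caused by common inputs or global effects.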
Chai Rui; Li Si-Man; Xu Li-Sheng; Yao Yang; Hao Li-Ling
2017-07-01
This study analyzed parameters of the central pulse wave measured invasively and non-invasively: ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO) and peripheral resistance (RS). The parameters extracted from the invasively measured central pulse wave were compared with parameters estimated from brachial pulse waves by a regression model and by a transfer function model, and the accuracy of the two estimation approaches was also compared. Our findings showed that, apart from the k value, the above parameters of the central pulse wave and the invasively measured brachial pulse wave were positively correlated. Both the regression model parameters, including A_slope, DBP and SEVR, and the transfer function model parameters showed good consistency with the invasively measured parameters, with the same degree of consistency. The regression equations of the three parameters were expressed as Y' = a + bx. The SBP, PP, SV and CO of the central pulse wave could be calculated through the regression model, but their accuracies were worse than those of the transfer function model.
Functioning of police in Volgograd oblast in the estimations of the public
Directory of Open Access Journals (Sweden)
Anna P. Alekseyeva
2015-12-01
Full Text Available Objective: to determine the reliability and objectivity of the information available to the population on the functioning of the police in Volgograd oblast. Methods: an absentee sociological poll in the form of a questionnaire; statistical, logical, documentary and graphic methods; the method of systemic analysis. Results: the opinions of the population on the police are extremely controversial. This is connected mostly with the sources of information, which are largely neither reliable nor objective. The article shows that despite the growth of victimization, the level of anxiety of the population about criminal attacks is gradually decreasing and citizens' sense of security is strengthening, though often due to their personal efforts to protect their life, health and property. Only every fifth citizen relies on the participation of law enforcers in ensuring security and public order. The survey revealed that about half of the citizens who are potentially interested in the reaction of law enforcement officers to crime actually agree to leave the offender unpunished due to mistrust of the police. A third of claimants were unsatisfied with the police action on their application, which does not correlate with the declared numbers. The opinion of Volgograd citizens on the frequency of bribery among police officers remains unchanged, whereas other malfeasances received a significant increase. Despite this, the attitude of the respondents towards the police in general has improved, mostly as a result of media activities that inform the public about successful police work using TV shows, documentaries and feature films. The successful work of the police is also confirmed by statistics showing a rapid decline in recorded crime. Scientific novelty: for the first time, on the basis of a combination of various methods, the reliability and objectivity of the information available to the population on the police of Volgograd oblast is investigated. Practical significance: the main provisions and
Directory of Open Access Journals (Sweden)
Razieh Khajeh-Kazemi
2011-01-01
Full Text Available Background: The celebrated generalized estimating equations (GEE) approach is often used in longitudinal data analysis. While this method behaves robustly against misspecification of the working correlation structure, it has some limitations regarding the efficiency of estimators, goodness-of-fit tests and model selection criteria. The quadratic inference functions (QIF) approach is a new statistical methodology that overcomes these limitations. Methods: We administered the use of QIF and GEE in comparing superior and inferior Ahmed glaucoma valve (AGV) implantation. While our focus was on the efficiency of estimation and the use of model selection criteria, we compared the effect of implant location on intraocular pressure (IOP) in refractory glaucoma patients. We modeled the relationship between IOP and implant location, patient's sex and age, best corrected visual acuity, history of cataract surgery, preoperative IOP and months after surgery, assuming an unstructured working correlation. Results: 63 eyes of 63 patients were included in this study, 28 eyes in the inferior group and 35 eyes in the superior group. The GEE analysis revealed that preoperative IOP has a significant effect on IOP (p = 0.011). However, QIF showed that preoperative IOP, months after surgery and squared months are significantly associated with IOP after surgery (p < 0.05). Overall, estimates from QIF are more efficient than those from GEE (RE = 1.272). Conclusions: In the case of an unstructured working correlation, QIF is more efficient than GEE. There were no considerable differences between the two locations; our results confirm previously published works which suggested that glaucoma patients should preferably undergo superior AGV implantation.
Estimating renal function in children: a new GFR-model based on serum cystatin C and body cell mass.
Andersen, Trine Borup
2012-07-01
This PhD thesis is based on four individual studies including 131 children aged 2-14 years with nephro-urologic disorders. The majority (72%) of children had a normal renal function (GFR > 82 ml/min/1.73 square metres), and only 8% had a reduced renal function. The thesis' main aims were: 1) to develop a more accurate GFR model based on a novel theory of body cell mass (BCM) and cystatin C (CysC); 2) to investigate its diagnostic performance in comparison to other models as well as serum CysC and creatinine; 3) to validate the new model's precision and validity. The model's diagnostic performance was investigated in study I as the ability to detect changes in renal function (total day-to-day variation), and in study IV as the ability to discriminate between normal and reduced function. The model's precision and validity were indirectly evaluated in studies II and III, and in study I accuracy was estimated by comparison to reference GFR. Several prediction models based on CysC or a combination of CysC and serum creatinine have been developed for predicting GFR in children. Despite these efforts to improve GFR estimates, no alternative to exogenous methods has been found, and Schwartz's formula, based on height, creatinine and an empirically derived constant, is still recommended for GFR estimation in children. However, the inclusion of BCM as a possible variable in a CysC-based prediction model has not yet been explored. As CysC is produced at a constant rate by all nucleated cells, we hypothesize that including BCM in a new prediction model will increase the accuracy of the GFR estimate. Study I aimed at deriving the new GFR-prediction model based on the novel theory of CysC and BCM and comparing its performance to previously published models. The BCM-model took the form GFR (mL/min) = 10.2 × (BCM/CysC)^0.40 × (height × body surface area/Crea)^0.65. The model predicted 99% within ±30% of reference GFR, and 67% within ±10%. This was higher than any other model. The
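The reported BCM-model is a closed-form expression and can be written directly as a function. The unit conventions in the sketch below are assumptions: the constant 10.2 is only meaningful with the units used in the original calibration, so the argument names are illustrative.

```python
def bcm_gfr(bcm, cysc, height, bsa, crea):
    """BCM-model GFR estimate as quoted in the abstract:

        GFR (mL/min) = 10.2 * (BCM / CysC)**0.40
                            * (height * BSA / creatinine)**0.65

    Argument units are assumptions here (e.g. BCM in kg, CysC in mg/L,
    height in cm, BSA in m^2); the constant only applies with the units
    of the original calibration data.
    """
    return 10.2 * (bcm / cysc) ** 0.40 * (height * bsa / crea) ** 0.65
```

The structure makes the intended behaviour easy to check: GFR rises with body cell mass and falls as cystatin C or creatinine accumulate.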
Energy Technology Data Exchange (ETDEWEB)
Jimenez D, H.; Cabral P, A.; Melendez L, L.; Lopez C, R.; Colunga S, S.; Valencia A, R.; Cruz J, S.; Gaytan G, E.; Chavez A, E
1992-04-15
In this work a method to estimate the electron temperature and density (T_e, n_e), based on deconvolution of the absorption part of the plasma dispersion function, is suggested. The absorptive part of this function is proportional to the convolution of a Gauss distribution with a Lorentz function. The Gaussian represents the Maxwellian velocity distribution of the plasma electrons. The Lorentzian represents the line shape of a linearized electrostatic wave that propagates with damping in the plasma. The complex variable z of the plasma dispersion function is written as z = u + ia, where u = 2(ω − ω₀)√(ln 2)/γ_G is the dimensionless frequency variable, a = γ_L √(ln 2)/γ_G is the Posener parameter, γ_G = k γ'_G, where k is the wave number of the oscillatory phenomenon, γ'_G is the FWHM of the Gaussian, and γ_L = 2α, α being the damping constant, i.e. the imaginary part of the frequency ω. In this method it is assumed that a wave of frequency ω, with an amplitude small enough to avoid non-linear effects, propagates in the plasma and decays in such a way that α is the Landau damping. With this assumption, the method is only valid in the interval k << k_D, where k_D is the Debye wave number. Deconvolution of the detected absorption frequency spectrum of the signal gives the values of γ_G and γ_L, from which the values of n_e and T_e can be deduced. (Author)
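The deconvolution described amounts to fitting a Voigt profile (Gaussian convolved with Lorentzian) to the measured absorption line and reading off the two widths. A sketch with SciPy's `voigt_profile` on synthetic data follows; the widths and noise level are illustrative.

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

# Synthetic absorption line: Gaussian width sigma (Maxwellian velocity
# spread -> T_e, n_e) convolved with Lorentzian half-width gamma
# (wave damping).  Illustrative values only.
x = np.linspace(-10, 10, 400)
sigma_true, gamma_true = 1.2, 0.6
y = voigt_profile(x, sigma_true, gamma_true)
y_noisy = y + np.random.default_rng(5).normal(0.0, 1e-4, x.size)

def model(x, amp, sigma, gamma):
    # Scaled Voigt profile fitted to the measured spectrum
    return amp * voigt_profile(x, sigma, gamma)

popt, _ = curve_fit(model, x, y_noisy, p0=[1.0, 1.0, 1.0])
amp_fit, sigma_fit, gamma_fit = popt
```

Recovering `sigma_fit` and `gamma_fit` separates the Gaussian and Lorentzian contributions, which is the step that yields γ_G and γ_L in the abstract.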
Directory of Open Access Journals (Sweden)
Molly C. Bletz
2017-09-01
Full Text Available For decades, amphibians have been globally threatened by the still expanding infectious disease chytridiomycosis. Madagascar is an amphibian biodiversity hotspot where Batrachochytrium dendrobatidis (Bd) has only recently been detected. While no Bd-associated population declines have been reported, the risk of declines is high when invasive virulent lineages become involved. Cutaneous bacteria contribute to host innate immunity by providing defense against pathogens for numerous animals, including amphibians. Little is known, however, about the cutaneous bacterial residents of Malagasy amphibians and the functional capacity they have against Bd. We cultured 3179 skin bacterial isolates from over 90 frog species across Madagascar, identified them via Sanger sequencing of approximately 700 bp of the 16S rRNA gene, and characterized their functional capacity against Bd. A subset of isolates was also tested against multiple Bd genotypes. In addition, we applied the concept of herd immunity to estimate Bd-associated risk for amphibian communities across Madagascar based on bacterial antifungal activity. We found that multiple bacterial isolates (39% of all isolates) cultured from the skin of Malagasy frogs were able to inhibit Bd. Mean inhibition was weakly correlated with bacterial phylogeny, and certain taxonomic groups appear to have a high proportion of inhibitory isolates, such as the Enterobacteriaceae, Pseudomonadaceae, and Xanthamonadaceae (84, 80, and 75%, respectively). Functional capacity of bacteria against Bd varied among Bd genotypes; however, some bacteria showed broad-spectrum inhibition against all tested Bd genotypes, suggesting that these bacteria would be good candidates for probiotic therapies. We estimated Bd-associated risk for sampled amphibian communities based on the concept of herd immunity. Multiple amphibian communities, including those in the amphibian diversity hotspots Andasibe and Ranomafana, were
Roebeling, P C; Rocha, J; Nunes, J P; Fidélis, T; Alves, H; Fonseca, S
2014-01-01
Coastal aquatic ecosystems are increasingly affected by diffuse-source nutrient water pollution from agricultural activities in coastal catchments, even though these ecosystems are important from a social, environmental and economic perspective. To warrant sustainable economic development of coastal regions, we need to balance the marginal costs of coastal catchment water pollution abatement against the associated marginal benefits of coastal resource appreciation. Diffuse-source water pollution abatement costs across agricultural sectors are not easily determined given the spatial heterogeneity in biophysical and agro-ecological conditions as well as the available range of best agricultural practices (BAPs) for water quality improvement. We demonstrate how the Soil and Water Assessment Tool (SWAT) can be used to estimate diffuse-source water pollution abatement cost functions across agricultural land use categories based on a stepwise adoption of identified BAPs for water quality improvement and corresponding SWAT-based estimates of agricultural production, agricultural incomes, and water pollution deliveries. Results for the case of dissolved inorganic nitrogen (DIN) surface water pollution by the key agricultural land use categories ("annual crops," "vineyards," and "mixed annual crops & vineyards") in the Vouga catchment in central Portugal show that no win-win agricultural practices are available within the assessed BAPs for DIN water quality improvement. Estimated abatement costs increase quadratically in the rate of water pollution abatement, with the largest abatement costs for the "mixed annual crops & vineyards" land use category (between 41,900 and 51,900 € tDIN⁻¹ yr⁻¹) and fairly similar abatement costs across the "vineyards" and "annual crops" land use categories (between 7300 and 15,200 € tDIN⁻¹ yr⁻¹). Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
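A quadratic abatement cost curve of the kind estimated here can be fitted and differentiated in a few lines; the data points below are illustrative stand-ins for the SWAT-based cost estimates, not values from the study.

```python
import numpy as np

# Illustrative abatement rates a (fraction of DIN load removed) and
# corresponding costs (10^3 EUR per tDIN per yr) -- placeholder data.
a = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
cost = np.array([0.0, 1.9, 8.1, 18.2, 31.8, 51.9])

# Least-squares coefficient of the pure quadratic model C(a) = c2 * a^2
c2 = np.sum(cost * a**2) / np.sum(a**4)

def marginal_cost(rate):
    """Marginal abatement cost dC/da at a given abatement rate."""
    return 2.0 * c2 * rate
```

The rising marginal cost is what allows the balancing exercise in the abstract: abatement is worthwhile up to the rate where `marginal_cost` meets the marginal benefit of coastal resource appreciation.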
FunGeneNet: a web tool to estimate enrichment of functional interactions in experimental gene sets.
Tiys, Evgeny S; Ivanisenko, Timofey V; Demenkov, Pavel S; Ivanisenko, Vladimir A
2018-02-09
Estimation of functional connectivity in gene sets derived from genome-wide or other biological experiments is one of the essential tasks of bioinformatics. A promising approach for solving this problem is to compare gene networks built using experimental gene sets with random networks. One of the resources that make such an analysis possible is CrossTalkZ, which uses the FunCoup database. However, existing methods, including CrossTalkZ, do not take into account individual types of interactions, such as protein/protein interactions, expression regulation, transport regulation, catalytic reactions, etc., but rather work with generalized types characterizing the existence of any connection between network members. We developed the online tool FunGeneNet, which utilizes the ANDSystem and STRING to reconstruct gene networks using experimental gene sets and to estimate their difference from random networks. To compare the reconstructed networks with random ones, the node permutation algorithm implemented in CrossTalkZ was taken as a basis. To study the FunGeneNet applicability, the functional connectivity analysis of networks constructed for gene sets involved in the Gene Ontology biological processes was conducted. We showed that the method sensitivity exceeds 0.8 at a specificity of 0.95. We found that the significance level of the difference between gene networks of biological processes and random networks is determined by the type of connections considered between objects. At the same time, the highest reliability is achieved for the generalized form of connections that takes into account all the individual types of connections. By taking examples of the thyroid cancer networks and the apoptosis network, it is demonstrated that key participants in these processes are involved in the interactions of those types by which these networks differ from random ones. FunGeneNet is a web tool aimed at proving the functionality of networks in a wide range of sizes of
Botoseneanu, Anda; Allore, Heather G; Gahbauer, Evelyne A; Gill, Thomas M
2013-07-01
Gender-specific trajectories of lower extremity function (LEF) and the potential for bias in LEF estimation due to differences in survival have been understudied. We evaluated longitudinal data from 690 initially nondisabled adults age 70 or older from the Precipitating Events Project. LEF was assessed every 18 months for 12 years using a modified Short Physical Performance Battery (mSPPB). Hierarchical linear models with adjustments for length-of-survival estimated the intraindividual trajectory of LEF and differences in trajectory intercept and slope between men and women. LEF declined following a nonlinear trajectory. In the full sample, and among participants with high (mSPPB 10-12) and intermediate (mSPPB 7-9) baseline LEF, the rate-of-decline in mSPPB was slower in women than in men, with no gender differences in baseline mSPPB scores. Among participants with low baseline LEF (mSPPB ≤6), men had a higher starting mSPPB score, whereas women experienced a deceleration in the rate-of-decline over time. In all groups, participants who survived longer had higher starting mSPPB scores and slower rates-of-decline compared with those who died sooner. Over the course of 12 years, older women preserve LEF better than men. Nonadjustment for differences in survival results in overestimating the level and underestimating the rate-of-decline in LEF over time.
International Nuclear Information System (INIS)
González Robaina, Felicita; López Seijas, Teresa
2008-01-01
The modelling of the processes involved in the movement of water in the soil generally requires the general water-flow equation for unsaturated conditions, i.e., the Darcy-Buckingham approach. In this approach the hydraulic conductivity-soil moisture function K(θ) is a fundamental soil property that must be determined for each field condition. Several methods reported in the literature for determining the hydraulic conductivity are based on simplifying assumptions such as the unit-gradient method or a fixed form of K(θ). In recent years, much work has been reported in the search for simple, rapid and inexpensive methods to measure this relationship in the field. One of these methods is the parameterized equation proposed by Reichardt, which uses the parameters of the equations describing the internal drainage process and exploits the exponential nature of the K(θ) relationship. The objective of this work is to estimate K(θ) with the parameterized-equation method. To do so, the results of internal drainage tests on a Ferralsol in an area south of Havana were used. The results show that the parameterized equation provides estimates of K(θ) similar to those of methods that assume unit-gradient conditions
Hoshiba, Yasuhiro; Hirata, Takafumi; Shigemitsu, Masahito; Nakano, Hideyuki; Hashioka, Taketo; Masuda, Yoshio; Yamanaka, Yasuhiro
2018-06-01
Ecosystem models are used to understand ecosystem dynamics and ocean biogeochemical cycles and require optimum physiological parameters to best represent biological behaviours. These physiological parameters are often tuned empirically, even as ecosystem models have evolved to include ever larger numbers of physiological parameters. We developed a three-dimensional (3-D) lower-trophic-level marine ecosystem model known as the Nitrogen, Silicon and Iron regulated Marine Ecosystem Model (NSI-MEM) and employed biological data assimilation using a micro-genetic algorithm to estimate 23 physiological parameters for two phytoplankton functional types in the western North Pacific. The estimation of the parameters was based on a one-dimensional simulation that referenced satellite data for constraining the physiological parameters. The 3-D NSI-MEM optimized by the data assimilation improved the timing of a modelled plankton bloom in the subarctic and subtropical regions compared to the model without data assimilation. Furthermore, the model was able to improve not only surface concentrations of phytoplankton but also their subsurface maximum concentrations. Our results showed that surface data assimilation of physiological parameters from two contrasting observatory stations benefits the representation of vertical plankton distribution in the western North Pacific.
ST Fleur, S.; Courboulex, F.; Bertrand, E.; Mercier De Lepinay, B. F.; Hough, S. E.; Boisson, D.; Momplaisir, R.
2017-12-01
To assess the possible impact of a future earthquake in the urban area of Port-au-Prince (Haiti), we have implemented an approach for simulating the complex ground motions produced by an earthquake. To this end, we have integrated local site effects into the prediction of strong ground motion in Port-au-Prince using the complex transfer function method, which takes into account amplitude changes as well as phase changes. This technique is particularly suitable for basins where a conventional 1D numerical approach proves inadequate, as is the case in Port-au-Prince. To do this, we use the results of the Standard Spectral Ratio (SSR) approach of St Fleur et al. (2016) to estimate the amplitude of the site response relative to a nearby rock site. Then, we determine the phase difference between sites, interpreted as changes in the phase of the signal related to local site conditions, using records of the aftershocks of the 2010 earthquake. Finally, the accelerogram of the simulated earthquake is obtained using the inverse Fourier transform. The results of this study show that the strongest soil motions are expected in the neighborhoods of downtown Port-au-Prince and the adjacent hills. In addition, this simulation method based on complex transfer functions was validated by comparison with recorded data: our simulated response spectra reproduce very well both the amplitude and the shape of the response spectra of recorded earthquakes. This new approach allowed us to reproduce the lengthening of the signal that could be generated by surface waves at certain stations in the city of Port-au-Prince. However, two points of vigilance must be considered: (1) a good signal-to-noise ratio (at least 10) is necessary to obtain a robust estimate of the site-reference phase shift; (2) unless the amplitude and phase changes are measured on strong-motion records, this technique does not take non-linear effects into account.
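The core operation described in this abstract, multiplying a rock-site spectrum by a complex (amplitude and phase) transfer function and inverting back to the time domain, can be sketched as follows. This is an illustrative toy, not the authors' code: the record, the amplification shape, and the phase model are all invented here.

```python
import numpy as np

# Synthetic "rock site" accelerogram: a decaying 2 Hz sinusoid.
fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10.0, 1.0 / fs)
rock = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)

spec = np.fft.rfft(rock)
freqs = np.fft.rfftfreq(len(rock), d=1.0 / fs)

# Hypothetical complex transfer function H(f): ~3x amplification around
# 2 Hz (as an SSR-type amplitude would give) plus a linear phase shift
# standing in for the site-reference phase difference.
amp = 1.0 + 2.0 * np.exp(-((freqs - 2.0) ** 2) / 0.5)
phase = -0.1 * freqs
H = amp * np.exp(1j * phase)

# Simulated soil-site motion via the inverse Fourier transform.
site = np.fft.irfft(spec * H, n=len(rock))
```

Because the transfer function is complex, the phase term delays and reshapes the waveform rather than only scaling its spectrum, which is what allows the method to reproduce signal lengthening.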
International Nuclear Information System (INIS)
Kato, Fumi; Kamishima, Tamotsu; Morita, Ken; Muto, Natalia S.; Okamoto, Syozou; Omatsu, Tokuhiko; Oyama, Noriko; Terae, Satoshi; Kanegae, Kakuko; Nonomura, Katsuya; Shirato, Hiroki
2011-01-01
Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using a newly developed software for computed tomographic (CT) volumetry and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99m Tc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
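The SRV quantity defined in this abstract is a simple ratio, which a minimal sketch makes concrete (the volumes below are invented for illustration; the paper's software computes them from CT volumetry):

```python
# Split renal volume (SRV): ratio of unilateral renal volume to
# bilateral (total) renal volume, expressed as a percentage.
def split_renal_volume(left_ml: float, right_ml: float) -> tuple[float, float]:
    total = left_ml + right_ml
    return 100.0 * left_ml / total, 100.0 * right_ml / total

left_srv, right_srv = split_renal_volume(152.0, 148.0)   # hypothetical volumes
dominant = "left" if left_srv > right_srv else "right"
```

In the study, this per-side percentage is compared against scintigraphy-derived split renal function to decide the dominant kidney.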
Energy Technology Data Exchange (ETDEWEB)
Kato, Fumi, E-mail: fumikato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kamishima, Tamotsu, E-mail: ktamotamo2@yahoo.co.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Morita, Ken, E-mail: kenordic@carrot.ocn.ne.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Muto, Natalia S., E-mail: nataliamuto@gmail.com [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Okamoto, Syozou, E-mail: shozo@med.hokudai.ac.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Omatsu, Tokuhiko, E-mail: omatoku@nirs.go.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Oyama, Noriko, E-mail: ZAT04404@nifty.ne.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Terae, Satoshi, E-mail: saterae@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kanegae, Kakuko, E-mail: IZW00143@nifty.ne.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Nonomura, Katsuya, E-mail: k-nonno@med.hokudai.ac.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Shirato, Hiroki, E-mail: shirato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan)
2011-07-15
Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using a newly developed software for computed tomographic (CT) volumetry and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
International Nuclear Information System (INIS)
Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih
2004-01-01
A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-d-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations of different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were applied to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the averaged error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85 and 16.60±9.61, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good abilities for input function and CMRGlc estimation, and represents a reliable replacement for the blood sampling procedures in FDG-PET quantification. (orig.)
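The regression step at the heart of REIF, learning a linear map from image-derived features to an input-function summary on training subjects and applying it to new subjects, can be sketched as below. This is a simplified stand-in with synthetic data: the feature construction, sample sizes, and targets are only loosely modelled on the abstract.

```python
import numpy as np

# Synthetic training set: 29 subjects (as in the study), 5 features each
# (standing in for interval sums of tissue curves plus injected dose).
rng = np.random.default_rng(0)
n_train, n_feat = 29, 5
X = rng.normal(size=(n_train, n_feat))
true_w = np.array([1.5, -0.7, 0.3, 2.0, 0.1])   # hypothetical coefficients
y = X @ true_w                                   # noiseless synthetic targets
                                                 # (e.g. input-function AUC)

# Multiple linear regression by least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the learned coefficients to 15 held-out subjects, as in the paper.
X_test = rng.normal(size=(15, n_feat))
y_pred = X_test @ w_hat
```

With noiseless synthetic data the coefficients are recovered exactly; the study's reported ~12-17% errors reflect real biological and measurement noise.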
International Nuclear Information System (INIS)
Wells, Jered R.; Dobbins, James T. III
2012-01-01
Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm⁻¹) and approximate circular symmetry at frequencies below 4 mm⁻¹. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm⁻¹. Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square
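The basic idea of assembling a 2D MTF from several angled 1D measurements can be sketched with simple interpolation over angle (the paper uses a surface fit with Delaunay triangulation; this sketch substitutes plain 1D interpolation, and the anisotropic "system" below is entirely hypothetical):

```python
import numpy as np

# Ten edge orientations (as in the paper's edge measurements) and a
# frequency axis up to a 2.5 cycles/mm cutoff.
angles = np.deg2rad(np.arange(0, 91, 10))
freqs = np.linspace(0, 2.5, 26)

# Hypothetical anisotropic detector: slightly faster roll-off near 45 deg.
def measured_mtf(f, theta):
    sigma = 1.2 - 0.2 * np.sin(2 * theta)     # direction-dependent width
    return np.exp(-(f / sigma) ** 2)

# "Measured" 1D MTF curves, one row per edge angle.
samples = np.array([[measured_mtf(f, a) for f in freqs] for a in angles])

def mtf2d(f, theta):
    """Estimate the 2D MTF at (f, theta) from the angled 1D curves."""
    per_angle = np.array([np.interp(f, freqs, row) for row in samples])
    return float(np.interp(theta, angles, per_angle))

val = mtf2d(1.0, np.deg2rad(25.0))            # query between measured angles
```

At a measured angle the interpolant reproduces the 1D measurement exactly; between angles it gives a smooth estimate, which is the role the Delaunay-based surface fit plays in the paper.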
International Nuclear Information System (INIS)
Li Heng; Mohan, Radhe; Zhu, X Ronald
2008-01-01
The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
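The scatter-correction step described above, removing a kernel-modelled scatter component from each projection, can be illustrated with a 1D fixed-point sketch. This is an assumed, simplified forward model (measured = primary + primary convolved with a scatter kernel), not the paper's depth-dependent pencil-beam kernels:

```python
import numpy as np

x = np.linspace(-10, 10, 201)

# Hypothetical scatter kernel: broad exponential, scatter-to-primary
# weight of 0.3 (kernel normalized so its entries sum to 0.3).
kernel = np.exp(-np.abs(x) / 3.0)
kernel *= 0.3 / kernel.sum()

# "True" primary projection of a slab object, and the measured projection
# contaminated by the scatter component.
primary = np.where(np.abs(x) < 5, 1.0, 0.1)
measured = primary + np.convolve(primary, kernel, mode="same")

# Fixed-point deconvolution: p_{n+1} = measured - p_n * kernel.
# Converges because the kernel's total weight (0.3) is below 1.
est = measured.copy()
for _ in range(20):
    est = measured - np.convolve(est, kernel, mode="same")

err = float(np.max(np.abs(est - primary)))
```

After a few iterations the estimate matches the primary signal to numerical precision, which is the mechanism by which kernel-based correction flattens cupping artifacts in the reconstruction.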
International Nuclear Information System (INIS)
Buryi, E V; Kosygin, A A
2004-01-01
It is shown that, when the angular resolution of a receiving optical system is insufficient, the angular dimensions of a located object can be estimated and its shape can be reconstructed by estimating the parameters of the fourth-order correlation function (CF) of scattered coherent radiation. The reliability of the estimates of CF counts obtained by the method of a discrete spatial convolution of the intensity-field counts, the possibility of estimating the CF profile counts by the method of one-dimensional convolution of intensity counts, and the applicability of the method for reconstructing the object shape are confirmed experimentally. (laser applications and other topics in quantum electronics)
Anderson, Josephine L C; Gruppen, Eke G; van Tienhoven-Wind, Lynnda; Eisenga, Michele F; de Vries, Hanne; Gansevoort, Ron T; Bakker, Stephan J L; Dullaart, Robin P F
BACKGROUND: Effects of variations in thyroid function within the euthyroid range on renal function are unclear. Cystatin C-based equations to estimate glomerular filtration rate (GFR) are currently advocated for mortality and renal risk prediction. However, the applicability of cystatin C-based
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (e.g., when collected with the ascending forced-choice procedure of ASTM E679). In the proposed method, the slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted only using the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
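The fixed-slope idea, slide a sigmoid of fixed (group-derived) slope along the stimulus axis and fit only the intercept for each judge, can be sketched as follows. Everything numeric here (the data, the group slope, the grid search) is illustrative, not the authors' procedure:

```python
import numpy as np

# 2-AFC-style psychometric function: guessing floor at 0.5.
def sigmoid(x, slope, intercept):
    return 0.5 + 0.5 / (1.0 + np.exp(-(slope * x + intercept)))

# Hypothetical single-judge data: few replications per concentration.
conc = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])        # log concentration steps
p_correct = np.array([0.5, 0.55, 0.75, 0.95, 1.0])

group_slope = 2.0                                    # fixed from the group fit
intercepts = np.linspace(-5, 5, 1001)
sse = [np.sum((sigmoid(conc, group_slope, b) - p_correct) ** 2)
       for b in intercepts]
best_b = intercepts[int(np.argmin(sse))]

# Threshold: concentration where the fitted function crosses p = 0.75.
threshold = -best_b / group_slope
```

Only one parameter is estimated per judge, which is why the method remains stable with far fewer replications than a full slope-and-intercept fit.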
Silverman, Merav H.; Jedd, Kelly; Luciana, Monica
2015-01-01
Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587
Chantereau, W.; Usher, C.; Bastian, N.
2018-05-01
It is now well-established that most (if not all) ancient globular clusters host multiple populations that are characterised by distinct chemical features such as helium abundance variations along with N-C and Na-O anti-correlations, at fixed [Fe/H]. These very distinct chemical features are similar to what is found in the centres of massive early-type galaxies and may influence measurements of the global properties of the galaxies. Additionally, recent results have suggested that M/L variations found in the centres of massive early-type galaxies might be due to a bottom-heavy stellar initial mass function. We present an analysis of the effects of globular cluster-like multiple populations on the integrated properties of early-type galaxies. In particular, we focus on spectral features in the integrated optical spectrum and the global mass-to-light ratio that have been used to infer variations in the stellar initial mass function. To achieve this we develop appropriate stellar population synthesis models and take into account, for the first time, an initial-final mass relation which takes into consideration a varying He abundance. We conclude that while the multiple populations may be present in massive early-type galaxies, they are likely not responsible for the observed variations in the mass-to-light ratio and IMF-sensitive line strengths. Finally, we estimate the fraction of stars with multiple populations chemistry that come from disrupted globular clusters within massive ellipticals and find that they may explain some of the observed chemical patterns in the centres of these galaxies.
Wirth, Christian; Schumacher, Jens; Schulze, Ernst-Detlef
2004-02-01
To facilitate future carbon and nutrient inventories, we used mixed-effect linear models to develop new generic biomass functions for Norway spruce (Picea abies (L.) Karst.) in Central Europe. We present both the functions and their respective variance-covariance matrices and illustrate their application for biomass prediction and uncertainty estimation for Norway spruce trees ranging widely in size, age, competitive status and site. We collected biomass data for 688 trees sampled in 102 stands by 19 authors. The total number of trees in the "base" model data sets containing the predictor variables diameter at breast height (D), height (H), age (A), site index (SI) and site elevation (HSL) varied according to compartment (roots: n = 114, stem: n = 235, dry branches: n = 207, live branches: n = 429 and needles: n = 551). "Core" data sets with about 40% fewer trees could be extracted containing the additional predictor variables crown length and social class. A set of 43 candidate models representing combinations of lnD, lnH, lnA, SI and HSL, including second-order polynomials and interactions, was established. The categorical variable "author" subsuming mainly methodological differences was included as a random effect in a mixed linear model. The Akaike Information Criterion was used for model selection. The best models for stem, root and branch biomass contained only combinations of D, H and A as predictors. More complex models that included site-related variables resulted for needle biomass. Adding crown length as a predictor for needles, branches and roots reduced both the bias and the confidence interval of predictions substantially. Applying the best models to a test data set of 17 stands ranging in age from 16 to 172 years produced realistic allocation patterns at the tree and stand levels. The 95% confidence intervals (% of mean prediction) were highest for crown compartments (approximately ±12%) and lowest for stem biomass (approximately ±5%), and
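The D-H based biomass functions described above are allometric models that are linear after log transformation, which a minimal sketch makes concrete (coefficients and data below are hypothetical, and the random "author" effect of the mixed model is omitted):

```python
import numpy as np

# Allometric biomass model: ln(B) = b0 + b1*ln(D) + b2*ln(H),
# fitted by ordinary least squares on synthetic trees.
rng = np.random.default_rng(1)
D = rng.uniform(10, 60, size=50)        # diameter at breast height, cm
H = rng.uniform(8, 35, size=50)         # tree height, m
true_b = np.array([-3.0, 2.0, 1.0])     # hypothetical coefficients
lnB = true_b[0] + true_b[1] * np.log(D) + true_b[2] * np.log(H)

X = np.column_stack([np.ones_like(D), np.log(D), np.log(H)])
b_hat, *_ = np.linalg.lstsq(X, lnB, rcond=None)

# Back-transformed biomass prediction for a tree with D = 30 cm, H = 25 m.
pred_kg = float(np.exp(b_hat @ [1.0, np.log(30.0), np.log(25.0)]))
```

In the paper this per-compartment fit is embedded in a mixed-effects framework so that between-author methodological differences do not bias the fixed coefficients.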
Fuente, David; Lizama, Carlos; Urchueguía, Javier F.; Conejero, J. Alberto
2018-01-01
Light attenuation within suspensions of photosynthetic microorganisms has been widely described by the Lambert-Beer equation. However, at depths where most of the light has been absorbed by the cells, light decay deviates from the exponential behaviour and shows a lower attenuation than the corresponding purely exponential fall. This discrepancy can be modelled through the Mittag-Leffler function, extending the Lambert-Beer law via a tuning parameter α that takes into account the attenuation process. In this work, we describe a fractional Lambert-Beer law to estimate light attenuation within cultures of the model organism Synechocystis sp. PCC 6803. Indeed, we benchmark the measured light field inside cultures of two different Synechocystis strains, namely the wild-type and the antenna mutant strain called Olive, at five different cell densities against our in silico results. The Mittag-Leffler hyper-parameter α that best fits the data is 0.995, close to the exponential case. One of the most striking results to emerge from this work is that, unlike prior literature on the subject, it provides experimental evidence on the validity of fractional calculus for determining the light field. We show that by applying the fractional Lambert-Beer law for describing light attenuation, we are able to properly model light decay in suspensions of photosynthetic microorganisms.
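A minimal sketch of the fractional law, assuming the common one-parameter form I(z)/I₀ = E_α(-(kz)^α) with the Mittag-Leffler function E_α evaluated by its power series (adequate for moderate arguments; the attenuation coefficient below is invented):

```python
import math

# One-parameter Mittag-Leffler function E_alpha(z) = sum z^k / Gamma(alpha*k + 1).
# 150 series terms keep Gamma within float range and converge for small |z|.
def mittag_leffler(alpha: float, z: float, terms: int = 150) -> float:
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# Fractional Lambert-Beer attenuation: relative intensity at a given depth.
def attenuation(depth: float, k: float, alpha: float) -> float:
    return mittag_leffler(alpha, -((k * depth) ** alpha))

classical = attenuation(2.0, 0.8, 1.0)      # alpha = 1: exactly exp(-k*z)
fractional = attenuation(2.0, 0.8, 0.995)   # paper's best-fit alpha
```

Setting α = 1 recovers the classical exponential Lambert-Beer law exactly, while α slightly below 1 produces the heavier tail (slower deep-layer attenuation) the measurements show.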
Aroca-Jimenez, Estefania; Bodoque, Jose Maria; Diez-Herrero, Andres
2015-04-01
Flash floods constitute one of the natural hazards best able to generate risk, particularly with regard to society. The complexity of this process and its dependence on various factors related to the characteristics of the basin and the rainfall make flash floods difficult to characterize in terms of their hydrological response. To do this, a proper analysis of the so-called 'initial abstractions' is essential. Among all of these processes, infiltration plays a crucial role in explaining the occurrence of floods in mountainous basins. For its characterization, the Green-Ampt model, which depends on the characteristics of rainfall and the physical properties of the soil, has been used in this work. This method enables the simulation of floods in mountainous basins where the hydrological response is sub-daily. However, it has the disadvantage that it is based on physical properties of the soil which have a high spatial variability. To address this difficulty, soil mapping units have been delineated according to the geomorphological landforms and elements. They represent hydro-functional mapping units that are theoretically homogeneous from the perspective of the pedostructure parameters of the pedon. The soil texture of each homogeneous group of landform units was therefore studied by granulometric analyses using standardized sieves and Sedigraph devices. In addition, the uncertainty associated with the parameterization of the Green-Ampt method has been estimated by implementing a Monte Carlo approach, which required assignment of a proper distribution function to each parameter. The suitability of this method was contrasted by calibrating and validating a hydrological model, in which the generation of the runoff hydrograph was simulated using the SCS unit hydrograph (HEC-GeoHMS software), while flood-wave routing was characterized using the Muskingum-Cunge method. Calibration and validation of the model relied on an automatic routine based on the use of the search algorithm
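The two ingredients named above, the Green-Ampt infiltration model and Monte Carlo propagation of soil-parameter uncertainty, can be sketched together. All parameter values and the lognormal spread of hydraulic conductivity are assumptions for illustration, not the study's calibrated values:

```python
import numpy as np

def green_ampt_F(t_hr, Ks, psi, dtheta, iters=100):
    """Cumulative infiltration F (cm) after t_hr hours under ponding,
    solving F = Ks*t + psi*dtheta*ln(1 + F/(psi*dtheta)) by fixed-point
    iteration (the iteration map is a contraction, so it converges)."""
    pd = psi * dtheta                       # suction head * moisture deficit
    F = max(Ks * t_hr, 1e-9)                # positive starting guess
    for _ in range(iters):
        F = Ks * t_hr + pd * np.log(1.0 + F / pd)
    return F

# Monte Carlo: propagate uncertainty in saturated conductivity Ks
# (hypothetical lognormal distribution) into cumulative infiltration.
rng = np.random.default_rng(2)
Ks_samples = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=2000)  # cm/h
F_samples = np.array([green_ampt_F(2.0, ks, psi=11.0, dtheta=0.3)
                      for ks in Ks_samples])
F_mean, F_p95 = float(F_samples.mean()), float(np.quantile(F_samples, 0.95))
```

The spread of F_samples is the kind of uncertainty band that, fed into the runoff model, lets calibration distinguish parameter uncertainty from model error.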
Aldoghaither, Abeer
2015-12-01
In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.
Aldoghaither, Abeer; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem
2015-01-01
In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.
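The constant-coefficient, integer-order core of the modulating functions idea can be sketched in a few lines. The Python illustration below (a minimal sketch, not the authors' space-varying fractional implementation) recovers v and D in u_t + v*u_x = D*u_xx from sampled data: multiplying the equation by a modulating function and integrating by parts transfers all spatial derivatives from the data onto the (smooth, known) modulating function, yielding exactly the linear algebraic system the abstract describes.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Synthetic data: exact moving-Gaussian solution of u_t + v*u_x = D*u_xx,
# standing in for sampled measurements of the dispersion process.
v_true, D_true, x0, t0 = 0.8, 0.01, 0.3, 0.5

def u(x, t):
    s = 4.0 * D_true * (t + t0)
    return np.exp(-(x - x0 - v_true * t) ** 2 / s) / np.sqrt(np.pi * s)

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
t, dt = 0.01, 1e-4
uc = u(x, t)
ut = (u(x, t + dt) - u(x, t - dt)) / (2.0 * dt)  # time derivative from the data

def itrap(f):
    """Trapezoidal quadrature over the grid."""
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * dx)

# Modulating functions phi_i = x^(i+2)*(1-x)^2 vanish, together with their
# first derivatives, at both ends of [0, 1]; integration by parts then gives
#   int(phi*u_t) dx = v*int(phi'*u) dx + D*int(phi''*u) dx,
# one linear equation in (v, D) per modulating function.
A, b = [], []
for i in range(5):
    phi = Polynomial([0.0] * (i + 2) + [1.0]) * Polynomial([1.0, -1.0]) ** 2
    A.append([itrap(phi.deriv(1)(x) * uc), itrap(phi.deriv(2)(x) * uc)])
    b.append(itrap(phi(x) * ut))

v_est, D_est = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
print(f"v = {v_est:.4f} (true {v_true}),  D = {D_est:.5f} (true {D_true})")
```

Because no derivative of the data in x is ever computed, the same construction remains usable with noisy measurements, which is the robustness property the abstract emphasizes.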
International Nuclear Information System (INIS)
Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A
2009-01-01
Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information while the regularization parameter was selected based on the generalized cross-validation function (GCV). Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step as usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of the Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
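A minimal sketch of the temporal-smoothness variant (Tik-GCV-T) is given below, under assumed toy settings: random event onsets, a canonical-shape HRF, and a second-difference penalty matrix. The names, dimensions, and HRF form are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 200, 25                                   # number of scans, HRF length in TRs
stim = (rng.random(T) < 0.1).astype(float)       # random event-onset vector (assumed)
X = np.column_stack([np.roll(stim, k) for k in range(L)])
for k in range(L):
    X[:k, k] = 0.0                               # remove wrap-around: X is a convolution matrix

tt = np.arange(L, dtype=float)
# Canonical-like double-gamma HRF (illustrative shape)
h_true = tt ** 5 * np.exp(-tt) / 120.0 - 0.1 * tt ** 7 * np.exp(-tt) / 5040.0
y = X @ h_true + 0.02 * rng.standard_normal(T)   # noisy BOLD-like signal

# Second-difference operator: penalizing ||D2 h||^2 imposes temporal smoothness
D2 = np.diff(np.eye(L), n=2, axis=0)

def gcv(lmb):
    """Generalized cross-validation score for the Tikhonov solution at lmb."""
    H = X @ np.linalg.solve(X.T @ X + lmb * D2.T @ D2, X.T)  # hat matrix
    r = y - H @ y
    return float(r @ r) / (T - np.trace(H)) ** 2

lambdas = np.logspace(-4, 2, 30)
lmb = lambdas[np.argmin([gcv(lm) for lm in lambdas])]        # GCV-selected parameter
h_hat = np.linalg.solve(X.T @ X + lmb * D2.T @ D2, X.T @ y)  # regularized HRF estimate
print(f"lambda* = {lmb:.3g}, estimated TTP = {np.argmax(h_hat)} TR "
      f"(true {np.argmax(h_true)})")
```

The spatio-temporal extension (Tik-GCV-ST) would add a spatial coupling term to the penalty, estimating the HRF at neighbouring voxels jointly instead of smoothing the data beforehand.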
International Nuclear Information System (INIS)
Akiko, Furuno; Hideyuki, Kitabata
2003-01-01
Full text: The importance of computer-based decision support systems for local- and regional-scale accidents has been recognized by many countries since the accidental atmospheric releases of radionuclides at Chernobyl in 1986 in the former Soviet Union. The recent increase in nuclear power plants in the Asian region also necessitates an emergency response system for Japan to predict the long-range atmospheric dispersion of radionuclides due to an overseas accident. Against this background, WSPEEDI (Worldwide version of System for Prediction of Environmental Emergency Dose Information) is being developed at the Japan Atomic Energy Research Institute to forecast long-range atmospheric dispersion of radionuclides during a nuclear emergency. Although the source condition is a critical parameter for accurate prediction, it can rarely be acquired in the early stage of an overseas accident. Thus, we have been developing a computer-based function to estimate the radioactive source term, e.g., the release point, time, and amount, as a part of WSPEEDI. This function consists of atmospheric transport simulations and statistical analysis of predicted and monitored air dose rates. Atmospheric transport simulations are carried out for a matrix of possible release points in Eastern Asia and possible release times. The simulated air dose rates are compared with monitoring data, and the best-fitting release condition is adopted as the source term. This paper describes the source term estimation method and its application to Eastern Asia. The latest version of WSPEEDI accommodates the following two models: an atmospheric meteorological model, MM5, and a particle random walk model, GEARN. MM5 is a non-hydrostatic meteorological model developed by the Pennsylvania State University and the National Center for Atmospheric Research (NCAR). MM5 physically calculates more than 40 meteorological parameters with high resolution in time and space based on
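The matrix-of-candidates search can be illustrated schematically. The sketch below replaces the MM5/GEARN simulations with a crude hypothetical plume kernel (all station coordinates and constants are invented), but keeps the logic described above: for each candidate release point and time, fit the release amount to the monitoring data by least squares and keep the candidate with the smallest misfit.

```python
import numpy as np

# Hypothetical stand-in for a unit-release dispersion run: predicted air dose
# rate at each monitoring station for a given (release point, release time).
stations = np.array([[100.0, 20.0], [150.0, -30.0], [220.0, 10.0], [300.0, 60.0]])

def unit_dose(src_xy, t_rel, t_obs):
    dt = t_obs - t_rel
    if dt <= 0:
        return np.zeros(len(stations))
    d = np.linalg.norm(stations - src_xy, axis=1)
    sigma = 10.0 * dt                      # crude plume growth with travel time
    return np.exp(-d ** 2 / (2 * sigma ** 2)) / sigma ** 2

# Synthetic noise-free "monitoring data": source at (60, 0), release at t=3 h,
# release amount 5e3 (all values illustrative).
obs = 5e3 * unit_dose(np.array([60.0, 0.0]), 3.0, 12.0)

# Matrix of candidate release points and times; for each candidate, the release
# amount q enters linearly, so it is fitted in closed form by least squares.
best = None
for xs in np.arange(0.0, 101.0, 20.0):
    for ys in np.arange(-40.0, 41.0, 20.0):
        for t_rel in np.arange(0.0, 10.0, 1.0):
            f = unit_dose(np.array([xs, ys]), t_rel, 12.0)
            if not f.any():
                continue
            q = float(f @ obs) / float(f @ f)           # best-fit release amount
            misfit = float(np.sum((obs - q * f) ** 2))  # residual vs. monitoring data
            if best is None or misfit < best[0]:
                best = (misfit, xs, ys, t_rel, q)

print(f"estimated source: x={best[1]}, y={best[2]}, t_rel={best[3]} h, Q={best[4]:.3g}")
```

In WSPEEDI itself, `unit_dose` would be a table of pre-computed GEARN dose-rate fields driven by MM5 meteorology, and the comparison would run over the full monitoring network rather than four invented stations.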