A Comparison of Item Fit Statistics for Mixed IRT Models
Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.
2010-01-01
In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…
Liao, Tim Futing
2011-01-01
An incomparably useful examination of statistical methods for comparison. The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde…
On the statistical comparison of climate model output and climate data
International Nuclear Information System (INIS)
Solow, A.R.
1991-01-01
Some broad issues arising in the statistical comparison of the output of climate models with the corresponding climate data are reviewed. Particular attention is paid to the question of detecting climate change. The purpose of this paper is to review some statistical approaches to the comparison of the output of climate models with climate data. There are many statistical issues arising in such a comparison. The author will focus on some of the broader issues, although some specific methodological questions will arise along the way. One important potential application of the approaches discussed in this paper is the detection of climate change. Although much of the discussion will be fairly general, he will try to point out the appropriate connections to the detection question. 9 refs
Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T
2016-05-01
Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicolson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set, sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
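The benchmark comparison above reduces each model to a skill score relative to a reference prediction. Below is a minimal sketch on synthetic numbers; the MSE-based definition SS = 100·(1 − MSE_model/MSE_ref) is an assumption, since the abstract does not state the exact formula used:

```python
import numpy as np

def skill_score(pred, obs, ref):
    """Mean-squared-error skill score (%) of `pred` relative to a
    reference prediction `ref`, against observations `obs`.
    100% = perfect; 0% = no better than the reference; negative = worse."""
    mse_model = np.mean((pred - obs) ** 2)
    mse_ref = np.mean((ref - obs) ** 2)
    return 100.0 * (1.0 - mse_model / mse_ref)

# Toy example: observations, a good model, and a flat baseline that
# ignores the variation in the data (akin to a homogeneous-atmosphere
# reference).
obs = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
good = obs + np.array([0.1, -0.2, 0.1, 0.0, -0.1])   # small errors
ref = np.full_like(obs, obs.mean())                   # flat baseline

ss_good = skill_score(good, obs, ref)   # close to 100%
ss_ref = skill_score(ref, obs, ref)     # exactly 0% by construction
```

By construction, the reference scores 0% against itself, and a model with small errors scores near 100%, mirroring the spread between the engineering and statistical learning models reported above.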
Model Accuracy Comparison for High Resolution InSAR Coherence Statistics Over Urban Areas
Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi
2016-06-01
The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications it can act as an independent information source, or give additional information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characteristics of SAR intensity, there has been much less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, all existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, without discriminating between data resolutions or scene types; yet the properties of coherence may differ with data resolution and scene type. In this paper, we investigate the coherence statistics of high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. We then model the statistics of the pixel coherence for each type of region with different distributions, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. Experiments on TanDEM-X data show that the Beta model outperforms the other distributions.
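The distribution-comparison step in this abstract can be illustrated with a small self-contained sketch: fit several candidate distributions to simulated coherence values in (0, 1) and rank them by log-likelihood. The simulated data, the moment-matching estimators, and the three-candidate shortlist are illustrative assumptions, not the paper's actual TanDEM-X pipeline:

```python
import math
import random

random.seed(0)
# Simulated coherence samples in (0, 1), standing in for the pixels
# of one labelled region (e.g. "buildings").
data = [random.betavariate(5.0, 2.0) for _ in range(2000)]
m = sum(data) / len(data)                       # sample mean
v = sum((x - m) ** 2 for x in data) / len(data) # sample variance

def ll_gauss(xs):
    """Gaussian log-likelihood at the MLE (mean, variance)."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
               for x in xs)

def ll_rayleigh(xs):
    """Rayleigh log-likelihood at the MLE scale sigma^2 = E[x^2]/2."""
    s2 = sum(x * x for x in xs) / (2 * len(xs))
    return sum(math.log(x / s2) - x * x / (2 * s2) for x in xs)

def ll_beta(xs):
    """Beta log-likelihood with method-of-moments shape estimates."""
    c = m * (1 - m) / v - 1
    a, b = m * c, (1 - m) * c
    lb = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return sum(lb + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
               for x in xs)

scores = {"gaussian": ll_gauss(data),
          "rayleigh": ll_rayleigh(data),
          "beta": ll_beta(data)}
best = max(scores, key=scores.get)   # highest log-likelihood wins
```

On data drawn from a Beta distribution the Beta candidate wins, which is the shape of the conclusion the paper reaches for real urban coherence samples.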
Kolokythas, Kostantinos; Vasileios, Salamalikis; Athanassios, Argiriou; Kazantzidis, Andreas
2015-04-01
The wind is the result of complex interactions of numerous mechanisms taking place at small or large scales, so better knowledge of its behavior is essential in a variety of applications, especially in wind power production. The literature contains a considerable number of models, physical or statistical, dealing with the simulation and prediction of wind speed. Among others, Artificial Neural Networks (ANNs) are widely used for wind forecasting and, in the great majority of cases, outperform other conventional statistical models. In this study, a number of ANNs with different architectures, created and applied to a dataset of wind time series, are compared to Auto Regressive Integrated Moving Average (ARIMA) statistical models. The data consist of mean hourly wind speeds from a wind farm in a hilly Greek region and cover a period of one year (2013). The main goal is to evaluate each model's ability to simulate the wind speed successfully at a significant point (target). Goodness-of-fit statistics are computed for the comparison of the different methods. In general, the ANNs showed the best performance, prevailing over the ARIMA models in the estimation of wind speed.
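As a toy version of the time-series comparison described above, the sketch below fits a first-order autoregressive model to synthetic hourly wind speeds and compares its one-step forecasts against naive persistence by RMSE. The AR(1) stand-in and the synthetic series are assumptions for illustration; the study itself used full ARIMA models and ANNs:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic hourly wind speed (m/s): an AR(1) process around a mean,
# standing in for the wind-farm measurements.
n, mu, phi = 3000, 6.0, 0.6
w = np.empty(n)
w[0] = mu
for t in range(1, n):
    w[t] = mu + phi * (w[t - 1] - mu) + rng.normal(0.0, 1.0)

train, test = w[:2000], w[2000:]

# Fit the AR(1) coefficient by least squares on the lagged training series.
x = train[:-1] - train.mean()
y = train[1:] - train.mean()
phi_hat = float(x @ y / (x @ x))

# One-step-ahead forecasts on the hold-out set.
pred_ar = train.mean() + phi_hat * (test[:-1] - train.mean())
pred_persist = test[:-1]          # naive "no change" forecast

def rmse(p, o):
    return float(np.sqrt(np.mean((p - o) ** 2)))

rmse_ar = rmse(pred_ar, test[1:])
rmse_persist = rmse(pred_persist, test[1:])
```

Even this minimal model beats persistence on its own synthetic data; the paper's point is the analogous ranking of ANNs over ARIMA under goodness-of-fit statistics.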
Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity
Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.
As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing, and potentially rejecting, null hypotheses. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis on the comparison of the accuracy and performance of hybrid network theory, with pure Bayesian and Fuzzy systems and an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo Simulation, in comparison to situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed, to quantify the benefit of hybrid inference to other fusion tools.
Straub, Annette; Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Geruschkat, Uta; Jacobeit, Jucundus; Kühlbach, Benjamin; Kusch, Thomas; Richter, Katja; Schneider, Alexandra; Umminger, Robin; Wolf, Kathrin
2017-04-01
Frequently spatial variations of air temperature of considerable magnitude occur within urban areas. They correspond to varying land use/land cover characteristics and vary with season, time of day and synoptic conditions. These temperature differences have an impact on human health and comfort directly by inducing thermal stress as well as indirectly by means of affecting air quality. Therefore, knowledge of the spatial patterns of air temperature in cities and the factors causing them is of great importance, e.g. for urban planners. A multitude of studies have shown statistical modelling to be a suitable tool for generating spatial air temperature patterns. This contribution presents a comparison of different statistical modelling approaches for deriving spatial air temperature patterns in the urban environment of Augsburg, Southern Germany. In Augsburg there exists a measurement network for air temperature and humidity currently comprising 48 stations in the city and its rural surroundings (corporately operated by the Institute of Epidemiology II, Helmholtz Zentrum München, German Research Center for Environmental Health and the Institute of Geography, University of Augsburg). Using different datasets for land surface characteristics (Open Street Map, Urban Atlas), area percentages of different types of land cover were calculated for quadratic buffer zones of different size (25, 50, 100, 250, 500 m) around the stations as well as for source regions of advective air flow, and used as predictors together with additional variables such as sky view factor, ground level and distance from the city centre. Multiple Linear Regression and Random Forest models for different situations, taking into account season, time of day and weather condition, were applied utilizing selected subsets of these predictors in order to model spatial distributions of mean hourly and daily air temperature deviations from a rural reference station. Furthermore, the different model setups were…
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets. Model performance was nonetheless good for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable…
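One way to quantify the "spatial overlap" between two binary prediction maps, as compared in this study, is the intersection area divided by the mean footprint area. The metric choice and the synthetic maps below are illustrative assumptions; the paper's exact overlap statistic may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two binary presence maps for one species: a base map standing in for
# the expert-variable model, and a perturbed copy standing in for the
# statistical-variable model (both synthetic).
base = rng.random((100, 100)) < 0.3
map_expert = base.copy()
flips = rng.random((100, 100)) < 0.1          # disagree on ~10% of cells
map_stat = np.where(flips, ~base, base)

both = np.logical_and(map_expert, map_stat).sum()
mean_footprint = 0.5 * (map_expert.sum() + map_stat.sum())
overlap = float(both / mean_footprint)        # 1.0 = identical footprints
```

With ~10% of cells disagreeing, the overlap lands well below 1 despite the maps sharing most of their footprint, illustrating how moderate per-cell divergence produces the roughly 60% overlaps reported above.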
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit, since all of these approaches to sequence comparison exploit information on k-word distributions, Markov models or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences, and phylogenetic analysis, offering a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those of existing alignment-based and alignment-free methods. We grouped our experiments into two sets. The first, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and to discriminate functionally related regulatory sequences from unrelated sequences. The second aims at assessing how well our statistical measures perform in phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into a Markov model, are more efficient.
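The k-word idea underlying such alignment-free measures can be illustrated with a plain cosine similarity between k-mer count vectors. This is a deliberately simple stand-in, not the paper's wre.k.r or S2.k.r measures:

```python
from collections import Counter
from math import sqrt

def kmer_counts(seq, k):
    """Counts of all overlapping k-words in `seq`."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine(c1, c2):
    """Cosine similarity between two k-word count vectors."""
    dot = sum(c1[w] * c2[w] for w in set(c1) | set(c2))
    n1 = sqrt(sum(v * v for v in c1.values()))
    n2 = sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2)

a = "ATGCGATACGCTTAGGCTAATCGGATCG"
b = "ATGCGATACGCTTAGGCTAATCGGATCC"   # one substitution vs `a`
c = "TTTTTTTTTTAAAAAAAAAACCCCCCCC"   # unrelated composition

sim_ab = cosine(kmer_counts(a, 3), kmer_counts(b, 3))  # near 1
sim_ac = cosine(kmer_counts(a, 3), kmer_counts(c, 3))  # near 0
```

Closely related sequences share almost all their 3-words and score near 1, while compositionally unrelated sequences score near 0; the paper's contribution is to sharpen this kind of signal by folding Markov-model expectations into the word statistics.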
International Nuclear Information System (INIS)
Norman, S.
1992-04-01
The origin of this study was to find a good, or even the best, stochastic model for the hydraulic conductivity field at the Finnsjoe site. The conductivity fields in question are regularized, that is, upscaled. The primary reason for regularizing measurement data is the need for long correlation scales, which are required in order to model reasonably large domains that can describe regional groundwater flow accurately. A theory of regularization is discussed in this report. In order to find the best model, jackknifing is employed to compare different stochastic models; the theory for this method is described. In doing so we also examine linear predictor theory, so-called kriging, and include a general discussion of stochastic functions and intrinsic random functions. The statistical inference methods for finding the models are also described, in particular regression, iterative generalized regression (IGLSE) and non-parametric variogram estimators. A large number of results are presented for a regularization scale of 36 metres. (30 refs.) (au)
Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M
2018-06-01
To compare right adrenal vein (RAV) visualisation and the degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner, respectively). RAV visualisation confidence ratings were significantly greater, and background noise significantly lower, with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant
2016-05-15
Many studies in the Aerosol Optical Depth (AOD) literature use Moderate Resolution Imaging Spectroradiometer (MODIS) derived data, but the accuracy of satellite data in comparison to ground data derived from the AERosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for the prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error, normalized Bayesian information criterion (NBIC) and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict AOD by statistical modeling using time series of past MODIS and AERONET data as input. Moreover, the results show that MODIS values can be estimated from AERONET values by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecasts of AOD for the next four years (2013-2017) produced with the developed ARIMA model, it is concluded that the forecasted ground AOD shows an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.
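The additive MODIS-AERONET relationship reported above (≈0.2516 ± 0.1336) can be sketched on synthetic paired data: estimate the mean offset and apply it as a correction. The simulated values and noise level are assumptions chosen to loosely echo the reported numbers:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic paired daily AOD records: MODIS modeled as AERONET plus an
# additive bias with scatter (both the bias and the noise are assumed).
aeronet = rng.uniform(0.2, 0.9, 500)
modis = aeronet + 0.25 + rng.normal(0.0, 0.13, 500)

diff = modis - aeronet
bias = float(diff.mean())        # estimated additive offset
spread = float(diff.std(ddof=1)) # its scatter

# "Converting" one record into the other with the additive correction.
modis_est = aeronet + bias
rmse_after = float(np.sqrt(np.mean((modis_est - modis) ** 2)))
```

The residual RMSE after correction is governed by the scatter term alone, which is why the paper reports the offset with its ± spread rather than as an exact conversion.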
Martin, Jordan S; Suarez, Scott A
2017-08-01
Interest in quantifying consistent among-individual variation in primate behavior, also known as personality, has grown rapidly in recent decades. Although behavioral coding is the most frequently utilized method for assessing primate personality, limitations in current statistical practice prevent researchers from utilizing the full potential of their coding datasets. These limitations include the use of extensive data aggregation, not modeling biologically relevant sources of individual variance during repeatability estimation, not partitioning between-individual (co)variance prior to modeling personality structure, the misuse of principal component analysis, and an over-reliance upon exploratory statistical techniques to compare personality models across populations, species, and data collection methods. In this paper, we propose a statistical framework for primate personality research designed to address these limitations. Our framework synthesizes recently developed mixed-effects modeling approaches for quantifying behavioral variation with an information-theoretic model selection paradigm for confirmatory personality research. After detailing a multi-step analytic procedure for personality assessment and model comparison, we employ this framework to evaluate seven models of personality structure in zoo-housed bonobos (Pan paniscus). We find that differences between sexes, ages, zoos, time of observation, and social group composition contributed to significant behavioral variance. Independently of these factors, however, personality nonetheless accounted for a moderate to high proportion of variance in average behavior across observational periods. A personality structure derived from past rating research receives the strongest support relative to our model set. This model suggests that personality variation across the measured behavioral traits is best described by two correlated but distinct dimensions reflecting individual differences in affiliation and…
International Nuclear Information System (INIS)
Lachet, Bernard.
1975-01-01
A statistical study was carried out on 208 survival curves for chlorella subjected to γ or particle radiation. The computing programmes used were written in Fortran. The different experimental causes contributing to the variance of a survival rate are analyzed, so that experiments can be planned accordingly. Each curve was fitted to four models by the weighted least squares method applied to non-linear functions. The validity of the fits obtained can be checked by the F test. It was possible to define the confidence and prediction zones around an adjusted curve by weighting of the residual variance, in spite of error in the doses delivered; confidence limits can then be fixed for a dose estimated from an exact or measured survival. The four models adopted were compared for the precision of their fit (by a non-parametric simultaneous comparison test) and the scattering of their adjusted parameters: Wideroe's model gives a very good fit to the experimental points at the cost of a scattering of its parameters, which robs them of their presumed meaning. Principal component analysis showed the statistical equivalence of the one-hit and two-hit target models. Division of the irradiation into two doses, the first fixed by the investigator, leads to families of curves whose equation was established from that of any basic model expressing the dose-survival relationship in one-stage irradiation.
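The model comparison described above, including the one-hit versus two-hit target models, can be sketched by least-squares fitting of a multi-target survival curve. The synthetic data below were generated near a two-hit curve with D0 = 1.8 Gy, and the fit is unweighted for brevity, whereas the study used weighted least squares:

```python
import math

# Synthetic dose-survival data (surviving fractions), lightly rounded.
doses = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
surv  = [1.00, 0.82, 0.55, 0.34, 0.21, 0.12, 0.07]

def multi_target(d, d0, n):
    """n-target single-hit survival: S(D) = 1 - (1 - exp(-D/D0))^n."""
    return 1.0 - (1.0 - math.exp(-d / d0)) ** n

def sse(d0, n):
    """Sum of squared errors of the model against the data."""
    return sum((multi_target(d, d0, n) - s) ** 2 for d, s in zip(doses, surv))

def fit(n):
    """Grid-search D0 in [0.5, 4.0) Gy for an n-target model."""
    return min(((sse(d0 / 100.0, n), d0 / 100.0) for d0 in range(50, 400)),
               key=lambda p: p[0])

sse1, d0_1 = fit(1)   # one-hit: pure exponential, no shoulder
sse2, d0_2 = fit(2)   # two-hit: shouldered curve
better = 2 if sse2 < sse1 else 1
```

Because the data have a shoulder at low doses, the two-hit model fits far better; on real chlorella curves the paper instead found the two variants statistically equivalent, which is exactly the kind of question this comparison machinery answers.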
Weerasinghe, Dash; Orsak, Timothy; Mendro, Robert
In an age of student accountability, public school systems must find procedures for identifying effective schools, classrooms, and teachers that help students continue to learn academically. As a result, researchers have been modeling schools and classrooms to calculate productivity indicators that will withstand not only statistical review but…
A Comparison of Selected Statistical Techniques to Model Soil Cation Exchange Capacity
Khaledian, Yones; Brevik, Eric C.; Pereira, Paulo; Cerdà, Artemi; Fattah, Mohammed A.; Tazikeh, Hossein
2017-04-01
Cation exchange capacity (CEC) measures the soil's ability to hold positively charged ions and is an important indicator of soil quality (Khaledian et al., 2016). However, other soil properties are more commonly determined and reported, such as texture, pH, organic matter and biology. We attempted to predict CEC using different advanced statistical methods, including monotone analysis of variance (MONANOVA), artificial neural networks (ANNs), principal components regression (PCR), and particle swarm optimization (PSO), in order to compare the utility of these approaches and identify the best predictor. We analyzed 170 soil samples from four nations (USA, Spain, Iran and Iraq) under three land uses (agriculture, pasture, and forest). Seventy percent of the samples (120 samples) were selected as the calibration set and the remaining 50 samples (30%) were used as the prediction set. The results indicated that MONANOVA (R² = 0.82, root mean squared error (RMSE) = 6.32) and ANNs (R² = 0.82, RMSE = 5.53) were the best models for estimating CEC; PSO (R² = 0.80, RMSE = 5.54) and PCR (R² = 0.70, RMSE = 6.48) also worked well, and the overall results were very similar to each other. Clay (positively correlated) and sand (negatively correlated) were the most influential variables for predicting CEC for the entire data set, while the most influential variables for the various countries and land uses differed, with CEC affected by different variables in different situations. Although MONANOVA and ANNs provided good predictions of the entire dataset, PSO gives a formula to estimate soil CEC using commonly tested soil properties, and therefore shows promise as a technique to estimate soil CEC. Establishing effective pedotransfer functions to predict CEC would be productive where there are limitations of time and money and other commonly analyzed soil properties are available. References Khaledian, Y., Kiani, F., Ebrahimi, S., Brevik, E.C., Aitkenhead…
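Of the four methods compared above, principal components regression is easy to sketch with standard linear algebra: standardize the predictors, project onto the leading principal components, and regress CEC on the scores. The soil data below are synthetic, and the data-generating rule (CEC rising with clay and organic matter, falling with sand) is an assumption loosely motivated by the abstract's findings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 170
clay = rng.uniform(5, 60, n)
sand = 100 - clay - rng.uniform(10, 30, n)   # silt takes the remainder
om = rng.uniform(0.5, 6.0, n)                # organic matter (%)
ph = rng.uniform(4.5, 8.5, n)
X = np.column_stack([clay, sand, om, ph])
# Assumed data-generating rule for the synthetic target.
cec = 0.35 * clay - 0.10 * sand + 2.0 * om + rng.normal(0, 2.0, n)

# Principal components regression on the top-k components.
Xs = (X - X.mean(0)) / X.std(0)              # standardize predictors
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3
T = Xs @ Vt[:k].T                            # component scores
A = np.column_stack([np.ones(n), T])
coef, *_ = np.linalg.lstsq(A, cec, rcond=None)
pred = A @ coef

ss_res = np.sum((cec - pred) ** 2)
ss_tot = np.sum((cec - cec.mean()) ** 2)
r2 = float(1 - ss_res / ss_tot)
```

Because clay and sand are strongly anti-correlated, their shared variation collapses into one leading component, so three components recover nearly all of the predictable signal, which is the dimensionality-reduction rationale behind PCR.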
Everson, Howard T.; And Others
This paper explores the feasibility of neural computing methods such as artificial neural networks (ANNs) and abductory induction mechanisms (AIM) for use in educational measurement. ANN and AIM methods are contrasted with more traditional statistical techniques, such as multiple regression and discriminant function analyses, for making…
Directory of Open Access Journals (Sweden)
E. Baümler
2014-06-01
In this study the kinetics of oil extraction from partially dehulled safflower seeds under two moisture conditions (7 and 9% dry basis) was investigated. The extraction assays were performed using a stirred batch system, thermostated at 50 °C, with n-hexane as solvent. The data obtained were fitted to a modified diffusion model in order to represent the extraction kinetics. The model took into account a washing step and a diffusive step. Fitting parameters were compared statistically for both moisture conditions. The oil yield increased with the extraction time in both cases, although the oil was released at different rates. A comparison of the parameters showed that both the portion extracted in the washing phase and the effective diffusion coefficient were moisture-dependent. The effective diffusivities were 2.81 × 10⁻¹² and 8.06 × 10⁻¹³ m² s⁻¹ for moisture contents of 7% and 9%, respectively.
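A washing-plus-diffusion extraction curve of the kind fitted above can be estimated with a simple grid search: for each candidate rate constant the remaining parameters enter linearly. The functional form y(t) = w + d·(1 − e^(−kt)) and the synthetic data are simplified stand-ins for the paper's modified diffusion model:

```python
import numpy as np

# Synthetic extraction-yield curve: an instantaneous "washing" fraction w
# plus a first-order diffusive term (parameters below are assumed).
t = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)  # minutes
def model(t, w, d, k):
    return w + d * (1 - np.exp(-k * t))

rng = np.random.default_rng(3)
y = model(t, 0.30, 0.65, 0.05) + rng.normal(0, 0.01, t.size)

# Grid-search the rate constant k; for each k, w and d are found by
# ordinary linear least squares.
best = None
for k in np.linspace(0.005, 0.2, 400):
    A = np.column_stack([np.ones_like(t), 1 - np.exp(-k * t)])
    (w, d), *_ = np.linalg.lstsq(A, y, rcond=None)
    err = float(np.sum((A @ np.array([w, d]) - y) ** 2))
    if best is None or err < best[0]:
        best = (err, float(k), float(w), float(d))

sse_best, k_hat, wash_hat, diff_hat = best
```

Running the fit twice, once per moisture level, and comparing the recovered washing fractions and rate constants mirrors the paper's statistical comparison of moisture-dependent parameters.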
Sampling, Probability Models and Statistical Reasoning: Statistical Inference
Indian Academy of Sciences (India)
Mohan Delampady and V R Padmawar. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp. 49-58.
Percent Effort vs. Fee-for-Service: A Comparison of Models for Statistical Collaboration
Ittenbach, Richard F.; DeAngelis, Francis W.
2012-01-01
Many statisticians are uncomfortable with discussions about the financial implications of their work. Those who are comfortable may not fully understand the policies and procedures underlying the financial operations of the department. The purpose of the present paper is twofold: first, to describe two predominant models of compensation used by…
Othman, A. A.; Gloaguen, R.; Andreani, L.; Rahnama, M.
2015-03-01
During recent decades, the expansion of settlements into landslide-prone areas in Iraq has increased the importance of accurate hazard assessment. Susceptibility mapping provides information about hazardous locations and thus helps to prevent infrastructure damage due to mass wasting. The aim of this study is to evaluate and compare frequency ratio (FR), weight of evidence (WOE), logistic regression (LR) and probit regression (PR) approaches, in combination with new geomorphological indices, for determining the landslide susceptibility index (LSI). We tested these four methods in the Mawat area, Kurdistan Region, NE Iraq, where landslides occur frequently. For this purpose, we evaluated 16 geomorphological, geological and environmental predicting factors mainly derived from the advanced spaceborne thermal emission and reflection radiometer (ASTER) satellite. The available reference inventory includes 351 landslides representing a cumulative surface of 3.127 km². This reference inventory was mapped from QuickBird data by manual delineation and partly verified by field survey. The area under the curve (AUC) of the receiver operating characteristic (ROC) and the relative landslide density (R index) show that all models perform similarly and that focus should be put on the careful selection of proxies. The results indicate that lithology and slope aspect play major roles in landslide occurrence. Furthermore, this paper demonstrates that using the hypsometric integral as a prediction factor instead of slope curvature gives better results and increases the accuracy of the LSI.
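The frequency ratio (FR) approach evaluated above admits a compact sketch: for each class of a predictor, FR is the landslide share in that class divided by the class's areal share, and a susceptibility index sums the FR values of each cell's classes. The synthetic terrain and the AUC check below are illustrative assumptions, not the Mawat dataset:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
# Synthetic terrain attributes per cell: slope class (0-3), lithology (0-2).
slope = rng.integers(0, 4, n)
litho = rng.integers(0, 3, n)
# Assumed ground truth: landslides favour steep slopes and one weak unit.
p = 0.02 + 0.05 * slope + 0.06 * (litho == 2)
slide = rng.random(n) < p

def freq_ratio(cls, slide, k):
    """FR per class: landslide share in class / areal share of class."""
    fr = np.zeros(k)
    for c in range(k):
        in_c = cls == c
        fr[c] = (slide[in_c].sum() / slide.sum()) / (in_c.sum() / len(cls))
    return fr

fr_slope = freq_ratio(slope, slide, 4)
fr_litho = freq_ratio(litho, slide, 3)
lsi = fr_slope[slope] + fr_litho[litho]   # landslide susceptibility index

# AUC of the LSI via the rank (Mann-Whitney) formulation, with ties.
pos, neg = lsi[slide], lsi[~slide]
gt = (pos[:, None] > neg[None, :]).mean()
eq = (pos[:, None] == neg[None, :]).mean()
auc = float(gt + 0.5 * eq)
```

FR values above 1 flag classes that are over-represented among landslides; the ROC/AUC check is the same validation the paper applies to all four methods.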
International Nuclear Information System (INIS)
Bashahu, M.
2003-01-01
Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. Sixteen years of data on the global (H) and diffuse (H_d) radiation, together with data on the bright sunshine hours (N), the fraction of sky cover (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K_d and N/N_d, Ne/8 or K_t. Although they also perform adequately, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)
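The MBE/RMSE/t inter-comparison mentioned above can be sketched as follows. The t statistic shown is the form commonly used in solar-radiation model validation, t = sqrt((n−1)·MBE²/(RMSE²−MBE²)); the `obs`/`est` values are invented for illustration, not taken from the Dakar data set.

```python
import math

def mbe(est, obs):
    """Mean bias error of model estimates against measurements."""
    return sum(e - o for e, o in zip(est, obs)) / len(obs)

def rmse(est, obs):
    """Root mean square error."""
    return math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / len(obs))

def t_stat(est, obs):
    """t = sqrt((n-1) MBE^2 / (RMSE^2 - MBE^2)); the model is judged
    statistically acceptable when t stays below the critical t value."""
    n = len(obs)
    m, r = mbe(est, obs), rmse(est, obs)
    return math.sqrt((n - 1) * m * m / (r * r - m * m))

obs = [4.1, 4.8, 5.6, 6.0, 5.2, 4.5]  # invented measured diffuse radiation
est = [4.0, 4.9, 5.5, 6.2, 5.1, 4.4]  # invented model estimates
print(round(mbe(est, obs), 4), round(rmse(est, obs), 4), round(t_stat(est, obs), 4))
```

A small t (relative to the tabulated critical value at the chosen significance level and n−1 degrees of freedom) is what "not significantly different from their measured counterparts" amounts to operationally.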
Energy Technology Data Exchange (ETDEWEB)
Bashahu, M. [University of Burundi, Bujumbura (Burundi). Institute of Applied Pedagogy, Department of Physics and Technology
2003-07-01
Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. Sixteen years of data on the global (H) and diffuse (H{sub d}) radiation, together with data on the bright sunshine hours (N), the fraction of sky cover (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K{sub d} and N/N{sub d}, Ne/8 or K{sub t}. Although they also perform adequately, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)
Diffeomorphic Statistical Deformation Models
DEFF Research Database (Denmark)
Hansen, Michael Sass; Hansen, Mads Fogtman; Larsen, Rasmus
2007-01-01
In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al. ... The modifications ensure that no boundary restriction has to be enforced on the parameter space to prevent folds or tears in the deformation field. For straightforward statistical analysis, principal component analysis and sparse methods, we assume that the parameters for a class of deformations lie on a linear ... with ground truth in the form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes. ...
Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime
2013-08-09
The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts, compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) under standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) who were suspected of lung disease on chest radiograms. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (score 5 = excellent and score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of the standard-dose FBP image (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better compared to low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in image quality scores of visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR (21.1 ± 2.6 HU) and standard-dose FBP CT (16.6 ± 2.3 HU). With a radiation dose reduction of approximately 70%, MBIR can provide ...
Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R
2018-02-01
Iterative reconstruction (IR) algorithms allow for a significant reduction in radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate signal-to-noise ratio (SNR). In a subgroup of 36 patients, diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to −48%), MBIR provided an even greater noise reduction (−59% compared with ASiR 100%). ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.
Energy Technology Data Exchange (ETDEWEB)
Otake, M [Hiroshima Univ. (Japan). Faculty of Science
1976-12-01
Various statistical models designed to determine the effects of radiation dose on mortality of atomic bomb survivors in Hiroshima and Nagasaki from specific cancers were evaluated on the basis of a basic k(age) x c(dose) x 2 contingency table. In view of the applicability and fit of the different models, an analysis based on the additive logit model was applied to the mortality experience of this population during the 22-year period from 1 Oct. 1950 to 31 Dec. 1972. The advantages and disadvantages of the additive logit model were demonstrated. Leukemia mortality showed a sharp rise with increasing dose. The dose-response relationship suggests a possible curvature or a log-linear model, particularly if doses estimated to be more than 600 rad were set arbitrarily at 600 rad, since the average dose in the 200+ rad group would then change from 434 to 350 rad. In the 22-year period from 1950 to 1972, a high mortality risk due to radiation was observed in survivors with doses of 200 rad and over for all cancers except leukemia. On the other hand, during the latest period, from 1965 to 1972, a significant risk was noted also for stomach and breast cancers. Survivors who were 9 years old or younger at the time of the bomb and who were exposed to high doses of 200+ rad appeared to show a high mortality risk for all cancers except leukemia, although the number of observed deaths is still small. A number of interesting areas are discussed from the statistical and epidemiological standpoints, i.e., the numerical comparison of risks in various models, the general evaluation of cancer mortality by the additive logit model, the dose-response relationship, the relative risk in the high-dose group, the time period of radiation-induced cancer mortality, the difference in dose response between Hiroshima and Nagasaki, and the relative biological effectiveness of neutrons.
Directory of Open Access Journals (Sweden)
Ozonoff, Al
2010-07-01
Background: A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. whether the smoothing term is necessary. Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results: This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions: The GAM ...
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. whether the smoothing term is necessary. Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression ...
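A label-permutation test of the kind discussed in this abstract can be sketched with a toy statistic. Everything below is illustrative: `proximity_gap` is a simple stand-in for the paper's GAM-based statistic, and the case/control distances are invented.

```python
import random

def perm_test(stat, cases, controls, n_perm=999, seed=7):
    """One-sided label-permutation p-value for an observed spatial statistic."""
    obs = stat(cases, controls)
    pooled = cases + controls
    rng = random.Random(seed)
    extreme = 1  # the observed arrangement counts as one permutation
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:len(cases)], pooled[len(cases):]) >= obs:
            extreme += 1
    return extreme / (n_perm + 1)

def proximity_gap(cases, controls):
    """Toy statistic: how much closer cases sit to a putative source at x = 0."""
    mean = lambda v: sum(v) / len(v)
    return mean(controls) - mean(cases)

cases = [0.1, 0.2, 0.3, 0.15, 0.25]     # invented distances of cases to source
controls = [0.8, 0.9, 0.7, 0.85, 0.95]  # invented distances of controls
p = perm_test(proximity_gap, cases, controls)
print(p)  # small: the case labels are strongly tied to proximity
```

Replacing `proximity_gap` with a deviance-based statistic from a fitted smoother recovers the GAM permutation framework; the shuffling logic stays the same.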
International Nuclear Information System (INIS)
Schoenwiese, C.D.
1990-01-01
Based on univariate correlation and coherence analyses, including moving-window techniques in time, and taking account of the physical basis of the relationships, a simple multivariate concept is presented which correlates observational climatic time series simultaneously with solar, volcanic, ENSO (El Niño/Southern Oscillation) and anthropogenic greenhouse-gas forcing. The climatic elements considered are air temperature (near the ground and in the stratosphere), sea surface temperature, sea level and precipitation, and cover at least the period 1881-1980 (stratospheric temperature only since 1960). The climate signal assessments which may hypothetically be attributed to the observed CO2 or equivalent CO2 (implying additional greenhouse gases) increase are compared with those resulting from GCM experiments. In the case of the Northern Hemisphere air temperature, these comparisons are performed not only with respect to hemispheric and global means, but also with respect to regional and seasonal patterns. Autocorrelations and phase shifts of the climate response to natural and anthropogenic forcing complicate the statistical assessments.
Energy Technology Data Exchange (ETDEWEB)
ChiBin, Zhang; XiaoHui, Lin, E-mail: lxh60@seu.edu.cn; ZhaoMin, Wang; ChangBao, Wang
2017-03-15
In experiments and theoretical analyses, this study examines the capture efficiency (CE) of magnetic drug carrier particles (MDCPs) for implant-assisted magnetic drug targeting (IA-MDT) in microvessels. It also proposes a three-dimensional statistical transport model of MDCPs for IA-MDT in permeable microvessels, which describes blood flow by the two-fluid (Casson and Newtonian) model. The model accounts for the permeable effect of the microvessel wall and the coupling effect between the blood flow and the tissue fluid flow. The MDCPs move randomly through the microvessel, and their transport state is described by the Boltzmann equation. The variation of the CE of the MDCPs in assisted magnetic targeting, and the factors governing it, were obtained by solving the theoretical model and by experimental testing. The CE was negatively correlated with the blood flow velocity, and positively correlated with the external magnetic field intensity and the microvessel permeability. The predicted CEs of the MDCPs were consistent with the experimental results. Additionally, under the same external magnetic field, the predicted CE was 5–8% higher in the IA-MDT model than in the model ignoring the permeability effect of the microvessel wall. - Highlights: • A model of MDCPs for IA-MDT in permeable microvessels was established. • An experimental device was established, and the CE of MDCPs was measured. • The predicted CE of MDCPs was 5–8% higher in the IA-MDT model.
Dėdelė, Audrius; Miškinytė, Auksė
2015-09-01
In many countries, road traffic is one of the main sources of air pollution, associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a measure of traffic-related air pollution, with concentrations tending to be higher near highways, along busy roads and in city centers; exceedances are mainly observed at measurement stations located close to traffic. In order to assess the air quality in a city and the impact of air pollution on public health, air quality models are used. However, before a model can be used for these purposes, it is important to evaluate the accuracy of dispersion modelling, which is one of the most widely used methods. Monitoring and dispersion modelling are two components of an air quality monitoring system (AQMS), and this research makes a statistical comparison between the two. The evaluation of the Atmospheric Dispersion Modelling System (ADMS-Urban) was made by comparing monthly modelled NO2 concentrations with the data of continuous air quality monitoring stations in Kaunas city. The statistical measures of model performance were calculated for annual and monthly concentrations of NO2 for each monitoring station site. The spatial analysis was made using geographic information systems (GIS). The calculated statistical parameters indicated a good ADMS-Urban model performance for the prediction of NO2. The results of this study showed that the agreement between modelled values and observations was better for traffic monitoring stations than for the background and residential stations.
Exclusion statistics and integrable models
International Nuclear Information System (INIS)
Mashkevich, S.
1998-01-01
The definition of exclusion statistics, as given by Haldane, allows for a statistical interaction between distinguishable particles (multi-species statistics). The thermodynamic quantities for such statistics can be evaluated exactly. The explicit expressions for the cluster coefficients are presented. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models. The interesting questions of generalizing this correspondence to the higher-dimensional and the multi-species cases remain essentially open.
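Haldane's state counting can be made concrete with the standard interpolation formula W = [G + (N−1)(1−g)]! / (N!·[G − gN − (1−g)]!), which reduces to Bose counting at g = 0 and Fermi counting at g = 1. The sketch below restricts itself to integer g, and the G, N values are arbitrary.

```python
from math import comb, factorial

def exclusion_states(G, N, g):
    """Number of N-particle states in G orbitals under exclusion statistics g,
    using W = [G+(N-1)(1-g)]! / (N! [G-gN-(1-g)]!); integer g only, so the
    factorials are well defined."""
    num = G + (N - 1) * (1 - g)
    den = G - g * N - (1 - g)
    return factorial(num) // (factorial(N) * factorial(den))

G, N = 6, 3
print(exclusion_states(G, N, 0), comb(G + N - 1, N))  # bosons (g=0): 56 56
print(exclusion_states(G, N, 1), comb(G, N))          # fermions (g=1): 20 20
```

Intermediate g values interpolate between these two limits, which is the combinatorial content behind the exactly evaluable thermodynamics mentioned in the abstract.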
Energy Technology Data Exchange (ETDEWEB)
Moore, James [U.S. Army Corps of Engineers - New York District 26 Federal Plaza, New York, New York 10278 (United States); Hays, David [U.S. Army Corps of Engineers - Kansas City District 601 E. 12th Street, Kansas City, Missouri 64106 (United States); Quinn, John; Johnson, Robert; Durham, Lisa [Argonne National Laboratory, Environmental Science Division 9700 S. Cass Ave., Argonne, Illinois 60439 (United States)
2013-07-01
As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)
Exclusion statistics and integrable models
International Nuclear Information System (INIS)
Mashkevich, S.
1998-01-01
The definition of exclusion statistics that was given by Haldane admits a 'statistical interaction' between distinguishable particles (multispecies statistics). For such statistics, thermodynamic quantities can be evaluated exactly; explicit expressions are presented here for cluster coefficients. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models of the Calogero-Sutherland type. The interesting questions of generalizing this correspondence to the higher-dimensional and the multispecies cases remain essentially open; however, our results provide some hints as to searches for the models in question
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2013-12-01
To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR, and there were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65) or in diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.
Statistical Model of Extreme Shear
DEFF Research Database (Denmark)
Larsen, Gunner Chr.; Hansen, Kurt Schaldemose
2004-01-01
In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function ... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements ... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.
Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. ASIR-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP. CNR was highest with ASIR-V 60%, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 ... and ASIR 80% had the best and worst spatial resolution, respectively. ASIR-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. ASIR 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
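The contrast-to-noise ratio used in comparisons like this one is commonly computed as (ROI mean − background mean) / noise SD; a minimal sketch with invented Hounsfield-unit readings, not values from the study:

```python
def cnr(roi_mean, background_mean, noise_sd):
    """Contrast-to-noise ratio: lesion-to-background contrast divided by noise."""
    return (roi_mean - background_mean) / noise_sd

# Invented readings for one lesion under two hypothetical reconstructions.
print(round(cnr(90.0, 60.0, 9.0), 2))  # noisier reconstruction: 3.33
print(round(cnr(90.0, 60.0, 5.4), 2))  # lower noise raises CNR: 5.56
```

Because the contrast term is fixed by the anatomy, iterative reconstructions improve CNR almost entirely by shrinking the noise SD in the denominator.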
Statistical modeling for degradation data
Lio, Yuhlong; Ng, Hon; Tsai, Tzong-Ru
2017-01-01
This book focuses on the statistical aspects of the analysis of degradation data. In recent years, degradation data analysis has come to play an increasingly important role in different disciplines such as reliability, public health sciences, and finance. For example, information on products’ reliability can be obtained by analyzing degradation data. In addition, statistical modeling and inference techniques have been developed on the basis of different degradation measures. The book brings together experts engaged in statistical modeling and inference, presenting and discussing important recent advances in degradation data analysis and related applications. The topics covered are timely and have considerable potential to impact both statistics and reliability engineering.
Statistical modelling with quantile functions
Gilchrist, Warren
2000-01-01
Galton used quantiles more than a hundred years ago in describing data. Tukey and Parzen used them in the 1960s and 1970s in describing populations. Since then, the authors of many papers, both theoretical and practical, have used various aspects of quantiles in their work. Until now, however, no one had put all the ideas together to form what turns out to be a general approach to statistics. Statistical Modelling with Quantile Functions does just that. It systematically examines the entire process of statistical modelling, starting with using the quantile function to define continuous distributions. The author shows that by using this approach, it becomes possible to develop complex distributional models from simple components. A modelling kit can be developed that applies to the whole model - deterministic and stochastic components - and this kit operates by adding, multiplying, and transforming distributions rather than data. Statistical Modelling with Quantile Functions adds a new dimension to the practice of stati...
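The "modelling kit" idea — building distributions by adding and scaling quantile functions rather than manipulating data — can be sketched as follows. The skew-logistic construction shown is one standard example of such a combination; the parameter values are arbitrary.

```python
import math, random

# Component quantile functions from the "modelling kit": distributions are
# combined by adding and scaling quantile functions, not data.
Q_exp = lambda p: -math.log(1.0 - p)  # exponential: Q(p) = -ln(1-p)
Q_refl = lambda p: math.log(p)        # reflected exponential: Q(p) = ln(p)

def skew_logistic_Q(lam, eta, delta):
    """Position lam, scale eta, and a delta-weighted sum of the two tails.
    delta = 0.5 recovers the logistic quantile ln(p/(1-p)) up to scale."""
    return lambda p: lam + eta * ((1 - delta) * Q_exp(p) + delta * Q_refl(p))

Q = skew_logistic_Q(0.0, 1.0, 0.5)
print(Q(0.5))            # 0.0: the median, by symmetry at delta = 0.5
print(round(Q(0.9), 4))  # 0.5 * ln 9

rng = random.Random(1)
sample = [Q(rng.random()) for _ in range(5)]  # inverse-transform sampling
print(len(sample))
```

Because a quantile function is directly invertible sampling machinery, any model built this way can be simulated immediately by feeding it uniform random numbers, as the last two lines show.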
A Statistical Programme Assignment Model
DEFF Research Database (Denmark)
Rosholm, Michael; Staghøj, Jonas; Svarer, Michael
When treatment effects of active labour market programmes are heterogeneous in an observable way across the population, the allocation of the unemployed into different programmes becomes a particularly important issue. In this paper, we present a statistical model designed to improve the present ... duration of unemployment spells may result if a statistical programme assignment model is introduced. We discuss several issues regarding the implementation of such a system, especially the interplay between the statistical model and case workers.
Robot Trajectories Comparison: A Statistical Approach
Directory of Open Access Journals (Sweden)
A. Ansuategui
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph is provided, which helps to better understand the obtained results. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners.
Robot Trajectories Comparison: A Statistical Approach
Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph is provided, which helps to better understand the obtained results. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
Tropical geometry of statistical models.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
This article presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. Here, we address the question of how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. The Newton polytope of a statistical model plays a key role. Our results are applied to the hidden Markov model and the general Markov model on a binary tree.
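The tropicalization mentioned in this abstract can be illustrated directly: evaluating a polynomial in the (min,+) semiring replaces sums by minima and products by sums, which is how the sum-product algorithm turns into Viterbi-style optimization over negative log-weights. The polynomial below is an arbitrary toy example, not one from the paper.

```python
# Tropical (min,+) arithmetic: "addition" is min (identity +infinity),
# "multiplication" is ordinary +. A term c * x^e therefore becomes c + e*x.

def trop_poly(terms, x):
    """Tropically evaluate sum_i c_i * x^e_i, i.e. min_i (c_i + e_i * x)."""
    return min(c + e * x for c, e in terms)

# Classically p(x) = 1*x^2 + 3*x + 5; tropically p(x) = min(1+2x, 3+x, 5).
p = [(1, 2), (3, 1), (5, 0)]
print(trop_poly(p, 0))  # min(1, 3, 5) = 1
print(trop_poly(p, 4))  # min(9, 7, 5) = 5
```

The exponent vectors (2, 1, 0) are exactly the points whose convex hull is the Newton polytope of p; which term achieves the minimum as x varies is governed by that polytope, mirroring the abstract's statement about how inference solutions depend on model parameters.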
Statistical Model of Extreme Shear
DEFF Research Database (Denmark)
Hansen, Kurt Schaldemose; Larsen, Gunner Chr.
2005-01-01
In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...
Statistical Models for Social Networks
Snijders, Tom A. B.; Cook, KS; Massey, DS
2011-01-01
Statistical models for social networks as dependent variables must represent the typical network dependencies between tie variables such as reciprocity, homophily, transitivity, etc. This review first treats models for single (cross-sectionally observed) networks and then for network dynamics. For
Sensometrics: Thurstonian and Statistical Models
DEFF Research Database (Denmark)
Christensen, Rune Haubo Bojesen
This thesis is concerned with the development and bridging of Thurstonian and statistical models for sensory discrimination testing as applied in the scientific discipline of sensometrics. In sensory discrimination testing sensory differences between products are detected and quantified by the use...... and sensory discrimination testing in particular in a series of papers by advancing Thurstonian models for a range of sensory discrimination protocols in addition to facilitating their application by providing software for fitting these models. The main focus is on identifying Thurstonian models...... sensR is a package for sensory discrimination testing with Thurstonian models, and ordinal supports analysis of ordinal data with cumulative link (mixed) models. While sensR is closely connected to the sensometrics field, the ordinal package has developed into a generic statistical package applicable......
Directory of Open Access Journals (Sweden)
Drzewiecki Wojciech
2017-12-01
Full Text Available We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results showed that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set at one or three percent. For the five percent change threshold, most algorithms did not ensure that the accuracy of the change map was higher than that of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.
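The accuracy measures named in this abstract (RMSE, MAE and R2) are standard; a minimal sketch of how they are computed, on invented imperviousness fractions rather than the study's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy sub-pixel imperviousness fractions (invented, not the study's data)
obs = np.array([0.10, 0.25, 0.40, 0.60, 0.80])
est = np.array([0.12, 0.22, 0.45, 0.55, 0.85])
print(rmse(obs, est), mae(obs, est), r2(obs, est))
```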
Classical model of intermediate statistics
International Nuclear Information System (INIS)
Kaniadakis, G.
1994-01-01
In this work we present a classical kinetic model of intermediate statistics. In the case of Brownian particles we show that the Fermi-Dirac (FD) and Bose-Einstein (BE) distributions can be obtained, just as the Maxwell-Boltzmann (MB) distribution, as steady states of a classical kinetic equation that intrinsically takes into account an exclusion-inclusion principle. In our model the intermediate statistics are obtained as steady states of a system of coupled nonlinear kinetic equations, where the coupling constants are the transmutational potentials η_κκ'. We show that, besides the FD-BE intermediate statistics extensively studied from the quantum point of view, we can also study the MB-FD and MB-BE ones. Moreover, our model allows us to treat the three-state mixing FD-MB-BE intermediate statistics. For boson and fermion mixing in a D-dimensional space, we obtain a family of FD-BE intermediate statistics by varying the transmutational potential η_BF. This family contains, as a particular case when η_BF = 0, the quantum statistics recently proposed by L. Wu, Z. Wu, and J. Sun [Phys. Lett. A 170, 280 (1992)]. When we consider the two-dimensional FD-BE statistics, we derive an analytic expression for the fraction of fermions. When the temperature T→∞, the system is composed of an equal number of bosons and fermions, regardless of the value of η_BF. On the contrary, when T=0, η_BF becomes important and, according to its value, the system can be completely bosonic or fermionic, or composed of both bosons and fermions.
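For reference, the three limiting steady-state distributions named above (MB, FD, BE) differ only in a constant term in the denominator of the mean occupation number; a minimal numerical illustration (the energy, chemical potential and temperature values are arbitrary, not from the paper):

```python
import math

def occupation(e, mu, kT, a):
    """Mean occupation number n(e) = 1 / (exp((e - mu)/kT) + a).

    a = +1 gives Fermi-Dirac, a = -1 Bose-Einstein, a = 0 Maxwell-Boltzmann.
    """
    return 1.0 / (math.exp((e - mu) / kT) + a)

# Arbitrary illustrative values: energy 1.0, chemical potential 0, kT = 0.5
e, mu, kT = 1.0, 0.0, 0.5
n_fd = occupation(e, mu, kT, +1)
n_mb = occupation(e, mu, kT, 0)
n_be = occupation(e, mu, kT, -1)
# At the same (e, mu, kT): FD occupation lies below MB, BE lies above
print(n_fd, n_mb, n_be)
```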
Textual information access statistical models
Gaussier, Eric
2013-01-01
This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.
Improved model for statistical alignment
Energy Technology Data Exchange (ETDEWEB)
Miklos, I.; Toroczkai, Z. (Zoltan)
2001-01-01
The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.
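The finite Markov processes used for substitution can be illustrated with the simplest such model, Jukes-Cantor (JC69); this sketch is a generic illustration of Markov substitution modeling, not the paper's insertion-deletion algorithm:

```python
import math

def jc69_same(alpha, t):
    """Jukes-Cantor: probability a site shows the same nucleotide after
    time t, with per-site substitution rate parameter alpha."""
    return 0.25 + 0.75 * math.exp(-4.0 * alpha * t)

def jc69_diff(alpha, t):
    """Probability the site shows one specific different nucleotide."""
    return 0.25 - 0.25 * math.exp(-4.0 * alpha * t)

alpha, t = 0.1, 1.0
p_same = jc69_same(alpha, t)
# The four outcome probabilities (same + three different) sum to one
total = p_same + 3 * jc69_diff(alpha, t)
print(p_same, total)
```

As t grows, p_same decays toward 1/4, the stationary probability of each nucleotide under this model.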
Jain, Shekhar; Ginzburg, Valeriy V; Jog, Prasanna; Weinhold, Jeffrey; Srivastava, Rakesh; Chapman, Walter G
2009-07-28
The interaction between two polymer grafted surfaces is important in many applications, such as nanocomposites, colloid stabilization, and polymer alloys. In our previous work [Jain et al., J. Chem. Phys. 128, 154910 (2008)], we showed that interfacial statistical associating fluid density theory (iSAFT) successfully calculates the structure of grafted polymer chains in the absence/presence of a free polymer. In the current work, we have applied this density functional theory to calculate the force of interaction between two such grafted monolayers in implicit good solvent conditions. In particular, we have considered the case where the segment sizes of the free (σ_f) and grafted (σ_g) polymers are different. The interactions between the two monolayers in the absence of the free polymer are always repulsive. However, in the presence of the free polymer, the force either can be purely repulsive or can have an attractive minimum depending upon the relative chain lengths of the free (N_f) and grafted (N_g) polymers. The attractive minimum is observed only when the ratio α = N_f/N_g is greater than a critical value. We find that these critical values of α satisfy the following scaling relation: ρ_g √(N_g) β³ ∝ α^(-λ), where β = σ_f/σ_g and λ is the scaling exponent. For β = 1, i.e., the same segment sizes of the free and grafted polymers, this scaling relation is in agreement with those from previous theoretical studies using self-consistent field theory (SCFT). Detailed comparisons between iSAFT and SCFT are made for the structures of the monolayers and their forces of interaction. These comparisons lead to interesting implications for the modeling of nanocomposite thermodynamics.
Active Learning with Statistical Models.
1995-01-01
David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. MIT Artificial Intelligence Laboratory, A.I. Memo No. 1522; Center for Biological and Computational Learning, Department of Brain and Cognitive Sciences, C.B.C.L. Paper No. 110. January 9, 1995. Supported by ASC-9217041 and NSF CDA-9309300. Keywords: active learning, queries, locally weighted regression, LOESS, mixtures of Gaussians.
Statistical modelling of citation exchange between statistics journals.
Varin, Cristiano; Cattelan, Manuela; Firth, David
2016-01-01
Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross-citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation 'export scores' within the discipline of statistics.
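The journal "export scores" mentioned above derive from a Stigler-type paired-comparison treatment of cross-citation counts; a minimal sketch fitting a Bradley-Terry model by the standard MM iteration on an invented cross-citation table (the three-journal setup and the counts are made up for illustration, and this is not the paper's exact model):

```python
import numpy as np

def bradley_terry(wins, n_iter=500):
    """Fit Bradley-Terry strengths by the MM algorithm.

    wins[i, j] = number of "wins" of item i over item j; in the
    export-score reading, citations of journal i made by journal j.
    """
    n = wins.shape[0]
    pi = np.ones(n) / n
    total = wins + wins.T              # comparisons within each pair
    w = wins.sum(axis=1)               # total wins of each item
    for _ in range(n_iter):
        denom = (total / (pi[:, None] + pi[None, :])).sum(axis=1)
        pi = w / denom
        pi = pi / pi.sum()             # fix the arbitrary scale
    return pi

# Invented 3x3 cross-citation table (diagonal ignored)
cites = np.array([[0.0, 30.0, 40.0],
                  [10.0, 0.0, 25.0],
                  [5.0, 15.0, 0.0]])
scores = bradley_terry(cites)
print(scores)  # journal 0 "exports" the most citations here
```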
Statistical Models of Adaptive Immune populations
Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry
The availability of large (10⁴-10⁶ sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.
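The entropy of an inferred distribution mentioned above is ordinary Shannon entropy; a minimal sketch on an invented distribution over a hidden variable such as the number of insertions:

```python
import math

def shannon_entropy(probs, base=2.0):
    """Shannon entropy H = -sum p*log(p) of a discrete distribution."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log(p, base) for p in probs if p > 0.0)

# Toy distribution over, e.g., numbers of insertions (invented values)
p_insertions = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p_insertions))  # 1.75 bits
```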
GIGMF - A statistical model program
International Nuclear Information System (INIS)
Vladuca, G.; Deberth, C.
1978-01-01
The program GIGMF computes the differential and integrated statistical model cross sections for reactions proceeding through a compound nuclear stage. The computational method is based on the Hauser-Feshbach-Wolfenstein theory, modified to include the modern version of Tepel et al. Although the program was written for a PDP-15 computer, with 16K high speed memory, many reaction channels can be taken into account, with the following restrictions: the projectile spin must be less than 2, and the maximum spin momenta of the compound nucleus cannot be greater than 10. These restrictions are due solely to the storage allotments and may be easily relaxed. The energy of the impinging particle, the target and projectile masses, the spins and parities of the projectile, target, emergent and residual nuclei, the maximum orbital momentum, and the transmission coefficients for each reaction channel are the input parameters of the program. (author)
Statistical modeling of Earth's plasmasphere
Veibell, Victoir
The behavior of plasma near Earth's geosynchronous orbit is of vital importance to both satellite operators and magnetosphere modelers because it also has a significant influence on energy transport, ion composition, and induced currents. The system is highly complex in both time and space, making the forecasting of extreme space weather events difficult. This dissertation examines the behavior and statistical properties of plasma mass density near geosynchronous orbit by using both linear and nonlinear models, as well as epoch analyses, in an attempt to better understand the physical processes that precipitate and drive its variations. It is shown that while equatorial mass density does vary significantly on an hourly timescale when a drop in the disturbance storm time index (Dst) was observed, it does not vary significantly between the day of a Dst event onset and the day immediately following. It is also shown that increases in equatorial mass density were not, on average, preceded or followed by any significant change in the examined solar wind or geomagnetic variables, including Dst, despite prior results that considered a few selected events and found a notable influence. It is verified that equatorial mass density and solar activity via the F10.7 index have a strong correlation, which is stronger over longer timescales such as 27 days than it is over an hourly timescale. It is then shown that this connection seems to affect the behavior of equatorial mass density most during periods of strong solar activity, leading to large mass density reactions to Dst drops for high values of F10.7. It is also shown that equatorial mass density behaves differently before and after events based on the value of F10.7 at the onset of an equatorial mass density event or a Dst event, and that a southward interplanetary magnetic field at onset leads to slowed mass density growth after event onset. These behavioral differences provide insight into how solar and geomagnetic
Directory of Open Access Journals (Sweden)
Abe D Hofman
Full Text Available We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and an information-integration perspective. These models are applied to two different datasets: a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online dataset, in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion of sequential rule-based and information-integration perspectives of cognitive development.
Probing NWP model deficiencies by statistical postprocessing
DEFF Research Database (Denmark)
Rosgaard, Martin Haubjerg; Nielsen, Henrik Aalborg; Nielsen, Torben S.
2016-01-01
The objective in this article is twofold. On one hand, a Model Output Statistics (MOS) framework for improved wind speed forecast accuracy is described and evaluated. On the other hand, the approach explored identifies unintuitive explanatory value from a diagnostic variable in an operational....... Based on the statistical model candidates inferred from the data, the lifted index NWP model diagnostic is consistently found among the NWP model predictors of the best performing statistical models across sites....
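In its simplest form, a Model Output Statistics correction regresses observed wind speed on NWP model outputs, which is where a diagnostic such as the lifted index can enter as a predictor; a minimal least-squares sketch with invented data (not the article's framework or measurements):

```python
import numpy as np

# Invented training sample: columns are raw NWP wind speed forecast (m/s)
# and the lifted index diagnostic; y is the observed wind speed
X = np.array([[5.0, -2.0], [7.0, 1.0], [9.0, 3.0], [6.0, 0.0], [8.0, 2.0]])
y = np.array([5.8, 7.1, 8.5, 6.4, 7.9])

# Add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def mos_correct(raw_speed, lifted_index):
    """Corrected forecast = b0 + b1 * raw_speed + b2 * lifted_index."""
    return coef[0] + coef[1] * raw_speed + coef[2] * lifted_index

print(mos_correct(7.5, 1.5))
```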
Directory of Open Access Journals (Sweden)
George P. Papaioannou
2016-08-01
Full Text Available In this work we propose a new hybrid model, a combination of the manifold learning Principal Components (PC) technique and traditional multiple regression (PC-regression for short), for short- and medium-term forecasting of daily, aggregated, day-ahead, electricity system-wide load in the Greek Electricity Market for the period 2004-2014. PC-regression is shown to effectively capture the intraday, intraweek and annual patterns of load. We compare our model with a number of classical statistical approaches (Holt-Winters exponential smoothing and its generalizations, the Error-Trend-Seasonal (ETS) models, the Seasonal Autoregressive Moving Average with exogenous variables (SARMAX) model, and the Seasonal Autoregressive Integrated Moving Average with eXogenous variables (SARIMAX) model), as well as with the more sophisticated artificial intelligence models, Artificial Neural Networks (ANN) and Support Vector Machines (SVM). Using a number of criteria for measuring the quality of the generated in- and out-of-sample forecasts, we have concluded that the forecasts of our hybrid model outperform those generated by the other models, with the SARMAX model being the next best performing approach, giving comparable results. Our approach contributes to studies aimed at providing more accurate and reliable load forecasting, a prerequisite for efficient management of modern power systems.
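The PC-regression idea (project the predictors onto their leading principal components, then regress on the component scores) can be sketched as follows; the data, dimensions and variable names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 6, 2                      # samples, predictors, retained PCs

# Invented predictor matrix; the first two columns carry most variance
X = rng.normal(size=(n, p))
X[:, 0] *= 3.0
X[:, 1] *= 2.0
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)  # toy "load"

# Principal components via SVD of the centered predictors
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                   # projections onto the first k PCs

# Ordinary least squares of the target on the PC scores (plus intercept)
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coef
print(np.corrcoef(fitted, y)[0, 1])      # high: the top PCs carry the signal
```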
Urban pavement surface temperature. Comparison of numerical and statistical approach
Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia
2015-04-01
The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, to manage snow plowing and salting of roads. Such forecasts mainly rely on numerical models based on a description of the energy balance between the atmosphere, the buildings and the pavement, with a canyon configuration. Nevertheless, there is a specific need for the physical description and the numerical implementation of traffic in the energy flux balance. This traffic was originally considered as a constant. Many changes were made in a numerical model to describe as accurately as possible the traffic effects on this urban energy balance, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Some experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on urban pavement surfaces. Based on meteorological data, corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated good agreement between field measurements and the forecast from the numerical model based on this energy balance approach. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, with data from thermal mapping using infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained in the specific case of the urban configuration, with traffic taken into account in the measurements used for the statistical analysis. A comparison between results from the numerical model based on the energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
cocor: a comprehensive solution for the statistical comparison of correlations.
Directory of Open Access Journals (Sweden)
Birk Diedenhofen
Full Text Available A valid comparison of the magnitude of two correlations requires researchers to directly contrast the correlations using an appropriate statistical test. In many popular statistics packages, however, tests for the significance of the difference between correlations are missing. To close this gap, we introduce cocor, a free software package for the R programming language. The cocor package covers a broad range of tests including the comparisons of independent and dependent correlations with either overlapping or nonoverlapping variables. The package also includes an implementation of Zou's confidence interval for all of these comparisons. The platform-independent cocor package enhances the R statistical computing environment and is available for scripting. Two different graphical user interfaces (a plugin for RKWard and a web interface) make cocor a convenient and user-friendly tool.
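The simplest of the tests cocor covers, comparing correlations from two independent samples, rests on Fisher's r-to-z transformation; a minimal Python sketch of that one test (cocor itself is an R package, and the sample values here are invented):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided test of H0: rho1 == rho2 for correlations from two
    independent samples, via Fisher's r-to-z transformation."""
    z1 = math.atanh(r1)
    z2 = math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    # Two-sided p-value from the standard normal: P(|Z| > z) = erfc(z/sqrt(2))
    p = math.erfc(abs(z) / math.sqrt(2.0))
    return z, p

# Invented example: r = .50 (n = 103) versus r = .20 (n = 103)
z, p = fisher_z_test(0.5, 103, 0.2, 103)
print(round(z, 3), round(p, 4))
```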
MISR Aerosol Product Attributes and Statistical Comparisons with MODIS
Kahn, Ralph A.; Nelson, David L.; Garay, Michael J.; Levy, Robert C.; Bull, Michael A.; Diner, David J.; Martonchik, John V.; Paradise, Susan R.; Hansen, Earl G.; Remer, Lorraine A.
2009-01-01
In this paper, Multi-angle Imaging SpectroRadiometer (MISR) aerosol product attributes are described, including geometry and algorithm performance flags. Actual retrieval coverage is mapped and explained in detail using representative global monthly data. Statistical comparisons are made with coincident aerosol optical depth (AOD) and Angstrom exponent (ANG) retrieval results from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument. The relationship between these results and the ones previously obtained for MISR and MODIS individually, based on comparisons with coincident ground-truth observations, is established. For the data examined, MISR and MODIS each obtain successful aerosol retrievals about 15% of the time, and coincident MISR-MODIS aerosol retrievals are obtained for about 6%-7% of the total overlap region. Cloud avoidance, glint and oblique-Sun exclusions, and other algorithm physical limitations account for these results. For both MISR and MODIS, successful retrievals are obtained for over 75% of locations where attempts are made. Where coincident AOD retrievals are obtained over ocean, the MISR-MODIS correlation coefficient is about 0.9; over land, the correlation coefficient is about 0.7. Differences are traced to specific known algorithm issues or conditions. Over-ocean ANG comparisons yield a correlation of 0.67, showing consistency in distinguishing aerosol air masses dominated by coarse-mode versus fine-mode particles. Sampling considerations imply that care must be taken when assessing monthly global aerosol direct radiative forcing and AOD trends with these products, but they can be used directly for many other applications, such as regional AOD gradient and aerosol air mass type mapping and aerosol transport model validation. Users are urged to take seriously the published product data-quality statements.
International Nuclear Information System (INIS)
Pulkkinen, U.
2004-04-01
The report describes a simple comparison of two CCF models, the ECLM and the Beta-model. The objective of the comparison is to identify differences in the results of the models by applying them to some simple test data cases. The comparison focuses mainly on theoretical aspects of the above-mentioned CCF models. The properties of the model parameter estimates in the data cases are also discussed. The practical aspects of using and estimating CCF models in a real PSA context (e.g. the data interpretation, properties of computer tools, the model documentation) are not discussed in the report. Similarly, the qualitative CCF analyses needed in using the models are not discussed in the report. (au)
Statistical modelling of fish stocks
DEFF Research Database (Denmark)
Kvist, Trine
1999-01-01
for modelling the dynamics of a fish population is suggested. A new approach is introduced to analyse the sources of variation in age composition data, which is one of the most important sources of information in the cohort-based models for estimation of stock abundances and mortalities. The approach combines...... and it is argued that an approach utilising stochastic differential equations might be advantageous in fish stock assessments.
Cierniak, Robert; Lorent, Anna
2016-09-01
The main aim of this paper is to investigate properties of our originally formulated statistical model-based iterative approach applied to the image reconstruction from projections problem which are related to its conditioning, and, in this manner, to prove a superiority of this approach over ones recently used by other authors. The reconstruction algorithm based on this conception uses a maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kafle, Gopi Krishna; Chen, Lide
2016-02-01
There is a lack of literature reporting the methane potential of several livestock manures under the same anaerobic digestion conditions (same inoculum, temperature, time, and size of the digester). To the best of our knowledge, no previous study has reported biochemical methane potential (BMP) predicting models developed and evaluated solely using the results of at least five different livestock manure tests. The goal of this study was to evaluate the BMP of five different livestock manures (dairy manure (DM), horse manure (HM), goat manure (GM), chicken manure (CM) and swine manure (SM)) and to predict the BMP using different statistical models. Nutrients of the digested manures were also monitored. The BMP tests were conducted under mesophilic temperatures with a manure loading factor of 3.5 g volatile solids (VS)/L and a feed to inoculum ratio (F/I) of 0.5. Single-variable and multiple-variable regression models were developed using manure total carbohydrate (TC), crude protein (CP), total fat (TF), lignin (LIG) and acid detergent fiber (ADF), and measured BMP data. Three different kinetic models (first-order kinetic model, modified Gompertz model, and Chen and Hashimoto model) were evaluated for BMP predictions. The BMPs of DM, HM, GM, CM and SM were measured to be 204, 155, 159, 259, and 323 mL/g VS, respectively, and the VS removals were calculated to be 58.6%, 52.9%, 46.4%, 81.4% and 81.4%, respectively. The technical digestion time (T80-90, the time required to produce 80-90% of total biogas production) for DM, HM, GM, CM and SM was calculated to be in the ranges of 19-28, 27-37, 31-44, 13-18 and 12-17 days, respectively. The effluents from the HM showed the lowest nitrogen, phosphorus and potassium concentrations. The effluents from the CM digesters showed the highest nitrogen and phosphorus concentrations, and digested SM showed the highest potassium concentration. Based on the results of the regression analysis, the model using the variable of LIG showed the best (R(2
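Two of the kinetic models named above have simple closed forms: the first-order model B(t) = B0(1 - exp(-kt)) and the modified Gompertz model B(t) = B0 exp(-exp(Rmax·e/B0·(λ - t) + 1)). A minimal sketch evaluating both, with invented parameter values rather than the study's fits:

```python
import math

def first_order(t, b0, k):
    """First-order kinetic model: B(t) = B0 * (1 - exp(-k t))."""
    return b0 * (1.0 - math.exp(-k * t))

def modified_gompertz(t, b0, rmax, lam):
    """Modified Gompertz: B(t) = B0 * exp(-exp(rmax*e/B0 * (lam - t) + 1)),
    with maximum production rate rmax and lag phase lam."""
    return b0 * math.exp(-math.exp(rmax * math.e / b0 * (lam - t) + 1.0))

# Invented parameters: ultimate BMP 300 mL/g VS, k = 0.15/day,
# rmax = 25 mL/g VS/day, lag phase 2 days
b0, k, rmax, lam = 300.0, 0.15, 25.0, 2.0
t90 = -math.log(0.10) / k   # time to reach 90% of B0 under first-order kinetics
print(round(t90, 1), first_order(t90, b0, k), modified_gompertz(30.0, b0, rmax, lam))
```

The t90 quantity mirrors the technical digestion time T80-90 reported in the abstract, here derived analytically for the first-order model.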
Statistical lung model for microdosimetry
International Nuclear Information System (INIS)
Fisher, D.R.; Hadley, R.T.
1984-03-01
To calculate the microdosimetry of plutonium in the lung, a mathematical description is needed of lung tissue microstructure that defines source-site parameters. Beagle lungs were expanded using a glutaraldehyde fixative at 30 cm water pressure. Tissue specimens, five microns thick, were stained with hematoxylin and eosin then studied using an image analyzer. Measurements were made along horizontal lines through the magnified tissue image. The distribution of air space and tissue chord lengths and locations of epithelial cell nuclei were recorded from about 10,000 line scans. The distribution parameters constituted a model of lung microstructure for predicting the paths of random alpha particle tracks in the lung and the probability of traversing biologically sensitive sites. This lung model may be used in conjunction with established deposition and retention models for determining the microdosimetry in the pulmonary lung for a wide variety of inhaled radioactive materials
Statistical modelling for ship propulsion efficiency
DEFF Research Database (Denmark)
Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole
2012-01-01
This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...
Actuarial statistics with generalized linear mixed models
Antonio, K.; Beirlant, J.
2007-01-01
Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics
Spherical Process Models for Global Spatial Statistics
Jeong, Jaehong; Jun, Mikyoung; Genton, Marc G.
2017-01-01
Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture
Franceschi, Massimo; Caffarra, Paolo; Savarè, Rita; Cerutti, Renata; Grossi, Enzo
2011-01-01
The early differentiation of Alzheimer's disease (AD) from frontotemporal dementia (FTD) may be difficult. The Tower of London (ToL), thought to assess executive functions such as planning and visuo-spatial working memory, could help in this purpose. Twenty-two Dementia Centers consecutively recruited patients with early FTD or AD. ToL performances of these groups were analyzed using both conventional statistical approaches and Artificial Neural Networks (ANNs) modelling. Ninety-four non-aphasic FTD and 160 AD patients were recruited. ToL Accuracy Score (AS) significantly (p advanced ANNs developed by the Semeion Institute. The best ANNs were selected and submitted to ROC curves. The non-linear model was able to discriminate FTD from AD with an average AUC for 7 independent trials of 0.82. The use of hidden information contained in the different items of the ToL and the non-linear processing of the data through ANNs allow high discrimination between FTD and AD in individual patients.
Statistical Models and Methods for Lifetime Data
Lawless, Jerald F
2011-01-01
Praise for the First Edition"An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ."-Choice"This is an important book, which will appeal to statisticians working on survival analysis problems."-Biometrics"A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook."-Statistics in MedicineThe statistical analysis of lifetime or response time data is a key tool in engineering,
Statistics and the shell model
International Nuclear Information System (INIS)
Weidenmueller, H.A.
1985-01-01
Starting with N. Bohr's paper on compound-nucleus reactions, we confront regular dynamical features and chaotic motion in nuclei. The shell-model and, more generally, mean-field theories describe average nuclear properties which are thus identified as regular features. The fluctuations about the average show chaotic behaviour of the same type as found in classical chaotic systems upon quantisation. These features are therefore generic and quite independent of the specific dynamics of the nucleus. A novel method to calculate fluctuations is discussed, and the results of this method are described. (orig.)
Sample sizes and model comparison metrics for species distribution models
B.B. Hanberry; H.S. He; D.C. Dey
2012-01-01
Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
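The kappa statistic referred to above measures agreement between predicted and observed presence/absence beyond what chance alone would produce. A minimal sketch of its computation for binary maps (all data below are hypothetical, plain Python):

```python
# Cohen's kappa for two binary presence/absence vectors (toy data).
def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    # observed agreement
    po = sum(1 for a, b in zip(y_true, y_pred) if a == b) / n
    # expected chance agreement from the marginal frequencies
    p1t = sum(y_true) / n
    p1p = sum(y_pred) / n
    pe = p1t * p1p + (1 - p1t) * (1 - p1p)
    return (po - pe) / (1 - pe)

obs  = [1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical observed presences
pred = [1, 0, 0, 0, 1, 0, 1, 1]   # hypothetical model predictions
print(cohens_kappa(obs, pred))    # → 0.5
```

The controversy the abstract alludes to stems partly from kappa's dependence on prevalence, which this chance-correction term makes explicit.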
International Nuclear Information System (INIS)
Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk
2014-01-01
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, as well as average wall thickness. The accuracy of airway measurements for each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, decreasing in the order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
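The emphysema index used in this study is simply the percentage of lung voxels falling below a Hounsfield-unit threshold. A minimal sketch of that computation, assuming a flattened list of HU values (toy numbers, not the study's data):

```python
# Emphysema index: percentage of lung voxels below a HU threshold (toy data).
def emphysema_index(hu_voxels, threshold=-950):
    low = sum(1 for v in hu_voxels if v < threshold)
    return 100.0 * low / len(hu_voxels)

# hypothetical HU values for ten lung voxels
voxels = [-980, -960, -940, -900, -870, -990, -820, -750, -955, -400]
print(emphysema_index(voxels))  # → 40.0 (4 of 10 voxels below -950 HU)
```

Because the index counts voxels near a hard threshold, the noise characteristics of the reconstruction algorithm shift it directly, which is why FBP, ASIR and MBIR yield different EIs from the same raw data.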
Lewis, Rebecca; Liddick, Sean; Spyrou, Artemis; Crider, Benjamin; Dombos, Alexander; Naqvi, Farheen; Prokop, Christopher; Quinn, Stephen; Larsen, Ann-Cecilie; Crespo Campo, Lucia; Guttormsen, Magne; Renstrom, Therese; Siem, Sunniva; Bleuel, Darren; Couture, Aaron; Mosby, Shea; Perdikakis, George
2017-09-01
A majority of the abundances of the elements above iron is produced by neutron capture reactions, and, in explosive stellar processes, many of these reactions take place on unstable nuclei. Direct neutron capture experiments can only be performed on stable and long-lived nuclei, requiring indirect methods for the remaining isotopes. Statistical neutron capture can be described using the nuclear level density (NLD), the γ strength function (γSF), and an optical model. The NLD and γSF can be obtained using the β-Oslo method. The NLD and γSF were recently determined for 74Zn using the β-Oslo method, and were used in both TALYS and CoH to calculate the 73Zn(n, γ)74Zn neutron capture cross section. The cross sections calculated in TALYS and CoH are expected to be identical if the inputs for both codes are the same; however, even after a thorough investigation of the inputs for the 73Zn(n, γ)74Zn reaction, a factor-of-two discrepancy between the two codes remains.
International Nuclear Information System (INIS)
Kim, Jin Hyeok; Choo, Ki Seok; Moon, Tae Yong; Lee, Jun Woo; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yun, Myeong-Ja; Jeong, Dong Wook; Lim, Soo Jin
2016-01-01
To evaluate the subjective and objective qualities of computed tomography (CT) venography images at 80 kVp using model-based iterative reconstruction (MBIR) and to compare these with those of filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) using the same CT data sets. Forty-four patients (mean age: 56.1 ± 18.1) who underwent 80 kVp CT venography (CTV) for the evaluation of deep vein thrombosis (DVT) during 4 months were enrolled in this retrospective study. The same raw data were reconstructed using FBP, ASIR, and MBIR. Objective and subjective image analyses were performed at the inferior vena cava (IVC), femoral vein, and popliteal vein. The mean CNR of MBIR was significantly greater than those of FBP and ASIR, and images reconstructed using MBIR had significantly lower objective image noise (p < .001). Subjective image quality and confidence in detecting DVT were significantly greater with MBIR than with FBP and ASIR (p < .005), and MBIR had the lowest score for subjective image noise (p < .001). CTV at 80 kVp with MBIR was superior to FBP and ASIR regarding subjective and objective image qualities. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)
2014-04-15
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, as well as average wall thickness. The accuracy of airway measurements for each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, decreasing in the order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
Ghezelbash, Reza; Maghsoudi, Abbas
2018-05-01
The delineation of populations of stream sediment geochemical data is a crucial task in regional exploration surveys. In this contribution, uni-element stream sediment geochemical data of Cu, Au, Mo, and Bi have been subjected to two reliable anomaly-background separation methods, namely, the concentration-area (C-A) fractal and the U-spatial statistics methods to separate geochemical anomalies related to porphyry-type Cu mineralization in northwest Iran. The quantitative comparison of the delineated geochemical populations using the modified success-rate curves revealed the superiority of the U-spatial statistics method over the fractal model. Moreover, geochemical maps of investigated elements revealed strongly positive correlations between strong anomalies and Oligocene-Miocene intrusions in the study area. Therefore, follow-up exploration programs should focus on these areas.
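The concentration-area (C-A) fractal method mentioned above separates geochemical populations via breaks in the log-log relationship between a concentration threshold and the area enclosed by values at or above it. A toy sketch of computing C-A pairs (hypothetical Cu values on a unit grid; the study's data are not reproduced here):

```python
# Concentration-area (C-A) pairs: area A(c) occupied by cells with
# concentration >= c. Breaks in slope on the log-log plot separate
# background from anomaly populations (all values are hypothetical).
import math

grid = [12, 15, 9, 14, 11, 13, 10, 16, 95, 120,
        8, 14, 12, 11, 150, 13, 9, 10, 12, 88]   # toy Cu values, ppm
cell_area = 1.0                                   # km^2 per grid cell

thresholds = [8, 16, 32, 64, 128]
ca_pairs = [(c, cell_area * sum(1 for v in grid if v >= c))
            for c in thresholds]
for c, a in ca_pairs:
    print(math.log10(c), math.log10(a))
```

Fitting straight-line segments to these points and locating their intersections is what yields the anomaly-background separation thresholds; the U-spatial statistics method, by contrast, scores each location against its spatial neighbourhood.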
Bayesian models: A statistical primer for ecologists
Hobbs, N. Thompson; Hooten, Mevin B.
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. It presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.
Statistical Model-Based Face Pose Estimation
Institute of Scientific and Technical Information of China (English)
GE Xinliang; YANG Jie; LI Feng; WANG Huahua
2007-01-01
A robust face pose estimation approach is proposed in which a face shape statistical model is used and pose parameters are represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using only a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.
Uncertainty the soul of modeling, probability & statistics
Briggs, William
2016-01-01
This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...
Automated statistical modeling of analytical measurement systems
International Nuclear Information System (INIS)
Jacobson, J.J.
1992-01-01
The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.
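The bias and precision described above can be estimated directly from repeated measurements of a control standard whose true value is known. A minimal sketch with hypothetical numbers (not RAL data; a real program would model bias as a function of concentration level):

```python
# Bias (systematic offset) and precision (repeatability) of a measurement
# system from control-standard results. All numbers are hypothetical.
import statistics

known_value = 10.00                               # certified level
measured = [10.12, 9.98, 10.05, 10.21, 9.95, 10.09]  # blind analyses

bias = statistics.mean(measured) - known_value    # systematic bias
precision = statistics.stdev(measured)            # 1-s.d. repeatability
print(f"bias={bias:.3f}, precision={precision:.3f}")
```

Repeating this over several certified levels gives the set of equations the abstract calls a statistical model: bias and precision expressed across the measurement system's working range.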
Energy Technology Data Exchange (ETDEWEB)
Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lim, Sang Moo [KIRAMS, Seoul (Korea, Republic of)
2005-07-01
Imaging research on the brain of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible due to the larger size of its brain compared with that of a mouse or rat. In this study, we established procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain, and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted using a masking and threshold method to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain due to the removal of background activity. When the cerebral glucose metabolism of deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats even at a less stringent significance level (P<0.05). ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining cerebral glucose metabolism in a cat cortical deafness model.
International Nuclear Information System (INIS)
Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo; Lim, Sang Moo
2005-01-01
Imaging research on the brain of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible due to the larger size of its brain compared with that of a mouse or rat. In this study, we established procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain, and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted using a masking and threshold method to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain due to the removal of background activity. When the cerebral glucose metabolism of deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats even at a less stringent significance level (P<0.05). ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining cerebral glucose metabolism in a cat cortical deafness model.
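The voxel-wise comparison described above amounts to computing a two-sample t statistic independently at every voxel of the normalized images. A toy sketch with 4-voxel "images" and hypothetical values (real SPM analyses add spatial smoothing, global-count adjustment, and multiple-comparison control):

```python
# Voxel-wise Welch t statistics between two groups of toy "images".
# All intensity values are hypothetical.
import math, statistics

def t_stat(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

normal = [[1.0, 1.1, 0.9, 1.0], [1.2, 1.0, 1.0, 0.9],
          [0.9, 1.1, 1.0, 1.1], [1.1, 0.9, 1.0, 1.0]]
deaf   = [[0.6, 1.0, 1.0, 1.0], [0.5, 1.1, 0.9, 1.0],
          [0.7, 0.9, 1.0, 1.1], [0.6, 1.0, 1.1, 0.9]]

# one t value per voxel; voxel 0 plays the role of an "auditory" voxel
t_map = [t_stat([img[v] for img in normal], [img[v] for img in deaf])
         for v in range(4)]
print(t_map)
```

Thresholding such a t map at the chosen significance level is what produces the clusters of reduced uptake the study reports in auditory cortex, caudate head, and thalamus.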
International Nuclear Information System (INIS)
Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni
2012-01-01
To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni [University of Tokyo, Department of Radiology, Graduate School of Medicine, Bunkyo-ku, Tokyo (Japan)
2012-08-15
To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)
Topology for statistical modeling of petascale data.
Energy Technology Data Exchange (ETDEWEB)
Pascucci, Valerio (University of Utah, Salt Lake City, UT); Mascarenhas, Ajith Arthur; Rusek, Korben (Texas A& M University, College Station, TX); Bennett, Janine Camille; Levine, Joshua (University of Utah, Salt Lake City, UT); Pebay, Philippe Pierre; Gyulassy, Attila (University of Utah, Salt Lake City, UT); Thompson, David C.; Rojas, Joseph Maurice (Texas A& M University, College Station, TX)
2011-07-01
This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.
Challenges in dental statistics: data and modelling
Matranga, D.; Castiglia, P.; Solinas, G.
2013-01-01
The aim of this work is to present the reflections and proposals derived from the first Workshop of the SISMEC STATDENT working group on statistical methods and applications in dentistry, held in Ancona (Italy) on 28th September 2011. STATDENT began as a forum of comparison and discussion for statisticians working in the field of dental research in order to suggest new and improve existing biostatistical and clinical epidemiological methods. During the meeting, we dealt with very important to...
Daily precipitation statistics in regional climate models
DEFF Research Database (Denmark)
Frei, Christoph; Christensen, Jens Hesselbjerg; Déqué, Michel
2003-01-01
An evaluation is undertaken of the statistics of daily precipitation as simulated by five regional climate models using comprehensive observations in the region of the European Alps. Four limited area models and one variable-resolution global model are considered, all with a grid spacing of 50 km...
Infinite Random Graphs as Statistical Mechanical Models
DEFF Research Database (Denmark)
Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria
2011-01-01
We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...
Statistical comparison of the geometry of second-phase particles
Energy Technology Data Exchange (ETDEWEB)
Benes, Viktor, E-mail: benesv@karlin.mff.cuni.cz [Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovska 83, 186 75 Prague 8-Karlin (Czech Republic); Lechnerova, Radka, E-mail: radka.lech@seznam.cz [Private College on Economical Studies, Ltd., Lindnerova 575/1, 180 00 Prague 8-Liben (Czech Republic); Klebanov, Lev [Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovska 83, 186 75 Prague 8-Karlin (Czech Republic); Slamova, Margarita, E-mail: slamova@vyzkum-kovu.cz [Research Institute for Metals, Ltd., Panenske Brezany 50, 250 70 Odolena Voda (Czech Republic); Slama, Peter [Research Institute for Metals, Ltd., Panenske Brezany 50, 250 70 Odolena Voda (Czech Republic)
2009-10-15
In microscopic studies of materials, there is often a need to provide a statistical test as to whether two microstructures are different or not. Typically, there are some random objects (particles, grains, pores) and the comparison concerns their density, individual geometrical parameters and their spatial distribution. The problem is that neighbouring objects observed in a single window cannot be assumed to be stochastically independent, therefore classical statistical testing based on random sampling is not applicable. The aim of the present paper is to develop a test based on N-distances in probability theory. Using the measurements from a few independent windows, we consider a two-sample test, which involves a large amount of information collected from each window. An application is presented consisting in a comparison of metallographic samples of aluminium alloys, and the results are interpreted.
Statistical comparison of the geometry of second-phase particles
International Nuclear Information System (INIS)
Benes, Viktor; Lechnerova, Radka; Klebanov, Lev; Slamova, Margarita; Slama, Peter
2009-01-01
In microscopic studies of materials, there is often a need to provide a statistical test as to whether two microstructures are different or not. Typically, there are some random objects (particles, grains, pores) and the comparison concerns their density, individual geometrical parameters and their spatial distribution. The problem is that neighbouring objects observed in a single window cannot be assumed to be stochastically independent, therefore classical statistical testing based on random sampling is not applicable. The aim of the present paper is to develop a test based on N-distances in probability theory. Using the measurements from a few independent windows, we consider a two-sample test, which involves a large amount of information collected from each window. An application is presented consisting in a comparison of metallographic samples of aluminium alloys, and the results are interpreted.
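The N-distance test family described above is closely related to the energy distance between samples, and a permutation scheme sidesteps the need for distributional assumptions. A sketch on hypothetical 1-D data (the paper's particle descriptors are multivariate and spatially dependent, so this is only an illustration of the statistic itself):

```python
# Two-sample test via the energy (N-)distance with a permutation null.
# All data below are synthetic; 1-D for brevity.
import random

def energy_stat(x, y):
    def mean_dist(a, b):
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    # large when the two samples come from different distributions
    return 2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(30)]  # "microstructure A"
y = [random.gauss(1.0, 1.0) for _ in range(30)]  # "microstructure B", shifted

obs = energy_stat(x, y)
pooled = x + y
count, n_perm = 0, 200
for _ in range(n_perm):
    random.shuffle(pooled)                        # relabel under the null
    if energy_stat(pooled[:30], pooled[30:]) >= obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(p_value)
```

A small p-value indicates the two samples (here, stand-ins for particle measurements from two specimens) are unlikely to share a distribution; the paper's two-sample construction additionally accounts for the dependence between neighbouring particles within a window.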
Matrix Tricks for Linear Statistical Models
Puntanen, Simo; Styan, George PH
2011-01-01
In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and
Statistical physics of pairwise probability models
DEFF Research Database (Denmark)
Roudi, Yasser; Aurell, Erik; Hertz, John
2009-01-01
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying...
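A pairwise (Ising-type) model is fully specified by per-element biases and pairwise couplings, and its fit targets are exactly the means and pairwise correlations mentioned above. A toy sketch that evaluates those statistics by exact enumeration for three elements (couplings are hypothetical; real applications on neural data use approximate inference rather than enumeration):

```python
# Means and pairwise correlations of a small pairwise (Ising-type) model,
# computed by exact enumeration over all 2^3 states. Toy parameters.
import itertools, math

h = [0.2, -0.1, 0.0]                            # biases (hypothetical)
J = {(0, 1): 0.5, (0, 2): -0.3, (1, 2): 0.1}    # couplings (hypothetical)

def energy(s):
    e = -sum(h[i] * s[i] for i in range(3))
    e -= sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

states = list(itertools.product([-1, 1], repeat=3))
Z = sum(math.exp(-energy(s)) for s in states)    # partition function
p = {s: math.exp(-energy(s)) / Z for s in states}

means = [sum(p[s] * s[i] for s in states) for i in range(3)]
corr01 = sum(p[s] * s[0] * s[1] for s in states)
print(means, corr01)
```

Fitting reverses this computation: one adjusts h and J until the model's means and correlations match those measured from data, which is all the data the abstract says the method requires.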
Distributions with given marginals and statistical modelling
Fortiana, Josep; Rodriguez-Lallena, José
2002-01-01
This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.
Aspects of statistical model for multifragmentation
International Nuclear Information System (INIS)
Bhattacharyya, P.; Das Gupta, S.; Mekjian, A. Z.
1999-01-01
We deal with two different aspects of an exactly soluble statistical model of fragmentation. First we show, using zero range force and finite temperature Thomas-Fermi theory, that a common link can be found between finite temperature mean field theory and the statistical fragmentation model. We show the latter naturally arises in the spinodal region. Next we show that although the exact statistical model is a canonical model and uses temperature, microcanonical results which use constant energy rather than constant temperature can also be obtained from the canonical model using saddle-point approximation. The methodology is extremely simple to implement and at least in all the examples studied in this work is very accurate. (c) 1999 The American Physical Society
Marsh, Herbert W.; Kuyper, Hans; Morin, Alexandre J. S.; Parker, Philip D.; Seaton, Marjorie
2014-01-01
We offer new theoretical, substantive, statistical, design, and methodological insights into the seemingly paradoxical negative effects of school- and class-average achievement (ACH) on academic self-concept (ASC), the big-fish-little-pond effect (BFLPE; 15,356 Dutch 9th grade students from 651
Statistical Compression for Climate Model Output
Hammerling, D.; Guinness, J.; Soh, Y. J.
2017-12-01
Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
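As a toy illustration of the store-summaries/conditional-simulation idea (block means and a single residual standard deviation stand in for the paper's spatial statistical model; all names and numbers below are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(field, block=10):
    """Summary statistics: per-block means plus one residual std."""
    blocks = field.reshape(-1, block)
    return blocks.mean(axis=1), blocks.std(axis=1).mean()

def decompress(means, resid_std, block=10, simulate=False):
    """Conditional expectation (smooth) or conditional simulation (noisy)."""
    recon = np.repeat(means, block)
    if simulate:
        recon = recon + rng.normal(0.0, resid_std, size=recon.size)
    return recon

# a 1-D "temperature field": smooth signal plus small-scale noise
field = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.normal(size=1000)
means, s = compress(field)                    # 101 numbers stored, not 1000
smooth = decompress(means, s)                 # best estimate, oversmoothed
rough = decompress(means, s, simulate=True)   # restores small-scale texture
```

The conditional simulation puts back the roughness that the conditional mean smooths away, mirroring the paper's decompression step.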
Performance modeling, loss networks, and statistical multiplexing
Mazumdar, Ravi
2009-01-01
This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large-buffer and many-sources asymptotics that play an important role in understanding statistical multiplexing. I
Simple statistical model for branched aggregates
DEFF Research Database (Denmark)
Lemarchand, Claire; Hansen, Jesper Schmidt
2015-01-01
We propose a statistical model that can reproduce the size distribution of any branched aggregate, including amylopectin, dendrimers, molecular clusters of monoalcohols, and asphaltene nanoaggregates. It is based on the conditional probability for one molecule to form a new bond with a molecule, given that it already has bonds with others. The model is applied here to asphaltene nanoaggregates observed in molecular dynamics simulations of Cooee bitumen. The variation with temperature of the probabilities deduced from this model is discussed in terms of statistical mechanics arguments. The relevance of the statistical model in the case of asphaltene nanoaggregates is checked by comparing the predicted value of the probability for one molecule to have exactly i bonds with the same probability directly measured in the molecular dynamics simulations. The agreement is satisfactory.
Advances in statistical models for data analysis
Minerva, Tommaso; Vichi, Maurizio
2015-01-01
This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.
Structured statistical models of inductive reasoning.
Kemp, Charles; Tenenbaum, Joshua B
2009-01-01
Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes [corrected] 4 applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the 4 models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.
Model for neural signaling leap statistics
International Nuclear Information System (INIS)
Chevrollier, Martine; Oria, Marcos
2011-01-01
We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), characterized by rare events of long-range connections.
Statistical models based on conditional probability distributions
International Nuclear Information System (INIS)
Narayanan, R.S.
1991-10-01
We present a formulation of statistical mechanics models based on conditional probability distribution rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)
Model for neural signaling leap statistics
Chevrollier, Martine; Oriá, Marcos
2011-03-01
We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), characterized by rare events of long-range connections.
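The qualitative difference between the two regimes can be illustrated with generic heavy-tailed sampling (Cauchy is used below as a convenient Lévy-stable stand-in; the parameters are arbitrary, not taken from the model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Gaussian step lengths for a "Normal statistics" regime versus Cauchy
# (a Levy-stable law) for a heavy-tailed "Levy statistics" regime.
normal_d = np.abs(rng.normal(0.0, 1.0, n))
levy_d = np.abs(rng.standard_cauchy(n))

# Rare long-range events: fraction of connection distances beyond 10 units.
p_long_normal = np.mean(normal_d > 10)   # essentially zero for a Gaussian
p_long_levy = np.mean(levy_d > 10)       # a few percent for Cauchy
```

Only the heavy-tailed regime produces an appreciable fraction of long-range connections, which is the signature the abstract describes.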
Model for neural signaling leap statistics
Energy Technology Data Exchange (ETDEWEB)
Chevrollier, Martine; Oria, Marcos, E-mail: oria@otica.ufpb.br [Laboratório de Física Atômica e Lasers, Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5086, 58051-900 João Pessoa, Paraíba (Brazil)
2011-03-01
We present a simple model for neural signaling leaps in the brain considering only the thermodynamic (Nernst) potential in neuron cells and brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between Normal statistics (T = 37.5 °C, awake regime) and Lévy statistics (T = 35.5 °C, sleeping period), characterized by rare events of long-range connections.
Statistical 3D damage accumulation model for ion implant simulators
Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M
2003-01-01
A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.
Statistical 3D damage accumulation model for ion implant simulators
International Nuclear Information System (INIS)
Hernandez-Mangas, J.M.; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M.
2003-01-01
A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided
International Nuclear Information System (INIS)
Seeliger, D.
1993-01-01
This contribution contains a brief presentation and comparison of the different Statistical Multistep Approaches, presently available for practical nuclear data calculations. (author). 46 refs, 5 figs
Validation of statistical models for creep rupture by parametric analysis
Energy Technology Data Exchange (ETDEWEB)
Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)
2012-01-15
Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).
Growth curve models and statistical diagnostics
Pan, Jian-Xin
2002-01-01
Growth-curve models are generalized multivariate analysis-of-variance models. These models are especially useful for investigating growth problems over short time periods in economics, biology, medical research, and epidemiology. This book systematically introduces the theory of growth curve models (GCMs) with particular emphasis on their multivariate statistical diagnostics, which are based mainly on recent developments made by the authors and their collaborators. The authors provide complete proofs of theorems as well as practical data sets and MATLAB code.
Topology for Statistical Modeling of Petascale Data
Energy Technology Data Exchange (ETDEWEB)
Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Levine, Joshua [Univ. of Utah, Salt Lake City, UT (United States); Gyulassy, Attila [Univ. of Utah, Salt Lake City, UT (United States); Bremer, P. -T. [Univ. of Utah, Salt Lake City, UT (United States)
2013-10-31
Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, the approach of the entire team involving all three institutions is based on the complementary techniques of combinatorial topology and statistical modelling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modelling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. The overall technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modelling, and (3) new integrated topological and statistical methods. Roughly speaking, the division of labor between our three groups (Sandia Labs in Livermore, Texas A&M in College Station, and U Utah in Salt Lake City) is as follows: the Sandia group focuses on statistical methods and their formulation in algebraic terms, and finds the application problems (and data sets) most relevant to this project, the Texas A&M group develops new algebraic geometry algorithms, in particular with fewnomial theory, and the Utah group develops new algorithms in computational topology via Discrete Morse Theory. However, we hasten to point out that our three groups stay in tight contact via videoconference every 2 weeks, so there is much synergy of ideas between the groups. The remainder of this document focuses on the contributions that had greater direct involvement from the team at the University of Utah in Salt Lake City.
Bayesian models a statistical primer for ecologists
Hobbs, N Thompson
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study.
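The right-sizing logic (prefer the more complex model only when the fit justifies its extra parameters) can be sketched with two nested growth models compared by AIC. This is a bare-bones stand-in for the paper's SEM-based comparisons, with invented data and pooled OLS in place of proper random-effects estimation:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(5)                        # five measurement waves
n = 200                                 # children
# simulated vocabulary-like scores: common linear growth plus noise
y = 10 + 2.0 * t + rng.normal(0.0, 1.0, (n, t.size))

def aic_ols(X, z):
    """Gaussian AIC (up to a constant) for an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    rss = np.sum((z - X @ beta) ** 2)
    return z.size * np.log(rss / z.size) + 2 * X.shape[1]

Z = y.ravel()
T = np.tile(t, n)
X_flat = np.ones((Z.size, 1))                    # no-growth (intercept-only)
X_lin = np.column_stack([np.ones(Z.size), T])    # linear slope-intercept

aic_flat, aic_lin = aic_ols(X_flat, Z), aic_ols(X_lin, Z)
# the linear growth model wins despite its extra parameter
```

With real longitudinal data the same comparison would be run over the paper's full hierarchy of (often nested) models, and with likelihoods that respect within-person correlation.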
Statistical transmutation in doped quantum dimer models.
Lamas, C A; Ralko, A; Cabra, D C; Poilblanc, D; Pujol, P
2012-07-06
We prove a "statistical transmutation" symmetry of doped quantum dimer models on the square, triangular, and kagome lattices: the energy spectrum is invariant under a simultaneous change of statistics (i.e., bosonic into fermionic or vice versa) of the holes and of the signs of all the dimer resonance loops. This exact transformation enables us to define the duality equivalence between doped quantum dimer Hamiltonians and provides the analytic framework to analyze dynamical statistical transmutations. We investigate numerically the doping of the triangular quantum dimer model with special focus on the topological Z(2) dimer liquid. Doping leads to four (instead of two for the square lattice) inequivalent families of Hamiltonians. Competition between phase separation, superfluidity, supersolidity, and fermionic phases is investigated in the four families.
STATISTICAL MODELS OF REPRESENTING INTELLECTUAL CAPITAL
Directory of Open Access Journals (Sweden)
Andreea Feraru
2016-06-01
This article entitled Statistical Models of Representing Intellectual Capital approaches and analyses the concept of intellectual capital, as well as the main models which can support enterprisers/managers in evaluating and quantifying the advantages of intellectual capital. Most authors examine intellectual capital from a static perspective and focus on the development of its various evaluation models. In this chapter we surveyed the classical static models: Sveiby, Edvisson, Balanced Scorecard, as well as the canonical model of intellectual capital. Among the group of static models for evaluating organisational intellectual capital the canonical model stands out. This model enables the structuring of organisational intellectual capital in: human capital, structural capital and relational capital. Although the model is widely spread, it is a static one and can thus create a series of errors in the process of evaluation, because all the three entities mentioned above are not independent from the viewpoint of their contents, as any logic of structuring complex entities requires.
(AJST) Statistical mechanics model for orientational motion of two-dimensional rigid rotator
African Journals Online (AJOL)
Science and Engineering Series Vol. 6, No. 2, pp. 94-101. Statistical Mechanics Model for Orientational Motion of Two-Dimensional Rigid Rotator. Malo, J.O. ... there is no translational motion and that they are well separated ... constant and I is the moment of inertia of a linear rotator. Thus, the ...
Statistical Model Checking for Biological Systems
DEFF Research Database (Denmark)
David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel
2014-01-01
Statistical Model Checking (SMC) is a highly scalable simulation-based verification approach for testing and estimating the probability that a stochastic system satisfies a given linear temporal property. The technique has been applied to (discrete and continuous time) Markov chains, stochastic...
Topology for Statistical Modeling of Petascale Data
Energy Technology Data Exchange (ETDEWEB)
Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Levine, Joshua [Univ. of Utah, Salt Lake City, UT (United States); Gyulassy, Attila [Univ. of Utah, Salt Lake City, UT (United States); Rojas, Maurice [Texas A & M Univ., College Station, TX (United States)
2014-07-01
This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.
Establishing statistical models of manufacturing parameters
International Nuclear Information System (INIS)
Senevat, J.; Pape, J.L.; Deshayes, J.F.
1991-01-01
This paper reports on the effect of pilgering and cold-work parameters on contractile strain ratio and mechanical properties that were investigated using a large population of Zircaloy tubes. Statistical models were established between: contractile strain ratio and tooling parameters, mechanical properties (tensile test, creep test) and cold-work parameters, and mechanical properties and stress-relieving temperature
Statistical models for optimizing mineral exploration
International Nuclear Information System (INIS)
Wignall, T.K.; DeGeoffroy, J.
1987-01-01
The primary purpose of mineral exploration is to discover ore deposits. The emphasis of this volume is on the mathematical and computational aspects of optimizing mineral exploration. The seven chapters that make up the main body of the book are devoted to the description and application of various types of computerized geomathematical models. These chapters include: (1) the optimal selection of ore deposit types and regions of search, as well as prospecting selected areas, (2) designing airborne and ground field programs for the optimal coverage of prospecting areas, and (3) delineating and evaluating exploration targets within prospecting areas by means of statistical modeling. Many of these statistical programs are innovative and are designed to be useful for mineral exploration modeling. Examples of geomathematical models are applied to exploring for six main types of base and precious metal deposits, as well as other mineral resources (such as bauxite and uranium)
A statistical model for mapping morphological shape
Directory of Open Access Journals (Sweden)
Li Jiahan
2010-07-01
Background: Living things come in all shapes and sizes, from bacteria, plants, and animals to humans. Knowledge about the genetic mechanisms for biological shape has far-reaching implications for a wide spectrum of scientific disciplines including anthropology, agriculture, developmental biology, evolution and biomedicine. Results: We derived a statistical model for mapping specific genes or quantitative trait loci (QTLs) that control morphological shape. The model was formulated within the mixture framework, in which different types of shape are thought to result from genotypic discrepancies at a QTL. The EM algorithm was implemented to estimate QTL genotype-specific shapes based on a shape correspondence analysis. Computer simulation was used to investigate the statistical properties of the model. Conclusion: By identifying specific QTLs for morphological shape, the model developed will help to pose, disseminate and address many major integrative biological and genetic questions and challenges in the genetic control of biological shape and function.
Pagano, E. V.; Acosta, L.; Auditore, L.; Cap, T.; Cardella, G.; Colonna, M.; De Filippo, E.; Geraci, E.; Gnoffo, B.; Lanzalone, G.; Maiolino, C.; Martorana, N.; Pagano, A.; Papa, M.; Piasecki, E.; Pirrone, S.; Politi, G.; Porto, F.; Quattrocchi, L.; Rizzo, F.; Russotto, P.; Trifiro’, A.; Trimarchi, M.; Siwek-Wilczynska, K.
2018-05-01
In nuclear reactions at Fermi energies, two- and multi-particle intensity interferometry correlation methods are powerful tools for pinning down the characteristic time scale of emission processes. In this paper we summarize an improved application of the fragment-fragment correlation function to the specific physics case of a heavy projectile-like fragment (PLF) undergoing binary massive splitting into two intermediate mass fragments (IMFs). Results are shown for the reverse kinematics reaction 124Sn + 64Ni at 35 AMeV, investigated using the forward part of the CHIMERA multi-detector. The analysis was performed as a function of the charge asymmetry of the observed IMF pairs. We show a coexistence of dynamical and statistical components as a function of the charge asymmetry. Transport CoMD simulations are compared with the data in order to pin down the timescale of fragment production and the relevant ingredients of the in-medium effective interaction used in the transport calculations.
Performance modeling, stochastic networks, and statistical multiplexing
Mazumdar, Ravi R
2013-01-01
This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan
Statistical models for competing risk analysis
International Nuclear Information System (INIS)
Sather, H.N.
1976-08-01
Research results are presented on three new models with potential applications in competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structure by competing risks theory in two homogeneous populations, P1 and P2. Weibull models, which allow more generality than the Berkson and Elveback models, are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple failure mode domain of competing risks. The model used to illustrate the use of this methodology is a life table model which has constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined.
A simple statistical method for catch comparison studies
DEFF Research Database (Denmark)
Holst, René; Revill, Andrew
2009-01-01
For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns.
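The fixed-effects core of such a fit can be sketched as a polynomial logit for the proportion caught in the test codend, estimated by Newton-Raphson (the GLMM random effects and the confidence band of the paper are omitted; the data and values below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical catch-comparison data: counts in the test codend out of the
# total caught by both gears, per length class.
length = np.linspace(20, 60, 21)
x = (length - 40.0) / 10.0                      # standardised length
true_p = 1 / (1 + np.exp(-2.0 * x))             # assumed true test proportion
n_tot = rng.integers(50, 200, length.size)      # total catch per class
n_test = rng.binomial(n_tot, true_p)            # caught in test codend

# Quadratic polynomial on the logit scale, fit by Newton-Raphson
# (iteratively reweighted least squares for a binomial GLM).
X = np.column_stack([np.ones_like(x), x, x**2])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-(X @ beta)))
    W = n_tot * p * (1 - p)
    beta = beta + np.linalg.solve(X.T @ (W[:, None] * X),
                                  X.T @ (n_test - n_tot * p))

fitted = 1 / (1 + np.exp(-(X @ beta)))          # continuous proportion curve
```

The fitted curve is the continuous catch-at-length comparison; the full method additionally carries haul-level random effects to get the realistic confidence band.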
Statistical physics of pairwise probability models
Directory of Open Access Journals (Sweden)
Yasser Roudi
2009-11-01
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
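One of the approximate inference schemes discussed in this literature, naive mean-field inversion, is short enough to sketch. The data below are simulated independent spikes, so the inferred couplings should come out near zero; all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated binary "spike" data: n_neurons x n_bins in {0, 1}, independent.
n_neurons, n_bins = 10, 50_000
s = (rng.random((n_neurons, n_bins)) < 0.3).astype(float)

# Sufficient statistics of a pairwise model: means and covariances.
m = s.mean(axis=1)
C = np.cov(s)

# Naive mean-field inversion: off-diagonal couplings J_ij ~ -(C^-1)_ij,
# with fields h recovered from the magnetizations.
J = -np.linalg.inv(C)
np.fill_diagonal(J, 0.0)
h = np.log(m / (1 - m)) - J @ m
```

Finer time bins change the empirical means and covariances, which is how bin size feeds into the quality of the inferred couplings in the study.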
International Nuclear Information System (INIS)
Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.
1978-06-01
Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes
Zhang, Harrison G; Ying, Gui-Shuang
2018-02-09
The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches to analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, as compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, as compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data.
Statistical models of petrol engines vehicles dynamics
Ilie, C. O.; Marinescu, M.; Alexa, O.; Vilău, R.; Grosu, D.
2017-10-01
This paper focuses on statistical models of vehicle dynamics. A one-year testing programme was designed and performed, using several cars of the same type with petrol engines and different mileages. Experimental data were collected from on-board sensors and from sensors on the engine test stand. A database containing the data of 64 tests was created. Several mathematical models were developed from the database using the system identification method. Each model is a SISO or a MISO linear predictive ARMAX (AutoRegressive-Moving-Average with eXogenous inputs) model, representing a difference equation with constant coefficients. For each dependency, such as engine torque as output with engine load and intake manifold pressure as inputs, 64 equations were estimated, giving a string of 64 values for each coefficient of each model type. The final models were obtained using the average values of the coefficients, and the accuracy of the models was assessed.
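A MISO difference-equation model of this kind can be estimated by ordinary least squares when the moving-average term is dropped (the ARX special case of ARMAX). The sketch below, with invented coefficients and signal names, fits engine torque from its own lagged value plus lagged load and intake-manifold pressure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
load = rng.normal(size=n)       # hypothetical engine-load input
pressure = rng.normal(size=n)   # hypothetical intake-manifold-pressure input
a1, b1, b2 = 0.7, 1.5, -0.8     # "true" difference-equation coefficients

# Simulate the MISO difference equation with a small noise term.
torque = np.zeros(n)
for k in range(1, n):
    torque[k] = (a1 * torque[k - 1] + b1 * load[k - 1]
                 + b2 * pressure[k - 1] + 0.05 * rng.normal())

# Stack regressors for the ARX model: y[k] = a1*y[k-1] + b1*u1[k-1] + b2*u2[k-1]
X = np.column_stack([torque[:-1], load[:-1], pressure[:-1]])
coef, *_ = np.linalg.lstsq(X, torque[1:], rcond=None)
print(coef)  # close to (0.7, 1.5, -0.8)
```

Repeating such a fit on each of the 64 tests and averaging the resulting coefficient strings mirrors the averaging step described in the abstract.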
Equilibrium statistical mechanics of lattice models
Lavis, David A
2015-01-01
Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm-Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg-Landau theory is introduced. The extension of mean-field theory to higher orders is explored using the Kikuchi-Hijmans-De Boer hierarchy of approximations. In Part III the use of alge...
Statistical shape and appearance models of bones.
Sarkalkan, Nazli; Weinans, Harrie; Zadpoor, Amir A
2014-03-01
When applied to bones, statistical shape models (SSM) and statistical appearance models (SAM) respectively describe the mean shape and mean density distribution of bones within a certain population as well as the main modes of variations of shape and density distribution from their mean values. The availability of this quantitative information regarding the detailed anatomy of bones provides new opportunities for diagnosis, evaluation, and treatment of skeletal diseases. The potential of SSM and SAM has been recently recognized within the bone research community. For example, these models have been applied for studying the effects of bone shape on the etiology of osteoarthritis, improving the accuracy of clinical osteoporotic fracture prediction techniques, design of orthopedic implants, and surgery planning. This paper reviews the main concepts, methods, and applications of SSM and SAM as applied to bone. Copyright © 2013 Elsevier Inc. All rights reserved.
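The core of an SSM, the mean shape plus principal modes of variation, is a principal component analysis of aligned landmark vectors. The sketch below (synthetic landmarks and modes, purely illustrative) builds such a model via SVD and reconstructs a shape from a few mode scores:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bones, n_landmarks = 40, 30

# Hypothetical training set: each bone is a flattened vector of 2-D landmark
# coordinates, generated from a mean shape plus two known modes of variation.
mean_shape = rng.normal(size=2 * n_landmarks)
modes = np.linalg.qr(rng.normal(size=(2 * n_landmarks, 2)))[0]  # orthonormal modes
weights = rng.normal(size=(n_bones, 2)) * [3.0, 1.5]
shapes = (mean_shape + weights @ modes.T
          + 0.05 * rng.normal(size=(n_bones, 2 * n_landmarks)))

# SSM: mean shape plus principal modes of variation from the centred data.
centred = shapes - shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:3])  # the first two modes dominate

# Any shape is approximated as mean + sum_k b_k * v_k for a few mode scores b_k.
b = centred[0] @ Vt[:2].T
recon = shapes.mean(axis=0) + b @ Vt[:2]
```

A statistical appearance model follows the same recipe with density values appended to the landmark coordinates.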
Cellular automata and statistical mechanical models
International Nuclear Information System (INIS)
Rujan, P.
1987-01-01
The authors elaborate on the analogy between the transfer matrix of usual lattice models and the master equation describing the time development of cellular automata. Transient and stationary properties of probabilistic automata are linked to surface and bulk properties, respectively, of restricted statistical mechanical systems. It is demonstrated that methods of statistical physics can be successfully used to describe the dynamic and the stationary behavior of such automata. Some exact results are derived, including duality transformations, exact mappings, disorder, and linear solutions. Many examples are worked out in detail to demonstrate how to use statistical physics in order to construct cellular automata with desired properties. This approach is considered to be a first step toward the design of fully parallel, probabilistic systems whose computational abilities rely on the cooperative behavior of their components
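The transient-versus-stationary distinction drawn above can be seen in a minimal probabilistic automaton. The sketch below (rule and noise level chosen for illustration) iterates a noisy majority rule, the noise playing the role of temperature in the corresponding statistical mechanical system, and tracks the site density as it relaxes toward its stationary value:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_steps, p = 200, 500, 0.1  # p: probability of a stochastic flip

state = rng.integers(0, 2, n_sites)
densities = []
for t in range(n_steps):
    left, right = np.roll(state, 1), np.roll(state, -1)
    majority = (left + state + right) >= 2  # deterministic majority rule
    flip = rng.random(n_sites) < p          # noise, analogous to finite temperature
    state = np.where(flip, 1 - majority, majority).astype(int)
    densities.append(state.mean())

# Early steps show transient (surface-like) behaviour; late steps sample the
# stationary (bulk-like) distribution of the automaton.
print(np.mean(densities[-100:]))
```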
Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S
2015-01-01
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) were calculated from attenuation values measured in caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to depiction of different parenchymal structures and impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p ASiR was 2 (p ASiR (p ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. • Model-Based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-Based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
An ANOVA approach for statistical comparisons of brain networks.
Fraiman, Daniel; Fraiman, Ricardo
2018-03-16
The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kind of networks such as protein interaction networks, gene networks or social networks.
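A nonparametric group comparison of networks can be sketched with a permutation test on a simple between-group statistic (this is an illustrative stand-in, not the authors' ANOVA statistic). Two groups of random graphs with different edge densities are compared below:

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, n_per_group = 15, 20

def random_network(p):
    a = (rng.random((n_nodes, n_nodes)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T  # symmetric adjacency matrix, zero diagonal

group_a = [random_network(0.30) for _ in range(n_per_group)]
group_b = [random_network(0.45) for _ in range(n_per_group)]  # denser networks

def stat(nets_a, nets_b):
    # distance between group-mean adjacency matrices
    return np.linalg.norm(np.mean(nets_a, axis=0) - np.mean(nets_b, axis=0))

observed = stat(group_a, group_b)
pooled = group_a + group_b
perm_stats = []
for _ in range(500):
    idx = rng.permutation(len(pooled))
    shuffled = [pooled[i] for i in idx]
    perm_stats.append(stat(shuffled[:n_per_group], shuffled[n_per_group:]))

# Permutation p-value: how often a random relabelling beats the observed statistic.
p_value = np.mean([s >= observed for s in perm_stats])
print(p_value)
```

Identifying *which* subnetwork differs, the identification problem mentioned in the abstract, requires localising the statistic to edge subsets rather than the whole matrix.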
Paulinelli, Regis R; Oliveira, Luis-Fernando P; Freitas-Junior, Ruffo; Soares, Leonardo R
2016-01-01
The objective of the present study was to compare the accuracy of SONOBREAST for the prediction of malignancy in solid breast nodules detected at ultrasonography with that of the BI-RADS system and to assess the agreement between these two methods. This prospective study included 274 women and evaluated 500 breast nodules detected at ultrasonography. The probability of malignancy was calculated based on the SONOBREAST model, available at www.sonobreast.com.br, and on the BI-RADS system, with results being compared with the anatomopathology report. The lesions were considered suspect in 171 cases (34.20%), according to both SONOBREAST and BI-RADS. Agreement between the methods was perfect, as shown by a Kappa coefficient of 1. SONOBREAST and BI-RADS proved identical insofar as sensitivity (95.40%), specificity (78.69%), positive predictive value (48.54%), negative predictive value (98.78%) and accuracy (81.60%) are concerned. With respect to the categorical variables (BI-RADS categories 3, 4 and 5), the area under the receiver operating characteristic (ROC) curve was 94.41 for SONOBREAST (range 92.20-96.62) and 89.99 for BI-RADS (range 86.60-93.37). The accuracy of the SONOBREAST model is identical to that found with BI-RADS when the same parameters are used with respect to the cut-off point at which malignancy is suspected. Regarding the continuous probability of malignancy with BI-RADS categories 3, 4 and 5, SONOBREAST permits a more precise and individualized evaluation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
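The performance measures used above (AUC, sensitivity, specificity) are easy to compute from scored cases. The sketch below uses invented score distributions, not the study's data, and exploits the Mann-Whitney formulation of the AUC:

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical continuous malignancy scores and biopsy outcomes.
scores_benign = rng.beta(2, 6, 300)  # benign lesions: mostly low scores
scores_malig = rng.beta(6, 2, 100)   # malignant lesions: mostly high scores

# AUC equals the probability that a malignant case outranks a benign one.
auc = np.mean(scores_malig[:, None] > scores_benign[None, :])
print(auc)

# Sensitivity/specificity at an (illustrative) suspicion cut-off:
cut = 0.5
sensitivity = np.mean(scores_malig >= cut)
specificity = np.mean(scores_benign < cut)
```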
Statistical Modelling of Wind Profiles - Data Analysis and Modelling
DEFF Research Database (Denmark)
Jónsson, Tryggvi; Pinson, Pierre
The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.
Statistical modeling of geopressured geothermal reservoirs
Ansari, Esmail; Hughes, Richard; White, Christopher D.
2017-06-01
Identifying attractive candidate reservoirs for producing geothermal energy requires predictive models. In this work, inspectional analysis and statistical modeling are used to create simple predictive models for a line drive design. Inspectional analysis on the partial differential equations governing this design yields a minimum number of fifteen dimensionless groups required to describe the physics of the system. These dimensionless groups are explained and confirmed using models with similar dimensionless groups but different dimensional parameters. This study models dimensionless production temperature and thermal recovery factor as the responses of a numerical model. These responses are obtained by a Box-Behnken experimental design. An uncertainty plot is used to segment the dimensionless time and develop a model for each segment. The important dimensionless numbers for each segment of the dimensionless time are identified using the Boosting method. These selected numbers are used in the regression models. The developed models are reduced to have a minimum number of predictors and interactions. The reduced final models are then presented and assessed using testing runs. Finally, applications of these models are offered. The presented workflow is generic and can be used to translate the output of a numerical simulator into simple predictive models in other research areas involving numerical simulation.
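The surrogate-modelling step, fitting a simple regression of a simulator response on dimensionless groups and then pruning negligible terms, can be sketched as follows. All variable names, coefficients, and the 0.05 pruning threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for simulator output: a response (e.g. thermal recovery factor)
# driven by two dimensionless groups x1, x2 over a design of 60 runs.
x1 = rng.uniform(-1, 1, 60)
x2 = rng.uniform(-1, 1, 60)
response = 0.8 - 0.3 * x1 + 0.15 * x2 - 0.2 * x1 * x2 + 0.01 * rng.normal(size=60)

# Quadratic response-surface model with interaction, fitted by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

# Reduce the model: drop terms whose estimated coefficients are negligible.
names = ["1", "x1", "x2", "x1*x2", "x1^2", "x2^2"]
kept = [n for n, c in zip(names, coef) if abs(c) > 0.05]
print(kept)
```

In the paper this role is played by a Box-Behnken design over fifteen dimensionless groups, with the Boosting method doing the variable selection.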
A statistical model for instable thermodynamical systems
International Nuclear Information System (INIS)
Sommer, Jens-Uwe
2003-01-01
A generic model is presented for statistical systems which display thermodynamic features in contrast to our everyday experience, such as infinite and negative heat capacities. Such systems are unstable in terms of classical equilibrium thermodynamics. Using our statistical model, we are able to investigate states of unstable systems which are undefined in the framework of equilibrium thermodynamics. We show that a region of negative heat capacity in the adiabatic environment leads to a first-order-like phase transition when the system is coupled to a heat reservoir. This phase transition takes place without phase coexistence. Nevertheless, all intermediate states are stable due to fluctuations. When two unstable systems are brought into thermal contact, the temperature of the composed system is lower than the minimum temperature of the individual systems. Generally, the equilibrium states of unstable systems cannot be simply decomposed into equilibrium states of the individual systems. The properties of unstable systems depend on the environment; ensemble equivalence is broken.
Logarithmic transformed statistical models in calibration
International Nuclear Information System (INIS)
Zeis, C.D.
1975-01-01
A general type of statistical model used for calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can be used also for other calibrations. (U.S.)
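When the standard deviation of the observed response grows in proportion to its mean (constant coefficient of variation), a logarithmic transform stabilises the variance so that ordinary least squares applies. A minimal sketch, with invented instrument constants rather than Helix Counter values:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical calibration: response proportional to the true amount, with
# standard deviation growing in proportion to the mean.
true_amount = np.repeat(np.array([1.0, 2.0, 5.0, 10.0, 20.0]), 30)
response = 3.0 * true_amount * np.exp(0.05 * rng.normal(size=true_amount.size))

# On the log scale the error is additive with constant variance:
# log(response) = log(k) + b*log(amount) + eps
X = np.column_stack([np.ones(true_amount.size), np.log(true_amount)])
coef, *_ = np.linalg.lstsq(X, np.log(response), rcond=None)
k_hat, b_hat = np.exp(coef[0]), coef[1]
print(k_hat, b_hat)  # close to 3 and 1

# Calibrated estimate of the amount for a new instrument reading:
reading = 33.0
estimated_amount = (reading / k_hat) ** (1.0 / b_hat)
```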
More 'mapping' in brain mapping: statistical comparison of effects
DEFF Research Database (Denmark)
Jernigan, Terry Lynne; Gamst, Anthony C.; Fennema-Notestine, Christine
2003-01-01
The term 'mapping' in the context of brain imaging conveys to most the concept of localization; that is, a brain map is meant to reveal a relationship between some condition or parameter and specific sites within the brain. However, in reality, conventional voxel-based maps of brain function, or for that matter of brain structure, are generally constructed using analyses that yield no basis for inferences regarding the spatial nonuniformity of the effects. In the normal analysis path for functional images, for example, there is nowhere a statistical comparison of the observed effect in any voxel relative to that in any other voxel. Under these circumstances, strictly speaking, the presence of significant activation serves as a legitimate basis only for inferences about the brain as a unit. In their discussion of results, investigators rarely are content to confirm the brain's role, and instead generally prefer...
A simple statistical model for geomagnetic reversals
Constable, Catherine
1990-01-01
The diversity of paleomagnetic records of geomagnetic reversals now available indicates that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.
Instrumental and statistical methods for the comparison of class evidence
Liszewski, Elisa Anne
Trace evidence is a major field within forensic science. Association of trace evidence samples can be problematic due to sample heterogeneity and a lack of quantitative criteria for comparing spectra or chromatograms. The aim of this study is to evaluate different types of instrumentation for their ability to discriminate among samples of various types of trace evidence. Chemometric analysis, including techniques such as Agglomerative Hierarchical Clustering, Principal Components Analysis, and Discriminant Analysis, was employed to evaluate instrumental data. First, automotive clear coats were analyzed by using microspectrophotometry to collect UV absorption data. In total, 71 samples were analyzed with classification accuracy of 91.61%. An external validation was performed, resulting in a prediction accuracy of 81.11%. Next, fiber dyes were analyzed using UV-Visible microspectrophotometry. While several physical characteristics of cotton fiber can be identified and compared, fiber color is considered to be an excellent source of variation, and thus was examined in this study. Twelve dyes were employed, some being visually indistinguishable. Several different analyses and comparisons were done, including an inter-laboratory comparison and external validations. Lastly, common plastic samples and other polymers were analyzed using pyrolysis-gas chromatography/mass spectrometry, and their pyrolysis products were then analyzed using multivariate statistics. The classification accuracy varied dependent upon the number of classes chosen, but the plastics were grouped based on composition. The polymers were used as an external validation and misclassifications occurred with chlorinated samples all being placed into the category containing PVC.
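The chemometric pipeline described above, dimension reduction followed by a discriminant step, can be sketched on synthetic spectra. Everything below (spectral templates, class counts, noise level) is invented for illustration; PCA is computed by SVD and classification is by nearest centroid in score space, a simple stand-in for discriminant analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n_wavelengths = 80

# Hypothetical spectra: three "dye" classes, each a smooth template plus noise.
grid = np.linspace(0, 1, n_wavelengths)
templates = [np.exp(-((grid - c) / 0.15) ** 2) for c in (0.3, 0.5, 0.7)]
X = np.vstack([t + 0.05 * rng.normal(size=n_wavelengths)
               for t in templates for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

# PCA via SVD of the centred data, keeping two components.
centred = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[:2].T

# Nearest-centroid classification in PCA space.
centroids = np.array([scores[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

An external validation, as performed in the study, would hold out samples before computing `Vt` and the centroids, then score the held-out spectra against the fitted model.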
Statistical modelling for recurrent events: an application to sports injuries.
Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F
2014-09-01
Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
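Recurrent-event models such as Andersen-Gill operate on counting-process data: each player contributes one (start, stop, event) row per at-risk interval. The restructuring step can be sketched as follows, with invented injury days rather than the NRL data:

```python
# Hypothetical injury days for three players across a 180-day season.
injuries = {"player1": [12, 40], "player2": [25], "player3": []}
season_end = 180

rows = []
for player, days in injuries.items():
    start = 0
    for d in days:
        rows.append((player, start, d, 1))       # interval ending in an injury
        start = d
    rows.append((player, start, season_end, 0))  # censored final interval

for r in rows:
    print(r)
```

Total-time (WLW) and gap-time (PWP) variants differ only in how these interval clocks are defined; a frailty model would additionally attach a player-level random effect to account for the within-player correlation the abstract emphasises.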
Statistical Modelling of the Soil Dielectric Constant
Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy
2010-05-01
The dielectric constant of soil is a physical property that is very sensitive to water content. It underlies several electrical measurement techniques for determining water content by means of direct methods (TDR, FDR, and others based on effects of electrical conductance and/or capacitance) and indirect RS (Remote Sensing) methods. This work is devoted to a particular statistical approach to modelling the dielectric constant as a property accounting for a wide range of specific soil compositions, porosities, and mass densities within the unsaturated water-content range. Usually, similar models are determined for a few particular soil types, and on changing the soil type one needs to switch the model to another type or to adjust it by parametrization of soil compounds. It is therefore difficult to compare and transfer results between models. The presented model was developed for a generic representation of soil as a hypothetical mixture of spheres, each representing a soil fraction in its proper phase state. The model generates a serial-parallel mesh of conductive and capacitive paths, which is analysed for its total conductive or capacitive property. The model was first developed to determine the thermal conductivity and is now extended to the dielectric constant by analysing the capacitive mesh. The analysis is carried out by statistical means obeying the physical laws governing the serial-parallel branching of the representative electrical mesh. The physical relevance of the analysis is established electrically, but the definition of the electrical mesh is controlled statistically by parametrization of the compound fractions, by determining the number of representative spheres per unit volume per fraction, and by determining the number of fractions. In this way the model is capable of covering the properties of nearly all possible soil types and phase states, within recognition of the Lorenz and Knudsen conditions. In effect the model allows generating a hypothetical representative of
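The serial and parallel capacitive paths that such a mesh combines have simple closed-form limits, which bound any mixture's effective permittivity. A minimal sketch with typical (illustrative) relative permittivities for the soil phases:

```python
import numpy as np

# Effective permittivity of a layered (serial) vs. side-by-side (parallel)
# arrangement of soil constituents, the two bounds that mesh models interpolate.
eps = np.array([1.0, 4.7, 80.0])  # air, solid, water (typical relative values)
f = np.array([0.3, 0.5, 0.2])     # volume fractions, summing to 1

eps_parallel = np.sum(f * eps)     # capacitive paths side by side
eps_serial = 1.0 / np.sum(f / eps) # capacitive paths in series
print(eps_serial, eps_parallel)    # serial bound is below the parallel bound
```

A serial-parallel mesh model evaluates many such branches statistically and aggregates them, which is what lets one formulation cover many soil types.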
Encoding Dissimilarity Data for Statistical Model Building.
Wahba, Grace
2010-12-01
We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba 2005. A framework for kernel regularization with application to protein clustering. Proceedings of the National Academy of Sciences 102, 12332-1233. G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar 2009. Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models. Proceedings of the National Academy of Sciences 106, 8128-8133. F. Lu, Y. Lin and G. Wahba. Robust manifold unfolding with kernel regularization. TR 1008, Department of Statistics, University of Wisconsin-Madison.
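The simplest embedding of pairwise dissimilarities into a Euclidean space is classical multidimensional scaling, an eigendecomposition analogue of the convex-cone approach discussed above (not the authors' method, but the standard baseline). The sketch below recovers a 2-D configuration exactly from its own distance matrix:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical objects with true 2-D coordinates; we observe only their
# pairwise dissimilarities and recover an embedding from those alone.
coords = rng.normal(size=(25, 2))
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n  # centring matrix
B = -0.5 * J @ (D ** 2) @ J          # double-centred squared dissimilarities
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]
embedding = eigvec[:, order[:2]] * np.sqrt(eigval[order[:2]])

# The embedding reproduces the dissimilarities up to rotation/reflection.
D_hat = np.linalg.norm(embedding[:, None, :] - embedding[None, :, :], axis=-1)
print(np.abs(D - D_hat).max())  # near zero
```

With noisy or incomplete dissimilarities this exactness is lost, which is precisely where the regularized convex-cone formulations reviewed in the paper come in.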
Statistical modelling of transcript profiles of differentially regulated genes
Directory of Open Access Journals (Sweden)
Sergeant Martin J
2008-07-01
Abstract Background The vast quantities of gene expression profiling data produced in microarray studies, and the more precise quantitative PCR, are often not statistically analysed to their full potential. Previous studies have summarised gene expression profiles using simple descriptive statistics, basic analysis of variance (ANOVA) and the clustering of genes based on simple models fitted to their expression profiles over time. We report the novel application of statistical non-linear regression modelling techniques to describe the shapes of expression profiles for the fungus Agaricus bisporus, quantified by PCR, and for E. coli and Rattus norvegicus, using microarray technology. The use of parametric non-linear regression models provides a more precise description of expression profiles, reducing the "noise" of the raw data to produce a clear "signal" given by the fitted curve, and describing each profile with a small number of biologically interpretable parameters. This approach then allows the direct comparison and clustering of the shapes of response patterns between genes and potentially enables a greater exploration and interpretation of the biological processes driving gene expression. Results Quantitative reverse transcriptase PCR-derived time-course data of genes were modelled. "Split-line" or "broken-stick" regression identified the initial time of gene up-regulation, enabling the classification of genes into those with primary and secondary responses. Five-day profiles were modelled using the biologically oriented critical exponential curve, y(t) = A + (B + Ct)R^t + ε. This non-linear regression approach allowed the expression patterns for different genes to be compared in terms of curve shape, time of maximal transcript level and the decline and asymptotic response levels. Three distinct regulatory patterns were identified for the five genes studied. Applying the regression modelling approach to microarray-derived time course data
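The critical exponential curve y(t) = A + (B + Ct)R^t is linear in (A, B, C) once R is fixed, so it can be fitted by profiling R over a grid and solving a linear least-squares subproblem at each value. A sketch with simulated (not the paper's) profile data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated five-day transcript profile following y(t) = A + (B + C*t)*R**t.
t = np.linspace(0, 5, 40)
A, B, C, R = 2.0, -1.5, 1.2, 0.5
y = A + (B + C * t) * R**t + 0.02 * rng.normal(size=t.size)

# For fixed R the model is linear in (A, B, C): profile a grid of R values.
best = None
for R_try in np.linspace(0.05, 0.95, 91):
    X = np.column_stack([np.ones_like(t), R_try**t, t * R_try**t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = np.sum((y - X @ coef) ** 2)
    if best is None or sse < best[0]:
        best = (sse, R_try, coef)

sse, R_hat, (A_hat, B_hat, C_hat) = best
print(R_hat, A_hat, B_hat, C_hat)  # close to 0.5, 2.0, -1.5, 1.2
```

The fitted parameters map directly onto the biological quantities the abstract mentions: A is the asymptotic level, and the (B + Ct)R^t term governs the time and height of the transient peak.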
Representative volume size: A comparison of statistical continuum mechanics and statistical physics
Energy Technology Data Exchange (ETDEWEB)
AIDUN,JOHN B.; TRUCANO,TIMOTHY G.; LO,CHI S.; FYE,RICHARD M.
1999-05-01
In this combination background and position paper, the authors argue that careful work is needed to develop accurate methods for relating the results of fine-scale numerical simulations of material processes to meaningful values of macroscopic properties for use in constitutive models suitable for finite element solid mechanics simulations. To provide a definite context for this discussion, the problem is couched in terms of the lack of general objective criteria for identifying the size of the representative volume (RV) of a material. The objective of this report is to lay out at least the beginnings of an approach for applying results and methods from statistical physics to develop concepts and tools necessary for determining the RV size, as well as alternatives to RV volume-averaging for situations in which the RV is unmanageably large. The background necessary to understand the pertinent issues and statistical physics concepts is presented.
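One statistical-physics route to an objective RV criterion is to track how the scatter of volume averages decays with averaging-window size and pick the size at which it falls below a tolerance. A 1-D toy sketch (field, correlation length, and tolerance all illustrative):

```python
import numpy as np

rng = np.random.default_rng(12)

# A 1-D toy microstructure: a spatially correlated random property field.
noise = rng.normal(size=200000)
field = np.convolve(noise, np.ones(50) / 50.0, mode="valid") + 1.0

# The representative-volume size is read off as the window length at which the
# relative scatter of window averages drops below a chosen tolerance.
tolerance = 0.02
rv_size = None
for window in (100, 400, 1600, 6400, 25600):
    m = (field.size // window) * window
    averages = field[:m].reshape(-1, window).mean(axis=1)
    if averages.std() / averages.mean() < tolerance:
        rv_size = window
        break

print(rv_size)
```

For uncorrelated media the scatter decays like the inverse square root of the window size; long-range correlations slow this decay, which is one way the RV can become unmanageably large, as the report anticipates.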
ARSENIC CONTAMINATION IN GROUNDWATER: A STATISTICAL MODELING
Directory of Open Access Journals (Sweden)
Palas Roy
2013-01-01
High arsenic in natural groundwater in most of the tubewells of the Purbasthali-Block II area of Burdwan district (W.B., India) has recently come into focus as a serious environmental concern. This paper illustrates the statistical modeling of the arsenic-contaminated groundwater to identify the interrelation of the arsenic content with other participating groundwater parameters, so that the arsenic contamination level can easily be predicted by analyzing only such parameters. Multivariate data analysis of the groundwater samples collected from the 132 tubewells of this contaminated region shows that three variable parameters are significantly related with the arsenic. Based on these relationships, a multiple linear regression model has been developed that estimated the arsenic contamination by measuring such three predictor parameters of the groundwater variables in the contaminated aquifer. This model could also be a suggestive tool while designing the arsenic removal scheme for any affected groundwater.
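A multiple linear regression of this kind is a single least-squares solve once the predictor matrix is assembled. The sketch below uses invented predictor names and coefficients (the paper does not report which three parameters were selected), with the same 132-well sample size:

```python
import numpy as np

rng = np.random.default_rng(10)
n_wells = 132

# Hypothetical groundwater chemistry; names and coefficients are illustrative.
iron = rng.uniform(0.1, 5.0, n_wells)
phosphate = rng.uniform(0.0, 2.0, n_wells)
ph = rng.uniform(6.5, 8.5, n_wells)
arsenic = 5.0 + 12.0 * iron + 8.0 * phosphate - 3.0 * ph + rng.normal(0, 2.0, n_wells)

# Multiple linear regression: arsenic ~ intercept + three predictors.
X = np.column_stack([np.ones(n_wells), iron, phosphate, ph])
coef, *_ = np.linalg.lstsq(X, arsenic, rcond=None)
predicted = X @ coef
r2 = 1 - np.sum((arsenic - predicted) ** 2) / np.sum((arsenic - arsenic.mean()) ** 2)
print(coef, r2)
```

Once fitted, predicting the arsenic level in a new well only requires measuring the three predictor parameters, which is the practical point the abstract makes.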
Statistical models for expert judgement and wear prediction
International Nuclear Information System (INIS)
Pulkkinen, U.
1994-01-01
This thesis studies the statistical analysis of expert judgements and prediction of wear. The point of view adopted is the one of information theory and Bayesian statistics. A general Bayesian framework for analyzing both the expert judgements and wear prediction is presented. Information theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions. Further, information theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied in analyzing expert judgements based on ordinal comparisons. In this context, the value of information lost in the ordinal comparison process is analyzed by applying decision theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components are studied. Monte-Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)
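The simplest Bayesian consensus of expert judgements, when each expert reports a normal (mean, sd) pair treated as an independent observation under a flat prior, is the precision-weighted average. This is a minimal sketch of that special case, not the thesis's information-theoretic machinery:

```python
import numpy as np

# Two experts give normal judgements (mean, sd) for an unknown wear rate.
experts = [(2.0, 0.5), (3.0, 1.0)]  # hypothetical (mean, sd) pairs

# Posterior under independent normal likelihoods and a flat prior:
# precision-weighted mean, with precisions adding.
precisions = np.array([1.0 / sd**2 for _, sd in experts])
means = np.array([m for m, _ in experts])
post_mean = np.sum(precisions * means) / np.sum(precisions)
post_sd = 1.0 / np.sqrt(np.sum(precisions))
print(post_mean, post_sd)  # 2.2 and about 0.447
```

The consensus leans toward the more confident expert; handling ordinal comparisons or dependent experts, as the thesis does, requires richer likelihoods than this sketch.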
Optimizing refiner operation with statistical modelling
Energy Technology Data Exchange (ETDEWEB)
Broderick, G [Noranda Research Centre, Pointe Claire, PQ (Canada)
1997-02-01
The impact of refining conditions on the energy efficiency of the process and on the handsheet quality of a chemi-mechanical pulp was studied as part of a series of pilot scale refining trials. Statistical models of refiner performance were constructed from these results and non-linear optimization of process conditions were conducted. Optimization results indicated that increasing the ratio of specific energy applied in the first stage led to a reduction of some 15 per cent in the total energy requirement. The strategy can also be used to obtain significant increases in pulp quality for a given energy input. 20 refs., 6 tabs.
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The rough properties of nuclei were investigated with a statistical model, for systems with equal and with different numbers of protons and neutrons, treated separately, with the Coulomb energy considered in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. To obtain a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt
International Nuclear Information System (INIS)
Weathers, J.B.; Luck, R.; Weathers, J.W.
2009-01-01
The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
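The Monte Carlo covariance estimate and the resulting 95% constant-probability contour can be sketched for two quantities of interest. All standard deviations below are illustrative, not values from the paper; a shared systematic source is what produces the off-diagonal covariance:

```python
import numpy as np

rng = np.random.default_rng(11)
n_draws = 20000

# Comparison error e = experiment - prediction for two quantities of interest,
# propagated by Monte Carlo from a shared systematic source plus random errors.
shared = rng.normal(0.0, 0.4, size=(n_draws, 1))  # common systematic error
e = shared * np.array([1.0, 0.6]) + rng.normal(0.0, [0.2, 0.4], size=(n_draws, 2))

cov = np.cov(e, rowvar=False)  # estimated covariance of the comparison error

# Approximate 95% constant-probability contour: squared Mahalanobis distance
# below the 95th percentile of a chi-squared distribution with 2 dof.
chi2_95 = 5.991
d2 = np.einsum("ij,jk,ik->i", e, np.linalg.inv(cov), e)
coverage = np.mean(d2 <= chi2_95)
print(coverage)  # close to 0.95
```

Treating each quantity separately (univariate intervals) would ignore the off-diagonal term of `cov`, which is the multivariate-versus-univariate distinction the paper closes with.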
Statistical pairwise interaction model of stock market
Bury, Thomas
2013-03-01
Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics, or they are agent-based models with rules designed to recover some empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model for the observed first and second moments of the stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder transitions, could find an explanation in this picture.
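A minimal sketch of the approach described above, using synthetic returns in place of real index data and a naive mean-field inversion (one of several standard inference schemes for pairwise models; the abstract does not specify which the authors use):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns for 5 hypothetical stocks (placeholder for real data):
# independent noise plus a weak common market mode to induce correlations.
returns = rng.normal(0, 0.01, size=(1000, 5)) + 0.003 * rng.normal(size=(1000, 1))

# Rough simplification used in the abstract: binary orientations s_i = +/-1.
s = np.where(returns >= 0, 1.0, -1.0)

m = s.mean(axis=0)            # first moments (mean orientations)
C = np.cov(s, rowvar=False)   # second moments (connected correlations)

# Naive mean-field inversion of the pairwise (Ising) model:
# J approx -(C^{-1}) off-diagonal, h_i approx atanh(m_i) - sum_j J_ij m_j
J = -np.linalg.inv(C)
np.fill_diagonal(J, 0.0)
h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m

print("couplings J:\n", np.round(J, 3))
print("fields h:", np.round(h, 3))
```

With real data one would replace the synthetic `returns` by the binarized daily returns of the stocks in an index; the common-mode term above plays the role of the market-wide correlations the pairwise couplings must reproduce.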
Materials Informatics: Statistical Modeling in Material Science.
Yosipof, Abraham; Shimanovich, Klimentiy; Senderowitz, Hanoch
2016-12-01
Material informatics is engaged with the application of informatic principles to materials science in order to assist in the discovery and development of new materials. Central to the field is the application of data mining techniques and in particular machine learning approaches, often referred to as Quantitative Structure Activity Relationship (QSAR) modeling, to derive predictive models for a variety of materials-related "activities". Such models can accelerate the development of new materials with favorable properties and provide insight into the factors governing these properties. Here we provide a comparison between medicinal chemistry/drug design and materials-related QSAR modeling and highlight the importance of developing new, materials-specific descriptors. We survey some of the most recent QSAR models developed in materials science with focus on energetic materials and on solar cells. Finally we present new examples of material-informatic analyses of solar cells libraries produced from metal oxides using combinatorial material synthesis. Different analyses lead to interesting physical insights as well as to the design of new cells with potentially improved photovoltaic parameters. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Statistical modeling to support power system planning
Staid, Andrea
This dissertation focuses on data-analytic approaches that improve our understanding of power system applications to promote better decision-making. It tackles issues of risk analysis, uncertainty management, resource estimation, and the impacts of climate change. Tools of data mining and statistical modeling are used to bring new insight to a variety of complex problems facing today's power system. The overarching goal of this research is to improve the understanding of the power system risk environment for improved operation, investment, and planning decisions. The first chapter introduces some challenges faced in planning for a sustainable power system. Chapter 2 analyzes the driving factors behind the disparity in wind energy investments among states with a goal of determining the impact that state-level policies have on incentivizing wind energy. Findings show that policy differences do not explain the disparities; physical and geographical factors are more important. Chapter 3 extends conventional wind forecasting to a risk-based focus of predicting maximum wind speeds, which are dangerous for offshore operations. Statistical models are presented that issue probabilistic predictions for the highest wind speed expected in a three-hour interval. These models achieve a high degree of accuracy and their use can improve safety and reliability in practice. Chapter 4 examines the challenges of wind power estimation for onshore wind farms. Several methods for wind power resource assessment are compared, and the weaknesses of the Jensen model are demonstrated. For two onshore farms, statistical models outperform other methods, even when very little information is known about the wind farm. Lastly, chapter 5 focuses on the power system more broadly in the context of the risks expected from tropical cyclones in a changing climate. Risks to U.S. power system infrastructure are simulated under different scenarios of tropical cyclone behavior that may result from climate
Acceleration transforms and statistical kinetic models
International Nuclear Information System (INIS)
LuValle, M.J.; Welsher, T.L.; Svoboda, K.
1988-01-01
For a restricted class of problems a mathematical model of microscopic degradation processes, statistical kinetics, is developed and linked through acceleration transforms to the information which can be obtained from a system in which the only observable sign of degradation is sudden and catastrophic failure. The acceleration transforms were developed in accelerated life testing applications as a tool for extrapolating from the observable results of an accelerated life test to the dynamics of the underlying degradation processes. A particular concern of a physicist attempting to interpret the results of an analysis based on acceleration transforms is determining the physical species involved in the degradation process. These species may be (a) relatively abundant or (b) relatively rare. The main results of this paper are a theorem showing that for an important subclass of statistical kinetic models, acceleration transforms cannot be used to distinguish between cases a and b, and an example showing that in some cases falling outside the restrictions of the theorem, cases a and b can be distinguished by their acceleration transforms.
ASYMPTOTIC COMPARISONS OF U-STATISTICS, V-STATISTICS AND LIMITS OF BAYES ESTIMATES BY DEFICIENCIES
Toshifumi, Nomachi; Hajime, Yamato; Graduate School of Science and Engineering, Kagoshima University:Miyakonojo College of Technology; Faculty of Science, Kagoshima University
2001-01-01
As estimators of estimable parameters, we consider three statistics which are U-statistic, V-statistic and limit of Bayes estimate. This limit of Bayes estimate, called LB-statistic in this paper, is obtained from Bayes estimate of estimable parameter based on Dirichlet process, by letting its parameter tend to zero. For the estimable parameter with non-degenerate kernel, the asymptotic relative efficiencies of LB-statistic with respect to U-statistic and V-statistic and that of V-statistic w...
Atmospheric corrosion: statistical validation of models
International Nuclear Information System (INIS)
Diaz, V.; Martinez-Luaces, V.; Guineo-Cobs, G.
2003-01-01
In this paper we discuss two different methods for validating regression models, applied to corrosion data. One is based on the correlation coefficient and the other is the statistical lack-of-fit test. Both methods are used here to analyse the fit of the bilogarithmic model used to predict corrosion of very low carbon steel substrates in rural and urban-industrial atmospheres in Uruguay. Results for parameters A and n of the bilogarithmic model are reported here. For this purpose, all repeated values were used instead of the usual average values. Modelling was carried out using experimental data for steel substrates exposed under the same initial meteorological conditions (in fact, they were placed in the rack at the same time). Results for the correlation coefficient are compared with the lack-of-fit test at two different significance levels (α=0.01 and α=0.05). Unexpected differences between them are explained and, finally, it is possible to conclude, at least for the studied atmospheres, that the bilogarithmic model does not properly fit the experimental data. (Author) 18 refs
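The lack-of-fit test mentioned above relies on having replicate measurements at each exposure time, exactly the reason the authors keep all repeated values rather than averages. A sketch with hypothetical replicated data (the corrosion values below are illustrative, not from the study):

```python
import numpy as np
from scipy import stats

# Hypothetical replicated corrosion data: exposure time t (years) with
# three repeated corrosion-loss measurements C at each level.
t = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=float)
C = np.array([24, 26, 25, 38, 41, 40, 51, 55, 52, 62, 66, 64], dtype=float)

# Bilogarithmic model: log C = log A + n log t
x, y = np.log(t), np.log(C)
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x
print(f"A = {np.exp(intercept):.2f}, n = {slope:.3f}")

# Pure-error SS comes from the replicates; lack-of-fit SS is the remainder
# of the residual SS after pure error is removed.
levels = np.unique(x)
ss_pe = sum(((y[x == lv] - y[x == lv].mean()) ** 2).sum() for lv in levels)
ss_res = ((y - y_hat) ** 2).sum()
ss_lof = ss_res - ss_pe

df_lof = len(levels) - 2        # number of levels minus fitted parameters
df_pe = len(y) - len(levels)
F = (ss_lof / df_lof) / (ss_pe / df_pe)
p = 1 - stats.f.cdf(F, df_lof, df_pe)
print(f"lack-of-fit F = {F:.2f}, p = {p:.3f}")
```

A small p-value means the bilogarithmic form fails to fit beyond what replicate scatter can explain, which is the kind of evidence the abstract reports for the studied atmospheres.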
Spherical Process Models for Global Spatial Statistics
Jeong, Jaehong
2017-11-28
Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture the spatial and temporal behavior of these global data sets. Though the geodesic distance is the most natural metric for measuring distance on the surface of a sphere, mathematical limitations have compelled statisticians to use the chordal distance to compute the covariance matrix in many applications instead, which may cause physically unrealistic distortions. Therefore, covariance functions directly defined on a sphere using the geodesic distance are needed. We discuss the issues that arise when dealing with spherical data sets on a global scale and provide references to recent literature. We review the current approaches to building process models on spheres, including the differential operator, the stochastic partial differential equation, the kernel convolution, and the deformation approaches. We illustrate realizations obtained from Gaussian processes with different covariance structures and the use of isotropic and nonstationary covariance models through deformations and geographical indicators for global surface temperature data. To assess the suitability of each method, we compare their log-likelihood values and prediction scores, and we end with a discussion of related research problems.
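The geodesic-versus-chordal distinction above is concrete: the chordal distance cuts through the sphere, while the geodesic follows its surface, and the two diverge at large separations. A minimal sketch (Earth radius value assumed for illustration):

```python
import numpy as np

R_EARTH = 6371.0  # assumed mean Earth radius in km

def chordal(lat1, lon1, lat2, lon2, r=R_EARTH):
    """Straight-line distance through the sphere between two points."""
    a = np.radians([lat1, lon1, lat2, lon2])
    p1 = r * np.array([np.cos(a[0]) * np.cos(a[1]),
                       np.cos(a[0]) * np.sin(a[1]), np.sin(a[0])])
    p2 = r * np.array([np.cos(a[2]) * np.cos(a[3]),
                       np.cos(a[2]) * np.sin(a[3]), np.sin(a[2])])
    return np.linalg.norm(p1 - p2)

def geodesic(lat1, lon1, lat2, lon2, r=R_EARTH):
    """Great-circle distance along the surface of the sphere."""
    ch = chordal(lat1, lon1, lat2, lon2, r)
    return 2.0 * r * np.arcsin(ch / (2.0 * r))

# Antipodal points: the two metrics differ most at large separations.
print(geodesic(0, 0, 0, 180))   # half the circumference, ~20015 km
print(chordal(0, 0, 0, 180))    # the diameter, ~12742 km
```

A covariance function that is valid with chordal distance need not remain positive definite when the geodesic distance is substituted, which is why the abstract stresses covariance models defined directly with the geodesic metric.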
A statistical mechanical model of economics
Lubbers, Nicholas Edward Williams
Statistical mechanics pursues low-dimensional descriptions of systems with a very large number of degrees of freedom. I explore this theme in two contexts. The main body of this dissertation explores and extends the Yard Sale Model (YSM) of economic transactions using a combination of simulations and theory. The YSM is a simple interacting model for wealth distributions which has the potential to explain the empirical observation of Pareto distributions of wealth. I develop the link between wealth condensation and the breakdown of ergodicity due to nonlinear diffusion effects which are analogous to the geometric random walk. Using this, I develop a deterministic effective theory of wealth transfer in the YSM that is useful for explaining many quantitative results. I introduce various forms of growth to the model, paying attention to the effect of growth on wealth condensation, inequality, and ergodicity. Arithmetic growth is found to partially break condensation, and geometric growth is found to completely break condensation. Further generalizations of geometric growth with growth inequality show that the system is divided into two phases by a tipping point in the inequality parameter. The tipping point marks the line between systems which are ergodic and systems which exhibit wealth condensation. I explore generalizations of the YSM transaction scheme to arbitrary betting functions to develop notions of universality in YSM-like models. I find that wealth condensation is universal to a large class of models which can be divided into two phases. The first exhibits slow, power-law condensation dynamics, and the second exhibits fast, finite-time condensation dynamics. I find that the YSM, which exhibits exponential dynamics, is the critical, self-similar model which marks the dividing line between the two phases. The final chapter develops a low-dimensional approach to materials microstructure quantification. Modern materials design harnesses complex
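A minimal simulation sketch of the basic Yard Sale Model described above (the transaction rule is the standard one from the YSM literature; the agent count, step count, and betting fraction are illustrative, not the dissertation's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Basic YSM: in each transaction a random pair bets a fixed fraction beta of
# the POORER agent's wealth; the winner is decided by a fair coin flip.
n_agents, n_steps, beta = 100, 100_000, 0.1

w = np.ones(n_agents)  # everyone starts with equal wealth
for _ in range(n_steps):
    i, j = rng.choice(n_agents, size=2, replace=False)
    stake = beta * min(w[i], w[j])
    if rng.random() < 0.5:
        w[i] += stake; w[j] -= stake
    else:
        w[i] -= stake; w[j] += stake

# Wealth condensation: total wealth is conserved, yet an ever-growing share
# accumulates with the richest agent as the dynamics proceed.
print("top-1 wealth share:", round(w.max() / w.sum(), 3))
print("std/mean of wealth:", round(w.std() / w.mean(), 3))
```

Running the loop longer drives the top-1 share toward 1, the condensation and ergodicity-breaking phenomenon the dissertation analyzes; its growth variants modify this update rule.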
Current algebra, statistical mechanics and quantum models
Vilela Mendes, R.
2017-11-01
Results obtained in the past for free boson systems at zero and nonzero temperatures are revisited to clarify the physical meaning of current algebra reducible functionals which are associated to systems with density fluctuations, leading to observable effects on phase transitions. To use current algebra as a tool for the formulation of quantum statistical mechanics amounts to the construction of unitary representations of diffeomorphism groups. Two mathematical equivalent procedures exist for this purpose. One searches for quasi-invariant measures on configuration spaces, the other for a cyclic vector in Hilbert space. Here, one argues that the second approach is closer to the physical intuition when modelling complex systems. An example of application of the current algebra methodology to the pairing phenomenon in two-dimensional fermion systems is discussed.
Statistical model for OCT image denoising
Li, Muxingzi
2017-08-01
Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach exhibits satisfying results in terms of speckle noise reduction as well as edge preservation, at reduced computational cost.
New advances in statistical modeling and applications
Santos, Rui; Oliveira, Maria; Paulino, Carlos
2014-01-01
This volume presents selected papers from the XIXth Congress of the Portuguese Statistical Society, held in the town of Nazaré, Portugal, from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad range of papers in the areas of statistical science, probability and stochastic processes, extremes and statistical applications.
A statistical model for predicting muscle performance
Byerly, Diane Leslie De Caix
The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned in a way that enhances astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
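The key quantity above, the mean magnitude of the AR poles, can be computed from any fitted AR model. A sketch on a surrogate signal (a simple AR(2) process standing in for SEMG data; the least-squares fit below is one standard way to estimate AR coefficients, not necessarily the authors' exact estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_pole_magnitude(x, order=5):
    """Fit an AR(order) model by least squares and return the mean magnitude
    of its poles (roots of the AR characteristic polynomial)."""
    X = np.column_stack([x[order - i:-i] for i in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.mean(np.abs(poles))

# Surrogate "SEMG-like" signal: a stable AR(2) process driven by noise.
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

print("mean AR-pole magnitude:", round(mean_pole_magnitude(x), 3))
```

In the study this statistic, tracked repetition by repetition, is what correlated with Rmax; here it simply summarizes how close the fitted dynamics sit to the unit circle.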
Statistical Model Checking of Rich Models and Properties
DEFF Research Database (Denmark)
Poulsen, Danny Bøgsted
in undecidability issues for the traditional model checking approaches. Statistical model checking has proven itself a valuable supplement to model checking and this thesis is concerned with extending this software validation technique to stochastic hybrid systems. The thesis consists of two parts: the first part … motivates why existing model checking technology should be supplemented by new techniques. It also contains a brief introduction to probability theory and concepts covered by the six papers making up the second part. The first two papers are concerned with developing online monitoring techniques … systems. The fifth paper shows how stochastic hybrid automata are useful for modelling biological systems and the final paper is concerned with showing how statistical model checking is efficiently distributed. In parallel with developing the theory contained in the papers, a substantial part of this work …
International Nuclear Information System (INIS)
Parvan, A.S.
2016-01-01
The Tsallis statistics was applied to describe the experimental data on the transverse momentum distributions of hadrons. We considered the energy dependence of the parameters of the Tsallis-factorized statistics, which is now widely used for the description of the experimental transverse momentum distributions of hadrons, and the Tsallis statistics for the charged pions produced in pp collisions at high energies. We found that the results of the Tsallis-factorized statistics deviate from the results of the Tsallis statistics only at low NA61/SHINE energies when the value of the entropic parameter is close to unity. At higher energies, when the value of the entropic parameter deviates essentially from unity, the Tsallis-factorized statistics satisfactorily recovers the results of the Tsallis statistics. (orig.)
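The convergence noted above, that the two formulations agree once the entropic parameter q moves away from unity but may differ near it, rests on the fact that the Tsallis factor reduces to the Boltzmann factor as q approaches 1. A short numerical check (energy and temperature values are illustrative):

```python
import math

def tsallis_factor(E, T, q):
    """Tsallis weight (1 + (q-1) E/T)^(-1/(q-1)); -> exp(-E/T) as q -> 1."""
    return (1.0 + (q - 1.0) * E / T) ** (-1.0 / (q - 1.0))

E, T = 1.0, 0.15  # illustrative values in GeV
for q in (1.1, 1.01, 1.001):
    print(f"q = {q}: {tsallis_factor(E, T, q):.6f}")
print(f"Boltzmann limit: {math.exp(-E / T):.6f}")
```

The printed sequence approaches the Boltzmann value as q decreases toward 1, the regime where the abstract reports the two statistics deviate from each other at low NA61/SHINE energies.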
Network Data: Statistical Theory and New Models
2016-02-17
and with environmental scientists at JPL and Emory University on retrieval of the aerosol index AOD from NASA MISR remote sensing images for air pollution monitoring and management. Satellite-retrieved Aerosol Optical
Comparison: Binomial model and Black Scholes model
Directory of Open Access Journals (Sweden)
Amir Ahmad Dar
2018-03-01
Full Text Available The Binomial Model and the Black-Scholes Model are popular methods used to solve option pricing problems. The Binomial Model is a simple statistical method, while the Black-Scholes model requires the solution of a stochastic differential equation. Pricing European call and put options is a difficult task for actuaries. The main goal of this study is to compare the Binomial model and the Black-Scholes model using two statistical tests, the t-test and Tukey's test, at one period. Finally, the results showed that there is no significant difference between the means of the European option prices obtained using the two models.
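The two pricing methods being compared can be sketched side by side for a European call. Parameter values below are illustrative, not taken from the cited study, and the binomial side uses the common Cox-Ross-Rubinstein parameterization with a single period:

```python
import math

# Illustrative parameters: spot, strike, rate, volatility, maturity (years).
S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# --- One-period binomial (Cox-Ross-Rubinstein parameterization) ---
u = math.exp(sigma * math.sqrt(T))     # up factor
d = 1.0 / u                            # down factor
p = (math.exp(r * T) - d) / (u - d)    # risk-neutral up probability
call_binomial = math.exp(-r * T) * (p * max(S * u - K, 0.0)
                                    + (1 - p) * max(S * d - K, 0.0))

# --- Black-Scholes closed form ---
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
call_bs = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

print(f"binomial (1 step): {call_binomial:.2f}")
print(f"Black-Scholes:     {call_bs:.2f}")
```

With a single period the binomial price is only a rough approximation of the Black-Scholes value; increasing the number of binomial steps drives the two together, which is the backdrop for the statistical comparison in the abstract.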
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
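A toy sketch of the idea of predicting both mean and variance of a gas distribution, in the spirit of the abstract but not the authors' specific algorithm: a kernel-weighted local mean and a matching local variance over 1-D sensor readings (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D survey: sensor positions and concentration readings from a
# plume centered at x = 4 plus intermittent (gamma-distributed) noise.
x_obs = rng.uniform(0, 10, 200)
c_obs = np.exp(-(x_obs - 4.0) ** 2) + 0.1 * rng.gamma(2.0, size=200)

def predict(x_query, sigma=0.5):
    """Gaussian-kernel estimates of local mean AND local variance."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_obs[None, :]) / sigma) ** 2)
    w_sum = w.sum(axis=1)
    mean = (w * c_obs).sum(axis=1) / w_sum
    var = (w * (c_obs - mean[:, None]) ** 2).sum(axis=1) / w_sum
    return mean, var

xq = np.linspace(0, 10, 11)
mean, var = predict(xq)
print(np.round(mean, 2))
print(np.round(var, 3))
```

The variance map is what makes likelihood-based evaluation possible: a predicted Gaussian (or similar) at each location can score held-out measurements, which is the ground-truth workaround the abstract highlights.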
Quantum statistical model for hot dense matter
International Nuclear Information System (INIS)
Rukhsana Kouser; Tasneem, G.; Saleem Shahzad, M.; Shafiq-ur-Rehman; Nasim, M.H.; Amjad Ali
2015-01-01
In solving numerous applied problems, one needs to know the equation of state, photon absorption coefficient and opacity of the substances employed. We present a code for absorption coefficient and opacity calculation based on a quantum statistical model. A self-consistent method for the calculation of the potential is used. By solving the Schrödinger equation with the self-consistent potential we find the energy spectrum of the quantum mechanical system and the corresponding wave functions. In addition we find the mean occupation numbers of electron states and the average charge state of the substance studied. The main processes of interaction of radiation with matter included in our opacity calculation are photon absorption in spectral lines (bound-bound), photoionization (bound-free), inverse bremsstrahlung (free-free), and Compton and Thomson scattering. The bound-bound line shape function has contributions from natural, Doppler, fine-structure, collisional and Stark broadening. To illustrate the main features of the code and its capabilities, calculations of the average charge state, absorption coefficient, Rosseland and Planck mean and group opacities of aluminum and iron are presented. Results compare satisfactorily with the published data. (authors)
Statistical Agent Based Modelization of the Phenomenon of Drug Abuse
di Clemente, Riccardo; Pietronero, Luciano
2012-07-01
We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are very important for the initiation of drug use, for example rare events in personal experience which occasionally permit overcoming the barrier to drug use. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step towards a realistic description of this phenomenon and can be easily generalized in various directions.
A BRDF statistical model applying to space target materials modeling
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results for three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples in the upper-hemisphere space are given, in which the strength of the optical scattering of different materials can be clearly seen, demonstrating the good ability of the refined model to characterize materials.
Statistical Challenges in Modeling Big Brain Signals
Yu, Zhaoxia
2017-11-01
Brain signal data are inherently big: massive in amount, complex in structure, and high in dimensions. These characteristics impose great challenges for statistical inference and learning. Here we review several key challenges, discuss possible solutions, and highlight future research directions.
Statistical Challenges in Modeling Big Brain Signals
Yu, Zhaoxia; Pluta, Dustin; Shen, Tong; Chen, Chuansheng; Xue, Gui; Ombao, Hernando
2017-01-01
Brain signal data are inherently big: massive in amount, complex in structure, and high in dimensions. These characteristics impose great challenges for statistical inference and learning. Here we review several key challenges, discuss possible solutions, and highlight future research directions.
Statistical Learning Theory: Models, Concepts, and Results
von Luxburg, Ulrike; Schoelkopf, Bernhard
2008-01-01
Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms. In this article we attempt to give a gentle, non-technical overview of the key ideas and insights of statistical learning theory. We target a broad audience, not necessarily machine learning researchers. This paper can serve as a starting point for people who want to get an overview of the field before diving into technical details.
Online Statistical Modeling (Regression Analysis) for Independent Responses
Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus
2017-06-01
Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. Rich varieties of advanced and recent statistical modelling tools are mostly available in open source software (one of them is R). However, these advanced statistical modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical modelling tools are readily available, accessible and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical modelling tools in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible in our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and easier to compare in order to find the most appropriate model for the data.
Comparison of stability statistics for yield in barley (Hordeum vulgare ...
African Journals Online (AJOL)
2010-03-15
… statistics and yield indicated that only the TOP method would be useful for simultaneously selecting for high yield and … metric stability methods: i) they reduce the bias caused by outliers, ii) …
Statistics Based Models for the Dynamics of Chernivtsi Children Disease
Directory of Open Access Journals (Sweden)
Igor G. Nesteruk
2017-10-01
Full Text Available Background. Simple mathematical models of contamination and the SIR model of infection spread were used to simulate the time dynamics of the previously unknown children's disease which occurred in Chernivtsi (Ukraine). The cause of the many cases of alopecia, which began in this city in August 1988, is still not fully clarified. According to the official report of the governmental commission, the last new cases occurred in the middle of November 1988, and the cause of the illness was reported as exogenous chemical intoxication. Later this illness received the name “Chernivtsi chemical disease”. Nevertheless, a significantly increased number of new cases of local alopecia was registered for almost three years and remains unexplained. Objective. The comparison of two different versions of the disease: exogenous chemical intoxication and infection; identification of the parameters of the mathematical models and prediction of the development of the disease. Methods. Analytical solutions of the contamination models and of the SIR model for an epidemic are obtained. The optimal parameter values were found using linear regression. Results. The optimal values of the model parameters were identified using a statistical approach. The calculations showed that the infectious version of the disease is more plausible than the popular contamination one. The possible date of the beginning of the epidemic was estimated. Conclusions. The optimal parameters of the SIR model allow calculating a realistic number of victims and other characteristics of a possible epidemic. They also show that the increased number of cases of local alopecia could be part of the same epidemic as the “Chernivtsi chemical disease”.
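The SIR model invoked above can be sketched in a few lines. The rates, population size, and time step below are illustrative stand-ins, not the parameters the authors fitted to the Chernivtsi data:

```python
import numpy as np

# Minimal SIR sketch with illustrative parameters (NOT fitted to the
# Chernivtsi data): beta = infection rate, gamma = recovery rate, per day.
beta, gamma = 0.4, 0.1
N = 10_000                      # population size
S, I, R = N - 1.0, 1.0, 0.0     # one initial infected case
dt, days = 0.1, 200

history = []
for _ in range(int(days / dt)):  # forward-Euler integration of the SIR ODEs
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    history.append(I)

print("peak infected:", round(max(history)))
print("final susceptible fraction:", round(S / N, 3))
```

Fitting such a model to case counts, as the abstract describes via linear regression on its analytical solution, yields the epidemic's start date and the expected number of victims.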
International Nuclear Information System (INIS)
Nakamoto, Atsushi; Kim, Tonsok; Hori, Masatoshi; Onishi, Hiromitsu; Tsuboyama, Takahiro; Sakane, Makoto; Tatsumi, Mitsuaki; Tomiyama, Noriyuki
2015-01-01
Highlights: • MBIR significantly improves objective image quality. • MBIR reduces the radiation dose by 87.5% without increasing objective image noise. • A half dose will be needed to maintain the subjective image quality. - Abstract: Purpose: To evaluate the image quality of upper abdominal CT images reconstructed with model-based iterative reconstruction (MBIR) in comparison with filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) on scans acquired with various radiation exposure dose protocols. Materials and methods: This prospective study was approved by our institutional review board, and informed consent was obtained from all 90 patients who underwent both control-dose (CD) and reduced-dose (RD) CT of the upper abdomen (unenhanced: n = 45, contrast-enhanced: n = 45). The RD scan protocol was randomly selected from three protocols; Protocol A: 12.5% dose, Protocol B: 25% dose, Protocol C: 50% dose. Objective image noise, signal-to-noise (SNR) ratio for the liver parenchyma, visual image score and lesion conspicuity were compared among CD images of FBP and RD images of FBP, ASIR and MBIR. Results: RD images of MBIR yielded significantly lower objective image noise and higher SNR compared with RD images of FBP and ASIR for all protocols (P < .01) and CD images of FBP for Protocol C (P < .05). Although the subjective image quality of RD images of MBIR was almost acceptable for Protocol C, it was inferior to that of CD images of FBP for Protocols A and B (P < .0083). The conspicuity of the small lesions in RD images of MBIR tended to be superior to that in RD images of FBP and ASIR and inferior to that in CD images for Protocols A and B, although the differences were not significant (P > .0083). Conclusion: Although 12.5%-dose MBIR images (mean size-specific dose estimates [SSDE] of 1.13 mGy) yielded objective image noise and SNR comparable to CD-FBP images, at least a 50% dose (mean SSDE of 4.63 mGy) would be needed to
Integer Set Compression and Statistical Modeling
DEFF Research Database (Denmark)
Larsson, N. Jesper
2014-01-01
Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities...
Task-Driven Comparison of Topic Models.
Alexander, Eric; Gleicher, Michael
2016-01-01
Topic modeling, a method of statistically extracting thematic content from a large collection of texts, is used for a wide variety of tasks within text analysis. Though there are a growing number of tools and techniques for exploring single models, comparisons between models are generally reduced to a small set of numerical metrics. These metrics may or may not reflect a model's performance on the analyst's intended task, and can therefore be insufficient to diagnose what causes differences between models. In this paper, we explore task-centric topic model comparison, considering how we can both provide detail for a more nuanced understanding of differences and address the wealth of tasks for which topic models are used. We derive comparison tasks from single-model uses of topic models, which predominantly fall into the categories of understanding topics, understanding similarity, and understanding change. Finally, we provide several visualization techniques that facilitate these tasks, including buddy plots, which combine color and position encodings to allow analysts to readily view changes in document similarity.
Statistical modelling for social researchers principles and practice
Tarling, Roger
2008-01-01
This book explains the principles and theory of statistical modelling in an intelligible way for the non-mathematical social scientist looking to apply statistical modelling techniques in research. The book also serves as an introduction for those wishing to develop more detailed knowledge and skills in statistical modelling. Rather than present a limited number of statistical models in great depth, the aim is to provide a comprehensive overview of the statistical models currently adopted in social research, in order that the researcher can make appropriate choices and select the most suitable model for the research question to be addressed. To facilitate application, the book also offers practical guidance and instruction in fitting models using SPSS and Stata, the most popular statistical computer software which is available to most social researchers. Instruction in using MLwiN is also given. Models covered in the book include; multiple regression, binary, multinomial and ordered logistic regression, log-l...
Linear Mixed Models in Statistical Genetics
R. de Vlaming (Ronald)
2017-01-01
One of the goals of statistical genetics is to elucidate the genetic architecture of phenotypes (i.e., observable individual characteristics) that are affected by many genetic variants (e.g., single-nucleotide polymorphisms; SNPs). A particular aim is to identify specific SNPs that
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
Consistent dynamical and statistical description of fission and comparison
Energy Technology Data Exchange (ETDEWEB)
Shunuan, Wang [Chinese Nuclear Data Center, Beijing, BJ (China)
1996-06-01
A research survey of the consistent dynamical and statistical description of fission is briefly introduced. The channel theory of fission with diffusive dynamics, based on the Bohr channel theory of fission and the Fokker-Planck equation, and the Kramers-modified Bohr-Wheeler expression according to the Strutinsky method given by P. Frobrich et al. are compared and analyzed. (2 figs.).
Geometric modeling in probability and statistics
Calin, Ovidiu
2014-01-01
This book covers topics of Informational Geometry, a field which deals with the differential geometric study of the manifold probability density functions. This is a field that is increasingly attracting the interest of researchers from many different areas of science, including mathematics, statistics, geometry, computer science, signal processing, physics and neuroscience. It is the authors’ hope that the present book will be a valuable reference for researchers and graduate students in one of the aforementioned fields. This textbook is a unified presentation of differential geometry and probability theory, and constitutes a text for a course directed at graduate or advanced undergraduate students interested in applications of differential geometry in probability and statistics. The book contains over 100 proposed exercises meant to help students deepen their understanding, and it is accompanied by software that is able to provide numerical computations of several information geometric objects. The reader...
A statistical model of future human actions
International Nuclear Information System (INIS)
Woo, G.
1992-02-01
A critical review has been carried out of models of future human actions during the long term post-closure period of a radioactive waste repository. Various Markov models have been considered as alternatives to the standard Poisson model, and the problems of parameterisation have been addressed. Where the simplistic Poisson model unduly exaggerates the intrusion risk, some form of Markov model may have to be introduced. This situation may well arise for shallow repositories, but it is less likely for deep repositories. Recommendations are made for a practical implementation of a computer based model and its associated database. (Author)
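The simple Poisson model the review takes as its baseline can be stated in one line: with a constant intrusion rate, the probability of at least one human intrusion within a time horizon follows directly from the exponential waiting time. The rate and horizon below are hypothetical illustrations, not values from the report:

```python
import math

# Baseline Poisson intrusion model: for a constant rate lam (events
# per year), P(at least one intrusion within t years) = 1 - exp(-lam*t).
# lam and t below are invented for illustration.

def poisson_intrusion_prob(lam, t):
    return 1.0 - math.exp(-lam * t)

p = poisson_intrusion_prob(lam=1e-4, t=10_000)  # 1e-4/yr over 10,000 yr
```

The review's point is that this memoryless model can exaggerate risk when intrusions are correlated over time, which is where the Markov alternatives come in.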
Enhanced surrogate models for statistical design exploiting space mapping technology
DEFF Research Database (Denmark)
Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.
2005-01-01
We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...
Statistical models of shape optimisation and evaluation
Davies, Rhodri; Taylor, Chris
2014-01-01
Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.
Borsboom, D.; Haig, B.D.
2013-01-01
Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
Statistical modelling of traffic safety development
DEFF Research Database (Denmark)
Christens, Peter
2004-01-01
The general purpose of the research was to improve insight into aggregated road safety methodology in Denmark. The aim was to analyse advanced statistical methods that were designed to study developments over time, including the effects of interventions. This aim has been achieved by investigating variations in aggregated Danish traffic accident series and by applying state-of-the-art methodologies to specific case studies. In the period studied there were 6861 injury traffic accidents reported by the police, resulting in 4519 minor injuries, 3946 serious injuries, and 431 fatalities. The thesis comprises an introduction...
A statistical mechanical model for equilibrium ionization
International Nuclear Information System (INIS)
Macris, N.; Martin, P.A.; Pule, J.
1990-01-01
A quantum electron interacts with a classical gas of hard spheres and is in thermal equilibrium with it. The interaction is attractive and the electron can form a bound state with the classical particles. It is rigorously shown that in a well defined low density and low temperature limit, the ionization probability for the electron tends to the value predicted by the Saha formula for thermal ionization. In this regime, the electron is found to be in a statistical mixture of a bound and a free state. (orig.)
Statistical image processing and multidimensional modeling
Fieguth, Paul
2010-01-01
Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over
International Nuclear Information System (INIS)
Tang Jie; Nett, Brian E; Chen Guanghong
2009-01-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
Fluctuations and correlations in statistical models of hadron production
International Nuclear Information System (INIS)
Gorenstein, M. I.
2012-01-01
An extension of the standard concept of the statistical ensembles is suggested. Namely, the statistical ensembles with extensive quantities fluctuating according to an externally given distribution are introduced. Applications in the statistical models of multiple hadron production in high energy physics are discussed.
Analysis and Evaluation of Statistical Models for Integrated Circuits Design
Directory of Open Access Journals (Sweden)
Sáenz-Noval J.J.
2011-10-01
Full Text Available Statistical models for integrated circuits (ICs) allow us to estimate the percentage of acceptable devices in a batch before fabrication. Currently, Pelgrom's is the statistical model most widely accepted in industry; however, it was derived from a micrometer-scale technology, which does not guarantee its reliability for nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each of the stages and design purposes.
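The Pelgrom model mentioned above is an area-scaling law: the standard deviation of a device-parameter mismatch falls with the square root of gate area, sigma(ΔP) = A_P / sqrt(W·L). A minimal sketch, with a made-up technology coefficient:

```python
import math

# Pelgrom mismatch law: sigma(dP) = A_P / sqrt(W * L).
# A_P (the Pelgrom coefficient, here in mV*um) is an invented
# illustration value, not a real process parameter.

def pelgrom_sigma(a_p_mv_um, w_um, l_um):
    """Mismatch standard deviation in mV for gate dimensions in um."""
    return a_p_mv_um / math.sqrt(w_um * l_um)

small = pelgrom_sigma(a_p_mv_um=3.0, w_um=1.0, l_um=1.0)   # 3.0 mV
large = pelgrom_sigma(a_p_mv_um=3.0, w_um=4.0, l_um=4.0)   # 0.75 mV
```

Quadrupling both W and L cuts the mismatch sigma by a factor of four, which is the design trade-off (matching vs. area) that makes the model's validity at nanometric nodes worth questioning.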
Modeling of uncertainties in statistical inverse problems
International Nuclear Information System (INIS)
Kaipio, Jari
2008-01-01
In all real world problems, the models that tie the measurements to the unknowns of interest, are at best only approximations for reality. While moderate modeling and approximation errors can be tolerated with stable problems, inverse problems are a notorious exception. Typical modeling errors include inaccurate geometry, unknown boundary and initial data, properties of noise and other disturbances, and simply the numerical approximations of the physical models. In principle, the Bayesian approach to inverse problems, in which all uncertainties are modeled as random variables, is capable of handling these uncertainties. Depending on the type of uncertainties, however, different strategies may be adopted. In this paper we give an overview of typical modeling errors and related strategies within the Bayesian framework.
Interpretation of commonly used statistical regression models.
Kasza, Jessica; Wolfe, Rory
2014-01-01
A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
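The interpretation question the article addresses can be made concrete for logistic regression: exponentiating a coefficient gives an odds ratio. The coefficients and covariates below are invented for illustration, not from the cited respiratory study:

```python
import math

# Interpreting a fitted logistic model: log-odds(y=1) = b0 + b1*smoker + b2*age.
# The coefficient values are hypothetical, chosen only to illustrate
# the odds-ratio reading of b1.

b0, b1, b2 = -2.0, 0.8, 0.03

def prob(smoker, age):
    logit = b0 + b1 * smoker + b2 * age
    return 1.0 / (1.0 + math.exp(-logit))

# exp(b1) is the multiplicative change in the odds of the outcome
# for smokers vs non-smokers, holding age fixed.
odds_ratio_smoker = math.exp(b1)
```

The key point for interpretation: b1 acts on the log-odds scale, so its effect on probability depends on the other covariates, whereas exp(b1) is a constant odds ratio.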
Statistical modeling and extrapolation of carcinogenesis data
International Nuclear Information System (INIS)
Krewski, D.; Murdoch, D.; Dewanji, A.
1986-01-01
Mathematical models of carcinogenesis are reviewed, including pharmacokinetic models for metabolic activation of carcinogenic substances. Maximum likelihood procedures for fitting these models to epidemiological data are discussed, including situations where the time to tumor occurrence is unobservable. The plausibility of different possible shapes of the dose response curve at low doses is examined, and a robust method for linear extrapolation to low doses is proposed and applied to epidemiological data on radiation carcinogenesis
Plan Recognition using Statistical Relational Models
2014-08-25
corresponding undirected model can be significantly more complex since there is no closed form solution for the maximum-likelihood set of parameters unlike in...algorithm did not scale to larger training sets, and the overall results are still not competitive with BALPs. 5In directed models, a closed form solution...opinions of ARO, DARPA, NSF or any other government agency. References Albrecht DW, Zukerman I, Nicholson AE. Bayesian models for keyhole plan
Multivariate statistical modelling based on generalized linear models
Fahrmeir, Ludwig
1994-01-01
This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...
Petroleum research: a statistical comparison and economic outlook
Energy Technology Data Exchange (ETDEWEB)
Perrodon, A
1965-10-01
The oil wealth of a country or of a sedimentary basin is quite variable. The cumulative quantities drawn to the “useful” sedimentary surface may vary from some 10 cu m of oil to many hundreds of thousands per sq km in the richest petroleum-bearing provinces. These results have been obtained after years of exploratory work, especially drilling, carried out over the last 10 or 15 yr; the effort varies from a few wells to more than one thousand per 10,000 km². This overall effort gives a first approximation of the amount of investment devoted to exploration. A comparison between the results obtained and the means employed reveals different criteria, such as the amount of oil or gas discovered per exploratory well, thus giving an idea of the productive capacity of a basin, of the cost of exploration, and of its variation as prospecting proceeds. Thus, the general evolution of oil exploration shows, successively, a period of expansion, a phase of maturity, and a period of decline. Such comparisons, which enable the various petroliferous provinces to be situated more accurately, must later be completed by more thorough analyses of the geological factors underlying these different riches. (18 refs.)
Statistical Modelling of Extreme Rainfall in Taiwan
L-F. Chu (Lan-Fen); M.J. McAleer (Michael); C-C. Chang (Ching-Chung)
2012-01-01
In this paper, the annual maximum daily rainfall data from 1961 to 2010 are modelled for 18 stations in Taiwan. We fit the rainfall data with stationary and non-stationary generalized extreme value distributions (GEV), and estimate their future behaviour based on the best fitting model.
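Annual-maximum modelling of the kind described above can be sketched with the Gumbel distribution (the GEV with zero shape parameter), fitted by the method of moments and then inverted for a return level. The rainfall values below are synthetic, not the Taiwan station data:

```python
import math
import statistics

# Method-of-moments Gumbel fit (GEV with shape = 0) to synthetic
# annual-maximum rainfall, then a T-year return level.
# Data are invented for illustration.

annual_max_mm = [220, 310, 180, 400, 275, 350, 290, 240, 330, 265]

mean = statistics.mean(annual_max_mm)
std = statistics.stdev(annual_max_mm)
scale = std * math.sqrt(6) / math.pi      # Gumbel scale from std dev
loc = mean - 0.5772 * scale               # 0.5772 ~ Euler-Mascheroni constant

def return_level(t_years):
    """Rainfall level exceeded on average once every t_years."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / t_years))

rl100 = return_level(100)
```

In practice one would fit the full GEV by maximum likelihood (and test a non-stationary location parameter, as the paper does); the moment fit is just the simplest closed-form version of the same idea.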
Statistical Modelling of Extreme Rainfall in Taiwan
L. Chu (LanFen); M.J. McAleer (Michael); C-H. Chang (Chu-Hsiang)
2013-01-01
textabstractIn this paper, the annual maximum daily rainfall data from 1961 to 2010 are modelled for 18 stations in Taiwan. We fit the rainfall data with stationary and non-stationary generalized extreme value distributions (GEV), and estimate their future behaviour based on the best fitting model.
Comparison of de novo assembly statistics of Cucumis sativus L.
Wojcieszek, Michał; Kuśmirek, Wiktor; Pawełkowicz, Magdalena; Pląder, Wojciech; Nowak, Robert M.
2017-08-01
Genome sequencing is at the core of genomic research. With the development of NGS and the falling cost of the procedure, another tight bottleneck remains: genome assembly. Developing the proper tool for this task is essential, as the quality of the genome has an important impact on further research. Here we present a comparison of several de Bruijn graph assemblers tested on C. sativus genomic reads. The assessment shows that the newly developed software, dnaasm, provides better results in terms of quantity and quality: the number of generated sequences is lower by 5-33%, with an up to two-fold higher N50, and quality checks showed that dnaasm generated reliable results. This provides a very strong base for future genomic analysis.
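The N50 statistic used in the comparison above is the contig length L such that contigs of length ≥ L cover at least half of the total assembly. A minimal sketch with toy contig lengths:

```python
# N50: sort contigs longest-first and accumulate until at least half
# of the total assembly length is covered; the contig that crosses
# the halfway point gives the N50. Lengths below are toy values.

def n50(contig_lengths):
    total = sum(contig_lengths)
    acc = 0
    for length in sorted(contig_lengths, reverse=True):
        acc += length
        if acc * 2 >= total:
            return length

lengths = [100, 200, 300, 400, 500]  # toy contigs, 1500 bp total
value = n50(lengths)                 # -> 400
```

A higher N50 with fewer sequences, as reported for dnaasm, means the same genome is covered by longer, less fragmented contigs.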
On the Logical Development of Statistical Models.
1983-12-01
1978). "Modelos con parametros variables en el analisis de series temporales," Qüestiió, 4, 2, 75-87. [25] Seal, H. L. (1967). "The historical ... For example, a classical state-space representation of a simple time series model is: y_t = mu_t + u_t, u_t = phi*u_{t-1} + e_t (2.2), where u_t and e_t are independent normal ... on its past values is displayed in the structural equation. This approach has been particularly useful in time series models. For example, model (2.2
A Noise Robust Statistical Texture Model
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Stegmann, Mikkel Bille; Larsen, Rasmus
2002-01-01
This paper presents a novel approach to the problem of obtaining a low-dimensional representation of texture (pixel intensity) variation present in a training set after alignment using a Generalised Procrustes analysis. We extend the conventional analysis of training textures in the Active Appearance Models segmentation framework. This is accomplished by augmenting the model with an estimate of the covariance of the noise present in the training data. This results in a more compact model maximising the signal-to-noise ratio, thus favouring subspaces rich on signal, but low on noise...
Quantum secure communication models comparison
Directory of Open Access Journals (Sweden)
Georgi Petrov Bebrov
2017-12-01
Full Text Available The paper concerns quantum cryptography, more specifically, quantum secure communication schemes. The main focus is on a comparison between the distinct secure quantum communication models, quantum secure direct communication and deterministic secure quantum communication, in terms of three parameters: resource efficiency, eavesdropping-check efficiency, and security (the degree to which confidentiality is preserved).
Hayslett, H T
1991-01-01
Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
12th Workshop on Stochastic Models, Statistics and Their Applications
Rafajłowicz, Ewaryst; Szajowski, Krzysztof
2015-01-01
This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.
Directory of Open Access Journals (Sweden)
Jose H. Guardiola
2010-01-01
Full Text Available This paper compares the academic performance of students in three similar elementary statistics courses taught by the same instructor, but with the lab component differing among the three. One course is traditionally taught without a lab component; the second has a lab component using scenarios and an extensive use of technology, but without explicit coordination between lab and lecture; and the third uses a lab component with an extensive use of technology that carefully coordinates the lab with the lecture. Extensive use of technology means, in this context, using Minitab software in the lab section, doing homework and quizzes using MyMathlab©, and emphasizing interpretation of computer output during lectures. Initially, an online instrument based on Gardner's multiple intelligences theory is given to students to try to identify students' learning styles and intelligence types as covariates. An analysis of covariance is performed in order to compare differences in achievement. In this study there is no attempt to measure differences in student performance across the different treatments. The purpose of this study is to find indications of associations among variables that support the claim that statistics labs could be associated with superior academic achievement in one of these three instructional environments. Also, this study tries to identify individual student characteristics that could be associated with superior academic performance. This study did not find evidence of any individual student characteristics that could be associated with superior achievement. The response variable was computed as the percentage of correct answers on the three exams during the semester, added together. The results of this study indicate a significant difference across these three different instructional methods, showing significantly higher mean scores for the response variable on students taking the lab component that was carefully coordinated with
Introduction to statistical modelling: linear regression.
Lunt, Mark
2015-07-01
In many studies we wish to assess how a range of variables are associated with a particular outcome and also determine the strength of such relationships so that we can begin to understand how these factors relate to each other at a population level. Ultimately, we may also be interested in predicting the outcome from a series of predictive factors available at, say, a routine clinic visit. In a recent article in Rheumatology, Desai et al. did precisely that when they studied the prediction of hip and spine BMD from hand BMD and various demographic, lifestyle, disease and therapy variables in patients with RA. This article aims to introduce the statistical methodology that can be used in such a situation and explain the meaning of some of the terms employed. It will also outline some common pitfalls encountered when performing such analyses. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
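The simple linear regression the article introduces reduces to least-squares estimates of an intercept and slope for y = a + b·x. A self-contained sketch with made-up data (not the BMD study's variables):

```python
# Ordinary least squares for simple linear regression y = a + b*x:
# b = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2), a = mean_y - b*mean_x.
# The data points are invented for illustration.

def ols(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

a, b = ols([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
```

In a real analysis one would also report standard errors and confidence intervals for a and b, which is where the interpretation issues the article discusses come in.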
International Nuclear Information System (INIS)
Yahya, Noorazrul; Ebert, Martin A.; Bulsara, Max; House, Michael J.; Kennedy, Angel; Joseph, David J.; Denham, James W.
2016-01-01
Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) in predicting urinary symptoms was analyzed using data from 754 participants accrued by TROG 03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2, and longitudinal) with event rates between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized for endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance, with a sample size able to detect differences of ≥ 0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1, with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC > 0.6, while all haematuria endpoints and longitudinal incontinence models produced AUROC < 0.6. Conclusions
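The AUROC comparison described above can be illustrated with a minimal sketch. The rank-sum (Mann-Whitney) identity below is a standard way to compute AUROC; the scores and 30% event rate are synthetic, not the RADAR data:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a random positive outscores a random negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count one half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
y = rng.random(1000) < 0.3                 # ~30% event rate
good = y + rng.normal(0, 1.0, 1000)        # informative score
bad = rng.normal(0, 1.0, 1000)             # uninformative score

auc_good, auc_bad = auroc(good, y), auroc(bad, y)
```

An uninformative score sits near AUROC 0.5, which is why models below 0.6 (as for the haematuria endpoints) are of limited practical value.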
Directory of Open Access Journals (Sweden)
Land Walker H
2011-01-01
Background: When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels can address nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information-embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling, representing a parametric approach. The SL technique comprised a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results: The SL approach is capable of generating odds ratios for main effects and risk-factor interactions that better capture nonlinear relationships between exposure variables and outcome than LR. Conclusions: The integration of SL methods in epidemiology may improve both the understanding and the interpretation of complex exposure/disease relationships.
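A minimal sketch of the parametric (LR) side of this comparison, on synthetic case-control-style data: the model is fitted by Newton-Raphson and the exposure coefficient is exponentiated to give an odds ratio. The true log odds ratio here is set to ln 2 purely for illustration:

```python
import numpy as np

def fit_logistic(X, y, iters=50):
    """Logistic regression via Newton-Raphson; returns coefficients
    including an intercept prepended as the first column."""
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                       # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Synthetic data: a binary exposure that doubles the odds of disease
rng = np.random.default_rng(2)
exposure = rng.integers(0, 2, 5000).astype(float)
logit = -1.0 + np.log(2.0) * exposure
y = (rng.random(5000) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

beta = fit_logistic(exposure[:, None], y)
odds_ratio = np.exp(beta[1])   # epidemiologic interpretation of the coefficient
```

The kernel-based SL method in the paper is modified to yield the same kind of odds-ratio output; this sketch shows only the LR baseline it is compared against.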
Latent domain models for statistical machine translation
Hoàng, C.
2017-01-01
A data-driven approach to model translation suffers from the data mismatch problem and demands domain adaptation techniques. Given parallel training data originating from a specific domain, training an MT system on the data would result in a rather suboptimal translation for other domains. But does
Behavioral and statistical models of educational inequality
DEFF Research Database (Denmark)
Holm, Anders; Breen, Richard
2016-01-01
This paper addresses the question of how students and their families make educational decisions. We describe three types of behavioral model that might underlie decision-making and we show that they have consequences for what decisions are made. Our study thus has policy implications if we wish...
Statistical modelling of fine red wine production
Directory of Open Access Journals (Sweden)
María Rosa Castro
2010-01-01
Producing wine is a very important economic activity in the province of San Juan in Argentina; it is therefore most important to predict production from the quantity of raw material. This work was aimed at obtaining a model relating kilograms of crushed grape to the litres of wine produced. Such a model will be used for predicting future values and confidence intervals for given quantities of crushed grapes. Data from a vineyard in the province of San Juan were used in this work. The sample correlation coefficient was calculated and a scatter diagram was constructed; this indicated a linear relationship between the litres of wine obtained and the kilograms of crushed grape. Two linear models were then adopted, and analysis of variance was carried out because the data came from normal populations having the same variance. The most appropriate model was obtained from this analysis; it was validated with experimental values, and a good fit was obtained.
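A hedged sketch of the kind of simple linear model the abstract describes, using invented vineyard numbers (a yield of roughly 0.7 L of wine per kg of grapes is assumed purely for illustration):

```python
import numpy as np

# Hypothetical vineyard data: litres of wine per kilograms of crushed grape
rng = np.random.default_rng(3)
grapes_kg = rng.uniform(5_000, 50_000, 40)
wine_l = 0.70 * grapes_kg + rng.normal(0, 500, 40)

# Fit the simple linear model  wine = a + b * grapes  by least squares
b, a = np.polyfit(grapes_kg, wine_l, 1)

# Point prediction for 20,000 kg plus a rough 95% interval from residual spread
resid = wine_l - (a + b * grapes_kg)
s = resid.std(ddof=2)
pred = a + b * 20_000
low, high = pred - 1.96 * s, pred + 1.96 * s
```

A proper prediction interval would also account for uncertainty in the fitted coefficients; the residual-based interval above is the simplest approximation.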
Statistical models of global Langmuir mixing
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on surface ocean mixing may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but at significant computational and code-development expense. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
Sampling, Probability Models and Statistical Reasoning -RE ...
Indian Academy of Sciences (India)
random sampling allows data to be modelled with the help of probability ... based on different trials to get an estimate of the experimental error ... if e is indeed the true value of the proportion of defectives in the ...
Statistical Model Checking for Product Lines
DEFF Research Database (Denmark)
ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto
2016-01-01
average cost of products (in terms of the attributes of the products’ features) and the probability of features to be (un)installed at runtime. The product lines must be modelled in QFLan, which extends the probabilistic feature-oriented language PFLan with novel quantitative constraints among features...
A Statistical Model for Energy Intensity
Directory of Open Access Journals (Sweden)
Marjaneh Issapour
2012-12-01
A promising approach to improve scientific literacy with regard to global warming and climate change is using a simulation as part of a science education course. The simulation needs to employ scientific analysis of actual data from internationally accepted and reputable databases to demonstrate the reality of the current climate change situation. One of the most important criteria for using a simulation in a science education course is the fidelity of the model. The realism of the events and consequences modeled in the simulation is significant as well. Therefore, all underlying equations and algorithms used in the simulation must have a real-world scientific basis. The "Energy Choices" simulation is one such simulation. The focus of this paper is the development of a mathematical model for "Energy Intensity" as a part of the overall system dynamics in the "Energy Choices" simulation. This model defines "Energy Intensity" as a function of other independent variables that can be manipulated by users of the simulation. The relationship discovered by this research will be applied to an algorithm in the "Energy Choices" simulation.
Structured Statistical Models of Inductive Reasoning
Kemp, Charles; Tenenbaum, Joshua B.
2009-01-01
Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet…
Comparison of Statistical Methods for Detector Testing Programs
Energy Technology Data Exchange (ETDEWEB)
Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-14
A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
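The "exact" (Clopper-Pearson) interval referred to above can be computed from beta-distribution quantiles. The sketch below assumes SciPy is available, and the 98-successes-in-100-trials example is hypothetical:

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact (Clopper-Pearson) two-sided confidence interval for a binomial
    proportion with x successes in n trials, via beta quantiles."""
    alpha = 1.0 - conf
    lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lower, upper

# e.g. a detector test program observing 98 successful detections in 100 trials
lo, hi = clopper_pearson(98, 100)
```

The resulting interval brackets the sample proportion x/n and is guaranteed to have at least nominal coverage, which is why it is often preferred for acceptance-testing criteria despite being conservative.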
Statistical Analysis and Modelling of Olkiluoto Structures
International Nuclear Information System (INIS)
Hellae, P.; Vaittinen, T.; Saksa, P.; Nummela, J.
2004-11-01
Posiva Oy is carrying out investigations for the disposal of spent nuclear fuel at the Olkiluoto site in SW Finland. The investigations have focused on the central part of the island. The layout design of the entire repository requires characterization of notably larger areas and must rely, at least at the current stage, on borehole information from a rather sparse network and on geophysical soundings providing information outside and between the holes. In this work, the structural data according to the current version of the Olkiluoto bedrock model are analyzed. The bedrock model relies largely on the borehole data, although results of the seismic surveys and, for example, pumping tests are used in determining the orientation and continuation of the structures. The analysis focuses in particular on questions related to the frequency and size of the structures. The structures observed in the boreholes mainly dip gently to the southeast. About 9% of the sample length belongs to structures; the proportion is higher in the upper parts of the rock. The number of fracture and crushed zones seems not to depend greatly on depth, whereas the hydraulic features concentrate in the depth range above -100 m. Below level -300 m, hydraulic conductivity occurs in connection with fractured zones. The hydraulic features especially, but also fracture and crushed zones, often occur in groups. The frequency of the structures (area of structures per total volume) is estimated to be of the order of 1/100 m. The size of the local structures was estimated by calculating the intersection of each zone with the nearest borehole in which the zone has not been detected. Stochastic models using the FracMan software by Golder Associates were generated based on the bedrock model data complemented with the magnetic ground survey data. The seismic surveys (from boreholes KR5, KR13, KR14, and KR19) were used as alternative input data. The generated models were tested by
Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models
International Nuclear Information System (INIS)
Andrade, A.R.; Teixeira, P.F.
2015-01-01
Railway maintenance planners require a predictive model that can assess railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the two main quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial quality parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry for the main Portuguese railway line, Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures leads to a better model fit
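As a hedged sketch of the Gibbs-sampling machinery such Hierarchical Bayesian models rely on, the toy two-level normal model below uses conjugate conditional updates. It omits the spatial (CAR) correlation structure and fixes the variance components, so it is a simplification of the general technique, not the authors' model:

```python
import numpy as np

# Toy HBM:  y_ij ~ N(theta_j, sigma2),  theta_j ~ N(mu, tau2)
# (think of theta_j as a per-track-section deterioration parameter)
rng = np.random.default_rng(7)
J, n = 8, 30
true_theta = rng.normal(0.0, 1.0, J)
data = [rng.normal(t, 0.5, n) for t in true_theta]

sigma2, tau2, mu0, s0 = 0.25, 1.0, 0.0, 10.0   # fixed for brevity
mu, theta = 0.0, np.zeros(J)
draws = []
for it in range(2000):
    # theta_j | rest : precision-weighted combination of data and prior
    prec = n / sigma2 + 1.0 / tau2
    mean = (np.array([d.sum() for d in data]) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | rest : conjugate normal update from the group-level draws
    prec_mu = J / tau2 + 1.0 / s0
    mean_mu = (theta.sum() / tau2 + mu0 / s0) / prec_mu
    mu = rng.normal(mean_mu, np.sqrt(1.0 / prec_mu))
    if it >= 500:                 # discard burn-in
        draws.append(theta.copy())

post_mean = np.mean(draws, axis=0)
```

The posterior means shrink each group estimate toward the global mean, the pooling behaviour that makes HBMs attractive for sparse per-section data.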
Modeling statistical properties of written text.
Directory of Open Access Journals (Sweden)
M Angeles Serrano
Written text is one of the fundamental manifestations of human language, and the study of its universal regularities can give clues about how our brains process information and how we, as a society, organize and share it. Among these regularities, only Zipf's law has been explored in depth. Other basic properties, such as the existence of bursts of rare words in specific documents, have only been studied independently of each other and mainly by descriptive models. As a consequence, there is a lack of understanding of linguistic processes as complex emergent phenomena. Beyond Zipf's law for word frequencies, here we focus on burstiness, Heaps' law describing the sublinear growth of vocabulary size with the length of a document, and the topicality of document collections, which encode correlations within and across documents absent in random null models. We introduce and validate a generative model that explains the simultaneous emergence of all these patterns from simple rules. As a result, we find a connection between the bursty nature of rare words and the topical organization of texts, and identify dynamic word ranking and memory across documents as key mechanisms explaining the nontrivial organization of written text. Our research can have broad implications and practical applications in computer science, cognitive science and linguistics.
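Heaps' sublinear vocabulary growth can be checked on a synthetic corpus drawn from a Zipfian word distribution; the vocabulary size and Zipf exponent below are arbitrary choices for illustration, not the model from the paper:

```python
import numpy as np

# Draw a synthetic "document" from a Zipfian word distribution and measure
# vocabulary growth V(N); Heaps' law predicts V ~ N^beta with beta < 1.
rng = np.random.default_rng(4)
vocab = 50_000
ranks = np.arange(1, vocab + 1)
probs = 1.0 / ranks            # Zipf's law with exponent 1
probs /= probs.sum()

words = rng.choice(vocab, size=100_000, p=probs)

def vocab_size(tokens, n):
    """Number of distinct word types in the first n tokens."""
    return len(set(tokens[:n].tolist()))

v_small = vocab_size(words, 10_000)
v_large = vocab_size(words, 100_000)
growth = v_large / v_small     # well below 10 despite 10x more text
```

A purely random (non-Zipfian) null model would grow its vocabulary almost linearly at these sizes, which is the contrast the abstract draws with real text.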
Advanced data analysis in neuroscience integrating statistical and computational models
Durstewitz, Daniel
2017-01-01
This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. In this way, computational models in neuroscience are not only explanatory frameworks, but become powerfu...
Development of a statistical oil spill model for risk assessment.
Guo, Weijun
2017-11-01
To gain a better understanding of the impacts from potential risk sources, we developed an oil spill model using a probabilistic method, which simulates numerous oil spill trajectories under varying environmental conditions. The statistical results were quantified from hypothetical oil spills under multiple scenarios, including area-affected probability, mean oil slick thickness, and duration of water surface exposure to floating oil. These three sub-indices, together with marine area vulnerability, are merged to compute a composite index characterizing the spatial distribution of risk degree. The integral of the index can be used to identify the overall risk from an emission source. The developed model has been successfully applied in the comparison and selection of an appropriate oil port construction location adjacent to a marine protected area for Phoca largha in China. The results highlight the importance of evaluating candidate sites before project construction, since risk estimates from two adjacent potential sources may differ significantly depending on hydrodynamic conditions and eco-environmental sensitivity. Copyright © 2017. Published by Elsevier Ltd.
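A minimal sketch of the probabilistic idea, substituting a simple drift-plus-diffusion random walk for a real hydrodynamic trajectory model; the grid size, drift, and diffusion values are invented:

```python
import numpy as np

# Many random trajectories from one release point; the fraction of
# trajectories visiting each cell estimates the "area affected probability"
# sub-index used in the composite risk score.
rng = np.random.default_rng(5)
n_traj, n_steps, grid = 500, 200, 50
hit = np.zeros((grid, grid))

for _ in range(n_traj):
    x, y = 25.0, 25.0                     # release point (cell units)
    visited = set()
    for _ in range(n_steps):
        x += 0.05 + rng.normal(0, 0.3)    # mean current drift + diffusion
        y += rng.normal(0, 0.3)
        i = int(np.clip(x, 0, grid - 1))
        j = int(np.clip(y, 0, grid - 1))
        visited.add((i, j))
    for i, j in visited:                  # count each cell once per trajectory
        hit[i, j] += 1

area_affected_prob = hit / n_traj         # per-cell probability in [0, 1]
```

In the full model this probability field would be combined with slick thickness, exposure duration, and vulnerability weights to form the composite index.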
Statistically Based Morphodynamic Modeling of Tracer Slowdown
Borhani, S.; Ghasemi, A.; Hill, K. M.; Viparelli, E.
2017-12-01
Tracer particles are used to study bedload transport in gravel-bed rivers. One of the advantages of using tracer particles is that they allow direct measurement of entrainment rates and their size distributions. The main issue in large-scale studies with tracer particles is the difference between the short-term and long-term behavior of tracer stones. This difference arises because particles undergo vertical mixing or move to less active locations such as bars or even floodplains. For these reasons, the average virtual velocity of tracer particles decreases in time, i.e., the tracer slowdown. Tracer slowdown can thus have a significant impact on the estimation of bedload transport rates or the long-term dispersal of contaminated sediment. The vast majority of morphodynamic models that account for the non-uniformity of the bed material (tracer and non-tracer, in this case) are based on a discrete description of the alluvial deposit. The deposit is divided into two regions: the active layer and the substrate. The active layer is a thin layer in the topmost part of the deposit whose particles can interact with the bed material transport; the substrate is the part of the deposit below it. Due to the discrete representation of the alluvial deposit, active layer models are not able to reproduce tracer slowdown. In this study we model the slowdown of tracer particles with the continuous Parker-Paola-Leclair morphodynamic framework. This continuous, i.e. not layer-based, framework is based on a stochastic description of the temporal variation of bed surface elevation, and of the elevation-specific particle entrainment and deposition. Particle entrainment rates are computed as a function of the flow and sediment characteristics, while particle deposition is estimated with a step-length formulation. Here we present one of the first implementations of the continuum framework at laboratory scale, its validation against
Comparison of Nordic dose models
International Nuclear Information System (INIS)
Thykier-Nielsen, S.
1978-04-01
A comparison is made between the models used in the four Nordic countries, Finland, Norway, Sweden and Denmark, for calculation of concentrations and doses from releases of radioactive material to the atmosphere. The comparison is limited to the near-zone models, i.e. the models for calculation of concentrations and doses within 50 km of the release point, and it comprises the following types of calculation: a. Concentrations of airborne material, b. External gamma doses from a plume, c. External gamma doses from radioactive material deposited on the ground. All models are based on the Gaussian dispersion model (the Gaussian plume model). Unit releases of specific isotopes under specific meteorological conditions are assumed. On the basis of the calculation results from the models, it is concluded that there are no essential differences. The difference between the calculation results only exceeds a factor of 3 in special cases. It thus lies within the known limits of uncertainty for the Gaussian plume model. (author)
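The Gaussian plume model underlying all four Nordic codes has a standard closed form. The sketch below uses the common ground-reflection variant with illustrative parameter values, not the Nordic-study inputs:

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration at crosswind offset y and height z for
    release rate Q, wind speed u, and effective release height H.
    sigma_y, sigma_z are the dispersion parameters at the downwind
    distance of interest (supplied directly here for simplicity)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline, ground-level concentration ~1 km downwind (illustrative values)
c_centre = plume_concentration(Q=1.0, u=5.0, y=0.0, z=0.0,
                               H=50.0, sigma_y=80.0, sigma_z=40.0)
c_offset = plume_concentration(Q=1.0, u=5.0, y=50.0, z=0.0,
                               H=50.0, sigma_y=80.0, sigma_z=40.0)
```

Differences between the national codes come mainly from the choice of sigma curves, deposition, and dose-conversion factors layered on top of this common kernel, which is consistent with the factor-of-3 agreement reported.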
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Statistical mechanics of the cluster Ising model
International Nuclear Information System (INIS)
Smacchia, Pietro; Amico, Luigi; Facchi, Paolo; Fazio, Rosario; Florio, Giuseppe; Pascazio, Saverio; Vedral, Vlatko
2011-01-01
We study a Hamiltonian system describing a three-spin-1/2 clusterlike interaction competing with an Ising-like antiferromagnetic interaction. We compute free energy, spin-correlation functions, and entanglement both in the ground and in thermal states. The model undergoes a quantum phase transition between an Ising phase with a nonvanishing magnetization and a cluster phase characterized by a string order. Any two-spin entanglement is found to vanish in both quantum phases because of a nontrivial correlation pattern. Nevertheless, the residual multipartite entanglement is maximal in the cluster phase and dependent on the magnetization in the Ising phase. We study the block entropy at the critical point and calculate the central charge of the system, showing that the criticality of the system is beyond the Ising universality class.
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
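One common way to parameterize a POD curve is as a logistic function of log concentration; the sketch below assumes that form and invented parameter values, which may differ from the harmonized model described in the paper:

```python
import math

def pod(conc, a, b):
    """Illustrative POD curve: probability of detection as a logistic
    function of log10 concentration.  a and b are fitted parameters;
    the exact link used in a given validation study may differ."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log10(conc))))

# A hypothetical qualitative method with 50% detection at concentration 1
# and a fairly steep response (a = 0, b = 3 chosen for illustration)
curve = [round(pod(c, a=0.0, b=3.0), 3) for c in (0.1, 1.0, 10.0)]
```

Plotting such a curve for a candidate and a reference method on the same axes gives the graphical comparison the POD model is designed to support.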
An improved mixing model providing joint statistics of scalar and scalar dissipation
Energy Technology Data Exchange (ETDEWEB)
Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)
2008-11-15
For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of a passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field lengthscales. (author)
International Nuclear Information System (INIS)
Hendricks, J.S.
2003-01-01
MCNPX is a Monte Carlo N-Particle radiation transport code extending the capabilities of MCNP4C. As with MCNP, MCNPX uses nuclear data tables to transport neutrons, photons, and electrons. Unlike MCNP, MCNPX also uses (1) nuclear data tables to transport protons; (2) physics models to transport 30 additional particle types (deuterons, tritons, alphas, pions, muons, etc.); and (3) physics models to transport neutrons and protons when no tabular data are available or when the data are above the energy range (20 to 150 MeV) where the data tables end. MCNPX can mix and match data tables and physics models throughout a problem. For example, MCNPX can model neutron transport in a bismuth germanate (BGO) particle detector by using data tables for bismuth and oxygen and using physics models for germanium. Also, MCNPX can model neutron transport in UO2, making the best use of physics models and data tables: below 20 MeV, data tables are used; above 150 MeV, physics models are used; between 20 and 150 MeV, data tables are used for oxygen and models are used for uranium. The mix-and-match capability became available with MCNPX2.5.b (November 2002). For the first time, we present here comparisons that calculate radiation transport in materials with various combinations of data tables and physics models. The physics models are poor at low energies (<150 MeV); thus, data tables should be used when available. Our comparisons demonstrate the importance of the mix-and-match capability and indicate how well physics models work in the absence of data tables
Functional summary statistics for the Johnson-Mehl model
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad
The Johnson-Mehl germination-growth model is a spatio-temporal point process model which, among other things, has been used for the description of neurotransmitter datasets. However, for such datasets parametric Johnson-Mehl models fitted by maximum likelihood have not yet been evaluated by means of functional summary statistics. This paper therefore introduces four functional summary statistics adapted to the Johnson-Mehl model, two of them based on second-order properties and the other two on the nuclei-boundary distances for the associated Johnson-Mehl tessellation. The functional summary statistics' theoretical properties are investigated, non-parametric estimators are suggested, and their usefulness for model checking is examined in a simulation study. The functional summary statistics are also used for checking fitted parametric Johnson-Mehl models for a neurotransmitter dataset.
Statistical modelling in biostatistics and bioinformatics selected papers
Peng, Defen
2014-01-01
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...
Knowledge fusion: Comparison of fuzzy curve smoothers to statistically motivated curve smoothers
International Nuclear Information System (INIS)
Burr, T.; Strittmatter, R.B.
1996-03-01
This report describes work during FY 95 that was sponsored by the Department of Energy, Office of Nonproliferation and National Security (NN) Knowledge Fusion (KF) Project. The project team selected satellite sensor data to use as the one main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features, which make it a worthwhile problem to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series that define a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report gives a detailed comparison of two of the forecasting methods (fuzzy forecaster and statistically motivated curve smoothers as forecasters). The two methods are compared on five simulated and five real data sets. One of the five real data sets is satellite sensor data. The conclusion is that the statistically motivated curve smoother is superior on simulated data of the type we studied. The statistically motivated method is also superior on most real data. In defense of the fuzzy-logic motivated methods, we point out that fuzzy-logic methods were never intended to compete with statistical methods on numeric data. Fuzzy logic was developed to handle real-world situations where real data were either unavailable or supplemented with 'expert opinion' or some other sort of linguistic information.
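The forecast-then-alarm scheme the report describes can be illustrated with the simplest statistically motivated smoother, a trailing moving average: forecast each point from the recent past and alarm when the residual exceeds a multiple of the typical residual spread. The window size, threshold, and the rule of replacing flagged points by their forecast are illustrative choices, not the report's.

```python
import statistics

def alarms(series, window=5, k=3.0):
    """Flag indices whose residual from a trailing moving-average forecast
    exceeds k standard deviations of past residuals."""
    clean = list(series)          # flagged points are replaced by their forecast
    flagged, residuals = [], []
    for i in range(window, len(series)):
        forecast = sum(clean[i - window:i]) / window
        resid = series[i] - forecast
        if len(residuals) >= 2:
            sigma = statistics.pstdev(residuals) or 1e-12
            if abs(resid) > k * sigma:
                flagged.append(i)
                clean[i] = forecast  # keep the anomaly out of later forecasts
                continue
        residuals.append(resid)
    return flagged

background = [10.0, 10.2, 9.9, 10.1, 10.0] * 4
spiked = background[:12] + [25.0] + background[13:]
print(alarms(spiked))  # [12]: only the injected spike is flagged
```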
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on the assumptions and limitations of the methods reviewed. Several methods are available to analyse FNI data, which indicates that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know their limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
A Model of Statistics Performance Based on Achievement Goal Theory.
Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.
2003-01-01
Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…
Kolmogorov complexity, pseudorandom generators and statistical models testing
Czech Academy of Sciences Publication Activity Database
Šindelář, Jan; Boček, Pavel
2002-01-01
Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002
Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
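The "reduced variance" criterion mentioned above is the Fano factor F = Var(N)/E[N] of the counts N in a fixed window: a homogeneous Poisson process gives F = 1, while sub-Poissonian (nonclassical) statistics give F < 1. As a hedged sketch, the Monte Carlo check below verifies the Poisson baseline that the fractional models are compared against; the rate, window, and trial count are arbitrary demo values.

```python
import random

def fano_factor(counts):
    """Reduced variance Var(N)/E[N] of a list of event counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

random.seed(0)
rate, window, trials = 4.0, 1.0, 20000
counts = []
for _ in range(trials):
    # Count Poisson arrivals in [0, window) by summing exponential gaps.
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t >= window:
            break
        n += 1
    counts.append(n)

print(round(fano_factor(counts), 2))  # ≈ 1.0 for a Poisson process
```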
International Nuclear Information System (INIS)
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
Improving statistical reasoning theoretical models and practical implications
Sedlmeier, Peter
1999-01-01
This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects that are more stable over time than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.
Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing
2018-01-01
The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between the cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used in the study to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in the description of the ablation zone and its applicability in different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built. The values of the curve fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in the representation of ablation zone in IRE. This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells, and was more accurate compared with the electric field model; (2) the proposed statistical model was able to estimate the value of electric
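The Peleg-Fermi dose-response form named in the abstract can be sketched as follows. In the IRE literature it is commonly written as S(n) = 1 / (1 + exp((n - n_c)/A)), where S is the surviving fraction after n pulses, n_c is the critical pulse number and A a kinetic constant, both field-strength dependent. The parameter values below are invented for the demo, not the paper's fitted values for HeLa cells.

```python
import math

def peleg_fermi_survival(n_pulses, n_c, a):
    """Peleg-Fermi surviving fraction after n_pulses; n_c is the critical
    pulse number, a the kinetic (spread) constant. Both are hypothetical here."""
    return 1.0 / (1.0 + math.exp((n_pulses - n_c) / a))

# Survival drops through 50% exactly at the critical pulse number:
print(round(peleg_fermi_survival(50, n_c=50, a=8.0), 2))   # 0.5
# Far beyond n_c, survival approaches zero (nearly complete ablation):
print(round(peleg_fermi_survival(90, n_c=50, a=8.0), 3))   # 0.007
```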
Statistical validation of normal tissue complication probability models
Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis
2012-01-01
PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage
Some remarks on the statistical model of heavy ion collisions
International Nuclear Information System (INIS)
Koch, V.
2003-01-01
This contribution is an attempt to assess what can be learned from the remarkable success of the statistical model in describing ratios of particle abundances in ultra-relativistic heavy ion collisions.
Eigenfunction statistics for Anderson model with Hölder continuous ...
Indian Academy of Sciences (India)
The Institute of Mathematical Sciences, Taramani, Chennai 600 113, India. Keywords: Anderson model; Hölder continuous measure; Poisson statistics.
LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid
Graciani, R
2009-01-01
The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters, …). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector, while Brunel makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...
A nonextensive statistical model for the nucleon structure function
International Nuclear Information System (INIS)
Trevisan, Luis A.; Mirez, Carlos
2013-01-01
We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions are replaced by the equivalent functions of q-statistics. The parameters of the model are given by an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalization in the nucleon.
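The substitution the abstract describes can be sketched numerically: the Fermi-Dirac occupation 1/(exp((E-μ)/T) + 1) is rebuilt from the Tsallis q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q)), which reduces to exp(x) as q → 1. The values of T, μ and q below are illustrative, not the paper's fitted parameters.

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; falls back to exp(x) in the q -> 1 limit and is
    cut off to 0 where the base [1 + (1-q)x] turns negative."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_fermi_dirac(energy, mu, temperature, q):
    """Fermi-Dirac occupation with exp replaced by its q-analogue."""
    return 1.0 / (q_exp((energy - mu) / temperature, q) + 1.0)

# q -> 1 recovers the ordinary Fermi-Dirac distribution:
standard = 1.0 / (math.exp((0.5 - 0.2) / 0.1) + 1.0)
print(q_fermi_dirac(0.5, 0.2, 0.1, 1.0) == standard)  # True
# q != 1 deforms the occupation at the same (E, mu, T):
print(round(q_fermi_dirac(0.5, 0.2, 0.1, 1.1), 4))    # 0.0275
```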
Statistical models and NMR analysis of polymer microstructure
Statistical models can be used in conjunction with NMR spectroscopy to study polymer microstructure and polymerization mechanisms. Thus, Bernoullian, Markovian, and enantiomorphic-site models are well known. Many additional models have been formulated over the years for additional situations. Typica...
Comparison of Vehicle Choice Models
Energy Technology Data Exchange (ETDEWEB)
Stephens, Thomas S. [Argonne National Lab. (ANL), Argonne, IL (United States); Levinson, Rebecca S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooker, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Liu, Changzheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lin, Zhenhong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Birky, Alicia [Energetics Incorporated, Columbia, MD (United States); Kontou, Eleftheria [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2017-10-31
Five consumer vehicle choice models that give projections of future sales shares of light-duty vehicles were compared by running each model using the same inputs, where possible, for two scenarios. The five models compared — LVCFlex, MA3T, LAVE-Trans, ParaChoice, and ADOPT — have been used in support of the Energy Efficiency and Renewable Energy (EERE) Vehicle Technologies Office in analyses of future light-duty vehicle markets under different assumptions about future vehicle technologies and market conditions. The models give projections of sales shares by powertrain technology. Projections made using common, but not identical, inputs showed qualitative agreement, with the exception of ADOPT. ADOPT estimated somewhat lower advanced vehicle shares, mostly composed of hybrid electric vehicles. Other models projected large shares of multiple advanced vehicle powertrains. Projections of models differed in significant ways, including how different technologies penetrated cars and light trucks. Since the models are constructed differently and take different inputs, not all inputs were identical, but were the same or very similar where possible. Projections by all models were in close agreement only in the first few years. Although the projections from LVCFlex, MA3T, LAVE-Trans, and ParaChoice were in qualitative agreement, there were significant differences in sales shares given by the different models for individual powertrain types, particularly in later years (2030 and later). For example, projected sales shares of conventional spark-ignition vehicles in 2030 for a given scenario ranged from 35% to 74%. Reasons for such differences are discussed, recognizing that these models were not developed to give quantitatively accurate predictions of future sales shares, but to represent vehicles markets realistically and capture the connections between sales and important influences. Model features were also compared at a high level, and suggestions for further comparison
International Nuclear Information System (INIS)
2003-01-01
For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products
International Nuclear Information System (INIS)
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
Thiessen, Erik D
2017-01-05
Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik
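The "computation of sequential statistics (such as transitional probability)" mentioned above is easy to make concrete for word segmentation: TP(x → y) = count(xy) / count(x) over a syllable stream, and TPs dip at word boundaries. The two-word toy lexicon below is an invented example in the spirit of the Saffran-style stimuli, not data from any cited study.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count of bigram xy / count of x in a non-final position."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

random.seed(1)
words = [["pa", "bi"], ["ku", "do"]]  # hypothetical two-syllable "words"
stream = [s for _ in range(200) for s in random.choice(words)]
tp = transitional_probabilities(stream)

print(tp[("pa", "bi")])            # 1.0: within-word transitions are certain
print(tp[("bi", "ku")] < 1.0)      # True: word boundaries are less predictable
```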
Models for probability and statistical inference theory and applications
Stapleton, James H
2007-01-01
This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage on probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...
Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module
Energy Technology Data Exchange (ETDEWEB)
Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup
2017-11-15
We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)
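None of the GAMBIT samplers is reproduced here, but the family they belong to can be illustrated with its simplest member: a random-walk Metropolis MCMC targeting a one-dimensional Gaussian log-likelihood. The target parameters, step size, and chain length are arbitrary demo choices, not anything from ScannerBit.

```python
import math
import random

def log_like(x, mu=2.0, sigma=0.5):
    """Log of a Gaussian likelihood, the toy stand-in for a physics likelihood."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def metropolis(log_f, x0, steps, step_size, rng):
    """Random-walk Metropolis: accept a Gaussian proposal with probability
    min(1, f(prop)/f(x)), done in log space for numerical stability."""
    x, lf = x0, log_f(x0)
    chain = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, step_size)
        lf_prop = log_f(prop)
        if math.log(rng.random()) < lf_prop - lf:
            x, lf = prop, lf_prop
        chain.append(x)
    return chain

rng = random.Random(42)
chain = metropolis(log_like, x0=0.0, steps=20000, step_size=0.8, rng=rng)
burned = chain[2000:]  # discard burn-in before summarizing the posterior
print(round(sum(burned) / len(burned), 1))  # posterior mean, near 2.0
```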
Energy Technology Data Exchange (ETDEWEB)
Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth [Peninsula Radiology Academy, Plymouth (United Kingdom); Vardhanabhuti, Varut [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); University of Hong Kong, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, Pokfulam (China); Stuckey, Colin; Gutteridge, Catherine [Plymouth Hospitals NHS Trust, Plymouth (United Kingdom); Hyde, Christopher [University of Exeter Medical School, St Luke's Campus, Exeter (United Kingdom); Roobottom, Carl [Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth (United Kingdom); Plymouth Hospitals NHS Trust, Plymouth (United Kingdom)
2017-10-15
To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. (orig.)
International Nuclear Information System (INIS)
Hultenmo, Maria; Caisander, Haakan; Mack, Karsten; Thilander-Klang, Anne
2016-01-01
The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70% ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. (authors)
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
A Stochastic Fractional Dynamics Model of Rainfall Statistics
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed, based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect this scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of nonstationary behavior involving occasional anomalously correlated rain events, present a challenge for the model.
Comparison of four statistical and machine learning methods for crash severity prediction.
Iranitalab, Amirfarrokh; Khattak, Aemal
2017-11-01
Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; development of a crash-costs-based approach for comparing crash severity prediction methods; and investigation of the effects of data clustering methods, K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained, and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated on the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performances, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost exactly the opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
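The abstract does not spell out the exact form of the crash-costs-based accuracy measure; one plausible sketch (severity levels and dollar costs below are hypothetical, not the paper's figures) weights each correctly predicted crash by its economic cost, so that missing an expensive severe crash hurts more than missing a cheap one:

```python
def cost_weighted_accuracy(y_true, y_pred, cost):
    """Fraction of total crash cost attributed to correctly predicted crashes."""
    total = sum(cost[t] for t in y_true)
    correct = sum(cost[t] for t, p in zip(y_true, y_pred) if t == p)
    return correct / total

# Hypothetical severity levels: 3 = fatal, 2 = injury, 1 = property damage only
costs = {3: 1_500_000, 2: 80_000, 1: 10_000}
y_true = [3, 1, 1, 2, 1]
y_pred = [3, 1, 2, 2, 1]
```

On this toy data the plain correct prediction rate is 0.8, while the cost-weighted measure is much higher because the single fatal crash dominates the cost and is predicted correctly, illustrating how the two measures can disagree.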
Statistical comparison of two or more SAGE libraries: one tag at a time
Schaaf, Gerben J.; van Ruissen, Fred; van Kampen, Antoine; Kool, Marcel; Ruijter, Jan M.
2008-01-01
Several statistical tests have been introduced for the comparison of serial analysis of gene expression (SAGE) libraries to quantitatively analyze the differential expression of genes. As each SAGE library is only one measurement, the necessary information on biological variation or experimental
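A common per-tag test in this setting (one of several candidates, not necessarily the test the authors adopt) is a two-proportion z-test comparing a single tag's count against the total tag counts of the two libraries:

```python
from math import sqrt, erf

def tag_z_test(x1, n1, x2, n2):
    """Two-sided z-test for one tag: count x1 of n1 total tags in library 1
    versus count x2 of n2 total tags in library 2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))      # standard error under H0
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * P(Z > |z|)
    return z, p_value
```

As the abstract notes, each library is a single measurement, so a test like this captures only sampling variation of tag counts, not biological variation between replicates.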
International Nuclear Information System (INIS)
Ketema, D.J.; Harry, R.J.S.; Zijp, W.L.
1990-09-01
Upon request of the ESARDA working group 'Low enriched uranium conversion and fuel fabrication plants', an interlaboratory comparison was organized to assess the precision and accuracy of the determination of uranium by the potentiometric titration method. This report presents the results of a statistical evaluation of the data from the first phase of this exercise. (author). 9 refs.; 5 figs.; 24 tabs
Variability aware compact model characterization for statistical circuit design optimization
Qiao, Ying; Qian, Kun; Spanos, Costas J.
2012-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
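The linear propagation of variance underlying this methodology can be sketched for a scalar model with uncorrelated parameters: var(f) ≈ Σᵢ (∂f/∂pᵢ)² · var(pᵢ). The numerical-gradient helper below is an illustration of that first-order rule, not the authors' implementation:

```python
def propagate_variance(f, params, variances, h=1e-6):
    """First-order propagation of uncorrelated parameter variance through a
    scalar model f(params): var(f) ~= sum_i (df/dp_i)^2 * var(p_i)."""
    var_out = 0.0
    for i in range(len(params)):
        hi = [q + (h if j == i else 0.0) for j, q in enumerate(params)]
        lo = [q - (h if j == i else 0.0) for j, q in enumerate(params)]
        grad_i = (f(hi) - f(lo)) / (2 * h)   # central-difference sensitivity
        var_out += grad_i ** 2 * variances[i]
    return var_out
```

For a linear model the rule is exact, which gives a convenient sanity check: with f = 2a + 3b and parameter variances 0.04 and 0.01, the output variance is 4·0.04 + 9·0.01 = 0.25.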
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2006-01-01
Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
Speech emotion recognition based on statistical pitch model
Institute of Scientific and Technical Information of China (English)
WANG Zhiping; ZHAO Li; ZOU Cairong
2006-01-01
A modified Parzen-window method, which keeps high resolution at low frequencies and smoothness at high frequencies, is proposed to obtain the statistical model. A gender classification method utilizing this statistical model is then proposed, which achieves 98% accuracy in gender classification when long sentences are processed. After separating male and female voices, the mean and standard deviation of speech training samples with different emotions are used to create the corresponding emotion models. The Bhattacharyya distances between the test sample and the statistical pitch models are then utilized for emotion recognition in speech. Normalization of pitch for male and female voices is also considered, in order to map them into a uniform space. Finally, a speech emotion recognition experiment based on K-Nearest Neighbor achieves a correct rate of 81%, compared with only 73.85% when the traditional parameters are utilized.
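For pitch models summarized by a mean and standard deviation per emotion, as described above, the Bhattacharyya distance between two univariate Gaussians has a closed form; this sketch assumes that simple Gaussian setting (the paper's models may be richer):

```python
from math import log

def bhattacharyya_gauss(mu1, sigma1, mu2, sigma2):
    """Bhattacharyya distance between two univariate Gaussian pitch models:
    D_B = (mu1-mu2)^2 / (4*(v1+v2)) + 0.5*ln((v1+v2) / (2*sigma1*sigma2))."""
    v1, v2 = sigma1 ** 2, sigma2 ** 2
    return (0.25 * (mu1 - mu2) ** 2 / (v1 + v2)
            + 0.5 * log((v1 + v2) / (2 * sigma1 * sigma2)))
```

The distance is zero for identical models, symmetric in its arguments, and grows with the separation of the means, which is what makes it usable as a nearest-model score for emotion recognition.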
Multiple commodities in statistical microeconomics: Model and market
Baaquie, Belal E.; Yu, Miao; Du, Xin
2016-11-01
A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities; to accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, independent of the mainstream formulation.
Adaptive Maneuvering Frequency Method of Current Statistical Model
Institute of Scientific and Technical Information of China (English)
Wei Sun; Yongjian Yang
2017-01-01
The current statistical model (CSM) performs well in maneuvering target tracking. However, a fixed maneuvering frequency degrades the tracking results, causing serious dynamic delay, slow convergence and limited precision when the Kalman filter (KF) algorithm is used. In this study, a new current statistical model and a new Kalman filter are proposed to improve the performance of maneuvering target tracking. The new model, which employs an innovation-dominated subjection function to adaptively adjust the maneuvering frequency, performs better in step-maneuvering target tracking, although a fluctuation phenomenon appears. To address this problem, a new adaptive fading Kalman filter is also proposed, in which the prediction values are amended in time by judgment and amendment rules, so that the tracking precision and the fluctuation phenomenon of the new current statistical model are improved. Simulation results indicate the effectiveness of the new algorithm and its practical guiding significance.
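The adaptive fading and subjection-function details are specific to this paper, but the predict/update cycle they modify is the standard Kalman filter. A minimal constant-velocity sketch (state, noise levels and matrices are my own illustrative choices, not the paper's CSM):

```python
def kf_step(x, P, z, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter with
    state x = [position, velocity] and a noisy position measurement z."""
    # Predict: x = F x with F = [[1, dt], [0, 1]]; P = F P F^T + Q (Q = q*I, simplified)
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with scalar measurement model H = [1, 0]
    S = Pp[0][0] + r                      # innovation covariance
    K = [Pp[0][0] / S, Pp[1][0] / S]      # Kalman gain
    innov = z - xp[0]                     # innovation (the quantity the paper's
                                          # subjection function would monitor)
    xn = [xp[0] + K[0] * innov, xp[1] + K[1] * innov]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn
```

Run on exact position measurements of a target moving at unit velocity, the estimates converge to the true position and velocity; the paper's contribution is in adapting the process model between such cycles.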
Modelling diversity in building occupant behaviour: a novel statistical approach
DEFF Research Database (Denmark)
Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm
2016-01-01
We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...
A classical statistical model of heavy ion collisions
International Nuclear Information System (INIS)
Schmidt, R.; Teichert, J.
1980-01-01
The use of the computer code TRAJEC, which represents the numerical realization of a classical statistical model for heavy ion collisions, is described. The code calculates the results of a classical friction model as well as various multi-differential cross sections for heavy ion collisions. INPUT and OUTPUT information of the code are described. Two examples of data sets are given. (author)
On an uncorrelated jet model with Bose-Einstein statistics
International Nuclear Information System (INIS)
Bilic, N.; Dadic, I.; Martinis, M.
1978-01-01
Starting from the density of states of an ideal Bose-Einstein gas, an uncorrelated jet model with Bose-Einstein statistics has been formulated. The transition to continuum is based on the Touschek invariant measure. It has been shown that in this model average multiplicity increases logarithmically with total energy, while the inclusive distribution shows ln s violation of scaling. (author)
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians
Understanding and forecasting polar stratospheric variability with statistical models
Directory of Open Access Journals (Sweden)
C. Blume
2012-07-01
The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle and volcanoes, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcast to a certain extent, with MLP performing significantly better than the remaining models. However, some variability remains that cannot be statistically hindcast within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a winter 2011/12 with warm and weak vortex conditions; a vortex breakdown is predicted for late January or early February 2012.
Statistical Validation of Engineering and Scientific Models: Background
International Nuclear Information System (INIS)
Hills, Richard G.; Trucano, Timothy G.
1999-01-01
A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band-based, multivariate, sum-of-squares-of-residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made.
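The Monte Carlo propagation method covered in the tutorial can be sketched in a few lines: sample the uncertain inputs, evaluate the model for each sample, and take the sample variance of the outputs. The model and noise levels below are illustrative, not the tutorial's examples:

```python
import random

def mc_variance(f, means, sds, n=20000, seed=1):
    """Monte Carlo estimate of the output variance of model f under
    independent Gaussian uncertainty on its inputs."""
    rng = random.Random(seed)
    ys = [f([rng.gauss(m, s) for m, s in zip(means, sds)]) for _ in range(n)]
    mean = sum(ys) / n
    return sum((y - mean) ** 2 for y in ys) / (n - 1)
```

For a linear model the Monte Carlo estimate should agree with the sensitivity (linear propagation) method: with f = 2a + 3b and input standard deviations 0.2 and 0.1, both give an output variance of 4·0.04 + 9·0.01 = 0.25, up to sampling error.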
Statistical Validation of Normal Tissue Complication Probability Models
Energy Technology Data Exchange (ETDEWEB)
Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)
2012-09-01
Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
Statistical validation of normal tissue complication probability models.
Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis
2012-09-01
To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
Shell model in large spaces and statistical spectroscopy
International Nuclear Information System (INIS)
Kota, V.K.B.
1996-01-01
For many nuclear structure problems of current interest it is essential to work with the shell model in large spaces. Three different approaches are now in use; two of them are: (i) the conventional shell model diagonalization approach, taking into account recent advances in computer technology; and (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large-space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail, substantiated by large-scale shell model calculations. (author)
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
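The paper's homogenization machinery for ecological diffusion is spatio-temporal, but the idea of replacing a rapidly varying coefficient by an effective one has a classic closed form in the simplest case: for steady 1-D diffusion through layers of equal width, the homogenized diffusivity is the harmonic mean of the local values. This toy helper illustrates only that textbook case, not the paper's model:

```python
def homogenized_diffusivity(d_values):
    """Effective diffusion coefficient for steady 1-D diffusion through
    equal-width layers with local diffusivities d_values: the harmonic mean."""
    n = len(d_values)
    return n / sum(1.0 / d for d in d_values)
```

Note that the harmonic mean never exceeds the arithmetic mean, so heterogeneity always reduces the effective diffusivity; this is the kind of computationally cheap surrogate that a change of support can exploit.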
Growth Curve Models and Applications : Indian Statistical Institute
2017-01-01
Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas. There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally r...
Statistical Model of the 2001 Czech Census for Interactive Presentation
Czech Academy of Sciences Publication Activity Database
Grim, Jiří; Hora, Jan; Boček, Pavel; Somol, Petr; Pudil, Pavel
Vol. 26, č. 4 (2010), s. 1-23 ISSN 0282-423X R&D Projects: GA ČR GA102/07/1594; GA MŠk 1M0572 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Interactive statistical model * census data presentation * distribution mixtures * data modeling * EM algorithm * incomplete data * data reproduction accuracy * data mining Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.492, year: 2010 http://library.utia.cas.cz/separaty/2010/RO/grim-0350513.pdf
The Statistical Modeling of the Trends Concerning the Romanian Population
Directory of Open Access Journals (Sweden)
Gabriela OPAIT
2014-11-01
This paper presents the statistical modeling of the resident population of Romania, i.e. the total Romanian population, by means of the "Least Squares Method". A country develops through growth of its population, and hence of its workforce, which is a factor influencing the growth of the Gross Domestic Product (G.D.P.). The "Least Squares Method" is a statistical technique for determining the trend line of best fit for a model.
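The least squares trend line has a simple closed form; a minimal sketch with illustrative (not actual Romanian census) data:

```python
def least_squares_trend(years, values):
    """Closed-form ordinary least squares fit of values = a + b * years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(years, values))
         / sum((x - mx) ** 2 for x in years))
    a = my - b * mx
    return a, b

# Hypothetical population series (millions) over five consecutive years
a, b = least_squares_trend([0, 1, 2, 3, 4], [100.0, 98.0, 96.0, 94.0, 92.0])
```

On data that lie exactly on a line the fit recovers the line, and the sign of the slope b directly summarizes whether the population trend is growing or declining.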
A statistical mechanics model for free-for-all airplane passenger boarding
Energy Technology Data Exchange (ETDEWEB)
Steffen, Jason H.; /Fermilab
2008-08-01
I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty.
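The seat-selection rule described, Boltzmann statistics over the unoccupied seats, can be sketched directly; the seat energies and temperature below are hypothetical stand-ins for the traveler-preference values the model assigns:

```python
from math import exp
import random

def choose_seat(energies, occupied, T=1.0, rng=random):
    """Pick an unoccupied seat index with Boltzmann probability ~ exp(-E/T).

    `energies` lists one preference energy per seat; `occupied` is the set of
    seats already taken, which is how the 'partition function' seen by the
    remaining passengers gets modified as boarding proceeds.
    """
    free = [i for i in range(len(energies)) if i not in occupied]
    weights = [exp(-energies[i] / T) for i in free]
    return rng.choices(free, weights=weights, k=1)[0]
```

At low temperature the choice concentrates on the lowest-energy (most preferred) free seat; at high temperature it approaches a uniform pick, so T controls how strongly preferences shape the boarding pattern.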
A statistical mechanics model for free-for-all airplane passenger boarding
Steffen, Jason H.
2008-12-01
I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics, where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty.
A statistical mechanics model for free-for-all airplane passenger boarding
International Nuclear Information System (INIS)
Steffen, Jason H.; Fermilab
2008-01-01
I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty
Applied systems ecology: models, data, and statistical methods
Energy Technology Data Exchange (ETDEWEB)
Eberhardt, L L
1976-01-01
In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.
Analyzing sickness absence with statistical models for survival data
DEFF Research Database (Denmark)
Christensen, Karl Bang; Andersen, Per Kragh; Smith-Hansen, Lars
2007-01-01
OBJECTIVES: Sickness absence is the outcome in many epidemiologic studies and is often based on summary measures such as the number of sickness absences per year. In this study the use of modern statistical methods was examined by making better use of the available information. Since sickness absence data deal with events occurring over time, the use of statistical models for survival data has been reviewed, and the use of frailty models has been proposed for the analysis of such data. METHODS: Three methods for analyzing data on sickness absences were compared using a simulation study involving the following: (i) Poisson regression using a single outcome variable (number of sickness absences), (ii) analysis of time to first event using the Cox proportional hazards model, and (iii) frailty models, which are random effects proportional hazards models. Data from a study of the relation...
Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks
Directory of Open Access Journals (Sweden)
Luciano Pivoto Specht
2007-03-01
It is of great importance to know a binder's viscosity in order to perform the handling, mixing, application, and compaction of asphalt mixes in highway surfacing. This paper presents the results of viscosity measurements on asphalt-rubber binders prepared in the laboratory. The binders were prepared by varying the rubber content, rubber particle size, and the duration and temperature of mixing, all following a statistical design plan. Statistical analysis and artificial neural networks were used to create mathematical models for predicting the binders' viscosity. A comparison between experimental data and results simulated with the generated models showed better performance of the neural network analysis than of the statistical models. The results indicated that rubber content and mixing duration have the major influence on the observed viscosity within the considered interval of parameter variation.
A Review of Modeling Bioelectrochemical Systems: Engineering and Statistical Aspects
Directory of Open Access Journals (Sweden)
Shuai Luo
2016-02-01
Bioelectrochemical systems (BES) are promising technologies to convert organic compounds in wastewater to electrical energy through a series of complex physical-chemical, biological, and electrochemical processes. Representative BES such as microbial fuel cells (MFCs) have been studied and advanced for energy recovery. Substantial experimental and modeling efforts have been made to investigate the processes involved in electricity generation and to improve BES performance for practical applications. However, many parameters can potentially affect these processes, making optimization of system performance difficult to achieve. Mathematical models, including engineering models and statistical models, are powerful tools for understanding the interactions among the parameters in BES and for optimizing BES configuration and operation. This review paper introduces and discusses recent developments in BES modeling from engineering and statistical perspectives, including analysis of model structure, description of application cases, and sensitivity analysis of various parameters. It is expected to serve as a compass for integrating engineering and statistical modeling strategies to improve model accuracy for BES development.
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the need for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
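The minimum density power divergence idea can be sketched in the simpler binary logistic case (the data, the tuning value alpha = 0.5, and every numeric detail below are illustrative assumptions, not taken from the article):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic binary logistic data with a few gross label outliers
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 2.0])
prob = 1 / (1 + np.exp(-X @ beta_true))
y = (rng.random(n) < prob).astype(float)
y[:20] = 1 - y[:20]  # contaminate 5% of the labels

def dpd_loss(beta, alpha):
    """Density power divergence objective; alpha -> 0 recovers the MLE."""
    p = 1 / (1 + np.exp(-X @ beta))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    f_obs = np.where(y == 1, p, 1 - p)            # model prob. of observed label
    norm = p**(1 + alpha) + (1 - p)**(1 + alpha)  # sum_j f(j|x)^(1+alpha)
    return np.mean(norm - (1 + 1 / alpha) * f_obs**alpha)

# Minimum density power divergence estimate with alpha = 0.5
mdpde = minimize(dpd_loss, x0=np.zeros(2), args=(0.5,)).x
print("robust MDPDE estimate:", mdpde)
```

Letting alpha tend to 0 recovers the ordinary maximum likelihood estimator; larger alpha trades some efficiency for robustness against mislabeled observations, mirroring the trade-off the article studies for the polytomous model.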
Metamodel comparison and model comparison for safety assurance
Luo, Y.; Engelen, L.J.P.; Brand, van den M.G.J.; Bondavelli, A.; Ceccarelli, A.; Ortmeier, F.
2014-01-01
In safety-critical domains, conceptual models are created in the form of metamodels using different concepts from possibly overlapping domains. Comparison between those conceptual models can facilitate the reuse of models from one domain to another. This paper describes the mappings detected when
Simple classical model for Fano statistics in radiation detectors
Energy Technology Data Exchange (ETDEWEB)
Jordan, David V. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)], E-mail: David.Jordan@pnl.gov; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; Rene Corrales, L.; Peurrung, Anthony J. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)
2008-02-01
A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ('bathtub') with a small dipping implement ('shot or whiskey glass'). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the 'Fano effect'). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, 'microscopic' physical models of detector material response to ionizing radiation is discussed.
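The bathtub-and-glass partitioning process lends itself to a short Monte Carlo sketch (the normal single-carrier energy distribution and every numeric value below are assumptions chosen for illustration, not parameters from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def count_carriers(e_dep, e_mean, e_sd):
    """Dip the 'glass' repeatedly until the deposited energy is used up."""
    n, remaining = 0, e_dep
    while True:
        e = max(rng.normal(e_mean, e_sd), 1e-6)  # one single-carrier energy
        if e > remaining:
            break
        remaining -= e
        n += 1
    return n

e_dep, e_mean, e_sd = 500.0, 3.6, 1.0
counts = np.array([count_carriers(e_dep, e_mean, e_sd) for _ in range(3000)])
fano = counts.var() / counts.mean()
print(f"simulated Fano factor:        {fano:.3f}")
print(f"Fano's prescription (sd/m)^2: {(e_sd / e_mean) ** 2:.3f}")
```

Energy conservation (the fixed e_dep) is what pushes the variance of the carrier count below the Poisson value of 1, which is exactly the mechanism the model isolates.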
Development of 3D statistical mandible models for cephalometric measurements
International Nuclear Information System (INIS)
Kim, Sung Goo; Yi, Won Jin; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Hong, Helen; Yoo, Ji Hyun
2012-01-01
The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a 3D surface model, which was parameterized to establish correspondence between different individual surfaces. The principal component analysis (PCA) applied to all mandible shapes produced a mean model and characteristic models of variation. The cephalometric parameters were measured directly from the mean models to evaluate the 3D shape models. The means of the measured parameters were compared with those from other conventional studies. The male and female 3D statistical mean models were developed from 23 individual mandibles, respectively. The male and female characteristic shapes of variation produced by PCA showed a large variability included in the individual mandibles. The cephalometric measurements from the developed models were very close to those from some conventional studies. We described the construction of 3D mandibular shape models and presented the application of the 3D mandibular template in cephalometric measurements. Optimal reference models determined from variations produced by PCA could be used for craniofacial patients with various types of skeletal shape.
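A statistical shape model of this kind reduces to PCA on corresponded surface coordinates; a minimal sketch on synthetic data (the shapes, point counts, and noise level are invented stand-ins for the 23 segmented mandibles):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 23 corresponded surfaces: n_subjects x (3 * n_points)
n_subjects, n_points = 23, 500
base = rng.normal(size=3 * n_points)
weights = rng.normal(size=(n_subjects, 2))
modes_true = rng.normal(size=(2, 3 * n_points))
shapes = base + weights @ modes_true + 0.01 * rng.normal(size=(n_subjects, 3 * n_points))

# PCA: mean model plus characteristic modes of variation
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)
print(f"variance explained by first two modes: {explained[:2].sum():.3f}")

# A new plausible shape: mean model plus a weighted leading mode
new_shape = mean_shape + 2.0 * S[0] / np.sqrt(n_subjects) * Vt[0]
```

In practice the rows would be the parameterized mandible surfaces after alignment; plausible new shapes are generated by adding weighted leading modes to the mean model, which is how the optimal reference models above would be explored.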
Statistical sampling and modelling for cork oak and eucalyptus stands
Paulo, M.J.
2002-01-01
This thesis focuses on the use of modern statistical methods to solve problems of sampling, optimal cutting time, and agricultural modelling in Portuguese cork oak and eucalyptus stands. The results are contained in five chapters that have been submitted for publication.
Two-dimensional models in statistical mechanics and field theory
International Nuclear Information System (INIS)
Koberle, R.
1980-01-01
Several features of two-dimensional models in statistical mechanics and field theory, such as lattice quantum chromodynamics, Z(N), Gross-Neveu, and CP(N-1) models, are discussed. The problems of confinement and dynamical mass generation are also analyzed. (L.C.) [pt]
Statistical Modeling of Energy Production by Photovoltaic Farms
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Pelikán, Emil; Krč, Pavel; Eben, Kryštof; Musílek, P.
2011-01-01
Vol. 5, No. 9 (2011), pp. 785-793, ISSN 1934-8975. Grants - others: GA AV ČR (CZ) M100300904. Institutional research plan: CEZ:AV0Z10300504. Keywords: electrical energy; solar energy; numerical weather prediction model; nonparametric regression; beta regression. Subject RIV: BB - Applied Statistics, Operational Research
Model selection for contingency tables with algebraic statistics
Krampe, A.; Kuhnt, S.; Gibilisco, P.; Riccimagno, E.; Rogantin, M.P.; Wynn, H.P.
2009-01-01
Goodness-of-fit tests based on chi-square approximations are commonly used in the analysis of contingency tables. Results from algebraic statistics combined with MCMC methods provide alternatives to the chi-square approximation. However, within a model selection procedure usually a large number of
Syntactic discriminative language model rerankers for statistical machine translation
Carter, S.; Monz, C.
2011-01-01
This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between Statistical
Using statistical compatibility to derive advanced probabilistic fatigue models
Czech Academy of Sciences Publication Activity Database
Fernández-Canteli, A.; Castillo, E.; López-Aenlle, M.; Seitl, Stanislav
2010-01-01
Vol. 2, No. 1 (2010), pp. 1131-1140, E-ISSN 1877-7058. [Fatigue 2010, Prague, 06-11 June 2010.] Institutional research plan: CEZ:AV0Z20410507. Keywords: fatigue models; statistical compatibility; functional equations. Subject RIV: JL - Materials Fatigue, Friction Mechanics
Statistical properties of the nuclear shell-model Hamiltonian
International Nuclear Information System (INIS)
Dias, H.; Hussein, M.S.; Oliveira, N.A. de
1986-01-01
The statistical properties of a realistic nuclear shell-model Hamiltonian are investigated in sd-shell nuclei. The probability distribution of the basis-vector amplitude is calculated and compared with the Porter-Thomas distribution. The relevance of the results to the calculation of the giant resonance mixing parameter is pointed out. (Author) [pt]
Statistical shape model with random walks for inner ear segmentation
DEFF Research Database (Denmark)
Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma
2016-01-01
... is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...
Hierarchical modelling for the environmental sciences statistical methods and applications
Clark, James S
2006-01-01
New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.
A Statistical Model for the Estimation of Natural Gas Consumption
Czech Academy of Sciences Publication Activity Database
Vondráček, Jiří; Pelikán, Emil; Konár, Ondřej; Čermáková, Jana; Eben, Kryštof; Malý, Marek; Brabec, Marek
2008-01-01
Vol. 85, No. 5 (2008), pp. 362-370, ISSN 0306-2619. R&D Projects: GA AV ČR 1ET400300513. Institutional research plan: CEZ:AV0Z10300504. Keywords: nonlinear regression; gas consumption modeling. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 1.371, year: 2008
Statistical learning modeling method for space debris photometric measurement
Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen
2016-03-01
Photometric measurement is an important way to characterize space debris, but present methods of photometric measurement impose many constraints on the star image and need complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
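The train/test split described above amounts to fitting a measurement model on known stars by least squares and scoring it on held-out stars; a hedged sketch with simulated magnitudes (the linear zero-point model and all numbers are assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated known stars: instrumental magnitude vs catalog magnitude
n = 60
catalog_mag = rng.uniform(6, 12, n)
zero_point, scale = 21.5, 1.0  # assumed linear measurement model
instr_mag = (catalog_mag - zero_point) / scale + rng.normal(0, 0.05, n)

train, test = np.arange(n) < 40, np.arange(n) >= 40

# Least-squares fit of the measurement model on the training stars
A = np.column_stack([instr_mag[train], np.ones(train.sum())])
coef, *_ = np.linalg.lstsq(A, catalog_mag[train], rcond=None)

# Accuracy evaluated on the held-out testing stars
pred = coef[0] * instr_mag[test] + coef[1]
rms = np.sqrt(np.mean((pred - catalog_mag[test]) ** 2))
print(f"test RMS error: {rms:.3f} mag")
```

The held-out RMS plays the role of the measurement accuracy the abstract reports; the actual method additionally exploits the global consistency of the star image to reject noisy stars.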
Workshop on Model Uncertainty and its Statistical Implications
1988-01-01
In this book problems related to the choice of models in such diverse fields as regression, covariance structure, time series analysis and multinomial experiments are discussed. The emphasis is on the statistical implications for model assessment when the assessment is done with the same data that generated the model. This is a problem of long standing, notorious for its difficulty. Some contributors discuss this problem in an illuminating way. Others, and this is a truly novel feature, investigate systematically whether sample re-use methods like the bootstrap can be used to assess the quality of estimators or predictors in a reliable way given the initial model uncertainty. The book should prove to be valuable for advanced practitioners and statistical methodologists alike.
Statistical models describing the energy signature of buildings
DEFF Research Database (Denmark)
Bacher, Peder; Madsen, Henrik; Thavlov, Anders
2010-01-01
Approximately one third of the primary energy production in Denmark is used for heating in buildings. Therefore efforts to accurately describe and improve the energy performance of the building mass are very important. For this purpose statistical models describing the energy signature of a building... or varying energy prices. The paper gives an overview of statistical methods and applied models based on experiments carried out in FlexHouse, an experimental building in SYSLAB, Risø DTU. The models are of different complexity and can provide estimates of physical quantities such as UA-values, time constants of the building, and other parameters related to the heat dynamics. A method for selecting the most appropriate model for a given building is outlined, and finally a perspective on the applications is given. Acknowledgements to the Danish Energy Saving Trust and the Interreg IV ``Vind i...
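The UA-value mentioned above can be estimated from a steady-state energy signature by regressing heating power on the indoor-outdoor temperature difference; a minimal sketch on simulated data (the UA-value, temperatures, and noise level are assumptions, and the paper's actual models are dynamic and more detailed):

```python
import numpy as np

rng = np.random.default_rng(4)

# Steady-state energy signature: heat power = UA * (T_indoor - T_outdoor)
ua_true = 120.0  # W/K, assumed for illustration
t_in, t_out = 21.0, rng.uniform(-5, 15, 200)
power = ua_true * (t_in - t_out) + rng.normal(0, 50, 200)

dT = t_in - t_out
ua_hat = np.sum(dT * power) / np.sum(dT * dT)  # least squares through origin
print(f"estimated UA-value: {ua_hat:.1f} W/K")
```

The paper's grey-box models extend this static regression with time constants capturing the building's heat dynamics, which a simple slope estimate cannot resolve.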
Improved air ventilation rate estimation based on a statistical model
International Nuclear Information System (INIS)
Brabec, M.; Jilek, K.
2004-01-01
A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
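A minimal state-space sketch of the idea: track log-concentration with a local linear trend Kalman filter, so the filtered slope yields a (possibly time-varying) ventilation rate. All numbers and the simulated decay below are assumptions, not the authors' data or exact model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated tracer-gas decay: concentration falls at ventilation rate lam (1/h)
dt, lam_true, n = 0.1, 1.5, 120
c = 1000.0 * np.exp(-lam_true * dt * np.arange(n))
y = np.log(c) + rng.normal(0, 0.02, n)  # noisy log-concentration

# Local linear trend Kalman filter: state = [log-level, change per step]
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-7])  # small process noise lets the slope vary slowly
R = np.array([[0.02**2]])
x, P = np.array([y[0], 0.0]), np.eye(2)

for obs in y[1:]:
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.array([obs]) - H @ x)        # update state
    P = (np.eye(2) - K @ H) @ P

lam_hat = -x[1] / dt  # filtered slope -> ventilation rate
print(f"estimated ventilation rate: {lam_hat:.2f} per hour")
```

Because the slope is itself a state with process noise, the same filter handles a time-varying ventilation rate, which is the flexibility the abstract highlights.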
Bayesian Nonparametric Statistical Inference for Shock Models and Wear Processes.
1979-12-01
...also note that the results in Section 2 do not depend on the support of F.) This shock model has been studied by Esary, Marshall and Proschan (1973)... Barlow and Proschan (1975), among others. The analogue of the shock model in risk and actuarial analysis has been given by Bühlmann (1970, Chapter 2)... Mathematical Statistics, Vol. 4, pp. 894-906. Billingsley, P. (1968), Convergence of Probability Measures, John Wiley, New York. Bühlmann, H. (1970)...
Statistical and RBF NN models : providing forecasts and risk assessment
Marček, Milan
2009-01-01
Forecast accuracy of economic and financial processes is a popular measure for quantifying the risk in decision making. In this paper, we develop forecasting models based on statistical (stochastic) methods, sometimes called hard computing, and on a soft method using granular computing. We consider the accuracy of forecasting models as a measure for risk evaluation. It is found that the risk estimation process based on soft methods is simplified and less critical to the question w...
A Statistical Model for Synthesis of Detailed Facial Geometry
Golovinskiy, Aleksey; Matusik, Wojciech; Pfister, Hanspeter; Rusinkiewicz, Szymon; Funkhouser, Thomas
2006-01-01
Detailed surface geometry contributes greatly to the visual realism of 3D face models. However, acquiring high-resolution face geometry is often tedious and expensive. Consequently, most face models used in games, virtual reality, or computer vision look unrealistically smooth. In this paper, we introduce a new statistical technique for the analysis and synthesis of small three-dimensional facial features, such as wrinkles and pores. We acquire high-resolution face geometry for people across ...
Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling
International Nuclear Information System (INIS)
Liehn, J.C.; Hannequin, P.; Valeyre, J.
1989-01-01
This is a review of parametric imaging methods in nuclear medicine. A parametric image is an image in which each pixel value is a function of the values of the same pixel across an image sequence. The local model method fits each pixel's time-activity curve with a model whose parameter values form the parametric images. The global model method models the changes between two images and is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods, and the statistical properties of the images are discussed. The analysis of one or more parametric images is performed using 1D or 2D histograms. Statistically significant parametric images (images of significant variances, amplitudes, and differences) are also proposed [fr]
A method for statistical comparison of data sets and its uses in analysis of nuclear physics data
International Nuclear Information System (INIS)
Bityukov, S.I.; Smirnova, V.V.; Krasnikov, N.V.; Maksimushkina, A.V.; Nikitenko, A.N.
2014-01-01
The authors propose a method for the statistical comparison of two data sets, based on the statistical comparison of histograms. As an estimator of the quality of the decision made, they propose using a quantity that can be interpreted as the probability that the decision (that the data sets differ) is correct [ru]
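One simple flavour of histogram-based comparison is to combine bin-by-bin significances of Poisson count differences into a single homogeneity statistic; a hedged sketch (this is a generic construction, not necessarily the authors' exact estimator):

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

rng = np.random.default_rng(6)

# Two binned data sets (event counts per bin), e.g. from nuclear spectra
h1 = rng.poisson(100, size=20)
h2 = rng.poisson(110, size=20)  # second set drawn from a shifted mean

# Bin-by-bin significance of the Poisson count difference, combined
# into a single homogeneity statistic
s = (h1 - h2) / np.sqrt(h1 + h2)  # approx. standard normal under H0
stat = float(np.sum(s**2))        # ~ chi-square, dof = number of bins
p_same = chi2_dist.sf(stat, df=len(h1))
print(f"probability of such a difference if the sets agree: {p_same:.3f}")
```

One minus this p-value plays the role of the abstract's "probability that the decision (that the data sets differ) is correct."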
Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall
2016-01-01
Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) a classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center, over 10 years. The main outcomes of our comparison are the cervical cancer risk assessments produced by the three approaches. Our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, interpretation of results, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
WE-A-201-02: Modern Statistical Modeling
Energy Technology Data Exchange (ETDEWEB)
Niemierko, A.
2016-06-15
Chris Marshall: Memorial Introduction Donald Edmonds Herbert Jr., or Don to his colleagues and friends, exemplified the “big tent” vision of medical physics, specializing in Applied Statistics and Dynamical Systems theory. He saw, more clearly than most, that “Making models is the difference between doing science and just fooling around [ref Woodworth, 2004]”. Don developed an interest in chemistry at school by “reading a book” - a recurring theme in his story. He was awarded a Westinghouse Science scholarship and attended the Carnegie Institute of Technology (later Carnegie Mellon University) where his interest turned to physics and led to a BS in Physics after transfer to Northwestern University. After (voluntary) service in the Navy he earned his MS in Physics from the University of Oklahoma, which led him to Johns Hopkins University in Baltimore to pursue a PhD. The early death of his wife led him to take a salaried position in the Physics Department of Colorado College in Colorado Springs so as to better care for their young daughter. There, a chance invitation from Dr. Juan del Regato to teach physics to residents at the Penrose Cancer Hospital introduced him to Medical Physics, and he decided to enter the field. He received his PhD from the University of London (UK) under Prof. Joseph Rotblat, where I first met him, and where he taught himself statistics. He returned to Penrose as a clinical medical physicist, also largely self-taught. In 1975 he formalized an evolving interest in statistical analysis as Professor of Radiology and Head of the Division of Physics and Statistics at the College of Medicine of the University of South Alabama in Mobile, AL where he remained for the rest of his career. He also served as the first Director of their Bio-Statistics and Epidemiology Core Unit working in part on a sickle-cell disease. After retirement he remained active as Professor Emeritus. Don served for several years as a consultant to the Nuclear
Computer modelling of statistical properties of SASE FEL radiation
International Nuclear Information System (INIS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1997-01-01
The paper describes an approach to computer modelling of statistical properties of the radiation from self amplified spontaneous emission free electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, distribution of the fluctuations of the instantaneous radiation power, distribution of the energy in the electron bunch, distribution of the radiation energy after monochromator installed at the FEL amplifier exit and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility being under construction at DESY
Stochastic geometry, spatial statistics and random fields models and algorithms
2015-01-01
Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.
GIA Model Statistics for GRACE Hydrology, Cryosphere, and Ocean Science
Caron, L.; Ivins, E. R.; Larour, E.; Adhikari, S.; Nilsson, J.; Blewitt, G.
2018-03-01
We provide a new analysis of glacial isostatic adjustment (GIA) with the goal of assembling the model uncertainty statistics required for rigorously extracting trends in surface mass from the Gravity Recovery and Climate Experiment (GRACE) mission. Such statistics are essential for deciphering sea level, ocean mass, and hydrological changes because the latter signals can be relatively small (≤2 mm/yr water height equivalent) over very large regions, such as major ocean basins and watersheds. With abundant new >7 year continuous measurements of vertical land motion (VLM) reported by Global Positioning System stations on bedrock and new relative sea level records, our new statistical evaluation of GIA uncertainties incorporates Bayesian methodologies. A unique aspect of the method is that both the ice history and 1-D Earth structure vary through a total of 128,000 forward models. We find that best fit models poorly capture the statistical inferences needed to correctly invert for lower mantle viscosity and that GIA uncertainty exceeds the uncertainty ascribed to trends from 14 years of GRACE data in polar regions.
A Model Fit Statistic for Generalized Partial Credit Model
Liang, Tie; Wells, Craig S.
2009-01-01
Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…
Risk prediction model: Statistical and artificial neural network approach
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to the development and validation of a risk prediction model. A qualitative review of the aims, methods, and main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.
Modeling CCN effects on tropical convection: A statistical perspective
Carrio, G. G.; Cotton, W. R.; Massie, S. T.
2012-12-01
This modeling study examines the response of tropical convection to the enhancement of CCN concentrations from a statistical perspective. The sensitivity runs were performed using RAMS version 6.0, covering almost the entire Amazonian Aerosol Characterization Experiment period (AMAZE, wet season of 2008). The main focus of the analysis was the indirect aerosol effects on the probability density functions (PDFs) of various cloud properties. RAMS was configured with four two-way interactive nested grids with 42 vertical levels and horizontal grid spacing of 150, 37.5, 7.5, and 1.5 km. Grids 2 and 3 were used to simulate the synoptic and mesoscale environments, while grid 4 was used to resolve deep convection. Comparisons were made using the finest grid, with a domain size of 300 x 300 km approximately centered on the city of Manaus (3.1S, 60.01W). The vertical grid was stretched, with 75 m spacing at the finest levels to provide better resolution within the first 1.5 km, and the model top extended to approximately 22 km above ground level. RAMS was initialized on February 10, 2008 (00:00 UTC), the length of the simulations was 32 days, and GFS data were used for initialization and nudging of the coarser-grid boundaries. The control run considered a CCN concentration of 300 cm-3, while several other simulations considered an influx of higher CCN concentrations (up to 1300 cm-3); the latter concentration was observed near the end of the AMAZE project period. Both direct and indirect effects of these CCN particles were considered. Model output data (finest grid) every 15 min were used to compute the PDFs for each model level. When increasing aerosol concentrations, significant impacts were simulated for the PDFs of the water contents of various hydrometeors, vertical motions, area with precipitation, and latent heat release, among other quantities. In most cases, they exhibited a peculiar non-monotonic response similar to that seen in two previous studies of ours.
Testing statistical significance scores of sequence comparison methods with structure similarity
Directory of Open Access Journals (Sweden)
Leunissen Jack AM
2006-10-01
Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this superiority could be validated when the methods are applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH in particular attains very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
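The Monte-Carlo Z-score tested above can be sketched in a few lines: score the real pair, re-score against shuffled versions of one sequence (which preserves length and composition), and standardize. This is a toy illustration with a minimal Smith-Waterman scorer and invented scoring parameters, not the CluSTr or SSEARCH implementation:

```python
import random

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Minimal local-alignment score with a linear gap penalty."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def z_score(a, b, n_shuffles=200, seed=1):
    """Monte-Carlo Z-score: standardize the real score against scores of
    shuffled versions of b (shuffling preserves length and composition)."""
    rng = random.Random(seed)
    real = smith_waterman(a, b)
    null = []
    for _ in range(n_shuffles):
        s = list(b)
        rng.shuffle(s)
        null.append(smith_waterman(a, "".join(s)))
    mean = sum(null) / len(null)
    var = max(sum((x - mean) ** 2 for x in null) / (len(null) - 1), 1e-12)
    return (real - mean) / var ** 0.5

z_related = z_score("ACDEFGHIKLMNPQRS", "ACDEFGHIKLMNPQRS")
z_random = z_score("ACDEFGHIKLMNPQRS", "QQWWEERRTTYYUUII")
```

A genuinely related pair should stand far above its own shuffle distribution, while an unrelated pair should sit near zero.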
Statistical model selection with “Big Data”
Directory of Open Access Journals (Sweden)
Jurgen A. Doornik
2015-12-01
Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
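One of the abstract's points, that tight significance levels curb the excess of false positives when selecting over many candidate variables, can be illustrated with a minimal sketch. This uses simple marginal t-tests on simulated data with three truly relevant regressors; Autometrics itself is far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 50                        # 500 observations, 50 candidate regressors
X = rng.normal(size=(n, k))
beta = np.zeros(k)
beta[:3] = [1.0, -0.8, 0.6]           # only the first three are truly relevant
y = X @ beta + rng.normal(size=n)

def selected(alpha):
    """Keep variables whose marginal |t| exceeds the two-sided critical
    value (normal approximation, hard-coded for the two common levels)."""
    crit = {0.05: 1.96, 0.001: 3.29}[alpha]
    keep = []
    for j in range(k):
        x = X[:, j]
        b = x @ y / (x @ x)                       # one-variable OLS slope
        resid = y - b * x
        se = np.sqrt(resid @ resid / (n - 1) / (x @ x))
        if abs(b / se) > crit:
            keep.append(j)
    return keep

loose = selected(0.05)    # expect the true three plus some false positives
tight = selected(0.001)   # tight level: false positives largely suppressed
```

With 47 irrelevant candidates, a 5% level admits roughly two or three spurious variables on average, while a 0.1% level admits almost none, which is the point about tight significance levels.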
Experimental, statistical, and biological models of radon carcinogenesis
International Nuclear Information System (INIS)
Cross, F.T.
1991-09-01
Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig
Multimesonic decays of charmonium states in the statistical quark model
International Nuclear Information System (INIS)
Montvay, I.; Toth, J.D.
1978-01-01
The data known at present of multimesonic decays of chi and psi states are fitted in a statistical quark model, in which the matrix elements are assumed to be constant and resonances as well as both strong and second order electromagnetic processes are taken into account. The experimental data are well reproduced by the model. Unknown branching ratios for the rest of multimesonic channels are predicted. The fit leaves about 40% for baryonic and radiative channels in the case of J/psi(3095). The fitted parameters of the J/psi decays are used to predict the mesonic decays of the pseudoscalar eta c. The statistical quark model seems to allow the calculation of competitive multiparticle processes for the studied decays. (D.P.)
SoS contract verification using statistical model checking
Directory of Open Access Journals (Sweden)
Alessandro Mignogna
2013-11-01
Full Text Available Exhaustive formal verification for systems of systems (SoS) is impractical and cannot be applied on a large scale. In this paper we propose to use statistical model checking for efficient verification of SoS. We address three relevant aspects for systems of systems: (1) the model of the SoS, which includes stochastic aspects; (2) the formalization of the SoS requirements in the form of contracts; (3) the tool-chain to support statistical model checking for SoS. We adapt the SMC technique for application to heterogeneous SoS. We extend the UPDM/SysML specification language to express the SoS requirements that the implemented strategies over the SoS must satisfy. The requirements are specified with a new contract language specifically designed for SoS, targeting a high-level English pattern language, but relying on an accurate semantics given by the standard temporal logics. The contracts are verified against the UPDM/SysML specification using the Statistical Model Checker (SMC) PLASMA combined with the simulation engine DESYRE, which integrates heterogeneous behavioral models through the functional mock-up interface (FMI) standard. The tool-chain allows computing an estimation of the satisfiability of the contracts by the SoS. The results help the system architect to trade-off different solutions to guide the evolution of the SoS.
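The core of statistical model checking, independent of the UPDM/SysML tooling described above, is Monte-Carlo estimation of the probability that a property holds, with the number of runs fixed in advance by a concentration bound. A minimal sketch on a toy random-walk system, using the Chernoff-Hoeffding sample size; this stands in for, and is not, the PLASMA/DESYRE tool-chain:

```python
import math
import random

def smc_estimate(simulate, check, eps=0.05, delta=0.01, seed=42):
    """Estimate P(property holds) by simulation.  The Chernoff-Hoeffding
    bound gives the number of runs needed so the estimate is within eps
    of the truth with probability at least 1 - delta."""
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    rng = random.Random(seed)
    hits = sum(check(simulate(rng)) for _ in range(n))
    return hits / n, n

def simulate(rng):
    """Toy stochastic system: a 20-step symmetric random walk."""
    x, path = 0, []
    for _ in range(20):
        x += rng.choice([-1, 1])
        path.append(x)
    return path

# Property: the walk stays below level 5 for the whole run.
p_hat, n_runs = smc_estimate(simulate, lambda path: max(path) < 5)
```

The same skeleton applies to any simulator: only `simulate` (one trace) and `check` (one Boolean verdict per trace) change.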
Structural reliability in context of statistical uncertainties and modelling discrepancies
International Nuclear Information System (INIS)
Pendola, Maurice
2000-01-01
Structural reliability methods have been largely improved during the last years and have shown their ability to deal with uncertainties during the design stage or to optimize the functioning and the maintenance of industrial installations. They are based on a mechanical modeling of the structural behavior according to the considered failure modes and on a probabilistic representation of the input parameters of this modeling. In practice, only limited statistical information is available to build the probabilistic representation, and different sophistication levels of the mechanical modeling may be introduced. Thus, besides the physical randomness, other uncertainties occur in such analyses. The aim of this work is threefold: (1) first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in the reliability analyses; the obtained reliability index measures the confidence in the structure given the statistical information available. (2) Then, to show a methodology leading to reliability results evaluated from a particular mechanical modeling but using a less sophisticated one; the objective is to decrease the computational effort required by the reference modeling. (3) Finally, to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical modeling that is used. The concepts are illustrated in the case of a welded pipe and in the case of a natural draught cooling tower. The results show the interest of the methodologies in an industrial context. [fr]
Martin, Justin D.
2017-01-01
This essay presents data from a census of statistics requirements and offerings at all 4-year journalism programs in the United States (N = 369) and proposes a model of a potential course in statistics for journalism majors. The author proposes that three philosophies underlie a statistics course for journalism students. Such a course should (a)…
A statistical model for radar images of agricultural scenes
Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.
1982-01-01
The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
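The mixture structure such a model rests on, a scene histogram built from per-field speckle distributions around randomly drawn reflectivities, can be sketched as follows. The gamma-speckle form and all parameter values here are illustrative assumptions, not the validated Seasat-A model:

```python
import random

def scene_intensities(n_fields=6, looks=4, pixels_per_field=5000, seed=9):
    """Scene as a mixture: each field gets a reflectivity drawn uniformly
    at random, and multi-look speckle makes each pixel a gamma variate
    (shape = number of looks) around that field mean.  Pooling all
    fields gives the scene-wide histogram."""
    rng = random.Random(seed)
    means = [rng.uniform(0.5, 2.0) for _ in range(n_fields)]
    scene = []
    for mu in means:
        for _ in range(pixels_per_field):
            scene.append(rng.gammavariate(looks, mu / looks))
    return means, scene

means, scene = scene_intensities()
scene_mean = sum(scene) / len(scene)          # moment of the pooled histogram
mixture_mean = sum(means) / len(means)        # predicted from the field means
```

Because every field covers the same area, scene-level moments follow directly from averaging the per-field distributions, which is the kind of prediction the abstract's model makes for entire agricultural scenes.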
Comparison between the Lactation Model and the Test-Day Model ...
African Journals Online (AJOL)
ARC-IRENE
National Genetic Evaluation, using a Fixed Regression Test-day Model (TDM). This comparison is made for. Ayrshire, Guernsey, Holstein and Jersey cows participating in the South African Dairy Animal Improvement. Scheme. Specific differences between the two models were documented, with differences in statistical.
Discrete ellipsoidal statistical BGK model and Burnett equations
Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei
2018-06-01
A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK model has a flexible Prandtl number. For the discrete ES-BGK model at the Burnett level, two kinds of discrete velocity models are introduced and the relations between nonequilibrium quantities and the viscous stress and heat flux at the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function from the macroscopic quantities and their space derivatives. The recovery scheme works not only for discrete Boltzmann simulations but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.
Statistics of a neuron model driven by asymmetric colored noise.
Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin
2015-02-01
Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
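The model itself is straightforward to simulate: a perfect integrate-and-fire neuron whose input jumps between two levels with asymmetric switching rates. The sketch below estimates the ISI mean and the lag-1 serial correlation numerically; all parameter values are arbitrary choices, and the paper derives these quantities exactly rather than by simulation:

```python
import random

def simulate_isis(mu=1.0, eta=0.8, k_plus=1.0, k_minus=0.5,
                  v_th=1.0, n_spikes=1000, dt=1e-3, seed=7):
    """Perfect integrate-and-fire neuron driven by asymmetric dichotomous
    noise: the drive jumps between mu+eta and mu-eta, leaving the + state
    with rate k_plus and the - state with rate k_minus.  Returns ISIs."""
    rng = random.Random(seed)
    v, t, state = 0.0, 0.0, 1
    last_spike, isis = 0.0, []
    while len(isis) < n_spikes:
        rate = k_plus if state == 1 else k_minus
        if rng.random() < rate * dt:          # Markovian state switching
            state = -state
        v += (mu + state * eta) * dt
        t += dt
        if v >= v_th:
            isis.append(t - last_spike)
            last_spike = t
            v -= v_th                          # reset, keeping the overshoot
    return isis

isis = simulate_isis()
mean_isi = sum(isis) / len(isis)
# lag-1 serial correlation coefficient of the ISI sequence
m = mean_isi
num = sum((isis[i] - m) * (isis[i + 1] - m) for i in range(len(isis) - 1))
rho1 = num / sum((x - m) ** 2 for x in isis)
```

With these values the stationary probability of the + state is k_minus/(k_plus+k_minus) = 1/3, so the mean drive is about 0.73 and the mean ISI about 1.36; nonzero rho1 reflects the noise color surviving across spikes.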
Spatio-temporal statistical models with applications to atmospheric processes
International Nuclear Information System (INIS)
Wikle, C.K.
1996-01-01
This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not been considered in the context of these methods and several open questions are posed. The first paper attempts to determine a means of characterizing the semiannual oscillation (SAO) spatial variation in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model
Solar radiation data - statistical analysis and simulation models
Energy Technology Data Exchange (ETDEWEB)
Mustacchi, C; Cena, V; Rocchi, M; Haghigat, F
1984-01-01
The activities consisted in collecting meteorological data on magnetic tape for ten European locations (with latitudes ranging from 42° to 56° N), analysing the multi-year sequences, developing mathematical models to generate synthetic sequences having the same statistical properties as the original data sets, and producing one or more Short Reference Years (SRYs) for each location. The meteorological parameters examined were (for all locations) global + diffuse radiation on a horizontal surface, dry bulb temperature, and sunshine duration. For some of the locations additional parameters were available, namely global, beam and diffuse radiation on surfaces other than horizontal, wet bulb temperature, wind velocity, cloud type, and cloud cover. The statistical properties investigated were mean, variance, autocorrelation, cross-correlation with selected parameters, and probability density function. For all the meteorological parameters, various mathematical models were built: linear regression, and stochastic models of the AR and DAR type. In each case, the model with the best statistical behaviour was selected for the production of an SRY for the relevant parameter/location.
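The stochastic AR-type models mentioned can be illustrated with a minimal AR(1) fit-and-regenerate loop: estimate the mean, persistence and innovation variance from a series, then generate a synthetic sequence with the same statistics. Toy data with a known persistence stand in for the meteorological tapes:

```python
import random

def fit_ar1(x):
    """Estimate mean, AR(1) coefficient and innovation s.d. of a series."""
    m = sum(x) / len(x)
    d = [v - m for v in x]
    phi = sum(d[i] * d[i + 1] for i in range(len(d) - 1)) / \
          sum(v * v for v in d[:-1])
    resid = [d[i + 1] - phi * d[i] for i in range(len(d) - 1)]
    s = (sum(r * r for r in resid) / (len(resid) - 1)) ** 0.5
    return m, phi, s

def generate_ar1(m, phi, s, n, seed=0):
    """Generate a synthetic series reproducing the fitted statistics."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, s)
        out.append(m + x)
    return out

# Toy 'daily temperature' series with known persistence 0.7 around 15.
rng = random.Random(1)
x, series = 0.0, []
for _ in range(5000):
    x = 0.7 * x + rng.gauss(0.0, 1.0)
    series.append(15.0 + x)

m, phi, s = fit_ar1(series)
synthetic = generate_ar1(m, phi, s, 5000, seed=2)
m2, phi2, s2 = fit_ar1(synthetic)
```

Refitting the synthetic sequence recovers the original statistics, which is exactly the property a Short Reference Year generator needs.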
A statistical model for porous structure of rocks
Institute of Scientific and Technical Information of China (English)
JU Yang; YANG YongMing; SONG ZhenDuo; XU WenJing
2008-01-01
The geometric features and the distribution properties of pores in rocks were investigated by means of CT scanning tests of sandstones. The centroidal coordinates of pores, and the statistical characteristics of pore distance, quantity and size, together with their probability density functions, were formulated in this paper. The Monte Carlo method and a random number generating algorithm were employed to generate two series of random numbers with the desired statistical characteristics and probability density functions, upon which the random distribution of pore position, distance and quantity were determined. A three-dimensional porous structural model of sandstone was constructed based on the FLAC3D program and the information on pore position and distribution that the series of random numbers defined. On the basis of the modelling, Brazil split tests of rock discs were carried out to examine the stress distribution, the pattern of element failure and the inosculation of failed elements. The simulation indicated that the proposed model was consistent with the realistic porous structure of rock in terms of the statistical properties of pores and geometric similarity. The built-up model disclosed the influence of pores on the stress distribution, failure mode of material elements and the inosculation of failed elements.
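The Monte-Carlo construction described, drawing a random pore count, positions and sizes from prescribed distributions, can be sketched as follows. The Poisson, uniform and log-normal choices and every numeric value here are illustrative assumptions, not the distributions fitted from the CT scans:

```python
import math
import random

def build_pore_model(box=10.0, mean_pores=200, radius_mu=-1.0,
                     radius_sigma=0.4, seed=3):
    """Monte-Carlo pore structure: Poisson number of pores, uniform
    centroids in a cube of side `box`, log-normal radii."""
    rng = random.Random(seed)
    # Poisson sample via Knuth's multiplication method
    L, k, p = math.exp(-mean_pores), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    n = k - 1
    pores = []
    for _ in range(n):
        centre = (rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
        r = math.exp(rng.gauss(radius_mu, radius_sigma))
        pores.append((centre, r))
    # crude porosity estimate, ignoring pore overlap
    vol = sum(4 / 3 * math.pi * r ** 3 for _, r in pores)
    return pores, vol / box ** 3

pores, porosity = build_pore_model()
```

The list of centres and radii is exactly the kind of geometric input a mechanical solver such as FLAC3D would then take to build the porous specimen.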
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing to decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
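The kind of probabilistic Markov model discussed can be sketched even without WinBUGS: draw transition probabilities from Beta posteriors (the Bayesian inference step) and propagate each draw through a cohort model. The three-state structure and the Beta parameters below are hypothetical stand-ins for counts estimated from evidence:

```python
import random

def run_markov(p_progress, p_die, cycles=40):
    """Three-state cohort model (Well -> Sick -> Dead); returns the
    expected number of cycles spent alive for one parameter draw."""
    well, sick, dead, alive_time = 1.0, 0.0, 0.0, 0.0
    for _ in range(cycles):
        new_sick = well * p_progress + sick * (1 - p_die)
        new_dead = dead + sick * p_die
        well = well * (1 - p_progress)
        sick, dead = new_sick, new_dead
        alive_time += well + sick
    return alive_time

# Probabilistic sensitivity analysis: sample the transition probabilities
# from Beta posteriors (e.g. Beta(events+1, non-events+1) given counts).
rng = random.Random(11)
draws = [run_markov(rng.betavariate(3, 27), rng.betavariate(5, 45))
         for _ in range(2000)]
draws.sort()
mean_ly = sum(draws) / len(draws)
lo = draws[int(0.025 * len(draws))]   # 95% credible interval bounds
hi = draws[int(0.975 * len(draws))]
```

Propagating the posterior draws, rather than plugging in point estimates, is what turns the Markov model into a probabilistic simulation of the kind the abstract advocates.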
Can spatial statistical river temperature models be transferred between catchments?
Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.
2017-09-01
There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across
International Nuclear Information System (INIS)
Lü, Xiaoshu; Lu, Tao; Kibert, Charles J.; Viljanen, Martti
2015-01-01
Highlights: • This paper presents a new modeling method to forecast energy demands. • The model is based on physical–statistical approach to improving forecast accuracy. • A new method is proposed to address the heterogeneity challenge. • Comparison with measurements shows accurate forecasts of the model. • The first physical–statistical/heterogeneous building energy modeling approach is proposed and validated. - Abstract: Energy consumption forecasting is a critical and necessary input to planning and controlling energy usage in the building sector which accounts for 40% of the world’s energy use and the world’s greatest fraction of greenhouse gas emissions. However, due to the diversity and complexity of buildings as well as the random nature of weather conditions, energy consumption and loads are stochastic and difficult to predict. This paper presents a new methodology for energy demand forecasting that addresses the heterogeneity challenges in energy modeling of buildings. The new method is based on a physical–statistical approach designed to account for building heterogeneity to improve forecast accuracy. The physical model provides a theoretical input to characterize the underlying physical mechanism of energy flows. Then stochastic parameters are introduced into the physical model and the statistical time series model is formulated to reflect model uncertainties and individual heterogeneity in buildings. A new method of model generalization based on a convex hull technique is further derived to parameterize the individual-level model parameters for consistent model coefficients while maintaining satisfactory modeling accuracy for heterogeneous buildings. The proposed method and its validation are presented in detail for four different sports buildings with field measurements. The results show that the proposed methodology and model can provide a considerable improvement in forecasting accuracy
Probing the exchange statistics of one-dimensional anyon models
Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis
2018-05-01
We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we show as well that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.
Statistical inference to advance network models in epidemiology.
Welch, David; Bansal, Shweta; Hunter, David R
2011-03-01
Contact networks are playing an increasingly important role in the study of epidemiology. Most of the existing work in this area has focused on considering the effect of underlying network structure on epidemic dynamics by using tools from probability theory and computer simulation. This work has provided much insight on the role that heterogeneity in host contact patterns plays on infectious disease dynamics. Despite the important understanding afforded by the probability and simulation paradigm, this approach does not directly address important questions about the structure of contact networks such as what is the best network model for a particular mode of disease transmission, how parameter values of a given model should be estimated, or how precisely the data allow us to estimate these parameter values. We argue that these questions are best answered within a statistical framework and discuss the role of statistical inference in estimating contact networks from epidemiological data. Copyright © 2011 Elsevier B.V. All rights reserved.
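The simplest instance of the statistical framework argued for is maximum-likelihood estimation of a network model from one observed graph. A sketch for the Erdos-Renyi model, where the MLE and its standard error have closed forms; the contact-network models the abstract has in mind (e.g. ERGMs) need far more machinery:

```python
import math
import random

def sample_er_graph(n, p, seed):
    """Erdos-Renyi G(n, p): each node pair is linked independently
    with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def fit_er(edges, n):
    """Maximum-likelihood estimate of p with a Wald standard error.
    Each of the n(n-1)/2 pairs is an independent Bernoulli trial."""
    pairs = n * (n - 1) // 2
    p_hat = len(edges) / pairs
    se = math.sqrt(p_hat * (1 - p_hat) / pairs)
    return p_hat, se

edges = sample_er_graph(200, 0.05, seed=5)
p_hat, se = fit_er(edges, 200)
```

The standard error answers the abstract's third question, how precisely the data determine the parameter, directly from the likelihood rather than from simulation alone.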
Statistical models of a gas diffusion electrode: II. Current resistance
Energy Technology Data Exchange (ETDEWEB)
Proksch, D B; Winsel, O W
1965-07-01
The authors describe an apparatus for measuring the flow resistance of gas diffusion electrodes which is a mechanical analog of the Wheatstone bridge for measuring electric resistance. The flow resistance of a circular DSK electrode sheet, consisting of two covering layers and a working layer between them, was measured as a function of the gas pressure. While the pressure first was increased and then decreased, a hysteresis occurred, which is discussed and explained by a statistical model of a porous electrode.
A Statistical Model for Soliton Particle Interaction in Plasmas
DEFF Research Database (Denmark)
Dysthe, K. B.; Pécseli, Hans; Truelsen, J.
1986-01-01
A statistical model for soliton-particle interaction is presented. A master equation is derived for the time evolution of the particle velocity distribution as induced by resonant interaction with Korteweg-de Vries solitons. The detailed energy balance during the interaction subsequently determines the evolution of the soliton amplitude distribution. The analysis applies equally well for weakly nonlinear plasma waves in a strongly magnetized waveguide, or for ion acoustic waves propagating in one-dimensional systems.
Statistical model of a gas diffusion electrode. III. Photomicrograph study
Energy Technology Data Exchange (ETDEWEB)
Winsel, A W
1965-12-01
A linear section through a gas diffusion electrode produces a certain distribution function of sinews with the pores. From this distribution function some qualities of the pore structure are derived, and an automatic device to determine the distribution function is described. With a statistical model of a gas diffusion electrode the behavior of a DSK electrode is discussed and compared with earlier measurements of the flow resistance of this material.
A statistical model of structure functions and quantum chromodynamics
International Nuclear Information System (INIS)
Mac, E.; Ugaz, E.; Universidad Nacional de Ingenieria, Lima
1989-01-01
We consider a model for the x-dependence of the quark distributions in the proton. Within the context of simple statistical assumptions, we obtain the parton densities in the infinite-momentum frame. In a second step, lowest-order QCD corrections are incorporated into these distributions. Crude, but reasonable, agreement with experiment is found for the F2, valence, and q, anti-q distributions for x ≳ 0.2. (orig.)
Modeling the basic superconductor thermodynamical-statistical characteristics
International Nuclear Information System (INIS)
Palenskis, V.; Maknys, K.
1999-01-01
In accordance with the Landau second-order phase transition and other thermodynamical-statistical relations for superconductors, and using the energy gap as an order parameter in the electron free energy presentation, the fundamental characteristics of electrons, such as the free energy, the total energy, the energy gap, the entropy, and the heat capacity dependences on temperature were obtained. The obtained modeling results, in principle, well reflect the basic low- and high-temperature superconductor characteristics
Environmental radionuclide concentrations: statistical model to determine uniformity of distribution
International Nuclear Information System (INIS)
Cawley, C.N.; Fenyves, E.J.; Spitzberg, D.B.; Wiorkowski, J.; Chehroudi, M.T.
1980-01-01
In the evaluation of data from environmental sampling and measurement, a basic question is whether the radionuclide (or pollutant) is distributed uniformly. Since physical measurements have associated errors, it is inappropriate to consider the measurements alone in this determination. Hence, a statistical model has been developed. It consists of a weighted analysis of variance with subsequent t-tests between weighted and independent means. A computer program to perform the calculations is included
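The weighted analysis described can be sketched in its simplest form: a precision-weighted mean and a chi-square statistic compared with its degrees of freedom to judge uniformity. The measurement values below are hypothetical, and the actual program also performs the follow-up t-tests between weighted and independent means:

```python
def uniformity_chi2(values, sigmas):
    """Weighted uniformity test: if all samples share one true level,
    sum(((x - wmean)/sigma)^2) follows a chi-square distribution with
    N-1 degrees of freedom, so values far above N-1 signal non-uniformity."""
    w = [1 / s ** 2 for s in sigmas]
    wmean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    chi2 = sum(((x - wmean) / s) ** 2 for x, s in zip(values, sigmas))
    return wmean, chi2, len(values) - 1

# Site A: consistent with a common level; Site B: one elevated sample.
a_vals, a_sig = [10.2, 9.8, 10.1, 9.9, 10.0], [0.2] * 5
b_vals, b_sig = [10.2, 9.8, 10.1, 9.9, 14.0], [0.2] * 5

_, chi2_a, df = uniformity_chi2(a_vals, a_sig)
_, chi2_b, _ = uniformity_chi2(b_vals, b_sig)
```

Site A gives a statistic near its 4 degrees of freedom (uniform within measurement error), while the single outlier at Site B inflates the statistic by two orders of magnitude, which is precisely the distinction the measurement errors make possible.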
International Nuclear Information System (INIS)
De Oliveira, Z.M.
1980-01-01
A detailed analysis of the simple statistical model description for delayed neutron emission of 87Br, 137I, 85As and 135Sb has been performed. In agreement with experimental findings, structure in the β-strength function is required to reproduce the envelope of the neutron spectrum from 87Br. For 85As and 135Sb the model is found incapable of simultaneously reproducing envelopes of delayed neutron spectra and neutron branching ratios to excited states in the final nuclei for any choice of β-strength function. The results indicate that partial widths for neutron emission are not compatible with optical-model transmission coefficients. The simple shell model with pairing is shown to qualitatively describe the main features of the β-strength functions for decay of 87Br and 91,93,95,97Rb. It is found that the locations of apparent resonances in the experimental data are in rough agreement with the locations of centroids of strength calculated with this model. An extension of the shell model picture which includes the Gamow-Teller residual interaction is used to investigate decay properties of 84,86As, 86,92Br and 88,102Rb. For a realistic choice of interaction strength, the half-lives of these isotopes are fairly well reproduced and semiquantitative agreement with experimental β-strength functions is found. Delayed neutron emission probabilities are reproduced for precursors nearer stability, with systematic deviations observed for the heavier nuclei. Contrary to the assumption of a structureless Gamow-Teller giant resonance as embodied in the gross theory of β-decay, we find that structures in the tail of the Gamow-Teller giant resonance are expected which strongly influence the decay properties of nuclides in this region.
Statistical methods for mechanistic model validation: Salt Repository Project
International Nuclear Information System (INIS)
Eggett, D.L.
1988-07-01
As part of the Department of Energy's Salt Repository Program (SRP), Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near field environment. This involves several different submodels, such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, to validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs
Estimating preferential flow in karstic aquifers using statistical mixed models.
Anaya, Angel A; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J; Meeker, John D; Alshawabkeh, Akram N
2014-01-01
Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models (SMMs) are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the SMMs used in the study. © 2013, National Ground Water Association.
A generalized statistical model for the size distribution of wealth
International Nuclear Information System (INIS)
Clementi, F; Gallegati, M; Kaniadakis, G
2012-01-01
In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature. (paper)
The choice of statistical methods for comparisons of dosimetric data in radiotherapy.
Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques
2014-09-18
Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and to test whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue air ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients. The Wilcoxon signed-rank test of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods
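The paired, non-parametric comparison described above can be sketched directly: Friedman's statistic for n fields measured under k dose calculation methods is computed from within-field ranks. The data values below are invented for illustration; in practice a library routine such as scipy.stats.friedmanchisquare would also supply the p-value.

```python
def rank(values):
    """Average ranks (1-based) with tie handling."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for m in range(i, j + 1):
            ranks[order[m]] = avg
        i = j + 1
    return ranks

def friedman_statistic(data):
    """Friedman chi-square statistic for n rows (fields) x k paired methods.

    Compared against a chi-square distribution with k - 1 degrees of freedom.
    (Simple form; a tie correction is omitted for brevity.)
    """
    n, k = len(data), len(data[0])
    col_rank_sums = [0.0] * k
    for row in data:
        for j, r in enumerate(rank(row)):
            col_rank_sums[j] += r
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in col_rank_sums) - 3 * n * (k + 1)

# Illustration: method 3 always gives the lowest dose across 5 fields,
# which yields the maximum possible statistic n*(k-1) = 10.
data = [[3.0, 2.0, 1.0]] * 5
q = friedman_statistic(data)
```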
UPPAAL-SMC: Statistical Model Checking for Priced Timed Automata
DEFF Research Database (Denmark)
Bulychev, Petr; David, Alexandre; Larsen, Kim Guldstrand
2012-01-01
on a series of extensions of the statistical model checking approach generalized to handle real-time systems and estimate undecidable problems. UPPAAL-SMC comes together with a friendly user interface that allows a user to specify complex problems in an efficient manner as well as to get feedback … in the form of probability distributions and compare probabilities to analyze performance aspects of systems. The focus of the survey is on the evolution of the tool – including modeling and specification formalisms as well as techniques applied – together with applications of the tool to case studies. …
Statistical mechanics of attractor neural network models with synaptic depression
International Nuclear Information System (INIS)
Igarashi, Yasuhiko; Oizumi, Masafumi; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato
2009-01-01
Synaptic depression is known to control gain for presynaptic inputs. Since cortical neurons receive thousands of presynaptic inputs, and their outputs are fed into thousands of other neurons, synaptic depression should influence macroscopic properties of neural networks. We employ simple neural network models to explore the macroscopic effects of synaptic depression. Systems with synaptic depression cannot be analyzed with the conventional equilibrium statistical-mechanical approach because of the asymmetry of connections. Thus, we first propose a microscopic dynamical mean field theory. Next, we derive macroscopic steady-state equations and discuss the stabilities of steady states for various types of neural network models.
A model independent safeguard against background mismodeling for statistical inference
Energy Technology Data Exchange (ETDEWEB)
Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Herzl St. 234, Rehovot (Israel)]; Rauch, Ludwig [Teilchen- und Astroteilchenphysik, Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg (Germany)]
2017-05-01
We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.
Smith, Joseph M.; Mather, Martha E.
2012-01-01
Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, and then interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the
Document Categorization with Modified Statistical Language Models for Agglutinative Languages
Directory of Open Access Journals (Sweden)
Tantug
2010-11-01
In this paper, we investigate the document categorization task with statistical language models. Our study mainly focuses on categorization of documents in agglutinative languages. Due to the productive morphology of agglutinative languages, the number of word forms encountered in naturally occurring text is very large. From the language modeling perspective, a large vocabulary results in serious data sparseness problems. In order to cope with this drawback, previous studies in various application areas suggest modified language models based on different morphological units. It is reported that performance improvements can be achieved with these modified language models. In our document categorization experiments, we use standard word form based language models as well as other modified language models based on root words, root words and part-of-speech information, truncated word forms and character sequences. Additionally, to find an optimum parameter set, multiple tests are carried out with different language model orders and smoothing methods. Similar to previous studies on other tasks, our experimental results on categorization of Turkish documents reveal that applying linguistic preprocessing steps for language modeling provides improvements over standard language models to some extent. However, it is also observed that a similar level of performance improvement can be achieved by simpler character-level or truncated word form models, which are language independent.
A neighborhood statistics model for predicting stream pathogen indicator levels.
Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S
2015-03-01
Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66 % of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event, the difference between prediction and observation was beyond one order of magnitude. The mean of all predicted values at 16 locations was approximately 1 % higher than the mean of the observed values. The approach presented here will be useful while assessing instream contaminations such as pathogen/pathogen indicator levels at the watershed scale.
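The "within a factor of 2" agreement check used above to compare predicted and observed E. coli levels is easy to state precisely; this small helper (function name and data invented for illustration) computes that fraction for any multiplicative tolerance:

```python
def within_factor(pred, obs, factor=2.0):
    """Fraction of predictions within a multiplicative factor of the
    observations -- a common check for log-scale quantities such as
    pathogen indicator concentrations."""
    ok = sum(1 for p, o in zip(pred, obs) if o / factor <= p <= o * factor)
    return ok / len(pred)

# Illustrative values: the first two predictions agree within a factor
# of 2, the third misses by more than an order of magnitude.
score = within_factor([10.0, 30.0, 100.0], [20.0, 35.0, 500.0])
```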
Efficient Parallel Statistical Model Checking of Biochemical Networks
Directory of Open Access Journals (Sweden)
Paolo Ballarini
2009-12-01
We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches such as, for example, CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds of a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency: first, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability of P to hold is based on an efficient variant of the Wilson method, which ensures a faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
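The Wilson score interval mentioned above, for the estimated probability that property P holds after observing a number of satisfying runs, can be sketched in a few lines. The z value of 1.96 (95% confidence) is an illustrative default, not a parameter taken from the paper:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion.

    Unlike the naive (Wald) interval, it never escapes [0, 1] and behaves
    well for small samples or proportions near 0 or 1.
    """
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials)) / denom
    return (centre - half, centre + half)

# Illustration: 8 satisfying runs out of 10 samples.
lo, hi = wilson_interval(8, 10)
```

Note how the interval is pulled toward 1/2 relative to the point estimate 0.8, which is what makes it robust for the small sample sizes that arise early in a sequential sampling scheme.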
Model-generated air quality statistics for application in vegetation response models in Alberta
International Nuclear Information System (INIS)
McVehil, G.E.; Nosal, M.
1990-01-01
To test and apply vegetation response models in Alberta, air pollution statistics representative of various parts of the Province are required. At this time, air quality monitoring data of the requisite accuracy and time resolution are not available for most parts of Alberta. Therefore, there exists a need to develop appropriate air quality statistics. The objectives of the work reported here were to determine the applicability of model-generated air quality statistics and to develop, by modelling, realistic and representative time series of hourly SO2 concentrations that could be used to generate the statistics demanded by vegetation response models
The GNASH preequilibrium-statistical nuclear model code
International Nuclear Information System (INIS)
Arthur, E. D.
1988-01-01
The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs
The Impact of Statistical Leakage Models on Design Yield Estimation
Directory of Open Access Journals (Sweden)
Rouwaida Kanj
2011-01-01
Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable such events to be modeled accurately. Extremely leaky devices can inflict functionality fails. The plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations on tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
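One classical closed-form approximation of the kind reviewed above is moment matching (often attributed to Fenton and Wilkinson): the sum of lognormal leakage terms is replaced by a single lognormal with the same mean and variance. The sketch below assumes independent terms, a simplification of the correlated-leakage setting, and uses illustrative parameter values:

```python
import math

def fenton_wilkinson(params):
    """Moment-match a single lognormal to a sum of independent lognormals.

    params: list of (mu, sigma) pairs, each describing a term
            exp(N(mu, sigma^2)) of the sum.
    Returns (mu_s, sigma_s) of the matched lognormal.
    """
    # Mean and variance of the sum, from per-term lognormal moments.
    m = sum(math.exp(mu + s * s / 2) for mu, s in params)
    v = sum((math.exp(s * s) - 1) * math.exp(2 * mu + s * s) for mu, s in params)
    # Invert the lognormal moment formulas for the matched parameters.
    sigma_s2 = math.log(1 + v / (m * m))
    mu_s = math.log(m) - sigma_s2 / 2
    return mu_s, math.sqrt(sigma_s2)

# Illustration: two leakage terms with different spreads.
mu_s, sig_s = fenton_wilkinson([(0.0, 0.5), (1.0, 0.25)])
```

The approximation preserves the mean and variance by construction, but, as the abstract notes, its accuracy in the distribution tails is exactly what can bias a yield estimate.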
Schedulability of Herschel revisited using statistical model checking
DEFF Research Database (Denmark)
David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel
2015-01-01
-approximation technique. We can safely conclude that the system is schedulable for varying values of BCET. For the cases where deadlines are violated, we use polyhedra to try to confirm the witnesses. Our alternative method to confirm non-schedulability uses statistical model-checking (SMC) to generate counter… and blocking times of tasks. Consequently, the method may falsely declare deadline violations that will never occur during execution. This paper is a continuation of previous work of the authors in applying extended timed automata model checking (using the tool UPPAAL) to obtain more exact schedulability… analysis, here in the presence of non-deterministic computation times of tasks given by intervals [BCET,WCET]. Computation intervals with preemptive schedulers make the schedulability analysis of the resulting task model undecidable. Our contribution is to propose a combination of model checking techniques…
Crimp, Steven; Jin, Huidong; Kokic, Philip; Bakar, Shuvo; Nicholls, Neville
2018-04-01
Anthropogenic climate change has already been shown to affect the frequency, intensity, spatial extent, duration and seasonality of extreme climate events. Understanding these changes is an important step in determining exposure, vulnerability and focus for adaptation. In an attempt to support adaptation decision-making, we have examined statistical modelling techniques to improve the representation of global climate model (GCM)-derived projections of minimum temperature extremes (frosts) in Australia. We examine the spatial changes in minimum temperature extreme metrics (e.g. monthly and seasonal frost frequency) for a region exhibiting the strongest station trends in Australia, and compare these changes with minimum temperature extreme metrics derived from 10 GCMs from the Coupled Model Inter-comparison Project Phase 5 (CMIP5) datasets and via statistical downscaling. We compare the observed trends with those derived from the "raw" GCM minimum temperature data, and examine whether quantile matching (QM) or spatio-temporal modelling with quantile matching (spTimerQM) can be used to improve the correlation between observed and simulated extreme minimum temperatures. We demonstrate that the spTimerQM modelling approach achieves a correlation of 0.22 with observed daily minimum temperatures for the period August to November. This represents an almost fourfold improvement over either the "raw" GCM or QM results. The spTimerQM modelling approach also improves correlations with observed monthly frost frequency statistics to 0.84, as opposed to 0.37 and 0.81 for the "raw" GCM and QM results, respectively. We apply the spatio-temporal model to examine future extreme minimum temperature projections for the period 2016 to 2048. The spTimerQM modelling results suggest the persistence of current levels of frost risk out to 2030, with evidence of continuing decadal variation.
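A minimal sketch of the empirical quantile matching (QM) step described above: a raw model value is replaced by the observed value at the same empirical quantile. The rank-based mapping and the data are invented for illustration; the paper's spTimerQM approach layers a spatio-temporal model on top of this idea:

```python
def quantile_map(value, model_sample, obs_sample):
    """Map a raw model value onto the observed distribution by matching
    its empirical quantile in the model sample (simple empirical QM)."""
    ms = sorted(model_sample)
    os_ = sorted(obs_sample)
    # Empirical quantile of `value` within the model climatology.
    q = sum(1 for m in ms if m < value) / len(ms)
    # Read off the observed value at the same quantile.
    idx = min(int(q * len(os_)), len(os_) - 1)
    return os_[idx]

# Illustration: a model whose minimum temperatures run 2 degrees warm
# gets corrected back toward the observed record.
model_temps = [3.0, 4.0, 5.0, 6.0, 7.0]
obs_temps = [1.0, 2.0, 3.0, 4.0, 5.0]
corrected = quantile_map(5.0, model_temps, obs_temps)
```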
Experimental investigation of statistical models describing distribution of counts
International Nuclear Information System (INIS)
Salma, I.; Zemplen-Papp, E.
1992-01-01
The binomial, Poisson and modified Poisson models which are used for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are considered. The validity of the Poisson and the modified Poisson statistical distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyser (4096 channels) in the multiscaling mode. According to the results, Poisson statistics describe the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, analysis of the data demonstrated, with confidence, that for long measurements (T ≥ T1/2) the Poisson distribution is not valid and the modified Poisson function is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed. Differences between the standard deviations evaluated on the basis of the Poisson and binomial models are especially significant for experiments with long measuring times (T/T1/2 ≥ 2) and/or large detection efficiency (ε > 0.30). Optimization of the measuring time for paired observations yields the same solution for either the binomial or the Poisson distribution. (orig.)
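The binomial-versus-Poisson difference discussed above can be illustrated directly: when a decaying source is counted for time T with detection efficiency ε, each of the N0 initial atoms is detected with probability p = ε(1 − 2^(−T/T1/2)), so the counts are binomial and the Poisson model overstates the spread by a factor of 1/sqrt(1 − p). The numbers below are illustrative, not taken from the experiment:

```python
import math

def count_sds(n0, eff, t_over_thalf):
    """Standard deviation of detected counts from a decaying source,
    under the binomial model and under the Poisson approximation.

    n0: initial number of atoms; eff: detection efficiency;
    t_over_thalf: measuring time in units of the half-life.
    """
    p = eff * (1.0 - 2.0 ** (-t_over_thalf))  # detection prob. per atom
    mean = n0 * p
    sd_binom = math.sqrt(mean * (1.0 - p))    # binomial: Np(1-p)
    sd_poisson = math.sqrt(mean)              # Poisson:  Np
    return sd_binom, sd_poisson

# Long measurement (T = 2 T1/2) at eff = 0.30: the Poisson sd is
# noticeably too large. A short measurement shows near agreement.
sd_b_long, sd_p_long = count_sds(10000, 0.30, 2.0)
sd_b_short, sd_p_short = count_sds(10000, 0.30, 0.01)
```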
Fast optimization of statistical potentials for structurally constrained phylogenetic models
Directory of Open Access Journals (Sweden)
Rodrigue Nicolas
2009-09-01
Background: Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC) models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimations by the use of computationally heavy Markov Chain Monte Carlo sampling algorithms. Results: Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We assess that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time) for the optimization procedure. Conclusion: Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.
Statistical Downscaling of Temperature with the Random Forest Model
Directory of Open Access Journals (Sweden)
Bo Pang
2017-01-01
The issues with downscaling the outputs of a global climate model (GCM) to a regional scale appropriate to hydrological impact studies are investigated using the random forest (RF) model, which has been shown to be superior for large-dataset analysis and variable importance evaluation. The RF is proposed for downscaling daily mean temperature in the Pearl River basin in southern China. Four downscaling models were developed and validated using the observed temperature series from 61 national stations and large-scale predictor variables derived from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset. The proposed RF downscaling model was compared to multiple linear regression, artificial neural network, and support vector machine models. Principal component analysis (PCA) and partial correlation analysis (PAR) were used in the predictor selection for the other models for a comprehensive study. The model efficiency of the RF model was shown to be higher than that of the other models according to five selected criteria. By evaluating predictor importance, the RF could choose the best predictor combination without using PCA and PAR. The results indicate that the RF is a feasible tool for the statistical downscaling of temperature.
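A minimal sketch of the RF downscaling idea, assuming scikit-learn is available and using entirely synthetic stand-ins for the reanalysis predictors and the station temperature series (the real study used 61 stations, four competing models, and NCEP-NCAR predictors):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: three large-scale predictors (think geopotential
# height, sea-level pressure, humidity) driving a daily station temperature.
n_days = 500
predictors = rng.normal(size=(n_days, 3))
station_temp = (15.0
                + 4.0 * predictors[:, 0]
                - 2.0 * predictors[:, 1]
                + rng.normal(scale=0.5, size=n_days))  # unresolved local noise

train, test = slice(0, 400), slice(400, n_days)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(predictors[train], station_temp[train])

# Out-of-sample skill and built-in predictor importance -- the latter is
# what lets the RF skip a separate PCA/partial-correlation selection step.
r2 = rf.score(predictors[test], station_temp[test])
importances = rf.feature_importances_
```

The `feature_importances_` ranking here recovers that the first predictor (coefficient 4.0) matters most, mirroring the abstract's point that the RF can select predictors without PCA or PAR.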
Statistics of excitations in the electron glass model
Palassini, Matteo
2011-03-01
We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level, by measuring accurately the density of states of charged and electron-hole pair excitations via finite temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.
Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model
Energy Technology Data Exchange (ETDEWEB)
Zhang, T. (Yale Univ., New Haven, CT (United States))
1994-06-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.
Hybrid perturbation methods based on statistical time series models
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory, and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
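The additive Holt-Winters method named as the prediction technique can be sketched in a few lines; the smoothing constants and the initialization scheme below are illustrative choices, not the configuration used by the authors.

```python
def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.3, horizon=1):
    """Additive Holt-Winters forecasting (illustrative sketch).

    y: observed series, m: season length, horizon: steps ahead.
    """
    # Initialize level, trend and seasonal terms from the first two seasons.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]

    for t, obs in enumerate(y):
        s = season[t % m]
        prev_level = level
        # Standard additive update equations for level, trend and season.
        level = alpha * (obs - s) + (1 - alpha) * (prev_level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - level) + (1 - gamma) * s

    # h-step-ahead forecasts reuse the last fitted seasonal pattern.
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(horizon)]
```

In a hybrid propagator, y would be the series of differences between the analytical approximation and reference positions, and the forecast would be added back to the analytical output.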
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
A statistical model for interpreting computerized dynamic posturography data
Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.
2002-01-01
Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
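The latent-ES formulation can be illustrated with a small simulation; the logistic fall model and every parameter value below are hypothetical, chosen only to show the mixed discrete-continuous character of the observed scores.

```python
import math
import random

def simulate_es_trials(n, mu=75.0, sigma=12.0, a=6.0, b=0.12, seed=1):
    """Simulate equilibrium scores under a latent-ES model (sketch).

    A latent ES always exists; a fall occurs with probability
    decreasing in the latent score, and a fall is recorded as 0.
    All parameters here are hypothetical illustration values.
    """
    rng = random.Random(seed)
    observed = []
    for _ in range(n):
        latent = min(100.0, max(0.0, rng.gauss(mu, sigma)))
        # Logistic fall probability: lower latent ES -> more likely fall.
        p_fall = 1.0 / (1.0 + math.exp(-(a - b * latent)))
        observed.append(0.0 if rng.random() < p_fall else latent)
    return observed
```

The resulting sample mixes a point mass at zero (falls) with a continuous distribution of completed trials, which is the feature that defeats standard regression and ANOVA models.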
Model output statistics applied to wind power prediction
Energy Technology Data Exchange (ETDEWEB)
Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)
1999-03-01
Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; when statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
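A minimal MOS-style correction can be sketched as recursive least squares with a forgetting factor, one of the three estimation techniques compared; the regressors and the forgetting factor here are illustrative, not the paper's exact configuration.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with exponential forgetting (sketch).

    Fits y_t ~ theta . x_t, discounting old data so the correction
    can track slow drifts, e.g. after an NWP model change.
    """
    def __init__(self, dim, lam=0.99, p0=1e6):
        self.lam = lam
        self.theta = np.zeros(dim)
        self.P = np.eye(dim) * p0  # large P0: weak prior on theta

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)            # gain vector
        self.theta = self.theta + k * (y - x @ self.theta)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.theta
```

Here x could hold a constant and the NWP-predicted wind speed, and y the observed farm output; theta then tracks a slowly varying bias correction.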
Prediction of dimethyl disulfide levels from biosolids using statistical modeling.
Gabriel, Steven A; Vilalai, Sirapong; Arispe, Susanna; Kim, Hyunook; McConnell, Laura L; Torrents, Alba; Peot, Christopher; Ramirez, Mark
2005-01-01
Two statistical models were used to predict the concentration of dimethyl disulfide (DMDS) released from biosolids produced by an advanced wastewater treatment plant (WWTP) located in Washington, DC, USA. The plant concentrates sludge from primary sedimentation basins in gravity thickeners (GT) and sludge from secondary sedimentation basins in dissolved air flotation (DAF) thickeners. The thickened sludge is pumped into blending tanks and then fed into centrifuges for dewatering. The dewatered sludge is then conditioned with lime before trucking out from the plant. DMDS, along with other volatile sulfur and nitrogen-containing chemicals, is known to contribute to biosolids odors. These models identified oxidation/reduction potential (ORP) values of a GT and DAF, the amount of sludge dewatered by centrifuges, and the blend ratio between GT thickened sludge and DAF thickened sludge in blending tanks as control variables. The accuracy of the developed regression models was evaluated by checking the adjusted R2 of the regression as well as the signs of coefficients associated with each variable. In general, both models explained observed DMDS levels in sludge headspace samples. The adjusted R2 value of the regression models 1 and 2 were 0.79 and 0.77, respectively. Coefficients for each regression model also had the correct sign. Using the developed models, plant operators can adjust the controllable variables to proactively decrease this odorant. Therefore, these models are a useful tool in biosolids management at WWTPs.
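The adjusted R² used to evaluate the regressions follows the standard formula, which penalizes each added predictor:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```

Unlike plain R², this value can decrease when a control variable (e.g. an ORP reading or the blend ratio) adds no explanatory power.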
Energy Technology Data Exchange (ETDEWEB)
Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)
2012-03-15
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
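A generic LASSO fit of the kind recommended here can be sketched with cyclic coordinate descent; this is not the authors' NTCP modelling code, and the penalty value used in the example is an arbitrary illustration.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO via cyclic coordinate descent (sketch).

    Assumes columns of X are standardized (mean 0, unit variance).
    Minimizes (1/2n)||y - X b||^2 + lam * ||b||_1.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam)  # shrink-and-select update
    return beta
```

The soft-threshold step is what zeroes out weak predictors, giving the easily interpretable sparse models the abstract contrasts with BMA.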
Statistical approach for uncertainty quantification of experimental modal model parameters
DEFF Research Database (Denmark)
Luczak, M.; Peeters, B.; Kahsin, M.
2014-01-01
Composite materials are widely used in the manufacture of aerospace and wind energy structural components. These load-carrying structures are subjected to dynamic time-varying loading conditions, and robust structural dynamics identification procedures impose tight constraints on the quality of modal models. This paper aims at a systematic approach for uncertainty quantification of the parameters of modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures represent different complexity levels ranging from coupon, through sub-component, up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade.
Statistical mechanics of sparse generalization and graphical model selection
International Nuclear Information System (INIS)
Lage-Castellanos, Alejandro; Pagnani, Andrea; Weigt, Martin
2009-01-01
One of the crucial tasks in many inference problems is the extraction of an underlying sparse graphical model from a given number of high-dimensional measurements. In machine learning, this is frequently achieved using, as a penalty term, the Lp norm of the model parameters, with p ≤ 1 for efficient dilution. Here we propose a statistical mechanics analysis of the problem in the setting of perceptron memorization and generalization. Using a replica approach, we are able to evaluate the relative performance of naive dilution (obtained by learning without dilution, followed by applying a threshold to the model parameters), L1 dilution (which is frequently used in convex optimization) and L0 dilution (which is optimal but computationally hard to implement). Whereas both Lp diluted approaches clearly outperform the naive approach, we find a small region where L0 works almost perfectly and strongly outperforms the simpler-to-implement L1 dilution.
Exploiting linkage disequilibrium in statistical modelling in quantitative genomics
DEFF Research Database (Denmark)
Wang, Lei
Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves ...
A statistical model for field emission in superconducting cavities
International Nuclear Information System (INIS)
Padamsee, H.; Green, K.; Jost, W.; Wright, B.
1993-01-01
A statistical model is used to account for several features of performance of an ensemble of superconducting cavities. The input parameters are: the number of emitters per area, a distribution function for emitter β values, a distribution function for emissive areas, and a processing threshold. The power deposited by emitters is calculated from the field emission current and electron impact energy. The model can successfully account for the fraction of tests that reach the maximum field Epk in an ensemble of cavities, e.g., 1-cell cavities at 3 GHz or 5-cell cavities at 1.5 GHz. The model is used to predict the level of power needed to successfully process cavities of various surface areas with high pulsed power processing (HPP).
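The Monte Carlo character of such a statistical model can be sketched as below; the Fowler-Nordheim-style constants, the distribution shapes, and the power threshold are all placeholder values for illustration, not those fitted in the paper.

```python
import math
import random

def fraction_reaching_field(e_target, area_cm2, n_cavities=1000,
                            emitters_per_cm2=0.1, power_limit=1e-3, seed=7):
    """Estimate the fraction of cavities reaching a target field (sketch).

    Each cavity draws a random emitter population; it reaches e_target
    if no emitter deposits more than power_limit (arbitrary units).
    Distribution parameters below are hypothetical placeholders.
    """
    rng = random.Random(seed)
    b_fn = 60.0  # placeholder Fowler-Nordheim-style exponent constant
    reached = 0
    for _ in range(n_cavities):
        # Crude stand-in for a Poisson emitter count on the cavity surface.
        n_emitters = int(rng.expovariate(1.0 / (emitters_per_cm2 * area_cm2)))
        ok = True
        for _ in range(n_emitters):
            beta = rng.lognormvariate(math.log(100.0), 0.5)  # enhancement factor
            area = rng.lognormvariate(math.log(1e-11), 1.0)  # emissive area
            # Fowler-Nordheim-like deposited power at the target field.
            power = area * (beta * e_target) ** 2 * math.exp(-b_fn * 1e3 / (beta * e_target))
            if power > power_limit:
                ok = False
                break
        reached += ok
    return reached / n_cavities
```

Scanning e_target and comparing the estimated fraction against test data is the kind of ensemble statement the model makes, as opposed to predicting any single cavity.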
International Nuclear Information System (INIS)
Heinrich, S.
2006-01-01
The nuclear fission process is a very complex phenomenon and, even nowadays, no realistic models describing the overall process are available. The work presented here deals with a theoretical description of fission fragment distributions in mass, charge, energy and deformation. We have reconsidered and updated the B.D. Wilkins scission point model. Our purpose was to test whether this statistical model, applied at the scission point and supplied with new results of modern microscopic calculations, allows a quantitative description of the fission fragment distributions. We calculate the surface energy available at the scission point as a function of the fragment deformations. This surface is obtained from a Hartree-Fock-Bogoliubov microscopic calculation, which guarantees a realistic description of the dependence of the potential on the deformation of each fragment. The statistical balance is described by the level densities of the fragments. We have tried to avoid as much as possible the input of empirical parameters in the model. Our only parameter, the distance between the fragments at the scission point, is discussed by comparison with scission configurations obtained from fully dynamical microscopic calculations. The comparison between our results and experimental data is very satisfying and allows us to discuss the successes and limitations of our approach. We finally propose ideas to improve the model, in particular by applying dynamical corrections. (author)
A Statistical Graphical Model of the California Reservoir System
Taeb, A.; Reager, J. T.; Turmon, M.; Chandrasekaran, V.
2017-11-01
The recent California drought has highlighted the potential vulnerability of the state's water management infrastructure to multiyear dry intervals. Due to the high complexity of the network, dynamic storage changes in California reservoirs on a state-wide scale have previously been difficult to model using either traditional statistical or physical approaches. Indeed, although there is a significant line of research on exploring models for single (or a small number of) reservoirs, these approaches are not amenable to a system-wide modeling of the California reservoir network due to the spatial and hydrological heterogeneities of the system. In this work, we develop a state-wide statistical graphical model to characterize the dependencies among a collection of 55 major California reservoirs across the state; this model is defined with respect to a graph in which the nodes index reservoirs and the edges specify the relationships or dependencies between reservoirs. We obtain and validate this model in a data-driven manner based on reservoir volumes over the period 2003-2016. A key feature of our framework is a quantification of the effects of external phenomena that influence the entire reservoir network. We further characterize the degree to which physical factors (e.g., state-wide Palmer Drought Severity Index (PDSI), average temperature, snow pack) and economic factors (e.g., consumer price index, number of agricultural workers) explain these external influences. As a consequence of this analysis, we obtain a system-wide health diagnosis of the reservoir network as a function of PDSI.
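One generic way to obtain a dependency graph from a collection of time series, offered here only as a simplified stand-in for the paper's model, is to threshold partial correlations derived from the inverse sample covariance:

```python
import numpy as np

def partial_correlation_graph(data, threshold=0.1):
    """Edges from partial correlations (sketch).

    data: (n_samples, n_nodes) array, e.g. monthly reservoir volumes.
    Returns the partial correlation matrix and a boolean adjacency
    matrix; the threshold is an illustrative choice.
    """
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    # Standard map from the precision matrix to partial correlations.
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    adj = np.abs(pcorr) > threshold
    np.fill_diagonal(adj, False)
    return pcorr, adj
```

Partial correlations separate direct reservoir-to-reservoir coupling from correlation induced by shared drivers such as state-wide drought, which is the spirit of conditioning on external phenomena.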
MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH
International Nuclear Information System (INIS)
Bard, D.; Kratochvil, J. M.; Dawson, W.
2016-01-01
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST- and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.
A Tensor Statistical Model for Quantifying Dynamic Functional Connectivity.
Zhu, Yingying; Zhu, Xiaofeng; Kim, Minjeong; Yan, Jin; Wu, Guorong
2017-06-01
Functional connectivity (FC) has been widely investigated in many imaging-based neuroscience and clinical studies. Since the functional Magnetic Resonance Imaging (fMRI) signal is just an indirect reflection of brain activity, it is difficult to accurately quantify FC strength based only on signal correlation. To address this limitation, we propose a learning-based tensor model to derive high-sensitivity and high-specificity connectome biomarkers at the individual level from resting-state fMRI images. First, we propose a learning-based approach to estimate the intrinsic functional connectivity. In addition to the low-level region-to-region signal correlation, latent module-to-module connection is also estimated and used to provide high-level heuristics for measuring connectivity strength. Furthermore, a sparsity constraint is employed to automatically remove spurious connections, thus alleviating the issue of searching for an optimal threshold. Second, we integrate our learning-based approach with the sliding-window technique to further reveal the dynamics of functional connectivity. Specifically, we stack the functional connectivity matrices within the sliding windows to form a 3D tensor, where the third dimension denotes time. We then obtain the dynamic functional connectivity (dFC) for each individual subject by simultaneously estimating the within-sliding-window functional connectivity and characterizing the across-sliding-window temporal dynamics. Third, in order to enhance the robustness of the connectome patterns extracted from dFC, we extend the individual-based 3D tensors to a population-based 4D tensor (with the fourth dimension standing for the training subjects) and learn the statistics of connectome patterns via 4D tensor analysis. Since our 4D tensor model jointly (1) optimizes the dFC for each training subject and (2) captures the principal connectome patterns, our statistical model gains more statistical power in representing new subjects than the current state of the art.
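The sliding-window construction of the dFC tensor can be sketched directly; the window length and stride below are illustrative choices, and the learning-based refinements described in the abstract are not included.

```python
import numpy as np

def dfc_tensor(timeseries, win_len=30, stride=5):
    """Stack windowed correlation matrices into a 3D tensor (sketch).

    timeseries: (n_timepoints, n_regions) BOLD signals for one subject.
    Returns a tensor of shape (n_windows, n_regions, n_regions).
    """
    T, R = timeseries.shape
    mats = []
    for start in range(0, T - win_len + 1, stride):
        window = timeseries[start:start + win_len]
        mats.append(np.corrcoef(window, rowvar=False))  # (R, R) FC matrix
    return np.stack(mats)
```

A population-based 4D tensor is then simply a further np.stack of these 3D tensors over training subjects.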
Development of modelling algorithm of technological systems by statistical tests
Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.
2018-03-01
The paper tackles the problem of economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system uses statistical tests and, taking the reliability index into account, allows estimating the level of machinery technical excellence and assessing the efficiency of design reliability against its performance. The economic feasibility of its application is determined on the basis of the service quality of the technological system, with further forecasting of the volumes and range of spare parts supply.
New statistical model of inelastic fast neutron scattering
International Nuclear Information System (INIS)
Stancicj, V.
1975-07-01
A new statistical model for treating fast neutron inelastic scattering has been proposed, using the general expressions of the double differential cross section in the impulse approximation. The use of the Fermi-Dirac distribution of nucleons makes it possible to derive an analytical expression for the fast neutron inelastic scattering kernel, including the angular momentum coupling. The values of the inelastic fast neutron cross section calculated from the derived expression of the scattering kernel are in good agreement with experiment. A main advantage of the derived expressions is their simplicity for practical calculations.
Woods, Christopher; Fernee, Christianne; Browne, Martin; Zakrzewski, Sonia; Dickinson, Alexander
2017-01-01
This paper introduces statistical shape modelling (SSM) for use in osteoarchaeology research. SSM is a full field, multi-material analytical technique, and is presented as a supplementary geometric morphometric (GM) tool. Lower mandibular canines from two archaeological populations and one modern population were sampled, digitised using micro-CT, aligned, registered to a baseline and statistically modelled using principal component analysis (PCA). Sample material properties were incorporated as a binary enamel/dentin parameter. Results were assessed qualitatively and quantitatively using anatomical landmarks. Finally, the technique's application was demonstrated for inter-sample comparison through analysis of the principal component (PC) weights. It was found that SSM could provide high detail qualitative and quantitative insight with respect to archaeological inter- and intra-sample variability. This technique has value for archaeological, biomechanical and forensic applications including identification, finite element analysis (FEA) and reconstruction from partial datasets.
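The PCA step of an SSM can be sketched as follows, assuming shape vectors are already aligned and registered as described in the paper; the random data in the example are stand-ins for digitised tooth surfaces.

```python
import numpy as np

def build_ssm(shapes, n_modes=3):
    """Build a point-distribution SSM via PCA (sketch).

    shapes: (n_samples, n_points * dims) aligned shape vectors.
    Returns the mean shape, principal modes, and per-sample PC weights.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal components.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                 # (n_modes, n_features)
    weights = centered @ modes.T         # PC weights per sample
    return mean, modes, weights

def reconstruct(mean, modes, w):
    """Shape instance as mean plus weighted sum of modes."""
    return mean + w @ modes
```

Inter-sample comparison through PC weights, as in the paper, amounts to comparing the rows of `weights`; reconstruction from partial data uses a truncated set of modes.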
Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.
2017-12-01
Hyperparameterization of statistical models, i.e. automated model scoring and selection using methods such as evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize the statistical preprocessing of forcing data and improve goodness-of-fit for statistical models (i.e. feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate the selection of soil moisture forecast statistical models for North America.
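The automated model scoring and selection described above can be sketched, in a library-agnostic way, as a randomized search over a parameter space with a user-supplied scoring function; this is a generic illustration, not the Elm or NSGA-2 implementation.

```python
import random

def random_search(train_eval, space, n_trials=25, seed=0):
    """Randomized hyperparameter search (sketch).

    train_eval: callable(params) -> validation score (higher is better).
    space: dict mapping parameter name -> list of candidate values.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Draw one candidate configuration uniformly from the space.
        params = {k: rng.choice(v) for k, v in space.items()}
        score = train_eval(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Multiobjective methods such as NSGA-2 generalize this loop by keeping a Pareto front of configurations instead of a single best score.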
Comparison of precipitation nowcasting by extrapolation and statistical-advection methods
Czech Academy of Sciences Publication Activity Database
Sokol, Zbyněk; Kitzmiller, D.; Pešice, Petr; Mejsnar, Jan
2013-01-01
Roč. 123, 1 April (2013), s. 17-30 ISSN 0169-8095 R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords : Precipitation forecast * Statistical models * Regression * Quantitative precipitation forecast * Extrapolation forecast Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 2.421, year: 2013 http://www.sciencedirect.com/science/article/pii/S0169809512003390
Statistical Models for Inferring Vegetation Composition from Fossil Pollen
Paciorek, C.; McLachlan, J. S.; Shang, Z.
2011-12-01
Fossil pollen provide information about vegetation composition that can be used to help understand how vegetation has changed over the past. However, these data have not traditionally been analyzed in a way that allows for statistical inference about spatio-temporal patterns and trends. We build a Bayesian hierarchical model called STEPPS (Spatio-Temporal Empirical Prediction from Pollen in Sediments) that predicts forest composition in southern New England, USA, over the last two millennia based on fossil pollen. The critical relationships between abundances of tree taxa in the pollen record and abundances in actual vegetation are estimated using modern (Forest Inventory Analysis) data and (witness tree) data from colonial records. This gives us two time points at which both pollen and direct vegetation data are available. Based on these relationships, and incorporating our uncertainty about them, we predict forest composition using fossil pollen. We estimate the spatial distribution and relative abundances of tree species and draw inference about how these patterns have changed over time. Finally, we describe ongoing work to extend the modeling to the upper Midwest of the U.S., including an approach to infer tree density and thereby estimate the prairie-forest boundary in Minnesota and Wisconsin. This work is part of the PalEON project, which brings together a team of ecosystem modelers, paleoecologists, and statisticians with the goal of reconstructing vegetation responses to climate during the last two millennia in the northeastern and midwestern United States. The estimates from the statistical modeling will be used to assess and calibrate ecosystem models that are used to project ecological changes in response to global change.
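A much simplified relative of this kind of model is the Dirichlet-multinomial estimate of taxon proportions from raw pollen counts in one sediment sample; the symmetric prior weight below is an illustrative choice, and the full STEPPS model additionally models space, time, and pollen-vegetation calibration.

```python
def posterior_composition(counts, prior=1.0):
    """Posterior mean taxon proportions, Dirichlet-multinomial (sketch).

    counts: pollen grain counts per taxon in one sediment sample.
    prior: symmetric Dirichlet concentration per taxon.
    """
    total = sum(counts) + prior * len(counts)
    return [(c + prior) / total for c in counts]
```

The prior keeps unseen taxa at small but nonzero estimated abundance, a minimal form of the uncertainty propagation the hierarchical model performs properly.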
Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons
2014-01-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829
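For the agreement studies mentioned above, one standard computation is the Bland-Altman bias and 95% limits of agreement between two algorithms' paired measurements; this generic sketch is not tied to any particular QIB.

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements (sketch).

    a, b: measurements of the same cases by two algorithms.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Whether the resulting limits are acceptable is a clinical judgment, which is why such agreement analyses complement rather than replace designs with a true value or reference standard.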
Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.
Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C
2015-02-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
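As a sketch of the agreement-study designs mentioned above (comparing two algorithms without a reference standard), one common disaggregate summary is the bias and 95% limits of agreement, Bland-Altman style. The measurement values below are hypothetical:

```python
import math

def agreement_stats(x, y):
    """Bias and 95% limits of agreement (Bland-Altman style)
    between paired measurements from two QIB algorithms."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical tumour-volume measurements (mL) from two algorithms
alg_a = [10.2, 15.1, 7.9, 22.4, 13.3]
alg_b = [10.0, 14.8, 8.3, 21.9, 13.0]
bias, (lo, hi) = agreement_stats(alg_a, alg_b)
```

A near-zero bias with narrow limits would support interchangeability of the two algorithms; this sketch omits the repeatability and precision analyses the framework also covers.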
Takahashi, Masahiro; Kimura, Fumiko; Umezawa, Tatsuya; Watanabe, Yusuke; Ogawa, Harumi
2016-01-01
Adaptive statistical iterative reconstruction (ASIR) has been used to reduce radiation dose in cardiac computed tomography. However, changes in image parameters introduced by ASIR, as compared to filtered back projection (FBP), may influence the quantification of coronary calcium. To investigate the influence of ASIR on calcium quantification in comparison to FBP, CT images from 352 patients were reconstructed using FBP alone and FBP combined with ASIR 30%, 50%, 70%, and ASIR 100%, based on the same raw data. Image noise, plaque density, Agatston scores and calcium volumes were compared among the techniques. Image noise, Agatston score, and calcium volume decreased significantly with ASIR compared to FBP (each P < …). ASIR reduced Agatston scores by 10.5% to 31.0%. In calcified plaques of both patients and a phantom, ASIR decreased maximum CT values and calcified plaque size. In comparison to FBP, adaptive statistical iterative reconstruction (ASIR) may significantly decrease Agatston scores and calcium volumes. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
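For context on why lower maximum CT values reduce the score: the standard Agatston method weights each lesion's area by a factor determined by its peak attenuation, so smoothing that lowers the peak HU can drop a lesion into a lower weight class. A minimal sketch (lesion values hypothetical):

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight: 130-199 HU -> 1,
    200-299 -> 2, 300-399 -> 3, >= 400 -> 4; lesions below
    130 HU do not count."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """lesions: iterable of (area_mm2, peak_hu) per lesion per slice."""
    return sum(area * agatston_weight(hu) for area, hu in lesions)

# Hypothetical lesions: lowering a peak from 410 to 395 HU moves the
# lesion from weight 4 to weight 3, shrinking the total score
fbp_lesions = [(12.0, 410), (8.0, 250)]
asir_lesions = [(11.0, 395), (8.0, 250)]
```

This illustrates the mechanism only; the study's actual reductions came from measured noise, density, and size changes.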
Statistical molecular design of balanced compound libraries for QSAR modeling.
Linusson, A; Elofsson, M; Andersson, I E; Dahlgren, M K
2010-01-01
A fundamental step in preclinical drug development is the computation of quantitative structure-activity relationship (QSAR) models, i.e. models that link chemical features of compounds with activities towards a target macromolecule associated with the initiation or progression of a disease. QSAR models are computed by combining information on the physicochemical and structural features of a library of congeneric compounds, typically assembled from two or more building blocks, and biological data from one or more in vitro assays. Since the models provide information on features affecting the compounds' biological activity they can be used as guides for further optimization. However, in order for a QSAR model to be relevant to the targeted disease, and drug development in general, the compound library used must contain molecules with balanced variation of the features spanning the chemical space believed to be important for interaction with the biological target. In addition, the assays used must be robust and deliver high quality data that are directly related to the function of the biological target and the associated disease state. In this review, we discuss and exemplify the concept of statistical molecular design (SMD) in the selection of building blocks and final synthetic targets (i.e. compounds to synthesize) to generate information-rich, balanced libraries for biological testing and computation of QSAR models.
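One simple space-filling flavour of statistical molecular design can be sketched as a greedy max-min selection in descriptor space; the descriptor values and the use of Euclidean distance here are illustrative assumptions, not the specific SMD protocols discussed in the review:

```python
import math

def maxmin_select(descriptors, k):
    """Greedy space-filling selection: repeatedly add the candidate
    farthest (Euclidean) from its nearest already-chosen compound,
    spreading the selection over the descriptor space."""
    chosen = [0]  # seed with the first compound
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i in range(len(descriptors)):
            if i in chosen:
                continue
        # distance to the nearest already-selected compound
            d = min(math.dist(descriptors[i], descriptors[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

# Hypothetical 2-D descriptor space (e.g. scaled logP vs molar mass)
lib = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
picked = maxmin_select(lib, 3)
```

Note how the near-duplicate compound at (0.1, 0.0) is never picked: a balanced library avoids redundant points that add synthesis cost without adding information.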
Directory of Open Access Journals (Sweden)
Pavel Grasgruber
2016-09-01
Background: The aim of this ecological study was to identify the main nutritional factors related to the prevalence of cardiovascular diseases (CVDs in Europe, based on a comparison of international statistics. Design: The mean consumption of 62 food items from the FAOSTAT database (1993–2008 was compared with the actual statistics of five CVD indicators in 42 European countries. Several other exogenous factors (health expenditure, smoking, body mass index and the historical stability of results were also examined. Results: We found exceptionally strong relationships between some of the examined factors, the highest being a correlation between raised cholesterol in men and the combined consumption of animal fat and animal protein (r=0.92, p<0.001. The most significant dietary correlate of low CVD risk was high total fat and animal protein consumption. Additional statistical analyses further highlighted citrus fruits, high-fat dairy (cheese and tree nuts. Among other non-dietary factors, health expenditure showed by far the highest correlation coefficients. The major correlate of high CVD risk was the proportion of energy from carbohydrates and alcohol, or from potato and cereal carbohydrates. Similar patterns were observed between food consumption and CVD statistics from the period 1980–2000, which shows that these relationships are stable over time. However, we found striking discrepancies in men's CVD statistics from 1980 and 1990, which can probably explain the origin of the ‘saturated fat hypothesis’ that influenced public health policies in the following decades. Conclusion: Our results do not support the association between CVDs and saturated fat, which is still contained in official dietary guidelines. Instead, they agree with data accumulated from recent studies that link CVD risk with the high glycaemic index/load of carbohydrate-based diets. In the absence of any scientific evidence connecting saturated fat with CVDs, these
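The core computation behind results such as r=0.92 is an ordinary Pearson correlation across countries. A minimal sketch with invented per-country values:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-country pairs: (animal fat + protein consumption,
# % of men with raised cholesterol) -- illustrative values only
consumption = [40, 55, 62, 70, 85, 90]
raised_chol = [30, 38, 41, 46, 54, 57]
r = pearson_r(consumption, raised_chol)
```

As the study itself illustrates, a strong ecological correlation like this is a starting point for hypotheses, not proof of a causal dietary effect.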
Statistics of the Navier–Stokes-alpha-beta regularization model for fluid turbulence
International Nuclear Information System (INIS)
Hinz, Denis F; Kim, Tae-Yeon; Fried, Eliot
2014-01-01
We explore one-point and two-point statistics of the Navier–Stokes-αβ regularization model at moderate Reynolds number (Re ≈ 200) in homogeneous isotropic turbulence. The results are compared to the limit cases of the Navier–Stokes-α model and the Navier–Stokes-αβ model without subgrid-scale stress, as well as with high-resolution direct numerical simulation. After reviewing spectra of different energy norms of the Navier–Stokes-αβ model, the Navier–Stokes-α model, and the Navier–Stokes-αβ model without subgrid-scale stress, we present probability density functions and normalized probability density functions of the filtered and unfiltered velocity increments, along with longitudinal velocity structure functions of the regularization models and direct numerical simulation results. We highlight differences in the statistical properties of the unfiltered and filtered velocity fields entering the governing equations of the Navier–Stokes-α and Navier–Stokes-αβ models and discuss the usability of both velocity fields for realistic flow predictions. The influence of the modified viscous term in the Navier–Stokes-αβ model is studied through comparison to the case where the underlying subgrid-scale stress tensor is neglected. Whereas the filtered velocity field is found to have physically more viable probability density functions and structure functions for the approximation of direct numerical simulation results, the unfiltered velocity field is found to have flatness factors close to direct numerical simulation results. (paper)
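The structure functions and flatness factors referred to above can be sketched for a periodic 1-D velocity record; the sampling below is illustrative (a pure sinusoid gives an increment flatness of exactly 1.5, versus 3 for Gaussian statistics, with intermittent turbulence exceeding 3):

```python
import math

def structure_function(u, r, p):
    """p-th order structure function <(u(x+r) - u(x))^p> of a
    periodic 1-D velocity record on a uniform grid."""
    n = len(u)
    incs = [u[(i + r) % n] - u[i] for i in range(n)]
    return sum(d ** p for d in incs) / n

def flatness(u, r):
    """Flatness factor of velocity increments at separation r;
    equals 3 for a Gaussian velocity field."""
    return structure_function(u, r, 4) / structure_function(u, r, 2) ** 2

u = [math.sin(2 * math.pi * i / 64) for i in range(64)]  # toy velocity record
f = flatness(u, 5)
```

In practice one computes these over many separations r and compares the scaling and flatness of the filtered and unfiltered fields against DNS.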
Mantovani, Daniela; Sutherland, Holly
2003-01-01
This paper reports an exercise to validate EUROMOD output for 1998 by comparing income statistics calculated from the baseline micro-output with comparable statistics from other sources, including the European Community Household Panel. The main potential reasons for discrepancies are identified. While there are some specific national issues that arise, there are two main general points to consider in interpreting EUROMOD estimates of social indicators across EU member States: (a) the method ...
A combined statistical model for multiple motifs search
International Nuclear Information System (INIS)
Gao Lifeng; Liu Xin; Guan Shan
2008-01-01
Transcription factor binding sites (TFBS) play key roles in gene expression and regulation. They are short sequence segments with definite structure and can be recognized by the corresponding transcription factors correctly. From the viewpoint of statistics, the candidates of TFBS should be quite different from segments that are randomly combined together from nucleotides. This paper proposes a combined statistical model for finding over-represented short sequence segments in different kinds of data sets. When the over-represented short sequence segment is described by a position weight matrix, the nucleotide distribution at most sites of the segment should be far from the background nucleotide distribution. The central idea of this approach is to search for such signals. The algorithm is tested on 3 data sets, including a binding-site data set for the cyclic AMP receptor protein in E. coli, PlantProm DB, which is a non-redundant collection of proximal promoter sequences from different species, and the collection of intergenic sequences of the whole E. coli genome. Even though the complexity of these three data sets is quite different, the results show that the model is rather general and sensible. (general)
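A minimal sketch of the position-weight-matrix machinery the model builds on: estimate per-position log-odds against a uniform background from aligned sites, then score candidate segments. The toy sites and pseudocount value are hypothetical:

```python
import math

def pwm_from_sites(sites, pseudo=0.5):
    """Log-odds position weight matrix against a uniform background,
    estimated from aligned binding-site sequences with pseudocounts."""
    pwm = []
    for pos in range(len(sites[0])):
        counts = {b: pseudo for b in "ACGT"}
        for s in sites:
            counts[s[pos]] += 1
        total = sum(counts.values())
        pwm.append({b: math.log2((counts[b] / total) / 0.25) for b in "ACGT"})
    return pwm

def score(pwm, segment):
    """Sum of per-position log-odds; high scores mark candidate TFBS
    whose site distributions are far from the background."""
    return sum(col[base] for col, base in zip(pwm, segment))

sites = ["TGTGA", "TGAGA", "TGTGA", "TTTGA"]  # toy aligned sites
pwm = pwm_from_sites(sites)
```

Segments matching the consensus score well above background-like segments, which is exactly the kind of signal the combined model searches for.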
Huffman and linear scanning methods with statistical language models.
Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris
2015-03-01
Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
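The mechanics of Huffman scanning can be illustrated with a toy sketch: given letter probabilities from a language model, a binary Huffman code assigns short yes/no switch sequences to likely letters, so the expected number of scan steps per symbol falls. The probabilities below are illustrative, not from the study:

```python
import heapq

def huffman_codes(probs):
    """Binary Huffman code over symbols with the given probabilities.
    In Huffman scanning, each code bit maps to one switch decision."""
    heap = [[p, [sym, ""]] for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]  # prepend bit for the lighter subtree
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

# Illustrative language-model probabilities for a 4-symbol alphabet
probs = {"e": 0.50, "t": 0.25, "a": 0.15, "q": 0.10}
codes = huffman_codes(probs)
# expected switch decisions per symbol under this code
expected_steps = sum(probs[s] * len(codes[s]) for s in probs)
```

With a skewed distribution the expected code length (here 1.75 decisions per symbol) is well below the log2 of the alphabet size, which is the intuition behind the reported speedups over row/column scanning.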
Statistical Method to Overcome Overfitting Issue in Rational Function Models
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models, extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical significance test is applied to search for the RFM parameters that are resistant against the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over TR.
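The idea of screening parameters by a significance test can be illustrated in the simplest one-parameter setting: keep a term only if its coefficient's t-statistic is large. The |t| > 2 rule of thumb and the data below are illustrative; the paper applies this logic to the many RFM coefficients:

```python
import math

def slope_t_statistic(x, y):
    """OLS slope of y = a + b*x and its t-statistic; a small |t|
    (rule of thumb: below ~2) suggests the term is not supported by
    the data and can be dropped to curb overfitting."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(ss_res / (n - 2) / sxx)
    return b, (b / se if se > 0 else float("inf"))

x = list(range(10))
y_signal = [2 * i + (0.1 if i % 2 else -0.1) for i in range(10)]  # real trend
y_noise = [5, 4.9, 5.1, 5, 4.9, 5.1, 5, 4.9, 5.1, 5]             # no trend
b1, t1 = slope_t_statistic(x, y_signal)
b2, t2 = slope_t_statistic(x, y_noise)
```

The trend term survives the test while the noise-only term would be dropped, shrinking the model the way the proposed method prunes RFM parameters.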
Simon, E.; Nowicki, S.; Neumann, T.; Tyahla, L.; Saba, J. L.; Guerber, J. R.; Bonin, J. A.; DiMarzio, J. P.
2017-12-01
The Cryosphere model Comparison tool (CmCt) is a web-based ice sheet model validation tool that is being developed by NASA to facilitate direct comparison between observational data and various ice sheet models. The CmCt allows the user to take advantage of several decades' worth of observations from Greenland and Antarctica. Currently, the CmCt can be used to compare ice sheet models provided by the user with remotely sensed satellite data from ICESat (Ice, Cloud, and land Elevation Satellite) laser altimetry, the GRACE (Gravity Recovery and Climate Experiment) satellite, and radar altimetry (ERS-1, ERS-2, and Envisat). One or more models can be uploaded through the CmCt website and compared with observational data, or compared to each other or to other models. The CmCt calculates statistics on the differences between the model and observations, and other quantitative and qualitative metrics, which can be used to evaluate the different model simulations against the observations. The qualitative metrics consist of a range of visual outputs, and the quantitative metrics consist of several whole-ice-sheet scalar values that can be used to assign an overall score to a particular simulation. The comparison results from CmCt are useful in quantifying improvements within a specific model (or within a class of models) as a result of differences in model dynamics (e.g., shallow vs. higher-order dynamics approximations), model physics (e.g., representations of ice sheet rheological or basal processes), or model resolution (mesh resolution and/or changes in the spatial resolution of input datasets). The framework and metrics could also be used as a model-to-model intercomparison tool, simply by substituting outputs from another model for the observational datasets. Future versions of the tool will include comparisons with other datasets that are of interest to the modeling community, such as ice velocity, ice thickness, and surface mass balance.
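Two plausible whole-ice-sheet scalar metrics of the kind described (mean difference and RMSE between co-located model and observed values) can be sketched as follows; the elevation samples are hypothetical and this is not CmCt's actual implementation:

```python
import math

def comparison_metrics(model, obs):
    """Two scalar comparison metrics: mean difference (bias)
    and RMSE between co-located model and observed values."""
    diffs = [m - o for m, o in zip(model, obs)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Hypothetical surface elevations (m) at matched grid points
model_elev = [1200.0, 1350.5, 1410.2, 980.7]
obs_elev = [1198.5, 1352.0, 1409.0, 982.2]
bias, rmse = comparison_metrics(model_elev, obs_elev)
```

A low bias with a larger RMSE, as here, signals compensating errors; scoring a simulation usually considers both.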
Statistical Network Analysis for Functional MRI: Mean Networks and Group Comparisons.
Directory of Open Access Journals (Sweden)
Cedric E Ginestet
2014-05-01
Comparing networks in neuroscience is hard, because the topological properties of a given network are necessarily dependent on the number of edges of that network. This problem arises in the analysis of both weighted and unweighted networks. The term density is often used in this context, in order to refer to the mean edge weight of a weighted network, or to the number of edges in an unweighted one. Comparing families of networks is therefore statistically difficult because differences in topology are necessarily associated with differences in density. In this review paper, we consider this problem from two different perspectives, which include (i) the construction of summary networks, such as how to compute and visualize the mean network from a sample of network-valued data points; and (ii) how to test for topological differences, when two families of networks also exhibit significant differences in density. In the first instance, we show that the issue of summarizing a family of networks can be conducted by either adopting a mass-univariate approach, which produces a statistical parametric network (SPN), or by directly computing the mean network, provided that a metric has been specified on the space of all networks with a given number of nodes. In the second part of this review, we then highlight the inherent problems associated with the comparison of topological functions of families of networks that differ in density. In particular, we show that a wide range of topological summaries, such as global efficiency and network modularity, are highly sensitive to differences in density. Moreover, these problems are not restricted to unweighted metrics, as we demonstrate that the same issues remain present when considering the weighted versions of these metrics. We conclude by encouraging caution when reporting such statistical comparisons, and by emphasizing the importance of constructing summary networks.
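A minimal sketch of two notions used above: edge density, and a thresholded mean network as one way to summarize a sample of networks (the 0.5 threshold and the toy graphs are illustrative, not the SPN procedure itself):

```python
def density(adj):
    """Edge density of an undirected, unweighted network
    given as an adjacency matrix."""
    n = len(adj)
    edges = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return edges / (n * (n - 1) / 2)

def mean_network(sample, threshold=0.5):
    """Entrywise mean of a sample of adjacency matrices; thresholding
    the mean edge frequency yields one possible summary network."""
    n = len(sample[0])
    mean = [[sum(g[i][j] for g in sample) / len(sample) for j in range(n)]
            for i in range(n)]
    summary = [[1 if i != j and mean[i][j] >= threshold else 0
                for j in range(n)] for i in range(n)]
    return mean, summary

# Three toy 3-node networks; only edge (0,1) appears in every one
g1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
g2 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
g3 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
mean, summary = mean_network([g1, g2, g3])
```

Note how the summary's density (1/3 here) depends on the chosen threshold, which is exactly why topological comparisons between summaries built at different densities are fraught.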
Graphene growth process modeling: a physical-statistical approach
Wu, Jian; Huang, Qiang
2014-09-01
As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since fabrication of high-quality graphene sheets requires an understanding of growth mechanisms, together with methods for characterization and control of the grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from the area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of the graphene growth process on Cu foil.
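Assuming the confined exponential takes the saturating form A(t) = A_max(1 - exp(-t/tau)) (a plausible reading of the abstract, not a formula it states), the time constant can be recovered from area measurements by linearization:

```python
import math

def fit_tau(times, areas, a_max):
    """Estimate the time constant of a confined exponential
    A(t) = a_max * (1 - exp(-t/tau)) by linearizing:
    ln(1 - A/a_max) = -t/tau, then regressing through the origin."""
    xs, ys = [], []
    for t, a in zip(times, areas):
        if 0 < a < a_max:  # log is defined only strictly inside (0, a_max)
            xs.append(t)
            ys.append(math.log(1 - a / a_max))
    # least-squares slope through the origin: slope = sum(xy) / sum(x^2)
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

# Synthetic island-area data with a_max = 100, tau = 5 (arbitrary units)
ts = [1, 2, 4, 8, 16]
As = [100 * (1 - math.exp(-t / 5)) for t in ts]
tau_hat = fit_tau(ts, As, 100)
```

On noiseless synthetic data the time constant is recovered exactly; with measured island areas one would fit a_max and tau jointly.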
The choice of statistical methods for comparisons of dosimetric data in radiotherapy
International Nuclear Information System (INIS)
Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques
2014-01-01
Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, by on average −5 ± 4.4 (SD) for MB and −4.7 ± 5 (SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density
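A stdlib sketch of one of the non-parametric steps above, Spearman's rank correlation between doses from two calculation methods (the monitor-unit values are hypothetical, and ties are not handled in this sketch):

```python
def ranks(values):
    """Ranks starting at 1 (no tie handling in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation between paired dose values,
    computed from the squared rank differences."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical monitor units for five fields under two algorithms
pbc = [200, 180, 220, 210, 190]
mb = [190, 172, 208, 200, 181]
rho = spearman_rho(pbc, mb)  # identical ordering in both -> rho = 1
```

A high rank correlation alongside a significant Wilcoxon test is the typical pattern here: the methods agree on ordering while differing systematically in level.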
Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani
2010-01-01
This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second-Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is faster and more accurate than other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance...
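Second-order self-similarity is commonly quantified by the Hurst parameter H. The aggregated-variance estimator below is a standard textbook method, shown only to illustrate what LoSS detection monitors; it is not necessarily the paper's Optimization Method:

```python
import math
import random

def hurst_aggvar(series, block_sizes):
    """Aggregated-variance estimate of the Hurst parameter H.
    For second-order self-similar traffic the variance of the
    m-aggregated series scales as m^(2H-2), so the slope of
    log-variance against log m gives H."""
    pts = []
    for m in block_sizes:
        k = len(series) // m
        blocks = [sum(series[i * m:(i + 1) * m]) / m for i in range(k)]
        mu = sum(blocks) / k
        var = sum((b - mu) ** 2 for b in blocks) / (k - 1)
        pts.append((math.log(m), math.log(var)))
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts) /
             sum((x - mx) ** 2 for x, _ in pts))
    return 1 + slope / 2

random.seed(1)
iid = [random.gauss(0, 1) for _ in range(20000)]
h = hurst_aggvar(iid, [1, 2, 4, 8, 16, 32])  # independent data: H near 0.5
```

Self-similar network traffic typically shows 0.5 < H < 1; an H estimate drifting toward 0.5 (or becoming unstable) inside an iterative window is the kind of departure a LoSS detector flags.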
An integrated model of statistical process control and maintenance based on the delayed monitoring
International Nuclear Information System (INIS)
Yin, Hui; Zhang, Guojun; Zhu, Haiping; Deng, Yuhao; He, Fei
2015-01-01
This paper develops an integrated model of statistical process control and maintenance decisions. The proposed delayed monitoring policy postpones the sampling process until a scheduled time, and leads to ten scenarios of the production process, in which equipment failure may occur in addition to a quality shift. Equipment failure and a control chart alert trigger corrective maintenance and predictive maintenance, respectively. The occurrence probability, the cycle time and the cycle cost of each scenario are obtained by integral calculation; a mathematical model is then established to minimize the expected cost by using a genetic algorithm. A Monte Carlo simulation experiment is conducted and compared with the integral calculation in order to validate the analysis of the ten-scenario model. An ordinary integrated model without delayed monitoring is also established for comparison. The results of a numerical example indicate satisfactory economic performance of the proposed model. Finally, a sensitivity analysis is performed to investigate the effect of model parameters. - Highlights: • We develop an integrated model of statistical process control and maintenance. • We propose a delayed monitoring policy and derive an economic model with 10 scenarios. • We consider two deterioration mechanisms, quality shift and equipment failure. • The delayed monitoring policy will help reduce the expected cost
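The renewal-reward logic behind the expected-cost model can be sketched with a deliberately simplified two-scenario Monte Carlo; the paper's model has ten scenarios, and all rates and costs below are invented for illustration:

```python
import random

def simulate_cycle(rng, fail_rate=0.01, monitor_delay=50.0,
                   c_sample=1.0, c_pm=50.0, c_cm=200.0):
    """One production cycle under a delayed monitoring policy:
    sampling is postponed until `monitor_delay`. If the equipment
    fails first, corrective maintenance (CM) ends the cycle; otherwise
    we assume the first sample raises an alert and predictive
    maintenance (PM) ends it. Rates and costs are illustrative."""
    fail_time = rng.expovariate(fail_rate)
    if fail_time < monitor_delay:
        return fail_time, c_cm               # failure before monitoring
    return monitor_delay, c_sample + c_pm    # alert at the first sample

def expected_cost_rate(n=100000, seed=42):
    """Renewal-reward estimate of the long-run cost per unit time:
    total cost over many cycles divided by total cycle time."""
    rng = random.Random(seed)
    total_t = total_c = 0.0
    for _ in range(n):
        dt, dc = simulate_cycle(rng)
        total_t += dt
        total_c += dc
    return total_c / total_t

rate = expected_cost_rate(20000)
```

In the paper this expectation is obtained by integral calculation over all ten scenarios and minimized over the policy parameters with a genetic algorithm; the simulation serves as a cross-check, as here.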
Flashover of a vacuum-insulator interface: A statistical model
Directory of Open Access Journals (Sweden)
W. A. Stygar
2004-07-01
We have developed a statistical model for the flashover of a 45° vacuum-insulator interface (such as would be found in an accelerator) subject to a pulsed electric field. The model assumes that the initiation of a flashover plasma is a stochastic process, that the characteristic statistical component of the flashover delay time is much greater than the plasma formative time, and that the average rate at which flashovers occur is a power-law function of the instantaneous value of the electric field. Under these conditions, we find that the flashover probability is given by 1-exp(-E_{p}^{β}t_{eff}C/k^{β}), where E_{p} is the peak value in time of the spatially averaged electric field E(t), t_{eff}≡∫[E(t)/E_{p}]^{β}dt is the effective pulse width, C is the insulator circumference, k∝exp(λ/d), and β and λ are constants. We define E(t) as V(t)/d, where V(t) is the voltage across the insulator and d is the insulator thickness. Since the model assumes that flashovers occur at random azimuthal locations along the insulator, it does not apply to systems that have a significant defect, i.e., a location contaminated with debris or compromised by an imperfection at which flashovers repeatedly take place, and which prevents a random spatial distribution. The model is consistent with flashover measurements to within 7% for pulse widths between 0.5 ns and 10 μs, and to within a factor of 2 between 0.5 ns and 90 s (a span of over 11 orders of magnitude). For these measurements, E_{p} ranges from 64 to 651 kV/cm, d from 0.50 to 4.32 cm, and C from 4.96 to 95.74 cm. The model is significantly more accurate, and is valid over a wider range of parameters, than the J. C. Martin flashover relation that has been in use since 1971 [J. C. Martin on Pulsed Power, edited by T. H. Martin, A. H. Guenther, and M. Kristiansen (Plenum, New York, 1996)]. We have generalized the statistical model to estimate the total-flashover probability of an
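The flashover-probability expression above is directly computable once a waveform and constants are given. In the sketch below, β, k, C and the waveform are placeholder values, not the paper's fitted constants:

```python
import math

def flashover_probability(E, dt, beta, k, C):
    """Evaluate P = 1 - exp(-E_p^beta * t_eff * C / k^beta), with
    t_eff = integral of [E(t)/E_p]^beta dt approximated by a simple
    Riemann sum over the sampled waveform E (kV/cm); dt is the
    sample spacing (s) and C the insulator circumference (cm)."""
    Ep = max(E)
    t_eff = sum((e / Ep) ** beta for e in E) * dt
    return 1.0 - math.exp(-(Ep ** beta) * t_eff * C / (k ** beta))

# Placeholder constants and waveform (NOT the paper's fitted values)
beta, k, C = 7.0, 120.0, 30.0
dt = 1e-9
E = [i * 1.0 for i in range(101)]  # linear ramp to 100 kV/cm over 100 ns
p1 = flashover_probability(E, dt, beta, k, C)
p2 = flashover_probability(E, dt, beta, k, 2 * C)  # larger circumference
```

The form makes the scaling explicit: probability grows monotonically with circumference and effective pulse width, and steeply (power β) with peak field.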
Statistical Comparison of the Baseline Mechanical Properties of NBG-18 and PCEA Graphite
Energy Technology Data Exchange (ETDEWEB)
Mark C. Carroll; David T. Rohrbaugh
2013-08-01
High-purity graphite is the core structural material of choice in the Very High Temperature Reactor (VHTR), a graphite-moderated, helium-cooled design that is capable of producing process heat for power generation and for industrial processes that require temperatures higher than the outlet temperatures of present nuclear reactors. The Baseline Graphite Characterization Program is endeavoring to minimize the conservative estimates of as-manufactured mechanical and physical properties by providing comprehensive data that captures the level of variation in measured values. In addition to providing a comprehensive comparison between these values in different nuclear grades, the program is also carefully tracking individual specimen source, position, and orientation information in order to provide comparisons and variations between different lots, different billets, and different positions from within a single billet. This report is a preliminary comparison between the two grades of graphite that were initially favored in the two main VHTR designs. NBG-18, a medium-grain pitch coke graphite from SGL formed via vibration molding, was the favored structural material in the pebble-bed configuration, while PCEA, a smaller grain, petroleum coke, extruded graphite from GrafTech was favored for the prismatic configuration. An analysis of the comparison between these two grades will include not only the differences in fundamental and statistically significant individual strength levels, but also the differences in variability in properties within each of the grades that will ultimately provide the basis for the prediction of in-service performance. The comparative performance of the different types of nuclear grade graphites will continue to evolve as thousands more specimens are fully characterized from the numerous grades of graphite being evaluated.
Comparison of small n statistical tests of differential expression applied to microarrays
Directory of Open Access Journals (Sweden)
Lee Anna Y
2009-02-01
Background: DNA microarrays provide data for genome-wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results: Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high variance cDNA data. Conclusion: Pre-processing of data influences performance and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests for small n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.
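The regularized t-statistic idea behind CyberT can be sketched as shrinking each gene's sample variance toward a prior variance; the prior value, its weight nu0, and the expression values below are illustrative, not CyberT's exact implementation:

```python
import math

def regularized_t(x, y, prior_var, nu0=4):
    """Empirical-Bayes-flavoured regularized t-statistic: sample
    variances are shrunk toward a prior variance (in practice
    estimated from genes of similar intensity) with weight nu0,
    stabilizing the denominator when n is very small."""
    def moments(v):
        n = len(v)
        m = sum(v) / n
        s2 = sum((a - m) ** 2 for a in v) / (n - 1)
        return n, m, s2
    nx, mx, sx2 = moments(x)
    ny, my, sy2 = moments(y)
    vx = (nu0 * prior_var + (nx - 1) * sx2) / (nu0 + nx - 1)
    vy = (nu0 * prior_var + (ny - 1) * sy2) / (nu0 + ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Toy 3-replicate expression values: the tiny sample variances would
# make an ordinary t-statistic explode; the shrunk version is tempered
ctrl = [10.0, 10.1, 9.9]
treat = [12.0, 12.1, 11.9]
t = regularized_t(ctrl, treat, prior_var=1.0)
```

This tempering is what lets regularized tests maintain expected false positive rates at low intensities, where per-gene variance estimates from n = 2 or 3 replicates are unreliable.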
A Statistical Model for Regional Tornado Climate Studies.
Jagger, Thomas H; Elsner, James B; Widen, Holly M
2015-01-01
Tornado reports are locally rare, often clustered, and of variable quality making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the level of counties. These data include annual population, annual tornado counts and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west central part of the state consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio.
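Because the reported effects are multiplicative on expected counts, they compose directly. A sketch using only the effect sizes quoted in the abstract (the scenario itself is hypothetical):

```python
import math

# Effect sizes quoted in the abstract, treated as multiplicative
# factors on the expected tornado count:
#   +13% per doubling of population, +1.9% per year,
#   -18% per 10 m increase in elevation standard deviation.
def expected_count_multiplier(pop_ratio, years, elev_sd_change_m):
    pop_effect = 1.13 ** math.log2(pop_ratio)
    trend_effect = 1.019 ** years
    terrain_effect = 0.82 ** (elev_sd_change_m / 10.0)
    return pop_effect * trend_effect * terrain_effect

# Hypothetical scenario: 4x the population, 10 years later, flat terrain
m = expected_count_multiplier(4, 10, 0)
```

A county with four times the population, a decade later, would be expected to report roughly 54% more tornadoes under these estimates, all else equal; the population term reflects reporting, not tornado physics.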
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on a standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2014-01-01
Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM. New to the Second Edition: a new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models; power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...
Corrected Statistical Energy Analysis Model for Car Interior Noise
Directory of Open Access Journals (Sweden)
A. Putra
2015-01-01
Statistical energy analysis (SEA) is a well-known method to analyze the flow of acoustic and vibration energy in a complex structure. For an acoustic space where significant absorptive materials are present, the direct field component from the sound source dominates the total sound field rather than the reverberant field, whereas the latter is the basis on which the conventional SEA model is constructed. Such an environment can be found in a car interior, and thus a corrected SEA model is proposed here to address this situation. The model is developed by eliminating the direct field component from the total sound field so that only the power after the first reflection is considered. A test car cabin was divided into two subsystems and, using a loudspeaker as a sound source, the power injection method in SEA was employed to obtain the corrected coupling loss factor and the damping loss factor of the corrected SEA model. These parameters were then used to predict the sound pressure level (SPL) in the interior cabin using the input power injected from the engine. The results show satisfactory agreement with the directly measured SPL.
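The steady-state power balance underlying a two-subsystem SEA model can be sketched as a small linear solve. The loss factors and injected power below are illustrative placeholders, not measured values from the paper:

```python
import math

# Two-subsystem SEA power balance at angular frequency w:
#   P1 = w * ((eta1 + eta12) * E1 - eta21 * E2)
#   P2 = w * ((eta2 + eta21) * E2 - eta12 * E1)
# where eta1, eta2 are damping loss factors, eta12, eta21 coupling loss
# factors, P the injected powers, and E the subsystem energies.

def sea_two_subsystems(w, eta1, eta2, eta12, eta21, P1, P2):
    """Solve the 2x2 SEA power balance for the subsystem energies E1, E2."""
    a, b = w * (eta1 + eta12), -w * eta21
    c, d = -w * eta12, w * (eta2 + eta21)
    det = a * d - b * c
    E1 = (P1 * d - b * P2) / det
    E2 = (a * P2 - P1 * c) / det
    return E1, E2

w = 2 * math.pi * 500.0  # 500 Hz band centre (illustrative)
E1, E2 = sea_two_subsystems(w, 0.02, 0.05, 0.004, 0.003, P1=1.0, P2=0.0)
assert E1 > 0 and E2 > 0  # energies must be physical
```

From the energies, a sound pressure level follows from the acoustic energy density of each cavity subsystem; the power injection method measures the loss factors by injecting known power into each subsystem in turn.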
Stochastic Spatial Models in Ecology: A Statistical Physics Approach
Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.
2017-11-01
Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.
Percolation for a model of statistically inhomogeneous random media
International Nuclear Information System (INIS)
Quintanilla, J.; Torquato, S.
1999-01-01
We study clustering and percolation phenomena for a model of statistically inhomogeneous two-phase random media, including functionally graded materials. This model consists of inhomogeneous fully penetrable (Poisson distributed) disks and can be constructed for any specified variation of volume fraction. We quantify the transition zone in the model, defined by the frontier of the cluster of disks which are connected to the disk-covered portion of the model, by defining the coastline function and correlation functions for the coastline. We find that the behavior of these functions becomes largely independent of the specific choice of grade in volume fraction as the separation of length scales becomes large. We also show that the correlation function behaves in a manner similar to that of fractal Brownian motion. Finally, we study fractal characteristics of the frontier itself and compare to similar properties for two-dimensional percolation on a lattice. In particular, we show that the average location of the frontier appears to be related to the percolation threshold for homogeneous fully penetrable disks. copyright 1999 American Institute of Physics
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published "High temperature glass melt property database for process modeling" by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-05-01
The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. Different estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
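The moment-based estimator discussed above has a closed form: m = (E[R²])² / Var(R²), where R is the envelope amplitude. A minimal sketch, verified on a simulated Rayleigh envelope (many random scatterers), for which m = 1:

```python
import math
import random
import statistics

def nakagami_mbe(envelope):
    """Moment-based estimate of the Nakagami m parameter from envelope
    samples: m = (E[R^2])^2 / Var(R^2)."""
    r2 = [x * x for x in envelope]
    mean_r2 = statistics.fmean(r2)
    var_r2 = statistics.pvariance(r2)
    return mean_r2 ** 2 / var_r2

# A Rayleigh envelope (complex Gaussian scattering) corresponds to m = 1.
rng = random.Random(0)
env = [math.hypot(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200000)]
m_hat = nakagami_mbe(env)
assert abs(m_hat - 1.0) < 0.05  # close to the theoretical value of 1
```

The MLE approximations compared in the paper instead maximize the Nakagami likelihood; as the abstract notes, they differ from the MBE mainly in stability at small sample sizes.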
StAR: a simple tool for the statistical comparison of ROC curves
Directory of Open Access Journals (Sweden)
Melo Francisco
2008-06-01
Background: As in many different areas of science and technology, most important problems in bioinformatics rely on the proper development and assessment of binary classifiers. A generalized assessment of the performance of binary classifiers is typically carried out through the analysis of their receiver operating characteristic (ROC) curves. The area under the ROC curve (AUC) constitutes a popular indicator of the performance of a binary classifier. However, the assessment of the statistical significance of the difference between any two classifiers based on this measure is not a straightforward task, since not many freely available tools exist. Most existing software is either not free, difficult to use or not easy to automate when a comparative assessment of the performance of many binary classifiers is intended. This constitutes the typical scenario for the optimization of parameters when developing new classifiers and also for their performance validation through the comparison to previous art. Results: In this work we describe and release new software to assess the statistical significance of the observed difference between the AUCs of any two classifiers for a common task estimated from paired data or unpaired balanced data. The software is able to perform a pairwise comparison of many classifiers in a single run, without requiring any expert or advanced knowledge to use it. The software relies on a non-parametric test for the difference of the AUCs that accounts for the correlation of the ROC curves. The results are displayed graphically and can be easily customized by the user. A human-readable report is generated and the complete data resulting from the analysis are also available for download, which can be used for further analysis with other software. The software is released as a web server that can be used in any client platform and also as a standalone application for the Linux operating system. Conclusion: A new software for
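The AUC that the test above compares is equivalent to the normalized Mann-Whitney U statistic: the probability that a randomly chosen positive scores higher than a randomly chosen negative, with ties counting one half. A minimal sketch (this is the statistic itself, not StAR's correlation-aware significance test):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of (positive,
    negative) pairs where the positive outscores the negative; ties count
    as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives AUC = 1.0; indistinguishable scores give 0.5.
print(auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # -> 1.0
print(auc([0.5, 0.5], [0.5, 0.5]))            # -> 0.5
```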
Analytical model of SiPM time resolution and order statistics with crosstalk
International Nuclear Information System (INIS)
Vinogradov, S.
2015-01-01
Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) were initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and "analytical-friendly" to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented.
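The order-statistics view can be illustrated with a simple toy case (illustrative only, not the paper's full model with crosstalk): with N independent exponential detection times of rate lam, the first detection time, the minimum, is itself exponential with rate N*lam, so its mean is 1/(N*lam):

```python
import random
import statistics

# First-photon timing as a first order statistic: simulate the minimum of
# N i.i.d. exponential detection times and compare with the analytical mean.
rng = random.Random(1)
N, lam, trials = 10, 2.0, 100000
first_times = [min(rng.expovariate(lam) for _ in range(N))
               for _ in range(trials)]
mean_first = statistics.fmean(first_times)

# Analytically, min of N Exp(lam) is Exp(N*lam), with mean 1/(N*lam) = 0.05.
assert abs(mean_first - 1.0 / (N * lam)) < 0.002
```

More generally, the CDF of the k-th order statistic follows from F_(k)(t) built out of the single-event CDF F(t); crosstalk complicates this by making the number of effective events non-Poissonian, which is the difficulty the paper addresses.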
A Statistical Toolbox For Mining And Modeling Spatial Data
Directory of Open Access Journals (Sweden)
D’Aubigny Gérard
2016-12-01
Most data mining projects in spatial economics start with an evaluation of a set of attribute variables on a sample of spatial entities, looking for the existence and strength of spatial autocorrelation, based on Moran's and Geary's coefficients, the adequacy of which is rarely challenged, despite the fact that when reporting on their properties, many users seem likely to make mistakes and to foster confusion. My paper begins with a critical appraisal of the classical definition and rationale of these indices. I argue that while intuitively founded, they are plagued by an inconsistency in their conception. Then, I propose a principled small change leading to corrected spatial autocorrelation coefficients, which strongly simplifies their relationship and opens the way to an augmented toolbox of statistical methods for dimension reduction and data visualization, also useful for modeling purposes. A second section presents a formal framework, adapted from recent work in statistical learning, which gives theoretical support to our definition of corrected spatial autocorrelation coefficients. More specifically, the multivariate data mining methods presented here are easily implementable on existing (free) software and yield methods useful for exploiting the proposed corrections in spatial data analysis practice; from a mathematical point of view, their asymptotic behavior, already studied in a series of papers by Belkin & Niyogi, suggests that they possess qualities of robustness and a limited sensitivity to the Modifiable Areal Unit Problem (MAUP), valuable in exploratory spatial data analysis.
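For reference, the classical Moran's I that the paper critiques can be computed directly from attribute values and a spatial weights matrix (this is the standard definition, not the corrected coefficient the paper proposes):

```python
def morans_i(values, weights):
    """Classical Moran's I: I = (n / S0) * (sum_ij w_ij d_i d_j) / (sum_i d_i^2),
    where d_i are deviations from the mean and S0 is the sum of all weights."""
    n = len(values)
    mean = sum(values) / n
    dev = [x - mean for x in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    s0 = sum(weights[i][j] for i in range(n) for j in range(n))
    return (n / s0) * (num / den)

# Four sites on a line with rook-style neighbour weights; a smooth gradient
# of values is positively spatially autocorrelated.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
x = [1.0, 2.0, 3.0, 4.0]
print(morans_i(x, w) > 0)  # -> True
```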
Statistical mechanics of learning orthogonal signals for general covariance models
International Nuclear Information System (INIS)
Hoyle, David C
2010-01-01
Statistical mechanics techniques have proved to be useful tools in quantifying the accuracy with which signal vectors are extracted from experimental data. However, analysis has previously been limited to specific model forms for the population covariance C, which may be inappropriate for real world data sets. In this paper we obtain new statistical mechanical results for a general population covariance matrix C. For data sets consisting of p sample points in R^N we use the replica method to study the accuracy of orthogonal signal vectors estimated from the sample data. In the asymptotic limit of N, p → ∞ at fixed α = p/N, we derive analytical results for the signal direction learning curves. In the asymptotic limit the learning curves follow a single universal form, each displaying a retarded learning transition. An explicit formula for the location of the retarded learning transition is obtained, and we find marked variation in its location depending on the distribution of population covariance eigenvalues. The results of the replica analysis are confirmed against simulation.
Comparison of Statistical Post-Processing Methods for Probabilistic Wind Speed Forecasting
Han, Keunhee; Choi, JunTae; Kim, Chansoo
2018-02-01
In this study, the statistical post-processing methods that produce bias-corrected and probabilistic forecasts of wind speed measured in PyeongChang, which is scheduled to host the 2018 Winter Olympics, are compared and analyzed to provide more accurate weather information. The six post-processing methods used in this study are as follows: mean bias-corrected forecast, mean and variance bias-corrected forecast, decaying averaging forecast, mean absolute bias-corrected forecast, and the alternative implementations of ensemble model output statistics (EMOS) and Bayesian model averaging (BMA) models, namely EMOS and BMA exchangeable models (assuming exchangeable ensemble members) and simplified versions of the EMOS and BMA models. Observations for wind speed were obtained from the 26 stations in PyeongChang, and 51 ensemble member forecasts derived from the European Centre for Medium-Range Weather Forecasts (ECMWF Directorate, 2012) were obtained between 1 May 2013 and 18 March 2016. Prior to applying the post-processing methods, reliability analysis was conducted by using rank histograms to identify the statistical consistency of the ensemble forecast and corresponding observations. Based on the results of our study, we found that the prediction skills of the probabilistic forecasts of the EMOS and BMA models were superior to the bias-corrected forecasts in terms of deterministic prediction, whereas in probabilistic prediction, the BMA models showed better prediction skill than EMOS. Even though the simplified version of the BMA model exhibited the best prediction skill among the six methods, the results showed that the differences in prediction skill between the versions of EMOS and BMA were negligible.
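Two of the simpler schemes in the comparison can be sketched directly: the mean bias-corrected forecast subtracts the average historical forecast-minus-observation error, and the decaying averaging forecast tracks that bias with an exponentially discounted running estimate (the weight value below is illustrative):

```python
def mean_bias_corrected(forecasts, observations, new_forecast):
    """Subtract the mean historical bias (forecast minus observation)
    from the next raw forecast."""
    bias = sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)
    return new_forecast - bias

def decaying_average_bias(forecasts, observations, weight=0.1):
    """Exponentially discounted bias estimate:
    b_t = (1 - w) * b_{t-1} + w * (f_t - o_t)."""
    bias = 0.0
    for f, o in zip(forecasts, observations):
        bias = (1.0 - weight) * bias + weight * (f - o)
    return bias

fcst = [5.0, 6.0, 4.0]
obs = [4.0, 5.0, 3.0]  # the raw forecast runs 1 m/s high
print(mean_bias_corrected(fcst, obs, 7.0))  # -> 6.0
```

The EMOS and BMA methods go further by fitting a full predictive distribution (a single corrected Gaussian-type density for EMOS, a weighted mixture over ensemble members for BMA), which is why they score better on probabilistic skill measures.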
Hendricks, J S
2003-01-01
MCNPX is a Monte Carlo N-Particle radiation transport code extending the capabilities of MCNP4C. As with MCNP, MCNPX uses nuclear data tables to transport neutrons, photons, and electrons. Unlike MCNP, MCNPX also uses (1) nuclear data tables to transport protons; (2) physics models to transport 30 additional particle types (deuterons, tritons, alphas, pions, muons, etc.); and (3) physics models to transport neutrons and protons when no tabular data are available or when the data are above the energy range (20 to 150 MeV) where the data tables end. MCNPX can mix and match data tables and physics models throughout a problem. For example, MCNPX can model neutron transport in a bismuth germanate (BGO) particle detector by using data tables for bismuth and oxygen and using physics models for germanium. Also, MCNPX can model neutron transport in UO2, making the best use of physics models and data tables: below 20 MeV, data tables are used; above 150 MeV, physics models are used; between 20 and 150 MeV, data t...
Comparison of advanced aerodynamic models
Energy Technology Data Exchange (ETDEWEB)
McWilliam, M.; Cline, S.; Lawton, S.; Crawford, C. [Victoria Univ., BC (Canada). Inst. for Integrated Energy Systems; Victoria Univ., BC (Canada). Sustainable Systems Design Laboratory
2010-07-01
This PowerPoint presentation discussed the development of aerodynamic tools for designing sweep and out-of-plane curvatures for wind turbine blades. Potential flow and vortex methods are used to simulate individual vortex elements at the blade and in the wake, and are appropriate modelling tools for both out-of-plane and sweep curvatures. Centrifugal pumping, hub loss, and turbulent wake models are used to correct blade element momentum (BEM) theory, in which a blade's wake is modelled as a momentum balance between the far upstream and downstream. The wake shape can be solved numerically using vortex theory. Wake vorticity is then integrated to characterize rotor conditions. Potential flow and vortex methods are used to account for the influence of the rotor and to model the wake structure. Details of experimental studies and validation test cases using the modelling methods were provided. tabs., figs.
Energy Technology Data Exchange (ETDEWEB)
Almeida, Arthur C.; Barros, Paulo L.C.; Monteiro, Jose H.A.; Rocha, Brigida R.P. [Universidade Federal do Para (DEEC/UFPA), Belem, PA (Brazil). Dept. de Engenharia Eletrica e Computacao. Grupo de Pesquisa ENERBIO], e-mails: arthur@ufpa.br, jhumberto01@yahoo.com.br, brigida@ufpa.br, paulo.contente@ufra.edu.br
2006-07-01
The methodologies currently used in forest inventories for calculating the volume of biomass, and hence the potential energy, are based primarily on statistical methods. More recent techniques, based on the nonlinear mapping capability of artificial neural networks, have been used successfully in several areas of technology, with superior performance. This work presents a comparison between a statistical model for estimating the volume of trees and a model based on neural networks, which can be used to advantage in activities related to biomass energy planning.
Hiedemann, Bridget; Jones, Stacey M.
2010-01-01
We compare the effectiveness of academic service learning to that of case studies in an undergraduate introductory business statistics course. Students in six sections of the course were assigned either an academic service learning project (ASL) or business case studies (CS). We examine two learning outcomes: students' performance on the final…
Comparison of Vehicle Choice Models
Energy Technology Data Exchange (ETDEWEB)
Stephens, Thomas S. [Argonne National Lab. (ANL), Argonne, IL (United States); Levinson, Rebecca S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooker, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Liu, Changzheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lin, Zhenhong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Birky, Alicia [Energetics Incorporated, Columbia, MD (United States); Kontou, Eleftheria [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2017-10-01
Five consumer vehicle choice models that give projections of future sales shares of light-duty vehicles were compared by running each model using the same inputs, where possible, for two scenarios. The five models compared — LVCFlex, MA3T, LAVE-Trans, ParaChoice, and ADOPT — have been used in support of the Energy Efficiency and Renewable Energy (EERE) Vehicle Technologies Office in analyses of future light-duty vehicle markets under different assumptions about future vehicle technologies and market conditions. The models give projections of sales shares by powertrain technology. Projections made using common, but not identical, inputs showed qualitative agreement, with the exception of ADOPT. ADOPT estimated somewhat lower advanced vehicle shares, mostly composed of hybrid electric vehicles. Other models projected large shares of multiple advanced vehicle powertrains. Projections of models differed in significant ways, including how different technologies penetrated cars and light trucks. Since the models are constructed differently and take different inputs, not all inputs were identical, but were the same or very similar where possible.
Representation of the contextual statistical model by hyperbolic amplitudes
International Nuclear Information System (INIS)
Khrennikov, Andrei
2005-01-01
We continue the development of a so-called contextual statistical model (here context has the meaning of a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing hyperbolic cosh-interference. Starting with the corresponding interference formula of total probability, we represent such contexts by hyperbolic probabilistic amplitudes or, in the abstract formalism, by normalized vectors of a hyperbolic analogue of the Hilbert space. A hyperbolic analogue of Born's rule is obtained. Incompatible observables are represented by noncommutative operators. This paper can be considered a first step towards hyperbolic quantum probability. We also discuss possibilities of experimental verification of hyperbolic quantum mechanics: in the physics of elementary particles and string theory, as well as in experiments with nonphysical systems, e.g., in psychology, cognitive science, and economics.
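The two interference formulas of total probability referred to above can be written side by side; the notation (contexts C, observable values a1, a2, b, phase θ) is a generic sketch rather than a verbatim reproduction of the paper's formulas:

```latex
% Conventional trigonometric interference of total probability:
P(b \mid C) = P(a_1 \mid C)\,P(b \mid a_1) + P(a_2 \mid C)\,P(b \mid a_2)
  + 2\cos\theta\,\sqrt{P(a_1 \mid C)\,P(b \mid a_1)\,P(a_2 \mid C)\,P(b \mid a_2)}

% Hyperbolic interference replaces \cos\theta by \pm\cosh\theta:
P(b \mid C) = P(a_1 \mid C)\,P(b \mid a_1) + P(a_2 \mid C)\,P(b \mid a_2)
  \pm 2\cosh\theta\,\sqrt{P(a_1 \mid C)\,P(b \mid a_1)\,P(a_2 \mid C)\,P(b \mid a_2)}
```

Because cosh θ ≥ 1, the hyperbolic interference term can exceed the bound obeyed by the trigonometric case, which is what distinguishes hyperbolic contexts from the conventional quantum-like ones.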
α-ternary decay of Cf isotopes, statistical model
International Nuclear Information System (INIS)
Joseph, Jayesh George; Santhosh, K.P.
2017-01-01
The process of splitting a heavy nucleus into three simultaneous fragments is termed ternary fission; compared with the usual binary fission, it is a rare process. Depending on the nature of the third particle, the process is called light charged particle (LCP) accompanied fission if the third fragment is light, or true ternary fission if all three fragments have nearly the same mass distributions. Following the experimental observations of the early seventies, theoretical studies of ternary fission, which initially progressed slowly, have become a hot topic in nuclear decay studies, especially over the past decade. Meanwhile, various models have been developed, existing ones are being modified, and new ones are sought, in the hope of shedding more light on the profound nature of the nuclear interaction. In this study a statistical method, a level density formulation, has been employed.
The INTRACOIN model comparison project
International Nuclear Information System (INIS)
Lawson, G.
1982-01-01
The International Nuclide Transport Code Intercomparison (INTRACOIN) project is investigating the different models and associated computer codes describing the transport of radionuclides in flowing ground-water following the disposal of solid radioactive wastes in geologic formations. Level I of the project has shown good agreement in the numerical accuracy of most of the codes. In Level II the ability of the codes to model field experiments with radioactive tracers will be compared. Level III will show to what extent the adoption of different models and computer codes for the transport of radionuclides with ground water affects the results of repository assessments. (U.K.)
Smooth extrapolation of unknown anatomy via statistical shape models
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
Statistical shape modeling based renal volume measurement using tracked ultrasound
Pai Raikar, Vipul; Kwartowitz, David M.
2017-03-01
Autosomal dominant polycystic kidney disease (ADPKD) is the fourth most common cause of kidney transplant worldwide, accounting for 7-10% of all cases. Although ADPKD usually progresses over many decades, accurate risk prediction is an important task [1]. Identifying patients with progressive disease is vital to providing the new treatments being developed and enabling them to enter clinical trials for new therapy. Among other factors, total kidney volume (TKV) is a major biomarker predicting the progression of ADPKD. The Consortium for Radiologic Imaging Studies in Polycystic Kidney Disease (CRISP) [2] has shown that TKV is an early and accurate measure of cystic burden and likely growth rate. It is strongly associated with loss of renal function [3]. While ultrasound (US) has proven to be an excellent tool for diagnosing the disease, monitoring short-term changes using ultrasound has been shown not to be accurate. This is attributed to high operator variability and poor reproducibility compared with tomographic modalities such as CT and MR (the gold standard). Ultrasound has emerged as a standout modality for intra-procedural imaging, and methods for spatial localization have afforded us the ability to track 2D ultrasound in the physical space in which it is being used. In addition, the vast amount of recorded tomographic data can be used to generate statistical shape models that allow us to extract clinical value from archived image sets. In this work, we aim at improving the prognostic value of US in managing ADPKD by assessing the accuracy of using statistical shape model augmented US data to predict TKV, with the end goal of monitoring short-term changes.
DEFF Research Database (Denmark)
Tataru, Paula Cristina; Hobolth, Asger
2011-01-01
BACKGROUND: Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. RESULTS: We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain. An implementation of the algorithms is available at www.birc.au.dk/~paula/. CONCLUSIONS: We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually…
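The matrix-exponential (EXPM) approach mentioned in the abstract can be illustrated with Van Loan's block-matrix trick for conditional expected occupation times. This is a sketch, not the authors' code; the 2-state rate matrix and time horizon are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, t, a, b, c):
    """Expected time spent in state c on [0, t], conditional on X_0=a, X_t=b.

    Van Loan's trick: the top-right block of expm([[Q, E_cc], [0, Q]] * t)
    equals the integral over s in [0, t] of expm(Q s) E_cc expm(Q (t - s)),
    whose (a, b) entry is the unnormalized conditional occupation time.
    """
    n = Q.shape[0]
    E = np.zeros((n, n))
    E[c, c] = 1.0
    M = np.block([[Q, E], [np.zeros((n, n)), Q]])
    I = expm(M * t)[:n, n:]      # integral term
    P = expm(Q * t)              # transition probabilities P_ab(t)
    return I[a, b] / P[a, b]

# Toy 2-state chain (rates chosen arbitrarily for illustration)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
t = 1.5
times = [expected_time_in_state(Q, t, 0, 1, c) for c in range(2)]
# Sanity check: conditional occupation times over all states sum to t.
```

Summing over all states c recovers the total elapsed time t exactly, which is a useful correctness check for any of the three algorithms.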
Critical, statistical, and thermodynamical properties of lattice models
Energy Technology Data Exchange (ETDEWEB)
Varma, Vipin Kerala
2013-10-15
In this thesis we investigate zero temperature and low temperature properties - critical, statistical and thermodynamical - of lattice models in the contexts of bosonic cold atom systems, magnetic materials, and non-interacting particles on various lattice geometries. We study quantum phase transitions in the Bose-Hubbard model with higher-body interactions, as relevant for optical lattice experiments of strongly interacting bosons, in one and two dimensions; the universality of the Mott insulator to superfluid transition is found to remain unchanged even for large three-body interaction strengths. A systematic renormalization procedure is formulated to fully re-sum these higher (three and four) body interactions into the two-body terms. In the strongly repulsive limit, we analyse the zero and low temperature physics of interacting hard-core bosons on the kagome lattice at various fillings. Evidence for a disordered phase in the Ising limit of the model is presented; in the strong coupling limit, the transition between the valence bond solid and the superfluid is argued to be first order at the tip of the solid lobe.
The statistical multifragmentation model: Origins and recent advances
International Nuclear Information System (INIS)
Donangelo, R.; Souza, S. R.
2016-01-01
We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
Critical, statistical, and thermodynamical properties of lattice models
International Nuclear Information System (INIS)
Varma, Vipin Kerala
2013-10-01
In this thesis we investigate zero temperature and low temperature properties - critical, statistical and thermodynamical - of lattice models in the contexts of bosonic cold atom systems, magnetic materials, and non-interacting particles on various lattice geometries. We study quantum phase transitions in the Bose-Hubbard model with higher-body interactions, as relevant for optical lattice experiments of strongly interacting bosons, in one and two dimensions; the universality of the Mott insulator to superfluid transition is found to remain unchanged even for large three-body interaction strengths. A systematic renormalization procedure is formulated to fully re-sum these higher (three and four) body interactions into the two-body terms. In the strongly repulsive limit, we analyse the zero and low temperature physics of interacting hard-core bosons on the kagome lattice at various fillings. Evidence for a disordered phase in the Ising limit of the model is presented; in the strong coupling limit, the transition between the valence bond solid and the superfluid is argued to be first order at the tip of the solid lobe.
Constraining statistical-model parameters using fusion and spallation reactions
Directory of Open Access Journals (Sweden)
Charity Robert J.
2011-10-01
The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge, and total excitation energy). The present work focuses on fission and intermediate-mass-fragment (IMF) emission cross sections. We show how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.
Optimizing DNA assembly based on statistical language modelling.
Fang, Gang; Zhang, Shemin; Dong, Yafei
2017-12-15
By successively assembling genetic parts such as BioBricks according to grammatical models, complex genetic constructs composed of dozens of functional blocks can be built. However, each category of genetic parts typically includes several or many individual parts, and as the number of parts grows, assembling more than a few sets of them becomes expensive, time consuming and error prone. At the final assembly step it is difficult to decide which part should be selected. Based on a statistical language model, which is a probability distribution P(S) over strings S that attempts to reflect how frequently a string S occurs as a sentence, the most commonly used parts are selected. Then, a dynamic programming algorithm is designed to find the solution of maximum probability. The algorithm optimizes the results of a genetic design based on a grammatical model and finds an optimal solution. In this way, redundant operations can be reduced and the time and cost required for conducting biological experiments can be minimized. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
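The maximum-probability dynamic program described above can be sketched as a Viterbi-style recursion, assuming a bigram language model over parts. The part names and probabilities below are invented for illustration; the paper's actual grammar and corpus statistics are not reproduced here.

```python
import math

def best_assembly(slots, bigram_p, start_p):
    """Choose one part per slot maximizing the product of bigram probabilities.

    slots: list of lists of candidate part names (one list per functional block).
    start_p[x]: probability of part x starting a construct.
    bigram_p[(x, y)]: probability of part y following part x (tiny if unseen).
    Dynamic programming (Viterbi-style) over slots, in log-space.
    """
    # best[p] = (log-prob of best construct ending in part p, path so far)
    best = {p: (math.log(start_p.get(p, 1e-12)), [p]) for p in slots[0]}
    for slot in slots[1:]:
        nxt = {}
        for q in slot:
            # Extend the best predecessor path by q.
            cand = max(
                (lp + math.log(bigram_p.get((p, q), 1e-12)), path)
                for p, (lp, path) in best.items()
            )
            nxt[q] = (cand[0], cand[1] + [q])
        best = nxt
    return max(best.values())[1]

# Hypothetical parts and frequencies (illustrative numbers only)
slots = [["promA", "promB"], ["rbs1", "rbs2"], ["gfp"]]
start_p = {"promA": 0.7, "promB": 0.3}
bigram_p = {("promA", "rbs1"): 0.6, ("promA", "rbs2"): 0.4,
            ("promB", "rbs1"): 0.9, ("rbs1", "gfp"): 0.8,
            ("rbs2", "gfp"): 0.5}
choice = best_assembly(slots, bigram_p, start_p)
# Here promA -> rbs1 -> gfp maximizes 0.7 * 0.6 * 0.8 = 0.336.
```

Because the recursion keeps only the best-scoring path into each candidate part, the runtime is linear in the number of slots rather than exponential in the number of part combinations.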
Terminal-Dependent Statistical Inference for the FBSDEs Models
Directory of Open Access Journals (Sweden)
Yunquan Song
2014-01-01
The original stochastic differential equations (OSDEs) and forward-backward stochastic differential equations (FBSDEs) are often used to model complex dynamic processes that arise in financial, ecological, and many other areas. The main difference between OSDEs and FBSDEs is that the latter are designed to depend on a terminal condition, which is a key factor in some financial and ecological circumstances. It is interesting but challenging to estimate FBSDE parameters from noisy data and the terminal condition. However, to the best of our knowledge, terminal-dependent statistical inference for such a model has not been explored in the existing literature. We propose a nonparametric terminal control variables estimation method to address this problem. The reason we use terminal control variables is that the newly proposed inference procedures inherit the terminal-dependent characteristic. Through this new method, estimators of the functional coefficients of the FBSDE model are obtained, and their asymptotic properties are discussed. Simulation studies show that the proposed method gives satisfactory estimates of the FBSDE parameters from noisy data and the terminal condition, demonstrating its feasibility.
Statistical inference for imperfect maintenance models with missing data
International Nuclear Information System (INIS)
Dijoux, Yann; Fouladirad, Mitra; Nguyen, Dinh Tuan
2016-01-01
The paper considers complex industrial systems with incomplete maintenance history. A corrective maintenance is performed after the occurrence of a failure and its efficiency is assumed to be imperfect. In maintenance analysis, the databases are not necessarily complete. Specifically, the observations are assumed to be window-censored. This situation arises relatively frequently after the purchase of a second-hand unit or in the absence of maintenance record during the burn-in phase. The joint assessment of the wear-out of the system and the maintenance efficiency is investigated under missing data. A review along with extensions of statistical inference procedures from an observation window are proposed in the case of perfect and minimal repair using the renewal and Poisson theories, respectively. Virtual age models are employed to model imperfect repair. In this framework, new estimation procedures are developed. In particular, maximum likelihood estimation methods are derived for the most classical virtual age models. The benefits of the new estimation procedures are highlighted by numerical simulations and an application to a real data set. - Highlights: • New estimation procedures for window-censored observations and imperfect repair. • Extensions of inference methods for perfect and minimal repair with missing data. • Overview of maximum likelihood method with complete and incomplete observations. • Benefits of the new procedures highlighted by simulation studies and real application.
The statistical multifragmentation model: Origins and recent advances
Energy Technology Data Exchange (ETDEWEB)
Donangelo, R., E-mail: donangel@fing.edu.uy [Instituto de Física, Facultad de Ingeniería, Universidad de la República, Julio Herrera y Reissig 565, 11300, Montevideo (Uruguay); Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro - RJ (Brazil); Souza, S. R., E-mail: srsouza@if.ufrj.br [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro - RJ (Brazil); Instituto de Física, Universidade Federal do Rio Grande do Sul, C.P. 15051, 91501-970 Porto Alegre - RS (Brazil)
2016-07-07
We review the Statistical Multifragmentation Model (SMM) which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions of over 10 years before, leading to a surge in the interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic, liquid-drop picture, employed in the SMM. This led to a series of improvements in the SMM, in particular to the more careful choice of nuclear masses and energy densities, specially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
Statistical Clustering and Compositional Modeling of Iapetus VIMS Spectral Data
Pinilla-Alonso, N.; Roush, T. L.; Marzo, G.; Dalle Ore, C. M.; Cruikshank, D. P.
2009-12-01
It has long been known that Saturn's major satellites are predominantly icy objects [e.g. 1 and references therein]. Since 2004, these bodies have been the subject of observations by the Cassini-VIMS (Visual and Infrared Mapping Spectrometer) experiment [2]. Iapetus has the unique property that the hemisphere centered on the apex of its locked synchronous orbital motion around Saturn has a very low geometrical albedo of 2-6%, while the opposite hemisphere is about 10 times more reflective. The nature and origin of the dark material of Iapetus have remained a question since its discovery [3 and references therein]. The nature of this material, and how it is distributed on the surface of this body, can shed new light on our knowledge of the Saturnian system. We apply statistical clustering [4] and theoretical modeling [5,6] to address the surface composition of Iapetus. The VIMS data evaluated were obtained during the second flyby of Iapetus, in September 2007. This close approach allowed VIMS to obtain spectra at relatively high spatial resolution, ~1-22 km/pixel. The data we study sampled the trailing hemisphere and part of the dark leading one. The statistical clustering [4] is used to identify statistically distinct spectra on Iapetus. The composition of these distinct spectra is evaluated using theoretical models [5,6]. We thank Allan Meyer for his help. This research was supported by an appointment to the NASA Postdoctoral Program at the Ames Research Center, administered by Oak Ridge Associated Universities through a contract with NASA. [1] Coradini, A. et al., 2009, Earth, Moon & Planets, 105, 289-310. [2] Brown et al., 2004, Space Science Reviews, 115, 111-168. [3] Cruikshank, D. et al., 2008, Icarus, 193, 334-343. [4] Marzo, G. et al., 2008, Journal of Geophysical Research, 113, E12, CiteID E12009. [5] Hapke, B., 1993, Theory of Reflectance and Emittance Spectroscopy, Cambridge University Press. [6] Shkuratov, Y. et al., 1999, Icarus, 137, 235-246.
Towards a Statistical Model of Tropical Cyclone Genesis
Fernandez, A.; Kashinath, K.; McAuliffe, J.; Prabhat, M.; Stark, P. B.; Wehner, M. F.
2017-12-01
Tropical Cyclones (TCs) are important extreme weather phenomena that have a strong impact on humans. TC forecasts are largely based on global numerical models that produce TC-like features. Aspects of Tropical Cyclones such as their formation/genesis, evolution, intensification and dissipation over land are important and challenging problems in climate science. This study investigates the environmental conditions associated with Tropical Cyclone Genesis (TCG) by testing how accurately a statistical model can predict TCG in the CAM5.1 climate model. TCG events are defined using the TECA software (Prabhat et al., "TECA: Petascale Pattern Recognition for Climate Science", in Computer Analysis of Images and Patterns, Springer, 2015, pp. 426-436) to extract TC trajectories from CAM5.1. L1-regularized logistic regression (L1LR) is applied to the CAM5.1 output. The predictions have nearly perfect accuracy for data not associated with TC tracks and high accuracy differentiating between high vorticity and low vorticity systems. The model's active variables largely correspond to current hypotheses about important factors for TCG, such as wind field patterns and local pressure minima, and suggest new routes for investigation. Furthermore, our model's predictions of TC activity are competitive with the output of an instantaneous version of Emanuel and Nolan's Genesis Potential Index (GPI) (Emanuel and Nolan, "Tropical cyclone activity and the global climate system", 26th Conference on Hurricanes and Tropical Meteorology, 2004, pp. 240-241).
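The L1-regularized logistic regression step can be sketched with scikit-learn on synthetic data. The predictors, signal structure, and penalty strength below are invented stand-ins for the CAM5.1 environmental fields; the point is only that the L1 penalty yields a sparse set of "active variables".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-in for grid-point environmental predictors: only the first
# three columns (think vorticity, humidity, pressure anomaly) carry signal.
X = rng.normal(size=(500, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 1.0]
p = 1 / (1 + np.exp(-(X @ true_w)))
y = rng.random(500) < p          # 1 = genesis event, 0 = no event

# The L1 penalty drives most coefficients to exactly zero, leaving a sparse
# set of active variables that can be interpreted physically.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
n_active = int(np.count_nonzero(clf.coef_))
accuracy = clf.score(X, y)
```

Scanning the regularization strength C traces out how predictors enter the model, which is one way to rank candidate TCG factors.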
Editorial to: Six papers on Dynamic Statistical Models
DEFF Research Database (Denmark)
2014-01-01
The following six papers are based on invited lectures at the satellite meeting held at the University of Copenhagen before the 58th World Statistics Congress of the International Statistical Institute in Dublin in 2011. At the invitation of the Bernoulli Society, the satellite meeting was organized around the theme “Dynamic Statistical Models” as a part of the Program of Excellence at the University of Copenhagen on “Statistical methods for complex and high dimensional models” (http://statistics.ku.dk/). The Excellence Program in Statistics was a research project to develop and investigate statistical methodology and theory for large and complex data sets that included biostatisticians and mathematical statisticians from three faculties at the University of Copenhagen. The satellite meeting took place August 17–19, 2011. Its purpose was to bring together researchers in statistics and related…
The issue of statistical power for overall model fit in evaluating structural equation models
Directory of Open Access Journals (Sweden)
Richard HERMIDA
2015-06-01
Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals that use SEMs. Results indicate that in many studies power is very low, which implies a high likelihood of accepting invalid models. Additionally, we examined methodological situations that may influence the statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to the model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
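Power for an RMSEA-based test of close fit can be computed from noncentral chi-square distributions, in the style of MacCallum, Browne and Sugawara. This sketch is not the article's own computation, and the sample sizes, degrees of freedom, and RMSEA values are arbitrary illustrative choices.

```python
from scipy.stats import ncx2

def rmsea_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Power of the test of close fit H0: RMSEA = rmsea0 vs RMSEA = rmsea_a.

    The test statistic is referred to noncentral chi-square distributions
    whose noncentrality parameters follow from the RMSEA definition
    (noncentrality = (n - 1) * df * rmsea**2).
    """
    nc0 = (n - 1) * df * rmsea0 ** 2      # noncentrality under H0
    nc_a = (n - 1) * df * rmsea_a ** 2    # noncentrality under H1
    crit = ncx2.ppf(1 - alpha, df, nc0)   # upper critical value under H0
    return 1 - ncx2.cdf(crit, df, nc_a)   # P(reject H0 | H1 true)

power_small = rmsea_power(n=100, df=50)
power_large = rmsea_power(n=500, df=50)
# Power grows with sample size at fixed df, which is why small-n SEM studies
# frequently lack the power to reject poorly fitting models.
```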
Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi
2012-01-01
The objective of the present study was to assess the comparative applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in investigating the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation during the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement, as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.
International Nuclear Information System (INIS)
Gallaher, B.; Mercier, T.; Black, P.; Mullen, K.
2000-01-01
Four governmental agencies conducted a round of groundwater, surface water, and spring water sampling at the Los Alamos National Laboratory during 1998. Samples were split among the four parties and sent to independent analytical laboratories. Results from three of the agencies were available for this study. Comparisons of analytical results that were paired by location and date were made between the various analytical laboratories. The results for over 50 split samples analyzed for inorganic chemicals, metals, and radionuclides were compared. Statistical analyses included non-parametric (sign test and signed-ranks test) and parametric (paired t-test and linear regression) methods. The data pairs were tested for statistically significant differences, defined by an observed significance level, or p-value, less than 0.05. The main conclusion is that the laboratories' performances are similar across most of the analytes that were measured. In some 95% of the laboratory measurements there was agreement on whether contaminant levels exceeded regulatory limits. The most significant differences in performance were noted for the radioactive suite, particularly for gross alpha particle activity and Sr-90.
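The paired nonparametric comparisons mentioned above (sign test and signed-ranks test) can be sketched as follows. The measurements are invented stand-ins, not the study's data; the concentration units and lab names are hypothetical.

```python
import numpy as np
from scipy.stats import binomtest, wilcoxon

# Hypothetical split-sample results (mg/L) from two analytical laboratories,
# paired by sampling location and date. Values are illustrative only.
lab_a = np.array([0.31, 0.28, 0.45, 0.52, 0.30, 0.41, 0.39, 0.27, 0.36, 0.48])
lab_b = np.array([0.30, 0.29, 0.44, 0.55, 0.28, 0.42, 0.40, 0.26, 0.35, 0.50])
diff = lab_a - lab_b

# Sign test: under H0 (no systematic difference between labs), positive and
# negative differences are equally likely, so the count of positive signs
# is Binomial(n_nonzero, 0.5).
n_pos = int(np.sum(diff > 0))
n_nonzero = int(np.sum(diff != 0))
p_sign = binomtest(n_pos, n_nonzero, 0.5).pvalue

# Wilcoxon signed-rank test also uses the magnitudes of the differences,
# giving more power when the differences are roughly symmetric.
p_wilcoxon = wilcoxon(lab_a, lab_b).pvalue
```

A p-value above 0.05 for both tests is consistent with the study's "defined by an observed significance level, or p-value, less than 0.05" criterion for declaring laboratories different.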
Comparison of long-term Moscow and Danish NLC observations: statistical results
Directory of Open Access Journals (Sweden)
P. Dalin
2006-11-01
Noctilucent clouds (NLC) are the highest clouds in the Earth's atmosphere, observed close to the mesopause at 80–90 km altitude. Systematic NLC observations conducted in Moscow for the period 1962–2005 and in Denmark for 1983–2005 are compared, and statistical results both for seasonally summarized NLC parameters and for individual NLC appearances are described. Careful attention is paid to the weather conditions during each season of observations; this turns out to be a very important factor both for NLC case studies and for long-term data set analysis. Time series of seasonal values show moderate similarity (taking the weather conditions into account) but, at the same time, the comparison of individual cases of NLC occurrence reveals substantial differences. There are positive trends in the Moscow and Danish normalized NLC brightness, as well as a nearly zero trend in the Moscow normalized NLC occurrence frequency, but these long-term changes are not statistically significant. The quasi-ten-year cycle in NLC parameters is about 1 year shorter than the solar cycle during the same period. The characteristic scale of NLC fields is estimated for the first time and is found to be less than 800 km.
Energy Technology Data Exchange (ETDEWEB)
Berghausen, P.E. Jr.; Mathews, T.W.
1987-01-01
The security plans of nuclear power plants generally require that all personnel who are to have access to protected areas or vital islands be screened for emotional stability. In virtually all instances, the screening involves the administration of one or more psychological tests, usually including the Minnesota Multiphasic Personality Inventory (MMPI). At some plants, all employees receive a structured clinical interview after they have taken the MMPI and results have been obtained. At other plants, only those employees with 'dirty' MMPI results are interviewed. This latter protocol is referred to as interviews by exception. Behaviordyne Psychological Corp. has succeeded in removing some of the uncertainty associated with interview-by-exception protocols by developing an empirically based, predictive equation. This equation permits utility companies to make informed choices regarding the risks they are assuming. A conceptual problem exists with the predictive equation, however. Like most predictive equations currently in use, it is based on Fisherian statistics, involving least-squares analyses. Consequently, Behaviordyne Psychological Corp., in conjunction with T.W. Mathews and Associates, has developed a second predictive equation, one based on contingent probability statistics. The particular technique used is the multi-contingent analysis of probability systems (MAPS) approach. The present paper presents a comparison of the predictive accuracy of the two equations: the one derived using Fisherian techniques versus the one using contingent probability techniques.
International Nuclear Information System (INIS)
Berghausen, P.E. Jr.; Mathews, T.W.
1987-01-01
The security plans of nuclear power plants generally require that all personnel who are to have access to protected areas or vital islands be screened for emotional stability. In virtually all instances, the screening involves the administration of one or more psychological tests, usually including the Minnesota Multiphasic Personality Inventory (MMPI). At some plants, all employees receive a structured clinical interview after they have taken the MMPI and results have been obtained. At other plants, only those employees with 'dirty' MMPI results are interviewed. This latter protocol is referred to as interviews by exception. Behaviordyne Psychological Corp. has succeeded in removing some of the uncertainty associated with interview-by-exception protocols by developing an empirically based, predictive equation. This equation permits utility companies to make informed choices regarding the risks they are assuming. A conceptual problem exists with the predictive equation, however. Like most predictive equations currently in use, it is based on Fisherian statistics, involving least-squares analyses. Consequently, Behaviordyne Psychological Corp., in conjunction with T.W. Mathews and Associates, has developed a second predictive equation, one based on contingent probability statistics. The particular technique used is the multi-contingent analysis of probability systems (MAPS) approach. The present paper presents a comparison of the predictive accuracy of the two equations: the one derived using Fisherian techniques versus the one using contingent probability techniques.
Analysing earthquake slip models with the spatial prediction comparison test
Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.
2014-01-01
Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
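The SPCT idea of testing the mean of a loss-differential field can be sketched as follows. This is a simplified stand-in, not the full method: the published test models the spatial covariance of the loss differential explicitly, whereas here a crude AR(1)-style effective-sample-size correction is used, and all fields are invented.

```python
import numpy as np
from scipy.stats import norm

def spct_z(model_a, model_b, ref, loss=lambda m, r: (m - r) ** 2):
    """Simplified spatial-prediction-comparison test statistic.

    Forms the loss-differential field D = loss(A, ref) - loss(B, ref) and
    tests mean(D) = 0, inflating the variance by a lag-1 autocorrelation
    estimate so that spatially correlated grid points are not treated as
    independent samples.
    """
    D = (loss(model_a, ref) - loss(model_b, ref)).ravel()
    n = D.size
    rho = np.clip(np.corrcoef(D[:-1], D[1:])[0, 1], 0.0, 0.99)
    n_eff = n * (1 - rho) / (1 + rho)          # effective sample size
    z = D.mean() / (D.std(ddof=1) / np.sqrt(n_eff))
    return z, 2 * norm.sf(abs(z))              # two-sided p-value

rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64))                    # 'true' slip field
model_a = ref + 0.1 * rng.normal(size=ref.shape)   # close to reference
model_b = ref + 0.5 * rng.normal(size=ref.shape)   # farther from reference
z, p = spct_z(model_a, model_b, ref)
# A significantly negative z favors model_a (smaller loss vs the reference).
```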
Analysing earthquake slip models with the spatial prediction comparison test
Zhang, L.
2014-11-10
Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
Application of a Bayesian algorithm for the Statistical Energy model updating of a railway coach
DEFF Research Database (Denmark)
Sadri, Mehran; Brunskog, Jonas; Younesian, Davood
2016-01-01
The classical statistical energy analysis (SEA) theory is a common approach for vibroacoustic analysis of coupled complex structures, being efficient to predict high-frequency noise and vibration of engineering systems. There are however some limitations in applying the conventional SEA… To demonstrate the performance of the proposed strategy, the SEA model updating of a railway passenger coach is carried out. First, a sensitivity analysis is carried out to select the most sensitive parameters of the SEA model. For the selected parameters of the model, prior probability density functions are then taken into account based on published data on comparison between experimental and theoretical results, so that the variance of the theory is estimated. The Monte Carlo Metropolis Hastings algorithm is employed to estimate the modified values of the parameters. It is shown that the algorithm can be efficiently used…
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazards (Cox PH) models) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines (MARS)). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
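The out-of-sample comparison protocol can be sketched schematically. BART and AFT models are not part of scikit-learn, so this sketch substitutes a regression tree and a linear model on invented data; the covariates, the "true" duration function, and the error metric are all hypothetical stand-ins for the article's setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)

# Hypothetical covariates: e.g. wind speed, soil moisture, customer density.
X = rng.uniform(0, 1, size=(1000, 3))
# Nonlinear 'true' outage duration (hours) plus skewed noise (illustrative).
y = 24 * X[:, 0] ** 2 + 10 * (X[:, 1] > 0.6) + rng.gamma(2.0, 1.0, 1000)

# Hold-out split mimics evaluating predictive (not in-sample) accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(max_depth=5, random_state=0),
}
mae = {name: mean_absolute_error(y_te, m.fit(X_tr, y_tr).predict(X_te))
       for name, m in models.items()}
# On this nonlinear synthetic target, the tree's out-of-sample MAE should be
# lower, mirroring how flexible learners like BART can win on real outage data.
```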
Automated robust generation of compact 3D statistical shape models
Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo
2004-05-01
Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.
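The PCA machinery underlying any statistical shape model can be sketched in a few lines. The toy 2-D contours below stand in for registered vertebral surfaces and are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shapes": 2-D contours with 30 landmark points each, generated from one
# true mode of variation (overall width). Real SSMs use registered 3-D voxels
# or landmarks; the PCA machinery is the same.
n_shapes, n_pts = 40, 30
angles = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
shapes = []
for _ in range(n_shapes):
    width = 1.0 + 0.2 * rng.standard_normal()        # one latent shape parameter
    pts = np.stack([width * np.cos(angles), np.sin(angles)], axis=1)
    shapes.append(pts.ravel() + 0.01 * rng.standard_normal(2 * n_pts))
X = np.array(shapes)                                 # (n_shapes, 2 * n_pts)

# Statistical shape model: mean shape plus principal components of the deviations.
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
var = s ** 2 / (n_shapes - 1)
explained = var[0] / var.sum()
print(f"first mode explains {explained:.1%} of variance")
```

New statistically plausible shapes can then be synthesized as `mean_shape + b * Vt[0]` with the coefficient `b` kept within a few standard deviations `sqrt(var[0])` of zero.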
Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos
DEFF Research Database (Denmark)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
2010-01-01
We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning …
Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.
2017-01-01
Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood
Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan
2015-01-01
Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is justifying a biowaiver for post-approval changes, which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to, or in many cases superior to, the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
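For reference, the conventional f2 similarity factor that the proposed F2 parameter extends can be computed directly. The profiles below are hypothetical:

```python
import math

def f2(reference, test):
    """Similarity factor f2 between two dissolution profiles (% dissolved
    at matched time points). f2 >= 50 is the usual similarity criterion;
    identical profiles give f2 = 100."""
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

ref  = [15, 35, 55, 75, 90, 95]   # hypothetical mean % dissolved, reference product
test = [12, 33, 52, 74, 91, 96]   # hypothetical mean % dissolved, test product
print(round(f2(ref, test), 1))
```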
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
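The building block of the U-statistic methods referenced above is the empirical AUC, itself a two-sample U-statistic. A minimal sketch with invented reader scores (not data from the simulation study):

```python
import random

random.seed(0)

def auc(neg, pos):
    """Empirical AUC as a two-sample U-statistic: the fraction of
    (non-diseased, diseased) score pairs ranked correctly, ties counted 1/2."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for n in neg for p in pos)
    return wins / (len(neg) * len(pos))

# Hypothetical reader confidence scores on a 1-5 scale for one modality.
neg = [random.choice([1, 1, 2, 2, 3]) for _ in range(40)]   # non-diseased cases
pos = [random.choice([3, 4, 4, 5, 5]) for _ in range(40)]   # diseased cases
print(round(auc(neg, pos), 3))
```

In a split-plot design, each reader contributes such an AUC only on their own patient sample, and the analysis methods compared in the paper differ in how these per-reader, per-sample estimates are combined and how their variance is modeled.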
Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data
White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.
2017-12-01
As part of its mission to research and measure the effects of the changing climate, the U.S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds. This often causes CMIP5 output to vary from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences in order to support decision-makers as they evaluate results at the watershed scale. Preliminary comparisons of observed and projected flow frequency curves over the US suggested a simple framework for water resources decision-makers to plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between the locally observed patterns and the global climate model structures for decision-makers.
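One common form of such statistical adjustment is empirical quantile mapping. The sketch below uses synthetic temperatures and is an illustration of the general technique, not the USACE procedure itself:

```python
import bisect
import random
import statistics

random.seed(42)

# Empirical quantile mapping: replace each model value with the observed
# value at the same empirical quantile of the baseline period, matching the
# local distribution while preserving the rank structure of the model output.
# (A real application would handle values outside the baseline range more
# carefully than the simple clamping used here.)
obs = sorted(random.gauss(12.0, 3.0) for _ in range(1000))  # local observations
gcm = sorted(random.gauss(14.5, 4.0) for _ in range(1000))  # biased model, baseline

def quantile_map(value, model_sorted, obs_sorted):
    rank = bisect.bisect_left(model_sorted, value)
    rank = min(rank, len(obs_sorted) - 1)
    return obs_sorted[rank]

future = [random.gauss(16.0, 4.0) for _ in range(1000)]     # projected model values
adjusted = [quantile_map(v, gcm, obs) for v in future]
print(round(statistics.mean(adjusted), 2))
```

The adjusted projections sit above the observed baseline (the model's warming signal survives) but below the raw model output (its systematic warm bias is removed).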
Detailed modeling of the statistical uncertainty of Thomson scattering measurements
International Nuclear Information System (INIS)
Morton, L A; Parke, E; Hartog, D J Den
2013-01-01
The uncertainty of electron density and temperature fluctuation measurements is determined by statistical uncertainty introduced by multiple noise sources. In order to quantify these uncertainties precisely, a simple but comprehensive model was made of the noise sources in the MST Thomson scattering system and of the resulting variance in the integrated scattered signals. The model agrees well with experimental and simulated results. The signal uncertainties are then used by our existing Bayesian analysis routine to find the most likely electron temperature and density, with confidence intervals. In the model, photonic noise from scattered light and plasma background light is multiplied by the noise enhancement factor (F) of the avalanche photodiode (APD). Electronic noise from the amplifier and digitizer is added. The amplifier response function shapes the signal and induces correlation in the noise. The data analysis routine fits a characteristic pulse to the digitized signals from the amplifier, giving the integrated scattered signals. A finite digitization rate loses information and can cause numerical integration error. We find a formula for the variance of the scattered signals in terms of the background and pulse amplitudes and three calibration constants. The constants are measured easily under operating conditions, resulting in accurate estimation of the scattered signals' uncertainty. We measure F ≈ 3 for our APDs, in agreement with other measurements for similar APDs. This value is wavelength-independent, simplifying analysis. The correlated noise we observe is reproduced well using a Gaussian response function. Numerical integration error can be made negligible by using an interpolated characteristic pulse, allowing digitization rates as low as the detector bandwidth. The effect of background noise is also determined.
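The generic structure of such a variance model can be written down, though the functional form and all constants below are placeholders standing in for the paper's three calibrated constants, not its actual formula:

```python
# Hedged sketch of the variance model described above: photon (shot) noise
# proportional to the pulse and background amplitudes, scaled by the APD
# excess-noise factor F, plus a constant electronic-noise term. c1, c2 and c3
# play the role of the three calibration constants; their values here are
# invented for illustration.
def signal_variance(a_pulse, a_bg, F=3.0, c1=1.0, c2=1.0, c3=0.05):
    return F * (c1 * a_pulse + c2 * a_bg) + c3

# Relative uncertainty of an integrated scattered signal under this model:
a_pulse, a_bg = 100.0, 20.0
rel_err = signal_variance(a_pulse, a_bg) ** 0.5 / a_pulse
print(round(rel_err, 3))
```

The measured F ≈ 3 enters multiplicatively: an APD with no excess noise (F = 1) would give a relative uncertainty smaller by a factor of about sqrt(3).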
Local yield stress statistics in model amorphous solids
Barbot, Armand; Lerbinger, Matthias; Hernandez-Garcia, Anier; García-García, Reinaldo; Falk, Michael L.; Vandembroucq, Damien; Patinet, Sylvain
2018-03-01
We develop and extend a method presented by Patinet, Vandembroucq, and Falk [Phys. Rev. Lett. 117, 045501 (2016), 10.1103/PhysRevLett.117.045501] to compute the local yield stresses at the atomic scale in model two-dimensional Lennard-Jones glasses produced via differing quench protocols. This technique allows us to sample the plastic rearrangements in a nonperturbative manner for different loading directions on a well-controlled length scale. Plastic activity upon shearing correlates strongly with the locations of low yield stresses in the quenched states. This correlation is higher in more structurally relaxed systems. The distribution of local yield stresses is also shown to strongly depend on the quench protocol: the more relaxed the glass, the higher the local plastic thresholds. Analysis of the magnitude of local plastic relaxations reveals that stress drops follow exponential distributions, justifying the hypothesis of an average characteristic amplitude often conjectured in mesoscopic or continuum models. The amplitude of the local plastic rearrangements increases on average with the yield stress, regardless of the system preparation. The local yield stress varies with the shear orientation tested and strongly correlates with the plastic rearrangement locations when the system is sheared correspondingly. It is thus argued that plastic rearrangements are the consequence of shear transformation zones encoded in the glass structure that possess weak slip planes along different orientations. Finally, we justify the length scale employed in this work and extract the yield threshold statistics as a function of the size of the probing zones. This method makes it possible to derive physically grounded models of plasticity for amorphous materials by directly revealing the relevant details of the shear transformation zones that mediate this process.
COMPARISON OF STATISTICALLY CONTROLLED MACHINING SOLUTIONS OF TITANIUM ALLOYS USING USM
Directory of Open Access Journals (Sweden)
R. Singh
2010-06-01
Full Text Available The purpose of the present investigation is to compare statistically controlled machining solutions for titanium alloys using ultrasonic machining (USM). In this study, the previously developed Taguchi model for USM of titanium and its alloys has been investigated and compared. Relationships between the material removal rate, tool wear rate and surface roughness, and the controllable machining parameters (power rating, tool type, slurry concentration, slurry type, slurry temperature and slurry size) have been deduced. The results of this study suggest that at the best settings of the controllable machining parameters for titanium alloys (based upon the Taguchi design), the machining solution with USM is statistically controlled, which is not observed for other settings of input parameters on USM.
Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Mendez, Fernando
2013-04-01
Wave climate forecasting is a major issue for numerous marine and coastal activities, such as offshore industries, flood risk assessment and wave energy resource evaluation, among others. Generally, there are two main ways to predict the impacts of climate change on the wave climate at regional scale: dynamical and statistical downscaling of a GCM (Global Climate Model). In this study, both methods have been applied to the French coast (Atlantic, English Channel and North Sea shorelines) under three climate change scenarios (A1B, A2, B1) simulated with the GCM ARPEGE-CLIMAT from Météo-France (AR4, IPCC). The aim of the work is to characterise the wave climatology of the 21st century and to compare the statistical and dynamical methods, pointing out the advantages and disadvantages of each approach. The statistical downscaling method proposed by the Environmental Hydraulics Institute of Cantabria (Spain) has been applied (Menendez et al., 2011). At a particular location, the sea-state climate (predictand Y) is defined as a function, Y=f(X), of several atmospheric circulation patterns (predictor X). Assuming these climate associations between predictor and predictand are stationary, the statistical approach has been used to project the future wave conditions with reference to the GCM. The statistical relations between predictor and predictand have been established over 31 years, from 1979 to 2009. The predictor is built as the 3-day-averaged squared sea level pressure gradient from the hourly CFSR database (Climate Forecast System Reanalysis, http://cfs.ncep.noaa.gov/cfsr/). The predictand has been extracted from the 31-year hindcast sea-state database ANEMOC-2, performed with the 3G spectral wave model TOMAWAC (Benoit et al., 1996), developed at EDF R&D LNHE and the Saint-Venant Laboratory for Hydraulics, and forced by the CFSR 10 m wind field. Significant wave height, peak period and mean wave direction have been extracted with an hourly resolution at …
International Nuclear Information System (INIS)
Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Babic, Sasa
2014-01-01
Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. The structure of the traffic flow and the average speed of the traffic flow are chosen as input variables of the neural network. The output variable of the network is the equivalent noise level L_eq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to any other statistical method for traffic noise level prediction. - Highlights: • We propose an ANN model for the prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capability.
Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model
Energy Technology Data Exchange (ETDEWEB)
Walker, M D; Asselin, M-C; Julyan, P J; Feldmann, M; Matthews, J C [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Talbot, P S [Mental Health and Neurodegeneration Research Group, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Jones, T, E-mail: matthew.walker@manchester.ac.uk [Academic Department of Radiation Oncology, Christie Hospital, University of Manchester, Manchester M20 4BX (United Kingdom)
2011-02-21
Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are, however, reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey- or white-matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.
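The expectation-maximization update at the core of MLEM (and of OSEM, which applies it to subsets of the data) is compact. The toy 1-D system below, with a random system matrix and synthetic Poisson counts, only illustrates the algorithm, not the HRRT reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal MLEM on a toy 1-D problem: emission image x, system matrix A,
# Poisson data y. OSEM is this same update applied to subsets of the rows.
n_pix, n_det = 20, 40
A = rng.uniform(0.0, 1.0, size=(n_det, n_pix))
A /= A.sum(axis=0)                      # normalize pixel sensitivities
x_true = np.ones(n_pix)
x_true[8:12] = 5.0                      # a "hot" region
counts_scale = 5.0                      # low-count regime (few counts/detector)
y = rng.poisson(counts_scale * (A @ x_true))

x = np.ones(n_pix)                      # uniform, strictly positive start
sens = A.sum(axis=0)
for _ in range(50):                     # MLEM iterations
    proj = A @ x
    proj[proj == 0] = 1e-12             # guard against division by zero
    x = x / sens * (A.T @ (y / proj))

# The multiplicative update keeps x nonnegative and preserves total counts:
# sum(A @ x) equals sum(y) after every iteration.
print(round(float((A @ x).sum()), 2), round(float(y.sum()), 2))
```

The nonnegativity constraint enforced by the multiplicative update is exactly what produces the positive bias in cold regions (and hence the net biases discussed above) when counts are low: noise cannot average out symmetrically around a value near zero.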
DEFF Research Database (Denmark)
A methodology is presented that combines modelling based on first principles and data-based modelling into a modelling cycle that facilitates fast decision-making based on statistical methods. A strong feature of this methodology is that, given a first principles model along with process data, … the corresponding modelling cycle model of the given system for a given purpose can be derived. A computer-aided tool, which integrates the elements of the modelling cycle, is also presented, and an example is given of modelling a fed-batch bioreactor.
Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and...
Digital Repository Service at National Institute of Oceanography (India)
Srinivas, K.; Das, V.K.; DineshKumar, P.K.
This study investigates the suitability of statistical models for their predictive potential for the monthly mean sea level at different stations along the west and east coasts of the Indian subcontinent. Statistical modelling of the monthly mean...
Tract-oriented statistical group comparison of diffusion in sheet-like white matter
DEFF Research Database (Denmark)
Lyksborg, Mark; Dyrby, T. B.; Sorensen, P. S.
2013-01-01
… tube-like shapes, not always suitable for modelling the white matter tracts of the brain. The tract-oriented technique, aimed at group studies, integrates the usage of multivariate features and outputs a single value of significance indicating tract-specific differences. This is in contrast to voxel-based analysis techniques, which output a significance on a per-voxel basis and require multiple comparison correction. We demonstrate our technique by comparing a group of controls with a group of multiple sclerosis subjects, obtaining significant differences in 11 different fascicle structures.
Statistical physics of medical diagnostics: Study of a probabilistic model.
Mashaghi, Alireza; Ramezanpour, Abolfazl
2018-03-01
We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
Increased Statistical Efficiency in a Lognormal Mean Model
Directory of Open Access Journals (Sweden)
Grant H. Skrepnek
2014-01-01
Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator, and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.