Wu, Stephen; Angelikopoulos, Panagiotis; Tauriello, Gerardo; Papadimitriou, Costas; Koumoutsakos, Petros
2016-12-28
We propose a hierarchical Bayesian framework to systematically integrate heterogeneous data for the calibration of force fields in Molecular Dynamics (MD) simulations. Our approach enables the fusion of diverse experimental data sets of the physico-chemical properties of a system at different thermodynamic conditions. We demonstrate the value of this framework for the robust calibration of MD force-fields for water using experimental data of its diffusivity, radial distribution function, and density. In order to address the high computational cost associated with the hierarchical Bayesian models, we develop a novel surrogate model based on the empirical interpolation method. Further computational savings are achieved by implementing a highly parallel transitional Markov chain Monte Carlo technique. The present method bypasses possible subjective weightings of the experimental data in identifying MD force-field parameters.
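The hierarchical fusion idea above can be illustrated with a deliberately small sketch: two synthetic "experimental data sets" share a hyper-parameter, and a plain random-walk Metropolis sampler (a stand-in for the paper's parallel transitional MCMC and surrogate machinery) recovers its posterior. All variable names and numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "experimental data sets" (think diffusivity and density runs),
# each generated from its own data-set-level parameter drawn around a shared
# hyper-parameter mu. All numbers are invented for illustration.
true_mu, tau, sigma = 2.0, 0.3, 0.1
data = [rng.normal(rng.normal(true_mu, tau), sigma, size=50) for _ in range(2)]

def log_post(mu):
    # With the data-set-level parameters integrated out, each data-set mean is
    # sufficient for mu: ybar_i ~ N(mu, tau^2 + sigma^2 / n_i); flat prior.
    return sum(-0.5 * (y.mean() - mu) ** 2 / (tau**2 + sigma**2 / y.size)
               for y in data)

# Plain random-walk Metropolis over the hyper-parameter mu
mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + 0.1 * rng.normal()
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)

post = np.array(chain[1000:])   # discard burn-in
print(post.mean(), post.std())
```

The hierarchical structure is what removes the subjective weighting: each data set informs `mu` in proportion to its own variance, not through a hand-picked weight.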
Energy Technology Data Exchange (ETDEWEB)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao; Sun, Xin; Storlie, Curtis; Marcy, Peter; Dietiker, Jean-François; Li, Tingwen; Spenik, James
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among the different unit problems performed within this hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming, and the effort for daily calibration routines should therefore be minimized while maintaining analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analyte concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients of the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Markov chain Monte Carlo sampling. The latter provides realistic estimates for coefficients and predictions, together with accurate error bounds, by simulating known measurement errors and system fluctuations. Performance criteria for validation and for optimal selection of a reduced set of calibration samples were developed and led to a setup which maintains the analytical performance of a full calibration. Strategies for rapid detection of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
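The core of the calibration-transfer idea, historical runs supplying a prior over the calibration coefficients so that a few fresh standards suffice, can be sketched with a conjugate Bayesian update. The linear-in-coefficients response used here is a stand-in (the paper's oxygen sensor uses a non-linear equation and full MCMC); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linearized sensor response y = b0 + b1 * conc.
# Prior over (b0, b1) summarizes many past calibration runs:
m0 = np.array([0.1, 2.0])            # historical mean of the coefficients
S0 = np.diag([0.05**2, 0.1**2])      # historical run-to-run covariance
s2 = 0.02**2                         # known measurement-noise variance

# Today only two standards are measured instead of a full series
conc = np.array([0.0, 5.0])
X = np.column_stack([np.ones_like(conc), conc])
true_b = np.array([0.12, 1.95])      # today's (unknown) drifted coefficients
y = X @ true_b + rng.normal(0.0, 0.02, size=conc.size)

# Conjugate Bayesian update: prior information + today's few standards
SN = np.linalg.inv(np.linalg.inv(S0) + X.T @ X / s2)
mN = SN @ (np.linalg.inv(S0) @ m0 + X.T @ y / s2)
print(mN)                            # posterior coefficient estimate
```

The posterior mean `mN` sits between the historical coefficients and what today's two standards alone would suggest, which is exactly the trade-off the transfer approach exploits.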
Collaborative Hierarchical Sparse Modeling
Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina C
2010-01-01
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is done by solving an l_1-regularized linear regression problem, usually called Lasso. In this work we first combine the sparsity-inducing property of the Lasso model, at the individual feature level, with the block-sparsity property of the group Lasso model, where sparse groups of features are jointly encoded, obtaining a hierarchically structured sparsity pattern. This results in the hierarchical Lasso, which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower one. Signals then share the same active groups, or classes, but not necessarily the same active set. This is very well suited for applications such as source separation. An efficient optimization procedure, which guarantees convergence to the global opt...
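The hierarchical (sparse-group) penalty described above has a convenient proximal operator: elementwise soft-thresholding followed by group soft-thresholding. A minimal ISTA loop using it, on an invented toy problem, looks like this (this is a sketch of the penalty structure, not the authors' optimization procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

def soft(v, t):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_hilasso(v, lam1, lam2, groups):
    """Prox of lam1*||v||_1 + lam2*sum_g ||v_g||_2: feature-level shrinkage
    followed by group-level shrinkage, giving sparsity within and across
    groups as in the hierarchical Lasso."""
    out = soft(v, lam1)
    for g in groups:
        norm_g = np.linalg.norm(out[g])
        out[g] = 0.0 if norm_g <= lam2 else out[g] * (1 - lam2 / norm_g)
    return out

# Toy problem: 2 groups of 5 features; only group 0 is active, sparsely.
n_obs, p = 100, 10
groups = [slice(0, 5), slice(5, 10)]
A = rng.normal(size=(n_obs, p))
x_true = np.zeros(p)
x_true[[0, 2]] = [3.0, -2.0]
b = A @ x_true + 0.1 * rng.normal(size=n_obs)

# ISTA: gradient step on 0.5*||Ax - b||^2, then the hierarchical prox
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam = 5.0                              # penalty weight (both levels)
x = np.zeros(p)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x = prox_hilasso(x - grad / L, lam / L, lam / L, groups)
print(np.round(x, 2))                  # inactive group and features zeroed
```

The recovered `x` is zero on the whole inactive group and on the inactive features inside the active group, the two-level sparsity pattern the abstract describes.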
Directory of Open Access Journals (Sweden)
Björn J. Döring
2013-12-01
A synthetic aperture radar (SAR) system requires external absolute calibration so that radiometric measurements can be exploited in numerous scientific and commercial applications. Besides estimating a calibration factor, metrological standards also demand the derivation of a respective calibration uncertainty. This uncertainty is currently not systematically determined. Here, for the first time, it is proposed to use hierarchical modeling and Bayesian statistics as a consistent method for handling and analyzing the hierarchical data typically acquired during external calibration campaigns. Through the use of Markov chain Monte Carlo simulations, a joint posterior probability can be conveniently derived from measurement data despite the necessary grouping of data samples. The applicability of the method is demonstrated through a case study: the radar reflectivity of DLR's new C-band Kalibri transponder is derived through a series of RADARSAT-2 acquisitions and a comparison with reference point targets (corner reflectors). The systematic derivation of calibration uncertainties is seen as an important step toward traceable radiometric calibration of synthetic aperture radars.
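The grouped structure of such a campaign (several acquisitions, several measurements each, one campaign-level calibration factor) fits a two-level normal model that a short Gibbs sampler handles directly. The sketch below uses invented numbers and fixes the variance components, which the paper's full analysis would also infer:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic campaign: 4 acquisitions ("groups"), 6 measurements each, of one
# calibration factor; group offsets mimic per-pass conditions. Invented data.
true_mu, tau, sigma = 10.0, 0.2, 0.1
groups = [rng.normal(rng.normal(true_mu, tau), sigma, size=6) for _ in range(4)]

# Gibbs sampler for the two-level normal model (tau, sigma taken as known)
mu, draws = 0.0, []
for _ in range(3000):
    theta = []
    for y in groups:
        # conditional posterior of each acquisition-level mean
        prec = len(y) / sigma**2 + 1.0 / tau**2
        mean = (y.sum() / sigma**2 + mu / tau**2) / prec
        theta.append(rng.normal(mean, prec**-0.5))
    # conditional posterior of the campaign-level factor (flat prior)
    mu = rng.normal(np.mean(theta), tau / np.sqrt(len(theta)))
    draws.append(mu)

post = np.array(draws[500:])
print(post.mean(), post.std())   # joint posterior summary for the factor
```

The posterior standard deviation is the "calibration uncertainty" the abstract argues should accompany the calibration factor, and the grouping is handled by construction rather than by averaging away.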
Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus
Jelonek, M
2006-01-01
The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected with modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational, or market dilemmas. This paper introduces the logic of modeling hierarchical linear equations and their estimation based on the MPlus software. I present my own model to illustrate the impact of different factors on the school acceptance level.
Energy Technology Data Exchange (ETDEWEB)
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, and they also serve Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Tashiro, Tohru
2014-03-01
We propose a new model of product diffusion that includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is absent from the Bass model. As an application, we use the model to fit iPod sales data and obtain better agreement than with the Bass model.
Tashiro, Tohru
2013-01-01
We propose a new model of product diffusion that includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is absent from the Bass model. As an application, we use the model to fit iPod sales data and obtain better agreement than with the Bass model.
Hierarchical Cont-Bouchaud model
Paluch, Robert; Holyst, Janusz A
2015-01-01
We extend the well-known Cont-Bouchaud model to include a hierarchical topology of agents' interactions. The influence of hierarchy on system dynamics is investigated by two models. The first one is based on a multi-level, nested Erdos-Renyi random graph and individual decisions by agents according to Potts dynamics. This approach does not lead to a broad return distribution outside a parameter regime close to the original Cont-Bouchaud model. In the second model we introduce a limited hierarchical Erdos-Renyi graph, where merging of clusters at a level h+1 involves only clusters that have merged at the previous level h, and we use the original Cont-Bouchaud agent dynamics on the resulting clusters. The second model leads to a heavy-tail distribution of cluster sizes and relative price changes in a wide range of connection densities, not only close to the percolation threshold.
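For readers unfamiliar with the baseline being extended, the original (non-hierarchical) Cont-Bouchaud dynamics are easy to simulate: clusters of an Erdos-Renyi graph act as herds, each buying, selling, or staying idle. The sketch below (invented parameter values, union-find for the components) reproduces that baseline only, not the paper's hierarchical variants:

```python
import numpy as np

rng = np.random.default_rng(4)

def cluster_sizes(n, c, rng):
    """Component sizes of an Erdos-Renyi graph G(n, c/n), via union-find."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    ii, jj = np.triu_indices(n, 1)
    mask = rng.random(ii.size) < c / n
    for i, j in zip(ii[mask], jj[mask]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return list(sizes.values())

def price_return(n=200, c=0.9, a=0.05, rng=rng):
    """One Cont-Bouchaud step: each cluster buys (+), sells (-), or idles."""
    r = 0
    for s in cluster_sizes(n, c, rng):
        u = rng.random()
        if u < a:
            r += s
        elif u < 2 * a:
            r -= s
    return r / n

rets = np.array([price_return() for _ in range(300)])
print(rets.std(), np.abs(rets).max())
```

Near the percolation threshold (`c` approaching 1) the cluster-size distribution broadens and the returns grow heavy-tailed; the paper's second model obtains such tails over a much wider range of `c`.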
Hierarchical model of matching
Pedrycz, Witold; Roventa, Eugene
1992-01-01
The issue of matching two fuzzy sets becomes an essential design aspect of many algorithms including fuzzy controllers, pattern classifiers, knowledge-based systems, etc. This paper introduces a new model of matching. Its principal features involve the following: (1) matching carried out with respect to the grades of membership of fuzzy sets as well as some functionals defined on them (like energy, entropy, etc.); (2) concepts of hierarchies in the matching model leading to a straightforward distinction between 'local' and 'global' levels of matching; and (3) a distributed character of the model realized as a logic-based neural network.
Hierarchical topic modeling with nested hierarchical Dirichlet process
Institute of Scientific and Technical Information of China (English)
Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN
2009-01-01
This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed fixed, while the number of topics on each level is unknown a priori and must be inferred from data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and topic distribution of each document. Our theoretical analysis and experiment results show that this model can produce a more compact hierarchical topic structure and captures more fine-grained topic relationships compared to the hierarchical latent Dirichlet allocation model.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.
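The workhorse diagnostic behind such analyses, the variance inflation factor, is simple to compute at any level of the hierarchy. A minimal sketch with invented near-collinear "contextual" predictors (this is the generic VIF, not the paper's full top-down procedure):

```python
import numpy as np

rng = np.random.default_rng(5)

def vif(X):
    """Variance inflation factor of each column: 1 / (1 - R_j^2), where
    R_j^2 comes from regressing column j on the remaining columns."""
    Z = (X - X.mean(0)) / X.std(0)        # standardize, so var(Z_j) == 1
    out = []
    for j in range(Z.shape[1]):
        others = np.delete(Z, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, Z[:, j], rcond=None)
        resid = Z[:, j] - others @ beta
        out.append(1.0 / max(resid.var(), 1e-12))
    return np.array(out)

# Hypothetical Level-2 (contextual) predictors: x2 nearly duplicates x1,
# the kind of collinearity a top-down check would flag first.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # near-collinear with x1
x3 = rng.normal(size=n)               # independent
v = vif(np.column_stack([x1, x2, x3]))
print(np.round(v, 1))                 # v[0], v[1] huge; v[2] near 1
```

Large VIFs on `x1` and `x2` but not `x3` localize the problem to the collinear pair, which then informs the variable-selection and re-specification steps the study recommends.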
Directory of Open Access Journals (Sweden)
Sezar Gülbaz
2015-01-01
Land development and increasing urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes adjustment of the geomorphic structure of streams and ultimately raises the peak flow rate, which causes flooding; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation is observed downstream of urban areas, which is not desirable for a longer dam life. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and watersheds should be brought under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied to manage storm water runoff in order to reduce flooding and simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales, and permeable pavement, and obtained their influence on peak flow rate and on pollutant buildup and washoff for TSS. Consequently, we observe the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.
SURF Model Calibration Strategy
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-10
SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data, based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
A Model of Hierarchical Key Assignment Scheme
Institute of Scientific and Technical Information of China (English)
ZHANG Zhigang; ZHAO Jing; XU Maozhi
2006-01-01
A model of the hierarchical key assignment scheme, which can be used with any cryptographic algorithm, is proposed in this paper. In addition, the optimal dynamic control property of a hierarchical key assignment scheme is defined, and our scheme model is shown to satisfy this property.
HIERARCHICAL OPTIMIZATION MODEL ON GEONETWORK
Directory of Open Access Journals (Sweden)
Z. Zha
2012-07-01
In existing experience with constructing Spatial Data Infrastructure (SDI), GeoNetwork, as an integrated geographic information solution, is an effective way of building SDI. While GeoNetwork serves as an internet application, several shortcomings are exposed. First, the time required for data loading increases considerably with the growth of the metadata count, so the efficiency of the query and search service decreases. Second, stability and robustness both suffer under the huge amount of metadata. Finally, the requirement of multi-user concurrent access to massive data is not effectively satisfied on the internet. A novel approach, the Hierarchical Optimization Model (HOM), is presented in this paper to address GeoNetwork's inability to work with massive data. HOM optimizes GeoNetwork in several respects: internal procedures, external deployment strategies, etc. The model builds an efficient index for accessing huge metadata holdings and supporting concurrent processes, so that services based on GeoNetwork can remain stable while serving massive metadata. As an experiment, we deployed more than 30 GeoNetwork nodes and harvested nearly 1.1 million metadata records. A comparison between the HOM-improved software and the original shows that the model makes indexing and retrieval faster and keeps the speed stable as the metadata volume grows. It also remains stable under multi-user concurrent access to system services. The experiment achieved good results and showed that our optimization model is efficient and reliable.
Hierarchical modeling and analysis for spatial data
Banerjee, Sudipto; Gelfand, Alan E
2003-01-01
Among the many uses of hierarchical modeling, their application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and dat
A Model for Slicing JAVA Programs Hierarchically
Institute of Scientific and Technical Information of China (English)
Bi-Xin Li; Xiao-Cong Fan; Jun Pang; Jian-Jun Zhao
2004-01-01
Program slicing can be effectively used to debug, test, analyze, understand and maintain object-oriented software. In this paper, a new slicing model is proposed to slice Java programs based on their inherent hierarchical feature. The main idea of hierarchical slicing is to slice programs in a stepwise way, from the package level, to the class level, the method level, and finally the statement level. The stepwise slicing algorithm and the related graph reachability algorithms are presented, the architecture of the Java program Analyzing Tool (JATO) based on the hierarchical slicing model is provided, and the applications and a small case study are also discussed.
When to Use Hierarchical Linear Modeling
National Research Council Canada - National Science Library
Veronika Huta
2014-01-01
Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis...
An introduction to hierarchical linear modeling
National Research Council Canada - National Science Library
Woltman, Heather; Feldstain, Andrea; MacKay, J. Christine; Rocchi, Meredith
2012-01-01
This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis...
Conservation Laws in the Hierarchical Model
Beijeren, H. van; Gallavotti, G.; Knops, H.
1974-01-01
An exposition of the renormalization-group equations for the hierarchical model is given. Attention is drawn to some properties of the spin distribution functions which are conserved under the action of the renormalization group.
Bayesian Calibration of Microsimulation Models.
Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E
2009-12-01
Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
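The MCMC calibration idea can be shown on a deliberately tiny "microsimulation": each person faces a yearly onset probability `p` over 10 years, and the calibration target is an observed count of affected individuals. This toy model admits a closed-form likelihood; real microsimulation models would approximate it by simulating event histories. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy natural-history model: P(affected) = 1 - (1 - p)**years.
# Calibration target: 120 of 1000 screened individuals affected.
k, n, years = 120, 1000, 10

def log_post(p):
    if not 0.0 < p < 1.0:
        return -np.inf                       # flat prior on (0, 1)
    q = 1 - (1 - p) ** years                 # modeled prevalence
    return k * np.log(q) + (n - k) * np.log(1 - q)   # binomial likelihood

# Random-walk Metropolis over the unknown yearly onset probability
p, chain = 0.05, []
for _ in range(20000):
    prop = p + 0.002 * rng.normal()
    if np.log(rng.random()) < log_post(prop) - log_post(p):
        p = prop
    chain.append(p)

post = np.array(chain[2000:])
print(post.mean(), np.percentile(post, [2.5, 97.5]))
```

The percentile call is the interval estimate the authors highlight as an advantage over one-at-a-time perturbation: the chain yields a full posterior for `p`, not a single calibrated value.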
Classification using Hierarchical Naive Bayes models
DEFF Research Database (Denmark)
Langseth, Helge; Dyhre Nielsen, Thomas
2006-01-01
Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing, sets of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice) it can reduce classification accuracy due to “information double-counting” and interaction omission. In this paper we focus on a relatively new set of models, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models
Analysis hierarchical model for discrete event systems
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled, and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models can be applied on general computers, the analysis, modelling, and control of complex manufacturing systems can be achieved using Petri nets; discrete event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels and its sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to view the timing of the robotic system: from transport and transmission times obtained by spot measurements, graphics are produced showing the average time for the transport activity for each parameter set of finished products.
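The token-game semantics underlying such Petri-net models is small enough to sketch directly. The two-level robot cell below (places `job`, `robot_idle`, `part_ready`, `done` and transitions `pick`, `place`) is invented for illustration and is not the paper's Visual Object Net++ model:

```python
# Minimal Petri net: places hold token counts; a transition is enabled when
# every input place holds at least the required tokens, and firing moves them.
class PetriNet:
    def __init__(self, marking):
        self.m = dict(marking)
        self.transitions = {}

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        ins, _ = self.transitions[name]
        return all(self.m.get(p, 0) >= w for p, w in ins.items())

    def fire(self, name):
        ins, outs = self.transitions[name]
        assert self.enabled(name), name
        for p, w in ins.items():
            self.m[p] -= w
        for p, w in outs.items():
            self.m[p] = self.m.get(p, 0) + w

# Hypothetical two-level cell: a high-level "job" token gates a low-level
# pick/place loop, mimicking the conceptual-level / local-control split.
net = PetriNet({"job": 1, "robot_idle": 1, "part_ready": 2, "done": 0})
net.add_transition("pick", {"job": 1, "robot_idle": 1, "part_ready": 1},
                   {"robot_busy": 1, "job": 1})
net.add_transition("place", {"robot_busy": 1},
                   {"robot_idle": 1, "done": 1})

while net.enabled("pick"):       # run until no parts remain
    net.fire("pick")
    net.fire("place")
print(net.m)
```

Both parts end up in `done` and the robot returns to idle; a timed extension would attach durations to `pick` and `place`, which is how the transport-time statistics in the paper are obtained.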
Semiparametric Quantile Modelling of Hierarchical Data
Institute of Scientific and Technical Information of China (English)
Mao Zai TIAN; Man Lai TANG; Ping Shing CHAN
2009-01-01
The classic hierarchical linear model formulation provides considerable flexibility for modelling the random effects structure and a powerful tool for analyzing nested data that arise in various areas such as biology, economics and education. However, it assumes the within-group errors to be independently and identically distributed (i.i.d.) and the models at all levels to be linear. Most importantly, traditional hierarchical models (just like other ordinary mean regression methods) cannot characterize the entire conditional distribution of a dependent variable given a set of covariates and fail to yield robust estimators. In this article, we relax the aforementioned assumptions, as well as normality, and develop so-called Hierarchical Semiparametric Quantile Regression Models, in which the within-group errors can be heteroscedastic and the models at some levels are allowed to be nonparametric. We present the ideas with a 2-level model. The level-1 model is specified as a nonparametric model, whereas the level-2 model is set as a parametric model. Under the proposed semiparametric setting, the vector of partial derivatives of the nonparametric function in level 1 becomes the response variable vector in level 2. The proposed method allows us to model the fixed effects in the innermost level (i.e., level 2) as a function of the covariates instead of a constant effect. We outline some mild regularity conditions required for the convergence and asymptotic normality of our estimators. We illustrate our methodology with a real hierarchical data set from a laboratory study and with some simulation studies.
Hierarchical linear regression models for conditional quantiles
Institute of Scientific and Technical Information of China (English)
TIAN Maozai; CHEN Gemai
2006-01-01
Quantile regression has several useful features and is therefore gradually developing into a comprehensive approach to the statistical analysis of linear and nonlinear response models, but it cannot deal effectively with data that have a hierarchical structure. In practice, the existence of such data hierarchies is neither accidental nor ignorable; it is a common phenomenon. Ignoring this hierarchical data structure risks overlooking the importance of group effects, and may also render invalid many of the traditional statistical analysis techniques used for studying data relationships. Hierarchical models, on the other hand, take the hierarchical data structure into account and have many applications in statistics, ranging from overdispersion to the construction of min-max estimators. However, hierarchical models are essentially mean regressions; therefore, they cannot be used to characterize the entire conditional distribution of a dependent variable given high-dimensional covariates. Furthermore, the estimated coefficient vector (marginal effects) is sensitive to outlier observations on the dependent variable. In this article, a new approach is developed that is based on the Gauss-Seidel iteration and takes full advantage of both quantile regression and hierarchical models. On the theoretical front, we also consider the asymptotic properties of the new method, obtaining simple conditions for n^(1/2)-convergence and asymptotic normality. We also illustrate the use of the technique with real educational data, which are hierarchical, and show how the results can be explained.
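The basic building block being combined here, a linear conditional-quantile fit via the pinball (check) loss, can be sketched without the hierarchical layer. The iteratively reweighted least-squares solver below is a simple stand-in for the paper's Gauss-Seidel scheme, and the heteroscedastic toy data are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: y = 1 + 2x + heteroscedastic noise, so upper conditional
# quantiles have a visibly higher intercept than the conditional median.
n = 2000
x = rng.uniform(-1, 1, n)
y = 1 + 2 * x + (0.5 + 0.3 * np.abs(x)) * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

def quantile_fit(X, y, tau, iters=50, eps=1e-4):
    """Linear tau-quantile regression by iteratively reweighted least
    squares on the pinball loss rho_tau(r) = r * (tau - 1{r < 0})."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        # asymmetric weights tau/|r| above the fit, (1-tau)/|r| below it
        w = np.where(r > 0, tau, 1 - tau) / np.maximum(np.abs(r), eps)
        WX = w[:, None] * X
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted normal eqs
    return beta

b50 = quantile_fit(X, y, 0.5)   # conditional median
b90 = quantile_fit(X, y, 0.9)   # conditional 90th percentile
print(np.round(b50, 2), np.round(b90, 2))
```

Unlike a mean regression, fitting several `tau` values traces out the whole conditional distribution, which is exactly the capability the article carries over to hierarchical data.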
Hierarchical models and chaotic spin glasses
Berker, A. Nihat; McKay, Susan R.
1984-09-01
Renormalization-group studies in position space have led to the discovery of hierarchical models which are exactly solvable, exhibiting nonclassical critical behavior at finite temperature. Position-space renormalization-group approximations that had been widely and successfully used are in fact alternatively applicable as exact solutions of hierarchical models, this realizability guaranteeing important physical requirements. For example, a hierarchized version of the Sierpinski gasket is presented, corresponding to a renormalization-group approximation which has quantitatively yielded the multicritical phase diagrams of submonolayers on graphite. Hierarchical models are now being studied directly as a testing ground for new concepts. For example, with the introduction of frustration, chaotic renormalization-group trajectories were obtained for the first time. Thus, strong and weak correlations are randomly intermingled at successive length scales, and a new microscopic picture and mechanism for a spin glass emerges. An upper critical dimension occurs via a boundary crisis mechanism in cluster-hierarchical variants developed to have well-behaved susceptibilities.
Hierarchic Models of Turbulence, Superfluidity and Superconductivity
Kaivarainen, A
2000-01-01
New models of Turbulence, Superfluidity and Superconductivity, based on new Hierarchic theory, general for liquids and solids (physics/0102086), have been proposed. CONTENTS: 1 Turbulence. General description; 2 Mesoscopic mechanism of turbulence; 3 Superfluidity. General description; 4 Mesoscopic scenario of fluidity; 5 Superfluidity as a hierarchic self-organization process; 6 Superfluidity in 3He; 7 Superconductivity: General properties of metals and semiconductors; Plasma oscillations; Cyclotron resonance; Electroconductivity; 8. Microscopic theory of superconductivity (BCS); 9. Mesoscopic scenario of superconductivity: Interpretation of experimental data in the framework of mesoscopic model of superconductivity.
Strategic games on a hierarchical network model
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
Among complex network models, the hierarchical network model is the one closest to real networks such as the world trade web, metabolic networks, the WWW, and actor networks. It not only has a power-law degree distribution, but also grows through preferential attachment, showing the scale-free property. In this paper, we study the evolution of cooperation on a hierarchical network model, adopting the prisoner's dilemma (PD) game and the snowdrift game (SG) as metaphors of the interplay between connected nodes. The BA model provides a unifying framework for the emergence of cooperation. Interestingly, however, we found that on the hierarchical model there is no sign of cooperation for the PD game, while the frequency of cooperation decreases as the common benefit decreases for the SG. By comparing the scaling of the clustering coefficient of the hierarchical network model with that of the BA model, we found that the former amplifies the effect of hubs. Considering the different performances of the PD game and SG on complex networks, we also found that common benefit leads to cooperation in the evolution. Thus our study may shed light on the emergence of cooperation in both natural and social environments.
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features including the appearance context features and interaction context features to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts including scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to the event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts in each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.
Managing Clustered Data Using Hierarchical Linear Modeling
Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…
The Infinite Hierarchical Factor Regression Model
Rai, Piyush
2009-01-01
We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.
Hierarchical models in the brain.
Directory of Open Access Journals (Sweden)
Karl Friston
2008-11-01
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among apparently diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
Hierarchical model of vulnerabilities for emotional disorders.
Norton, Peter J; Mehta, Paras D
2007-01-01
Clark and Watson's (1991) tripartite model of anxiety and depression has had a dramatic impact on our understanding of the dispositional variables underlying emotional disorders. More recently, calls have been made to examine not simply the influence of negative affectivity (NA) but also mediating factors that might better explain how NA influences anxious and depressive syndromes (e.g. Taylor, 1998; Watson, 2005). Extending preliminary projects, this study evaluated two hierarchical models of NA, mediating factors of anxiety sensitivity and intolerance of uncertainty, and specific emotional manifestations. Data provided a very good fit to a model elaborated from preliminary studies, lending further support to hierarchical models of emotional vulnerabilities. Implications for classification and diagnosis are discussed.
Bayesian hierarchical modeling of drug stability data.
Chen, Jie; Zhong, Jinglin; Nie, Lei
2008-06-15
Stability data are commonly analyzed using linear fixed or random effects models. The linear fixed effects model does not take into account batch-to-batch variation, whereas the random effects model may suffer from unreliable shelf-life estimates due to small sample sizes. Moreover, neither method utilizes any prior information that might be available. In this article, we propose a Bayesian hierarchical approach to modeling drug stability data. Under this hierarchical structure, we first use a Bayes factor to test the poolability of batches. Given the decision on poolability, we then estimate the shelf-life that applies to all batches. The approach is illustrated with two example data sets, and its performance is compared in simulation studies with that of the commonly used frequentist methods.
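The batch-level pooling described above can be illustrated with a toy normal-normal shrinkage step. This is a generic sketch of hierarchical partial pooling, not the authors' actual model; the function name, degradation rates, and variances are all invented for illustration:

```python
def shrink_batch_means(batch_means, batch_var, prior_mean, prior_var):
    """Posterior (shrunken) batch means under a normal-normal hierarchical model.

    Each observed batch mean is pulled toward the prior (grand) mean by a
    weight that depends on the relative sizes of the sampling variance and
    the between-batch (prior) variance.
    """
    w = prior_var / (prior_var + batch_var)   # weight placed on the data
    return [w * m + (1.0 - w) * prior_mean for m in batch_means]

# Toy degradation rates (%/month) for three batches, each estimated with
# sampling variance 0.04; between-batch variance 0.01, grand mean 0.20.
rates = [0.10, 0.20, 0.30]
post = shrink_batch_means(rates, batch_var=0.04, prior_mean=0.20, prior_var=0.01)
print(post)  # each rate is pulled 80% of the way toward 0.20
```

Because the between-batch variance is small relative to the sampling variance, the batch estimates are shrunk strongly toward the common mean, which is exactly the behavior that stabilizes shelf-life estimates from small batches.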
Model Calibration in Watershed Hydrology
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
Hierarchical Climate Modeling for Cosmoclimatology
Ohfuchi, Wataru
2010-05-01
It has been reported that there are correlations among solar activity, the amount of galactic cosmic rays, the amount of low clouds, and surface air temperature (Svensmark and Friis-Christensen, 1997). These correlations seem to exist for current climate change, the Little Ice Age, and geological time scale climate changes. Some hypothetical mechanisms have been proposed for the correlations, but quantitative studies are still needed to understand the mechanism. In order to decrease uncertainties, only first principles or laws very close to first principles should be used. Our group at the Japan Agency for Marine-Earth Science and Technology has started a modeling effort to tackle this problem. We are constructing models spanning galactic cosmic ray induced ionization, aerosol formation, cloud formation, and global climate. In this talk, we introduce our modeling activities. For aerosol formation, we use molecular dynamics. For cloud formation, we use a new cloud microphysics model called the "super droplet method". We also try to couple a nonhydrostatic atmospheric regional cloud resolving model and a hydrostatic atmospheric general circulation model.
Hierarchical Boltzmann simulations and model error estimation
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows one to successively improve the result toward the full Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of the stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be obtained by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Hierarchical mixture models for assessing fingerprint individuality
Dass, Sarat C.; Li, Mingfei
2009-01-01
The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individua...
Semantic Image Segmentation with Contextual Hierarchical Models.
Seyedhosseini, Mojtaba; Tasdizen, Tolga
2016-05-01
Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at its original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is purely based on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
Magnetic susceptibilities of cluster-hierarchical models
McKay, Susan R.; Berker, A. Nihat
1984-02-01
The exact magnetic susceptibilities of hierarchical models are calculated near and away from criticality, in both the ordered and disordered phases. The mechanism and phenomenology are discussed for models with susceptibilities that are physically sensible, e.g., nondivergent away from criticality. Such models are found based upon the Niemeijer-van Leeuwen cluster renormalization. A recursion-matrix method is presented for the renormalization-group evaluation of response functions. Diagonalization of this matrix at fixed points provides simple criteria for well-behaved densities and response functions.
Three Layer Hierarchical Model for Chord
Directory of Open Access Journals (Sweden)
Waqas A. Imtiaz
2012-12-01
The increasing popularity of decentralized Peer-to-Peer (P2P) architectures emphasizes the need for an overlay structure that can provide an efficient content discovery mechanism, accommodate high churn rates, and adapt to failures in the presence of heterogeneity among peers. Traditional P2P systems incorporate distributed client-server communication to efficiently find the peer that stores a desired data item, with minimum delay and reduced overhead. However, traditional models cannot solve the problems of scalability and high churn rates. Hierarchical models were introduced to provide better fault isolation, effective bandwidth utilization, superior adaptation to the underlying physical network, and a reduction of the lookup path length as additional advantages; they are more efficient and easier to manage than traditional P2P networks. This paper discusses a further step in the P2P hierarchy via a 3-layer hierarchical model with a distributed database architecture in each layer, connected through its root. Peers are divided into three categories according to their physical stability and strength: Ultra Super-peers, Super-peers, and Ordinary Peers, assigned to the first, second, and third levels of the hierarchy respectively. Peers in a group in the lower layer have their own local database, which is held by the associated super-peer in the middle layer, and the database is accessed among the peers through user queries. In our 3-layer hierarchical model for DHT algorithms, we use an advanced Chord algorithm with an optimized finger table which removes redundant entries in the finger table of the upper layer, reducing lookup latency. Our research shows that the model provides faster search, since network lookup latency decreases as the number of hops is reduced. The peers in such a network can then contribute improved functionality and can perform well in
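The Chord routing that the optimized finger table builds on can be sketched in a few lines. This is a minimal single-process illustration of the standard Chord lookup rule (finger[i] = successor(n + 2^i), route via the farthest preceding finger), not the paper's 3-layer protocol; the node ids and ring size are invented:

```python
M = 5                                  # 5-bit identifier space: ids 0..31
RING = 2 ** M
NODES = sorted([1, 8, 14, 21, 28])     # toy set of live node ids

def successor(ident):
    """First node id clockwise from `ident` on the ring."""
    ident %= RING
    for n in NODES:
        if n >= ident:
            return n
    return NODES[0]                    # wrap around the ring

def finger_table(n):
    """finger[i] = successor(n + 2**i), the standard Chord rule."""
    return [successor((n + 2 ** i) % RING) for i in range(M)]

def between(x, a, b):
    """x in the half-open ring interval (a, b]."""
    return a < x <= b if a < b else x > a or x <= b

def between_open(x, a, b):
    """x in the open ring interval (a, b)."""
    return a < x < b if a < b else x > a or x < b

def lookup(start, key, hops=0):
    """Route `key` from node `start` via finger tables; return (owner, hops)."""
    succ = successor((start + 1) % RING)
    if between(key, start, succ):          # my successor owns the key
        return succ, hops
    for f in reversed(finger_table(start)):  # farthest preceding finger first
        if between_open(f, start, key):
            return lookup(f, key, hops + 1)
    return successor(key), hops            # fallback (degenerate rings)

print(lookup(8, 26))   # key 26 is owned by node 28
```

Optimizing the finger table, as the paper proposes for the upper layer, amounts to pruning the duplicate entries visible in `finger_table(8)`, which point repeatedly at the same successor.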
An introduction to hierarchical linear modeling
Directory of Open Access Journals (Sweden)
Heather Woltman
2012-02-01
This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting the analysis. The first section of the tutorial defines HLM, clarifies its purpose, and states its advantages. The second section explains the mathematical theory, equations, and conditions underlying HLM. HLM hypothesis testing is performed in the third section. Finally, the fourth section provides a practical example of running HLM, with which readers can follow along. Throughout this tutorial, emphasis is placed on providing a straightforward overview of the basic principles of HLM.
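The core HLM idea of a random intercept can be illustrated with the textbook partial-pooling formula: each group mean is shrunk toward the grand mean with weight n_j / (n_j + σ²/τ²), where σ² is the within-group and τ² the between-group variance. This is a generic sketch from the standard formulas, not the tutorial's own example; both variances are treated as known, and all data are invented:

```python
def partial_pool(groups, sigma2, tau2):
    """Shrunken (partially pooled) group-mean estimates for a two-level
    random-intercept model with known variance components."""
    all_obs = [y for g in groups for y in g]
    grand = sum(all_obs) / len(all_obs)
    est = []
    for g in groups:
        n, mean = len(g), sum(g) / len(g)
        w = n / (n + sigma2 / tau2)       # reliability of the group mean
        est.append(w * mean + (1 - w) * grand)
    return est

# Three classrooms (level-2 units) with two test scores each (level-1 units).
scores = [[70, 74], [80, 84], [90, 94]]
print(partial_pool(scores, sigma2=8.0, tau2=4.0))
```

With only two observations per classroom and σ²/τ² = 2, each group mean gets weight 0.5 and is pulled halfway toward the grand mean; larger groups, or larger between-group variance, would be shrunk less.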
Universality: Accurate Checks in Dyson's Hierarchical Model
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization-group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
A Hierarchical Bayesian Model for Crowd Emotions
Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias
2016-01-01
Estimation of emotions is an essential aspect in developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity with which human emotions are manifested and the capability of a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and to explore the relation between behavior and emotions to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. This model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366
When to Use Hierarchical Linear Modeling
Directory of Open Access Journals (Sweden)
Veronika Huta
2014-04-01
Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis: Does HLM apply to one's data and research question? And if it does apply, how does one choose between HLM and other methods sometimes used in these circumstances, including multiple regression, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis? The purpose of this tutorial is to briefly introduce HLM and then to review some of the considerations that are helpful in answering these questions, including the nature of the data, the model to be tested, and the information desired in the output. Some examples of how the same analysis could be performed in HLM, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis are also provided.
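One quick diagnostic for the tutorial's first question ("does HLM apply to one's data?") is the intraclass correlation (ICC): a non-trivial ICC signals clustered data and favors HLM over ordinary regression. Below is the standard one-way ANOVA estimator, sketched for balanced groups; the function name and data are invented:

```python
def icc_oneway(groups):
    """ICC(1) from one-way ANOVA mean squares; assumes balanced groups."""
    k = len(groups)                         # number of clusters
    n = len(groups[0])                      # observations per cluster
    grand = sum(sum(g) for g in groups) / (k * n)
    # between-cluster and within-cluster mean squares
    msb = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(sum((y - sum(g) / n) ** 2 for y in g) for g in groups) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Strongly clustered toy data: most variation lies between the three groups.
print(icc_oneway([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
```

An ICC near zero would suggest that observations within a cluster are nearly independent, in which case ordinary multiple regression may suffice; values like the one above indicate that ignoring the clustering would badly understate standard errors.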
A hierarchical model of temporal perception.
Pöppel, E
1997-05-01
Temporal perception comprises subjective phenomena such as simultaneity, successiveness, temporal order, subjective present, temporal continuity and subjective duration. These elementary temporal experiences are hierarchically related to each other. Functional system states with a duration of 30 ms are implemented by neuronal oscillations and they provide a mechanism to define successiveness. These system states are also responsible for the identification of basic events. For a sequential representation of several events time tags are allocated, resulting in an ordinal representation of such events. A mechanism of temporal integration binds successive events into perceptual units of 3 s duration. Such temporal integration, which is automatic and presemantic, is also operative in movement control and other cognitive activities. Because of the omnipresence of this integration mechanism it is used for a pragmatic definition of the subjective present. Temporal continuity is the result of a semantic connection between successive integration intervals. Subjective duration is known to depend on mental load and attentional demand, high load resulting in long time estimates. In the hierarchical model proposed, system states of 30 ms and integration intervals of 3 s, together with a memory store, provide an explanatory neuro-cognitive machinery for differential subjective duration.
Antiferromagnetic Ising Model in Hierarchical Networks
Cheng, Xiang; Boettcher, Stefan
2015-03-01
The Ising antiferromagnet is a convenient model of glassy dynamics. It can introduce geometric frustrations and may give rise to a spin glass phase and glassy relaxation at low temperatures [1]. We apply the antiferromagnetic Ising model to 3 hierarchical networks which share features of both small-world networks and regular lattices. Their recursive and fixed structures make them suitable for exact renormalization-group analysis as well as numerical simulations. We first explore the dynamical behaviors using simulated annealing and discover an extremely slow relaxation at low temperatures. Then we employ the Wang-Landau algorithm to investigate the energy landscape and the corresponding equilibrium behaviors for different system sizes. Besides the Monte Carlo methods, the renormalization group [2] is used to study the equilibrium properties in the thermodynamic limit and to compare with the results from simulated annealing and Wang-Landau sampling. Supported through NSF Grant DMR-1207431.
Hierarchical Data Structures, Institutional Research, and Multilevel Modeling
O'Connell, Ann A.; Reed, Sandra J.
2012-01-01
Multilevel modeling (MLM), also referred to as hierarchical linear modeling (HLM) or mixed models, provides a powerful analytical framework through which to study colleges and universities and their impact on students. Due to the natural hierarchical structure of data obtained from students or faculty in colleges and universities, MLM offers many…
Entrepreneurial intention modeling using hierarchical multiple regression
Directory of Open Access Journals (Sweden)
Marina Jeger
2014-12-01
The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
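The blockwise entry the abstract relies on can be sketched directly: fit nested OLS models as each predictor block is entered, and report the increment in R² at each step. This is a generic illustration of hierarchical multiple regression, not the study's data or model; variable names and the simulated data are invented:

```python
import numpy as np

def r2(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def hierarchical_r2(blocks, y):
    """Cumulative R^2 and delta-R^2 after entering each predictor block."""
    steps, cols = [], []
    for block in blocks:
        cols.extend(block)
        steps.append(r2(np.column_stack(cols), y))
    deltas = [steps[0]] + [b - a for a, b in zip(steps, steps[1:])]
    return steps, deltas

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=100)
steps, deltas = hierarchical_r2([[x1], [x2]], y)   # block 1: x1; block 2: x2
```

As the abstract notes, the ΔR² attributed to the second block depends on the order of entry whenever the predictors are correlated, which is why the entry order must be justified theoretically.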
Hierarchical spatiotemporal matrix models for characterizing invasions.
Hooten, Mevin B; Wikle, Christopher K; Dorazio, Robert M; Royle, J Andrew
2007-06-01
The growth and dispersal of biotic organisms is an important subject in ecology. Ecologists are able to accurately describe survival and fecundity in plant and animal populations and have developed quantitative approaches to study the dynamics of dispersal and population size. Of particular interest are the dynamics of invasive species. Such nonindigenous animals and plants can levy significant impacts on native biotic communities. Effective models for relative abundance have been developed; however, a better understanding of the dynamics of actual population size (as opposed to relative abundance) in an invasion would be beneficial to all branches of ecology. In this article, we adopt a hierarchical Bayesian framework for modeling the invasion of such species while addressing the discrete nature of the data and uncertainty associated with the probability of detection. The nonlinear dynamics between discrete time points are intuitively modeled through an embedded deterministic population model with density-dependent growth and dispersal components. Additionally, we illustrate the importance of accommodating spatially varying dispersal rates. The method is applied to the specific case of the Eurasian Collared-Dove, an invasive species at mid-invasion in the United States at the time of this writing.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
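The within/between/total variance split that the BMA tree organizes follows the law of total variance: given candidate models with posterior probabilities p_k, predictive means m_k, and within-model variances v_k, the total predictive variance is the probability-weighted within-model variance plus the variance of the model means. A minimal sketch with invented numbers (two equally probable candidate geological models):

```python
def bma_variance(p, m, v):
    """BMA mean and variance decomposition via the law of total variance.

    p: posterior model probabilities; m: model predictive means;
    v: within-model predictive variances.
    """
    mean = sum(pk * mk for pk, mk in zip(p, m))
    within = sum(pk * vk for pk, vk in zip(p, v))                 # E[Var]
    between = sum(pk * (mk - mean) ** 2 for pk, mk in zip(p, m))  # Var[E]
    return mean, within, between, within + between

print(bma_variance([0.5, 0.5], m=[10.0, 14.0], v=[1.0, 2.0]))
# → (12.0, 1.5, 4.0, 5.5)
```

Here the between-model term dominates, which in the HBMA framing would flag that uncertain model component as the priority for further data collection.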
Classifying hospitals as mortality outliers: logistic versus hierarchical logistic models.
Alexandrescu, Roxana; Bottle, Alex; Jarman, Brian; Aylin, Paul
2014-05-01
The use of hierarchical logistic regression for provider profiling has been recommended due to the clustering of patients within hospitals, but has some associated difficulties. We assess changes in hospital outlier status based on standard logistic versus hierarchical logistic modelling of mortality. The study population consisted of all patients admitted to acute, non-specialist hospitals in England between 2007 and 2011 with a primary diagnosis of acute myocardial infarction, acute cerebrovascular disease or fracture of neck of femur, or a primary procedure of coronary artery bypass graft or repair of abdominal aortic aneurysm. We compared standardised mortality ratios (SMRs) from non-hierarchical models with SMRs from hierarchical models, without and with shrinkage estimates of the predicted probabilities (Model 1 and Model 2). The SMRs from standard logistic and hierarchical models were strongly and statistically significantly correlated (r > 0.91, p = 0.01). More outliers were recorded under standard logistic regression than under hierarchical modelling, but only when using shrinkage estimates (Model 2): out of a cumulative 565 pairs of hospitals under study, 21 hospitals changed from low-outlier status and 8 from high-outlier status under logistic regression to non-outlier status under shrinkage estimates. Both standard logistic and hierarchical modelling identified nearly the same hospitals as mortality outliers. The choice of methodological approach should, however, also consider whether the modelling aim is judgment or improvement, as shrinkage may be more appropriate for the former than the latter.
Higher-Order Item Response Models for Hierarchical Latent Traits
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming
2013-01-01
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
On the renormalization group transformation for scalar hierarchical models
Energy Technology Data Exchange (ETDEWEB)
Koch, H. (Texas Univ., Austin (USA). Dept. of Mathematics); Wittwer, P. (Geneva Univ. (Switzerland). Dept. de Physique Theorique)
1991-06-01
We give a new proof for the existence of a non-Gaussian hierarchical renormalization-group fixed point, using what could be called a beta-function for this problem. We also discuss the asymptotic behavior of this fixed point, and the connection between the hierarchical models of Dyson and Gallavotti.
Hierarchical Geometric Constraint Model for Parametric Feature Based Modeling
Institute of Scientific and Technical Information of China (English)
高曙明; 彭群生
1997-01-01
A new geometric constraint model is described, which is hierarchical and suitable for parametric feature-based modeling. In this model, different levels of geometric information are represented to support various stages of a design process. An efficient approach to parametric feature-based modeling is also presented, adopting the high-level geometric constraint model. The low-level geometric model, such as B-reps, can be derived automatically from the high-level geometric constraint model, enabling designers to perform their task of detailed design.
What are hierarchical models and how do we analyze them?
Royle, Andy
2016-01-01
In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
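The first canonical model above, site occupancy, has a likelihood compact enough to sketch: ψ is the probability a site is occupied and p the per-visit detection probability, so an all-zero detection history can arise either from an occupied-but-missed site or from an absent species. The grid-search MLE below is a stand-in for the numerical optimizers and MCMC the chapter actually uses; the detection histories are invented:

```python
from math import log

def occ_loglik(psi, p, histories):
    """Log-likelihood of a basic site-occupancy model.

    Each history is a list of 0/1 detections over repeated visits to a site.
    """
    ll = 0.0
    for h in histories:
        d, n = sum(h), len(h)
        lik = psi * p ** d * (1 - p) ** (n - d)
        if d == 0:               # never detected: occupied-but-missed or absent
            lik += 1 - psi
        ll += log(lik)
    return ll

def grid_mle(histories, steps=99):
    """Crude MLE by grid search over (psi, p) in (0, 1)^2."""
    grid = [(i + 1) / (steps + 1) for i in range(steps)]
    return max(((occ_loglik(psi, p, histories), psi, p)
                for psi in grid for p in grid))[1:]

# Detection histories for 4 sites over 3 visits each.
hist = [[1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0]]
psi_hat, p_hat = grid_mle(hist)
```

Note that the estimated occupancy exceeds the naive fraction of sites with detections would suggest once imperfect detection is accounted for, which is exactly the point of treating occupancy as a latent state.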
Hierarchical Neural Regression Models for Customer Churn Prediction
Directory of Open Access Journals (Sweden)
Golshan Mohammadi
2013-01-01
Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster data into churner and nonchurner groups and also to filter out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to churner and nonchurner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
Calibration and verification of environmental models
Lee, S. S.; Sengupta, S.; Weinberg, N.; Hiser, H.
1976-01-01
The problems of calibration and verification of mesoscale models used for investigating power plant discharges are considered. The value of remote sensors for data acquisition is discussed as well as an investigation of Biscayne Bay in southern Florida.
Displaced calibration of PM10 measurements using spatio-temporal models
Directory of Open Access Journals (Sweden)
Daniela Cocchi
2007-12-01
PM10 monitoring networks are equipped with heterogeneous samplers. Some of these samplers are known to underestimate true levels of concentrations (non-reference samplers). In this paper we propose a hierarchical spatio-temporal Bayesian model for the calibration of measurements recorded using non-reference samplers, by borrowing strength from non co-located reference sampler measurements.
A Method to Test Model Calibration Techniques
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-08-26
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper also discusses the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets from actual buildings.
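The surrogate-data idea can be sketched with a deliberately toy energy model (the linear degree-day model, the noise level, and ordinary least squares standing in for the calibration technique are all assumptions of this illustration, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_model(base, heat_coef, hdd):
    """Toy audit model: monthly energy = baseload + heating coefficient * degree-days."""
    return base + heat_coef * hdd

hdd = rng.uniform(0, 800, 12)                         # monthly heating degree-days
true_base, true_heat = 500.0, 2.0                     # 'true' input parameters
bills = energy_model(true_base, true_heat, hdd) + rng.normal(0, 20, 12)  # surrogate bills

# The calibration technique under test (here: ordinary least squares)
A = np.column_stack([np.ones_like(hdd), hdd])
base_hat, heat_hat = np.linalg.lstsq(A, bills, rcond=None)[0]

# Figure of merit 1: accuracy of predicted savings for a 30% heating retrofit
true_savings = 0.3 * true_heat * hdd.sum()
pred_savings = 0.3 * heat_hat * hdd.sum()

# Figure of merit 2: closure on the 'true' input parameter values
param_err = abs(heat_hat - true_heat) / true_heat

# Figure of merit 3: goodness of fit to the surrogate utility bills (CV-RMSE)
cvrmse = np.sqrt(np.mean((bills - A @ np.array([base_hat, heat_hat]))**2)) / bills.mean()
```

Because the surrogate data come from a known generator, all three figures of merit can be evaluated exactly, which is impossible with real utility bills.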
Study of chaos based on a hierarchical model
Energy Technology Data Exchange (ETDEWEB)
Yagi, Masatoshi; Itoh, Sanae-I. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics
2001-12-01
Study of chaos based on a hierarchical model is briefly reviewed. Here we categorize hierarchical model equations, i.e., (1) a model with a few degrees of freedom, e.g., the Lorenz model, (2) a model with intermediate degrees of freedom like a shell model, and (3) a model with many degrees of freedom such as a Navier-Stokes equation. We discuss the nature of chaos and turbulence described by these models via Lyapunov exponents. The interpretation of results observed in fundamental plasma experiments is also shown based on a shell model. (author)
An Unsupervised Model for Exploring Hierarchical Semantics from Social Annotations
Zhou, Mianwei; Bao, Shenghua; Wu, Xian; Yu, Yong
This paper deals with the problem of exploring hierarchical semantics from social annotations. Recently, social annotation services have become more and more popular on the Semantic Web. They allow users to arbitrarily annotate web resources, thus largely lowering the barrier to cooperation. Furthermore, by providing abundant meta-data resources, social annotation might become a key to the development of the Semantic Web. On the other hand, however, social annotation has its own apparent limitations, for instance: 1) ambiguity and synonym phenomena, and 2) lack of hierarchical information. In this paper, we propose an unsupervised model to automatically derive hierarchical semantics from social annotations. Using the social bookmark service Del.icio.us as an example, we demonstrate that the derived hierarchical semantics can compensate for those shortcomings. We further apply our model to another data set from Flickr to test its applicability in different environments. The experimental results demonstrate our model's efficiency.
Modeling the deformation behavior of nanocrystalline alloy with hierarchical microstructures
Energy Technology Data Exchange (ETDEWEB)
Liu, Hongxi; Zhou, Jianqiu, E-mail: zhouj@njtech.edu.cn [Nanjing Tech University, Department of Mechanical Engineering (China); Zhao, Yonghao, E-mail: yhzhao@njust.edu.cn [Nanjing University of Science and Technology, Nanostructural Materials Research Center, School of Materials Science and Engineering (China)
2016-02-15
A mechanism-based plasticity model based on dislocation theory is developed to describe the mechanical behavior of hierarchical nanocrystalline alloys. The stress–strain relationship is derived by invoking the impeding effect of the intra-granular solute clusters and the inter-granular nanostructures on dislocation movement along the sliding path. We found that the interaction between dislocations and the hierarchical microstructures contributes to the strain hardening property and greatly influences the ductility of nanocrystalline metals. The analysis indicates that the proposed model can successfully describe the enhanced strength of the hierarchical nanocrystalline alloy. Moreover, the strain hardening rate is sensitive to the volume fraction of the hierarchical microstructures. The present model provides a new perspective for designing microstructures to optimize the mechanical properties of nanostructural metals.
Road network safety evaluation using Bayesian hierarchical joint model.
Wang, Jie; Huang, Helai
2016-05-01
Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well.
Bayesian calibration of car-following models
Van Hinsbergen, C.P.IJ.; Van Lint, H.W.C.; Hoogendoorn, S.P.; Van Zuylen, H.J.
2010-01-01
Recent research has revealed that there exist large inter-driver differences in car-following behavior such that different car-following models may apply to different drivers. This study applies Bayesian techniques to the calibration of car-following models, where prior distributions on each model p
Overnight Index Rate: Model, calibration and simulation
Olga Yashkir; Yuri Yashkir
2014-01-01
In this study, the extended Overnight Index Rate (OIR) model is presented. The fitting function for the probability distribution of the OIR daily returns is based on three different Gaussian distributions which provide modelling of the narrow central peak and the wide fat-tailed component. The calibration algorithm for the model is developed and investigated using the historical OIR data.
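The described fit, a mixture of three zero-centered Gaussians capturing a narrow central peak plus a wide fat-tailed component, can be sketched with a plain EM calibration on synthetic returns (the data, component scales, and initialization below are invented for illustration; the paper's actual calibration algorithm is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "daily returns": narrow central peak plus wider components,
# standing in for the historical OIR data this sketch does not have
returns = np.concatenate([
    rng.normal(0.0, 0.02, 6000),   # narrow central peak
    rng.normal(0.0, 0.10, 1500),   # intermediate component
    rng.normal(0.0, 0.40, 500),    # wide fat-tailed component
])

def fit_gaussian_mixture(x, k=3, n_iter=200):
    """Plain EM for a k-component Gaussian mixture; returns weights, means, sds."""
    w = np.full(k, 1.0 / k)
    mu = np.zeros(k)
    sd = np.array([0.01, 0.1, 0.5])[:k]          # spread-out initial scales
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd)**2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / n_k)
    return w, mu, sd

w, mu, sd = fit_gaussian_mixture(returns)
order = np.argsort(sd)                            # components from narrowest to widest
```

The narrowest recovered component models the central peak, while the widest supplies the fat tails of the return distribution.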
Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models.
Liu, Ziyue; Cappola, Anne R; Crofford, Leslie J; Guo, Wensheng
2014-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationship. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls.
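The Kalman filtering step mentioned above can be illustrated on the simplest state space model, a local level with Gaussian noise (a minimal sketch; the bivariate hierarchical structure of the actual paper is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)

# Local-level state space model: x_t = x_{t-1} + w_t,   y_t = x_t + v_t
T, q, r = 100, 0.1, 1.0                        # length, state and observation variances
x = np.cumsum(rng.normal(0, np.sqrt(q), T))    # latent trend (e.g., a hormone profile)
y = x + rng.normal(0, np.sqrt(r), T)           # noisy observations

def kalman_filter(y, q, r, m0=0.0, p0=10.0):
    """Kalman filter for the local-level model; returns the filtered state means."""
    m, p = m0, p0
    means = np.empty(len(y))
    for t, obs in enumerate(y):
        p = p + q                   # predict: state variance grows by q
        k = p / (p + r)             # Kalman gain
        m = m + k * (obs - m)       # update mean toward the observation
        p = (1 - k) * p             # update variance
        means[t] = m
    return means

filtered = kalman_filter(y, q, r)
```

The filtered trajectory has lower mean squared error against the latent state than the raw observations, which is what makes the state space formulation attractive for noisy longitudinal profiles.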
The Role of Prototype Learning in Hierarchical Models of Vision
Thomure, Michael David
2014-01-01
I conduct a study of learning in HMAX-like models, which are hierarchical models of visual processing in biological vision systems. Such models compute a new representation for an image based on the similarity of image sub-parts to a number of specific patterns, called prototypes. Despite being a central piece of the overall model, the issue of…
Adaptable Multivariate Calibration Models for Spectral Applications
Energy Technology Data Exchange (ETDEWEB)
THOMAS,EDWARD V.
1999-12-20
Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.
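The generic intra-object calibration idea, fit the analyte-signal relationship on object-mean-centered data so that complex inter-object variation cancels, then adapt the offset to each new object, can be sketched as follows (synthetic one-channel 'spectra' and a single reference measurement per new object are assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic one-channel 'spectra': signal = 3 * analyte + object-specific baseline + noise
n_obj, n_meas = 10, 20
analyte = rng.uniform(4, 8, (n_obj, n_meas))
baseline = rng.normal(0, 5, n_obj)                       # complex inter-object variation
signal = 3.0 * analyte + baseline[:, None] + rng.normal(0, 0.1, (n_obj, n_meas))

# Generic intra-object model: regress analyte on object-mean-centered signal,
# so the hard-to-model inter-object baselines cancel out of the fit
xc = signal - signal.mean(axis=1, keepdims=True)
yc = analyte - analyte.mean(axis=1, keepdims=True)
slope_hat = (xc * yc).sum() / (xc * xc).sum()            # shared intra-object slope

# Adapt the model to a new object: one reference measurement pins down the offset
new_analyte = rng.uniform(4, 8, 5)
new_signal = 3.0 * new_analyte + rng.normal(0, 5) + rng.normal(0, 0.1, 5)
offset = new_analyte[0] - slope_hat * new_signal[0]
pred = offset + slope_hat * new_signal                   # predictions for the new object
```

The centered fit never sees the baselines, so predictions on a new object remain accurate after a single reference measurement, even though the inter-object variation was never modeled.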
Free-Energy Bounds for Hierarchical Spin Models
Castellana, Michele; Barra, Adriano; Guerra, Francesco
2014-04-01
In this paper we study two non-mean-field (NMF) spin models built on a hierarchical lattice: the hierarchical Edwards-Anderson model (HEA) of a spin glass, and Dyson's hierarchical model (DHM) of a ferromagnet. For the HEA, we prove the existence of the thermodynamic limit of the free energy and the replica-symmetry-breaking (RSB) free-energy bounds previously derived for the Sherrington-Kirkpatrick model of a spin glass. These RSB mean-field bounds are exact only if the order-parameter fluctuations (OPF) vanish: given that such fluctuations are not negligible in NMF models, we develop a novel strategy to tackle part of the OPF in hierarchical models. The method is based on absorbing part of the OPF of a block of spins into an effective Hamiltonian of the underlying spin blocks. We illustrate this method for DHM and show that, compared to the mean-field bound for the free energy, it provides a tighter NMF bound, with a critical temperature closer to the exact one. To extend this method to the HEA model, a suitable generalization of Griffiths' correlation inequalities for Ising ferromagnets is needed: since correlation inequalities for spin glasses are still an open topic, we leave the extension of this method to hierarchical spin glasses as a future perspective.
Immune System Model Calibration by Genetic Algorithm
Presbitero, A.; Krzhizhanovskaya, V.; Mancini, E.; Brands, R.; Sloot, P.
2016-01-01
We aim to develop a mathematical model of the human immune system for advanced individualized healthcare where medication plan is fine-tuned to fit a patient's conditions through monitored biochemical processes. One of the challenges is calibrating model parameters to satisfy existing experimental
Improving Environmental Model Calibration and Prediction
2011-01-18
Calibration suspended sediment model Markermeer
Boderie, P.; Van Kessel, T.; De Boer, G.
2009-01-01
In this study, a computer model of the Markermeer was set up, calibrated, and validated. The model dynamically describes water flow, water levels, waves, and suspended sediment in the water column and in the bed. The model was calibrated for the period August 2007 - April 2008 and validated for the period
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
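A minimal version of such a hierarchical (random-intercept) height model can be sketched with empirical-Bayes shrinkage of stand-level intercepts toward the grand mean (the two-step moment estimator below is a simplification; the paper's actual estimation method may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Trees clustered in stands: height = 5 + stand effect + 0.5 * diameter + noise
n_stands, n_trees = 30, 8
stand_eff = rng.normal(0, 2.0, n_stands)                 # between-stand variation
diam = rng.uniform(10, 50, (n_stands, n_trees))
height = 5.0 + stand_eff[:, None] + 0.5 * diam + rng.normal(0, 1.0, (n_stands, n_trees))

# Step 1: pooled fit for the fixed diameter slope
slope = np.polyfit(diam.ravel(), height.ravel(), 1)[0]
resid = height - slope * diam                            # per-tree intercept residuals

# Step 2: empirical-Bayes shrinkage of stand intercepts toward the grand mean
stand_mean = resid.mean(axis=1)
grand = stand_mean.mean()
s2_within = resid.var(axis=1, ddof=1).mean() / n_trees   # sampling variance of a stand mean
s2_between = max(stand_mean.var(ddof=1) - s2_within, 0.0)
shrink = s2_between / (s2_between + s2_within)           # partial-pooling weight
stand_intercept = grand + shrink * (stand_mean - grand)

def predict(stand, d):
    """Height prediction that borrows strength across stands via shrinkage."""
    return stand_intercept[stand] + slope * d
```

Ignoring the clustering (complete pooling) discards the stand effects entirely, while fitting each stand separately (no pooling) overfits small stands; the shrinkage weight interpolates between the two, which is the practical payoff of the hierarchical formulation.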
Modelling hierarchical and modular complex networks: division and independence
Kim, D.-H.; Rodgers, G. J.; Kahng, B.; Kim, D.
2005-06-01
We introduce a growing network model which generates both modular and hierarchical structure in a self-organized way. To this end, we modify the Barabási-Albert model into the one evolving under the principles of division and independence as well as growth and preferential attachment (PA). A newly added vertex chooses one of the modules composed of existing vertices, and attaches edges to vertices belonging to that module following the PA rule. When the module size reaches a proper size, the module is divided into two, and a new module is created. The karate club network studied by Zachary is a simple version of the current model. We find that the model can reproduce both modular and hierarchical properties, characterized by the hierarchical clustering function of a vertex with degree k, C(k), being in good agreement with empirical measurements for real-world networks.
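The growth rule described, preferential attachment inside a chosen module plus division once a module reaches a set size, can be sketched directly (the size threshold and the number of edges per new vertex are arbitrary choices for this illustration):

```python
import random

random.seed(42)

def grow_network(n_final=600, m_edges=2, split_size=50):
    """Growth with preferential attachment (PA) inside a chosen module; a module
    is divided in two once it reaches split_size (division and independence)."""
    modules = [[0, 1, 2]]                         # start from a small connected seed
    edges = {(0, 1), (1, 2), (0, 2)}
    degree = {0: 2, 1: 2, 2: 2}
    for v in range(3, n_final):
        mod = random.choice(modules)              # new vertex picks a module
        weights = [degree[u] for u in mod]        # PA weights within that module
        targets = set()
        while len(targets) < min(m_edges, len(mod)):
            targets.add(random.choices(mod, weights=weights)[0])
        for u in targets:
            edges.add((min(u, v), max(u, v)))
            degree[u] += 1
        degree[v] = len(targets)
        mod.append(v)
        if len(mod) >= split_size:                # division: module splits in two
            random.shuffle(mod)
            half = len(mod) // 2
            modules.remove(mod)
            modules.extend([mod[:half], mod[half:]])
    return modules, edges, degree

modules, edges, degree = grow_network()
```

From the resulting graph one could then estimate the clustering function C(k) to check the hierarchical scaling the paper reports; the sketch above only implements the growth mechanism itself.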
Multiple comparisons in genetic association studies: a hierarchical modeling approach.
Yi, Nengjun; Xu, Shizhong; Lou, Xiang-Yang; Mallick, Himel
2014-02-01
Multiple comparisons or multiple testing has been viewed as a thorny issue in genetic association studies aiming to detect disease-associated genetic variants from a large number of genotyped variants. We alleviate the problem of multiple comparisons by proposing a hierarchical modeling approach that is fundamentally different from the existing methods. The proposed hierarchical models simultaneously fit as many variables as possible and shrink unimportant effects towards zero. Thus, the hierarchical models yield more efficient estimates of parameters than the traditional methods that analyze genetic variants separately, and also coherently address the multiple comparisons problem due to largely reducing the effective number of genetic effects and the number of statistically "significant" effects. We develop a method for computing the effective number of genetic effects in hierarchical generalized linear models, and propose a new adjustment for multiple comparisons, the hierarchical Bonferroni correction, based on the effective number of genetic effects. Our approach not only increases the power to detect disease-associated variants but also controls the Type I error. We illustrate and evaluate our method with real and simulated data sets from genetic association studies. The method has been implemented in our freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
Modeling local item dependence with the hierarchical generalized linear model.
Jiao, Hong; Wang, Shudong; Kamata, Akihito
2005-01-01
Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.
The Revised Hierarchical Model: A critical review and assessment
Kroll, J.F.; Hell, J.G. van; Tokowicz, N.; Green, D.W.
2010-01-01
Brysbaert and Duyck (this issue) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on non-selective access in bilingual word re
Hierarchical Policy Model for Managing Heterogeneous Security Systems
Lee, Dong-Young; Kim, Minsoo
2007-12-01
The integrated security management becomes increasingly complex as security managers must take heterogeneous security systems, different networking technologies, and distributed applications into consideration. The task of managing these security systems and applications depends on various system- and vendor-specific issues. In this paper, we present a hierarchical policy model which is derived from the conceptual policy and specifies means to enforce this behavior. The hierarchical policy model consists of five levels: the conceptual policy level, goal-oriented policy level, target policy level, process policy level, and low-level policy.
Quick Web Services Lookup Model Based on Hierarchical Registration
Institute of Scientific and Technical Information of China (English)
谢山; 朱国进; 陈家训
2003-01-01
Quick Web Services Lookup (Q-WSL) is a new model for the registration and lookup of complex services in the Internet. The model is designed to quickly find complex Web services by using a hierarchical registration method. The basic concepts of the Web services system are introduced, and then the method of hierarchical registration of services is described. In particular, the service query document description and the service lookup procedure are examined, addressing how to look up services that are registered in the Web services system. Furthermore, an example design and an evaluation of its performance are presented. Specifically, it shows that the use of attribution-based service query document design and content-based hierarchical registration in Q-WSL allows service requesters to discover needed services more flexibly and rapidly. It is confirmed that Q-WSL is very suitable for Web services systems.
Bayesian structural equation modeling method for hierarchical model validation
Energy Technology Data Exchange (ETDEWEB)
Jiang Xiaomo [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: xiaomo.jiang@vanderbilt.edu; Mahadevan, Sankaran [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: sankaran.mahadevan@vanderbilt.edu
2009-04-15
A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data becomes scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.
MULTILEVEL RECURRENT MODEL FOR HIERARCHICAL CONTROL OF COMPLEX REGIONAL SECURITY
Directory of Open Access Journals (Sweden)
Andrey V. Masloboev
2014-11-01
Subject of research. The goal and scope of the research are the development of methods and software for mathematical and computer modeling of regional security information support systems as multilevel hierarchical systems. Such systems are characterized by weak formalization, the multi-aspect nature of the underlying system processes and their interconnections, and high dynamics and uncertainty. The research methodology is based on a functional-target approach and the principles of multilevel hierarchical system theory. The work considers the analysis and structural-algorithmic synthesis of multilevel computer-aided systems intended for management and decision-making information support in the field of regional security. Main results. A multilevel hierarchical control model for the complex security of a regional socio-economic system has been developed. The model is based on the functional-target approach and provides both a formal statement and solution, and a practical implementation, of the problems of synthesizing the structure and control algorithms of an automated regional security management system that is optimal with respect to specified criteria. On the basis of the model, an approach is proposed for solving intralevel and interlevel coordination problems in multilevel hierarchical systems. Coordination is achieved by satisfying interconnection requirements between the quality-of-functioning indexes (objective functions) optimized by the different elements of the multilevel system. This makes it possible to reach sufficient coherence of the local decisions made at the different control levels under decentralized decision-making and a highly dynamic external environment. Application of the recurrent model supports the formation of mathematical models for security control of regional socio-economic systems functioning under uncertainty. Practical relevance. The model implementation makes it possible to automate the synthesis of
Hierarchical Non-Emitting Markov Models
Ristad, E S; Ristad, Eric Sven; Thomas, Robert G.
1998-01-01
We describe a simple variant of the interpolated Markov model with non-emitting state transitions and prove that it is strictly more powerful than any Markov model. More importantly, the non-emitting model outperforms the classic interpolated model on the natural language texts under a wide range of experimental conditions, with only a modest increase in computational requirements. The non-emitting model is also much less prone to overfitting. Keywords: Markov model, interpolated Markov model, hidden Markov model, mixture modeling, non-emitting state transitions, state-conditional interpolation, statistical language model, discrete time series, Brown corpus, Wall Street Journal.
Conceptual hierarchical modeling to describe wetland plant community organization
Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.
2010-01-01
Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed this procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble the hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined that 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper-level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization and useful management models. © Society of Wetland Scientists 2009.
Update Legal Documents Using Hierarchical Ranking Models and Word Clustering
Pham, Minh Quang Nhat; Nguyen, Minh Le; Shimazu, Akira
2010-01-01
Our research addresses the task of updating legal documents when new information emerges. In this paper, we employ a hierarchical ranking model for the task of updating legal documents. Word clustering features are incorporated into the ranking models to exploit semantic relations between words. Experimental results on legal data built from the United States Code show that the hierarchical ranking model with word clustering outperforms baseline methods using the Vector Space Model, and word cluster-based ...
Grid based calibration of SWAT hydrological models
Directory of Open Access Journals (Sweden)
D. Gorgan
2012-07-01
The calibration and execution of large hydrological models such as SWAT (soil and water assessment tool), developed for large areas, high resolution, and huge input data, need not only a long execution time but also high computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web-based solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, hiding the complex control of processes and heterogeneous resources across the grid-based high-performance computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation covers the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the obtained results demonstrate the benefits brought by the grid parallel and distributed environment as a processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.
High Accuracy Transistor Compact Model Calibrations
Energy Technology Data Exchange (ETDEWEB)
Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic, such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements demand modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Hierarchical modelling for the environmental sciences statistical methods and applications
Clark, James S
2006-01-01
New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.
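The partial-pooling idea at the heart of hierarchical Bayes can be made concrete with a minimal sketch. This is an empirical-Bayes shrinkage of group means toward the grand mean, assuming known within-group variance `s2` and between-group variance `t2`; the group means and variance values are illustrative, not from any environmental data set.

```python
# Minimal partial-pooling sketch: each group mean is shrunk toward the
# grand mean, with the shrinkage weight set by the ratio of between-group
# to total variance. All numbers below are illustrative.
def partial_pool(group_means, s2, t2):
    grand = sum(group_means) / len(group_means)
    w = t2 / (t2 + s2)  # weight on each group's own mean
    return [w * m + (1 - w) * grand for m in group_means]

pooled = partial_pool([10.0, 14.0, 30.0], s2=4.0, t2=4.0)
```

With `t2 == s2` the weight is 0.5, so each group mean is pulled halfway toward the grand mean of 18; the outlying group (30.0) is shrunk the most in absolute terms.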
Objective calibration of numerical weather prediction models
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly constrained parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multi-variate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to perform at least as well as expert tuning. Based on these results, this study presents an approach for applying the methodology to an NWP model. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature and 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
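As a hedged, one-parameter sketch of the quadratic meta-model (MM) idea: evaluate the forecast-error objective at a few parameter values, interpolate a parabola, and take its vertex as the calibrated value. The objective below is a toy stand-in, not a real NWP score, and the real MM fits a multivariate quadratic.

```python
# Fit an exact parabola through three (parameter, objective) samples and
# return its vertex, i.e. the meta-model's calibrated parameter value.
def quadratic_vertex(x, y):
    (x0, x1, x2), (y0, y1, y2) = x, y
    # Lagrange coefficients of the interpolating parabola
    d0 = y0 / ((x0 - x1) * (x0 - x2))
    d1 = y1 / ((x1 - x0) * (x1 - x2))
    d2 = y2 / ((x2 - x0) * (x2 - x1))
    a = d0 + d1 + d2  # x^2 coefficient
    b = -(d0 * (x1 + x2) + d1 * (x0 + x2) + d2 * (x0 + x1))
    return -b / (2 * a)  # vertex = minimizer

objective = lambda p: (p - 1.3) ** 2 + 0.5  # toy error surface
best = quadratic_vertex([0.0, 1.0, 2.0], [objective(p) for p in [0.0, 1.0, 2.0]])
```

Because the toy objective is itself quadratic, the meta-model recovers the true minimizer (1.3) exactly; on a real forecast score the vertex is only an approximation, refined over further model runs.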
On the construction of hierarchic models
Out, D.-J.; Rikxoort, van R.P.; Bakker, R.R.
1994-01-01
One of the main problems in the field of model-based diagnosis of technical systems today is finding the most useful model or models of the system being diagnosed. Often, a model showing the physical components and the connections between them is all that is available. As systems grow larger and lar
Modeling urban air pollution with optimized hierarchical fuzzy inference system.
Tashayo, Behnam; Alimohammadi, Abbas
2016-10-01
Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale and high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from the existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and hierarchical structure, are evaluated by comparing five consecutive models in terms of accuracy and complexity criteria. Fivefold cross-validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R^2) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.
ECoS, a framework for modelling hierarchical spatial systems.
Harris, John R W; Gorley, Ray N
2003-10-01
A general framework for modelling hierarchical spatial systems has been developed and implemented as the ECoS3 software package. The structure of this framework is described, and illustrated with representative examples. It allows the set-up and integration of sets of advection-diffusion equations representing multiple constituents interacting in a spatial context. Multiple spaces can be defined, with zero, one or two-dimensions and can be nested, and linked through constituent transfers. Model structure is generally object-oriented and hierarchical, reflecting the natural relations within its real-world analogue. Velocities, dispersions and inter-constituent transfers, together with additional functions, are defined as properties of constituents to which they apply. The resulting modular structure of ECoS models facilitates cut and paste model development, and template model components have been developed for the assembly of a range of estuarine water quality models. Published examples of applications to the geochemical dynamics of estuaries are listed.
Inference in HIV dynamics models via hierarchical likelihood
2010-01-01
HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which have no analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of the multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...
Cornic, Philippe; Illoul, Cédric; Cheminet, Adam; Le Besnerais, Guy; Champagnat, Frédéric; Le Sant, Yves; Leclaire, Benjamin
2016-09-01
We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data.
Modeling diurnal hormone profiles by hierarchical state space models.
Liu, Ziyue; Guo, Wensheng
2015-10-30
Adrenocorticotropic hormone (ACTH) diurnal patterns contain both smooth circadian rhythms and pulsatile activities. How to evaluate and compare them between different groups is a challenging statistical task. In particular, we are interested in testing (1) whether the smooth ACTH circadian rhythms in chronic fatigue syndrome and fibromyalgia patients differ from those in healthy controls and (2) whether the patterns of pulsatile activities are different. In this paper, a hierarchical state space model is proposed to extract these signals from noisy observations. The smooth circadian rhythms shared by a group of subjects are modeled by periodic smoothing splines. The subject level pulsatile activities are modeled by autoregressive processes. A functional random effect is adopted at the pair level to account for the matched pair design. Parameters are estimated by maximizing the marginal likelihood. Signals are extracted as posterior means. Computationally efficient Kalman filter algorithms are adopted for implementation. Application of the proposed model reveals that the smooth circadian rhythms are similar in the two groups but the pulsatile activities in patients are weaker than those in the healthy controls. Copyright © 2015 John Wiley & Sons, Ltd.
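The filtering step underlying such state space models can be sketched with a scalar Kalman filter. The random-walk state, the variances `q` and `r`, and the observations below are illustrative, not the ACTH model itself (which uses periodic splines and autoregressive components).

```python
# Scalar Kalman filter for a random-walk state observed with noise:
# predict (variance grows by q), then update toward each observation y
# with gain k. All parameter values are illustrative.
def kalman_filter(ys, q, r, x0=0.0, p0=1.0):
    x, p, out = x0, p0, []
    for y in ys:
        p += q                # predict: state variance grows by q
        k = p / (p + r)       # Kalman gain
        x += k * (y - x)      # update with observation y
        p *= (1 - k)          # posterior variance
        out.append(x)
    return out

states = kalman_filter([1.0, 2.0, 3.0], q=1.0, r=1.0)
```

The filtered states trail the rising observations, which is the smoothing behavior that lets such models separate slow circadian trends from pulsatile noise.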
Learning curve estimation in medical devices and procedures: hierarchical modeling.
Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S
2017-07-30
In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support the estimation of these effects, we evaluated multiple methods for modeling complication rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians, and then modeled them within established methods for hierarchical data, such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models in ease of modeling and a substantially lower rate of model convergence failures. We then focused on GEE and performed a separate simulation to vary the shape of the learning curve, and applied various smoothing methods to the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios in accurately modeling the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choices of smoothing method differed negligibly from one another. This was an important application for understanding how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
Hierarchical Item Response Models for Cognitive Diagnosis
Hansen, Mark Patrick
2013-01-01
Cognitive diagnosis models (see, e.g., Rupp, Templin, & Henson, 2010) have received increasing attention within educational and psychological measurement. The popularity of these models may be largely due to their perceived ability to provide useful information concerning both examinees (classifying them according to their attribute profiles)…
Hierarchical model-based interferometric synthetic aperture radar image registration
Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing
2014-01-01
With the rapid development of spaceborne interferometric synthetic aperture radar technology, classical image registration methods cannot deliver the efficiency and accuracy required for processing large volumes of real data. Based on this fact, we propose a new method consisting of two steps: coarse registration, realized by a cross-correlation algorithm, and fine registration, realized by a hierarchical model-based algorithm. The hierarchical model-based algorithm is a highly efficient optimization algorithm. Its key features are a global model that constrains the overall structure of the estimated motion, a local model that is used in the estimation process, and a coarse-to-fine refinement strategy. Experimental results from different kinds of simulated and real data have confirmed that the proposed method is very fast and has high accuracy. Compared with a conventional cross-correlation method, the proposed method provides markedly improved performance.
Concept Association and Hierarchical Hamming Clustering Model in Text Classification
Institute of Scientific and Technical Information of China (English)
Su Gui-yang; Li Jian-hua; Ma Ying-hua; Li Sheng-hong; Yin Zhong-hang
2004-01-01
We propose two models in this paper. A concept association model is put forward to obtain the co-occurrence relationships among keywords in the documents, and a hierarchical Hamming clustering model is used to reduce the dimensionality of the category feature vector space, which addresses the problem of the extremely high dimensionality of the documents' feature space. Experimental results indicate that the concept association model can obtain the co-occurrence relations among keywords in the documents, which effectively improves the recall of the classification system. The hierarchical Hamming clustering model can reduce the dimensionality of the category feature vector efficiently; the size of the reduced vector space is only about 10% of the original dimensionality.
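A toy sketch of Hamming-based clustering: binary keyword-occurrence vectors within a Hamming-distance threshold of a cluster's first member are merged into that cluster. The vectors and threshold are illustrative, not the paper's exact algorithm.

```python
# Greedy single-pass clustering of binary feature vectors by Hamming
# distance: a vector joins the first cluster whose representative is
# within the threshold, otherwise it starts a new cluster.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cluster(vectors, threshold):
    clusters = []
    for v in vectors:
        for c in clusters:
            if hamming(v, c[0]) <= threshold:
                c.append(v)
                break
        else:
            clusters.append([v])
    return clusters

docs = [(1, 1, 0, 0), (1, 0, 0, 0), (0, 0, 1, 1), (0, 1, 1, 1)]
groups = cluster(docs, threshold=1)
```

The four 4-dimensional vectors collapse into two clusters, and each cluster can then be represented by a single merged feature, reducing the dimensionality of the category feature space.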
Dissecting magnetar variability with Bayesian hierarchical models
Huppenkothen, D; Hogg, D W; Murray, I; Frean, M; Elenbaas, C; Watts, A L; Levin, Y; van der Horst, A J; Kouveliotou, C
2015-01-01
Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behaviour, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favoured models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture afte...
Hierarchical Bulk Synchronous Parallel Model and Performance Optimization
Institute of Scientific and Technical Information of China (English)
HUANG Linpeng; SUN Yongqiang; YUAN Wei
1999-01-01
Based on the framework of BSP, a Hierarchical Bulk Synchronous Parallel (HBSP) performance model is introduced in this paper to capture the performance optimization problem for various stages in parallel program development and to accurately predict the performance of a parallel program by considering factors causing variance at local computation and global communication. The related methodology has been applied to several real applications and the results show that HBSP is a suitable model for optimizing parallel programs.
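The standard BSP cost accounting that such models build on can be sketched directly; the per-superstep parameters below are illustrative, and this is the classic flat BSP form (max work w, plus h·g for communication, plus barrier latency l), not HBSP's hierarchical extension.

```python
# Classic BSP cost: each superstep costs the maximum per-processor work,
# plus h*g for sending/receiving at most h messages, plus the barrier
# synchronization latency l. Parameter values are illustrative.
def bsp_cost(supersteps, g, l):
    return sum(max(w) + h * g + l for w, h in supersteps)

# two supersteps: (per-processor work, max messages per processor)
total = bsp_cost([([10, 12, 9], 4), ([8, 8, 8], 2)], g=2.0, l=5.0)
```

Here the first superstep costs 12 + 4·2 + 5 = 25 and the second 8 + 2·2 + 5 = 17, so the predicted run time is 42; a hierarchical variant would use different g and l at each level of the machine.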
Fractal Derivative Model for Air Permeability in Hierarchic Porous Media
Directory of Open Access Journals (Sweden)
Jie Fan
2012-01-01
Full Text Available Air permeability in hierarchic porous media does not obey Fick's equation or its modifications because fractal objects have well-defined geometric properties that are discrete and discontinuous. We propose a theoretical model dealing with, for the first time, this seemingly complex air permeability process using a fractal derivative method. The fractal derivative model has been successfully applied to explain the novel air permeability phenomenon of the cocoon. The theoretical analysis was in agreement with experimental results.
A hierarchical model for spatial capture-recapture data
Royle, J. Andrew; Young, K.V.
2008-01-01
Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.
A hierarchical model for ordinal matrix factorization
DEFF Research Database (Denmark)
Paquet, Ulrich; Thomson, Blaise; Winther, Ole
2012-01-01
their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation metric, results show that the suggested model outperforms alternative factorization techniques. Results also show how Gibbs sampling...
Hierarchical, model-based risk management of critical infrastructures
Energy Technology Data Exchange (ETDEWEB)
Baiardi, F. [Polo G.Marconi La Spezia, Universita di Pisa, Pisa (Italy); Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)], E-mail: f.baiardi@unipi.it; Telmon, C.; Sgandurra, D. [Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)
2009-09-15
Risk management is a process that includes several steps, from vulnerability analysis to the formulation of a risk mitigation plan that selects countermeasures to be adopted. With reference to an information infrastructure, we present a risk management strategy that considers a sequence of hierarchical models, each describing dependencies among infrastructure components. A dependency exists anytime a security-related attribute of a component depends upon the attributes of other components. We discuss how this notion supports the formal definition of risk mitigation plan and the evaluation of the infrastructure robustness. A hierarchical relation exists among models that are analyzed because each model increases the level of details of some components in a previous one. Since components and dependencies are modeled through a hypergraph, to increase the model detail level, some hypergraph nodes are replaced by more and more detailed hypergraphs. We show how critical information for the assessment can be automatically deduced from the hypergraph and define conditions that determine cases where a hierarchical decomposition simplifies the assessment. In these cases, the assessment has to analyze the hypergraph that replaces the component rather than applying again all the analyses to a more detailed, and hence larger, hypergraph. We also show how the proposed framework supports the definition of a risk mitigation plan and discuss some indicators of the overall infrastructure robustness. Lastly, the development of tools to support the assessment is discussed.
Seepage Calibration Model and Seepage Testing Data
Energy Technology Data Exchange (ETDEWEB)
P. Dixon
2004-02-17
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of
Introduction to Hierarchical Bayesian Modeling for Ecological Data
Parent, Eric
2012-01-01
Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden states variables. When fitting the models to data, the authors gradually present the concepts a
A Hierarchical Probability Model of Colon Cancer
Kelly, Michael
2010-01-01
We consider a model of fixed size $N = 2^l$ in which there are $l$ generations of daughter cells and a stem cell. In each generation $i$ there are $2^{i-1}$ daughter cells. At each integral time unit the cells split so that the stem cell splits into a stem cell and generation 1 daughter cell and the generation $i$ daughter cells become two cells of generation $i+1$. The last generation is removed from the population. The stem cell gets first and second mutations at rates $u_1$ and $u_2$ and the daughter cells get first and second mutations at rates $v_1$ and $v_2$. We find the distribution for the time it takes to get two mutations as $N$ goes to infinity and the mutation rates go to 0. We also find the distribution for the location of the mutations. Several outcomes are possible depending on how fast the rates go to 0. The model considered has been proposed by Komarova (2007) as a model for colon cancer.
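The population structure described above is easy to sketch: a stem cell plus $l$ generations of daughter cells, with generation $i$ holding $2^{i-1}$ cells, so the daughters sum to $2^l - 1$ and the total fixed size is $N = 2^l$.

```python
# Cell counts in the hierarchical colon-cancer model: one stem cell,
# then generation i contains 2**(i-1) daughter cells, for i = 1..l.
def generation_sizes(l):
    return [1] + [2 ** (i - 1) for i in range(1, l + 1)]  # stem + daughters

sizes = generation_sizes(4)
total = sum(sizes)  # stem cell + all daughters = 2**l
```

This bookkeeping is only the deterministic skeleton; the paper's results concern the random waiting time until two mutations accumulate in this structure.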
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.
Continuum damage modeling and simulation of hierarchical dental enamel
Ma, Songyun; Scheider, Ingo; Bargmann, Swantje
2016-05-01
Dental enamel exhibits high fracture toughness and stiffness due to a complex hierarchical and graded microstructure, optimally organized from nano- to macro-scale. In this study, a 3D representative volume element (RVE) model is adopted to study the deformation and damage behavior of the fibrous microstructure. A continuum damage mechanics model coupled to hyperelasticity is developed for modeling the initiation and evolution of damage in the mineral fibers as well as protein matrix. Moreover, debonding of the interface between mineral fiber and protein is captured by employing a cohesive zone model. The dependence of the failure mechanism on the aspect ratio of the mineral fibers is investigated. In addition, the effect of the interface strength on the damage behavior is studied with respect to geometric features of enamel. Further, the effect of an initial flaw on the overall mechanical properties is analyzed to understand the superior damage tolerance of dental enamel. The simulation results are validated by comparison to experimental data from micro-cantilever beam testing at two hierarchical levels. The transition of the failure mechanism at different hierarchical levels is also well reproduced in the simulations.
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
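As a hedged illustration of one such thermodynamic constraint: around a closed reaction cycle, detailed balance requires the product of forward rate constants to equal the product of reverse ones (a Wegscheider condition). The sketch below rescales an infeasible rate set onto that constraint by a single geometric factor; the rates are illustrative, and TCMC itself solves a general constrained optimization rather than this one-cycle projection.

```python
# Project calibrated rate constants onto one Wegscheider cycle condition
# prod(kf) == prod(kr) by spreading the inconsistency factor gamma
# evenly (in log space) over all forward and reverse rates.
from math import prod

def project_cycle(kf, kr):
    gamma = prod(kf) / prod(kr)               # thermodynamic inconsistency
    scale = gamma ** (-1.0 / (2 * len(kf)))   # even log-space correction
    return [k * scale for k in kf], [k / scale for k in kr]

kf2, kr2 = project_cycle([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```

The input rates give gamma = 4 (infeasible); after projection the cycle product ratio is exactly 1, so the adjusted model admits a consistent equilibrium.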
Bayesian Hierarchical Models to Augment the Mediterranean Forecast System
2016-06-07
year. Our goal is to develop an ensemble ocean forecast methodology, using Bayesian Hierarchical Modelling (BHM) tools. The ocean ensemble forecast...from above); i.e. we assume Ut ~ Z Λt^{1/2}. WORK COMPLETED The prototype MFS-Wind-BHM was designed and implemented based on stochastic...coding refinements we implemented on the prototype surface wind BHM. A DWF event in February 2005, in the Gulf of Lions, was identified for reforecast
Emergence of a 'visual number sense' in hierarchical generative models.
Stoianov, Ivilin; Zorzi, Marco
2012-01-08
Numerosity estimation is phylogenetically ancient and foundational to human mathematical learning, but its computational bases remain controversial. Here we show that visual numerosity emerges as a statistical property of images in 'deep networks' that learn a hierarchical generative model of the sensory input. Emergent numerosity detectors had response profiles resembling those of monkey parietal neurons and supported numerosity estimation with the same behavioral signature shown by humans and animals.
Sensor modelling and camera calibration for close-range photogrammetry
Luhmann, Thomas; Fraser, Clive; Maas, Hans-Gerd
2016-05-01
Metric calibration is a critical prerequisite to the application of modern, mostly consumer-grade digital cameras for close-range photogrammetric measurement. This paper reviews aspects of sensor modelling and photogrammetric calibration, with attention being focussed on techniques of automated self-calibration. Following an initial overview of the history and the state of the art, selected topics of current interest within calibration for close-range photogrammetry are addressed. These include sensor modelling, with standard, extended and generic calibration models being summarised, along with non-traditional camera systems. Self-calibration via both targeted planar arrays and targetless scenes amenable to SfM-based exterior orientation are then discussed, after which aspects of calibration and measurement accuracy are covered. Whereas camera self-calibration is largely a mature technology, there is always scope for additional research to enhance the models and processes employed with the many camera systems nowadays utilised in close-range photogrammetry.
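The pinhole model at the core of these calibration formulations can be sketched in a few lines: a world point is rotated and translated into the camera frame, then projected with focal length f and principal point (cx, cy). The intrinsics and world point below are illustrative, and the lens distortion terms that a real self-calibration would also estimate are omitted.

```python
# Minimal pinhole projection: world point -> camera frame (R, t) ->
# perspective division -> pixel coordinates. No distortion modeled.
def project(point, R, t, f, cx, cy):
    X = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    return (f * X[0] / X[2] + cx, f * X[1] / X[2] + cy)

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation
u, v = project((0.2, -0.1, 2.0), I3, [0, 0, 0], f=1000.0, cx=320.0, cy=240.0)
```

Self-calibration amounts to adjusting f, (cx, cy), R, t (and distortion coefficients) until projections like this one agree with observed target image coordinates.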
Hierarchical animal movement models for population-level inference
Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.
2016-01-01
New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.
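The two-stage procedure described above can be sketched numerically. The following is a toy sketch, not the authors' code: stage 1 draws independent per-animal posteriors for a hypothetical mean step length, and stage 2 runs a small secondary sampler that resamples those stage-1 draws under a population-level prior. All data, parameter values, and model simplifications (known noise, fixed population variance) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: independent individual-level fits (easily parallelizable) ------
# Each animal has a mean step length theta_i drawn from a population
# distribution; each "fit" yields posterior draws for theta_i (conjugate
# normal model with known observation noise and a flat prior).
true_pop_mean, true_pop_sd = 2.0, 0.5
n_animals, n_obs, n_draws = 5, 50, 2000
obs_sd = 1.0

individual_draws = []
for _ in range(n_animals):
    theta_i = rng.normal(true_pop_mean, true_pop_sd)   # animal-level truth
    y = rng.normal(theta_i, obs_sd, size=n_obs)        # telemetry-derived steps
    # Posterior for theta_i under a flat prior: N(mean(y), obs_sd^2 / n_obs)
    individual_draws.append(
        rng.normal(y.mean(), obs_sd / np.sqrt(n_obs), size=n_draws))

# --- Stage 2: unsupervised secondary sampler ---------------------------------
# Alternate between (a) a Gibbs update of the population mean mu given the
# current animal-level values and (b) resampling each animal's stage-1 draws
# with weights proportional to the population-level prior N(mu, tau^2).
tau = 1.0                      # population sd, held fixed for simplicity
n_iter = 500
theta = np.array([d.mean() for d in individual_draws])
mu_trace = []
for _ in range(n_iter):
    mu = rng.normal(theta.mean(), tau / np.sqrt(n_animals))
    for i, draws in enumerate(individual_draws):
        w = np.exp(-0.5 * ((draws - mu) / tau) ** 2)   # prior weights
        theta[i] = rng.choice(draws, p=w / w.sum())
    mu_trace.append(mu)

pop_mean_est = float(np.mean(mu_trace[100:]))          # discard burn-in
```

The key property illustrated: stage 1 never sees the population model, so it can run in parallel, while stage 2 needs only the saved draws, not the raw telemetry.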
Calibrated predictions for multivariate competing risks models.
Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni
2014-04-01
Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction for a disease in which family history is an important risk factor, reflecting inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being the allowance for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right-censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance, however, is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and are easy to implement.
Al-Abed, N. A.; Whiteley, H. R.
2002-11-01
Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program-Fortran (HSPF) model, is a major challenge. This paper describes calibration procedures for the water-quantity parameters of HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′W longitude and 42°51′ and 44°31′N latitude. The drainage area is 6965 km². Calibration efforts were directed at those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration, with the ratios based on the known properties of the subwatersheds determined using the GIS. This calibration procedure produced very satisfactory results: the percentage difference between the simulated and the measured yearly discharge ranged from 4 to 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981-85 was 67 m³ s⁻¹ and the average measured discharge at Brantford was 70 m³ s⁻¹. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration.
Coordinated Resource Management Models in Hierarchical Systems
Directory of Open Access Journals (Sweden)
Gabsi Mounir
2013-03-01
In response to the trend toward an efficient global economy, constructing a global logistics model has garnered much attention from industry. Location selection is an important issue for international companies interested in building a global logistics management system. Infrastructure in developing countries is based on the use of both classical and modern control technology, for which the most important components are professional levels of structural knowledge, dynamics and management processes, threats and interference, and external and internal attacks. The problem of controlling flows of energy and material resources in local and regional structures, under normal and marginal (emergency) operation provoked by information attacks or threats of flow failure, is all the more relevant given the low level of professional, psychological, and cognitive training of operational personnel and managers. Logistics strategies encompass business goal requirements, allowable decision tactics, and a vision for designing and operating a logistics system. This paper describes a module for selecting and coordinating flow-management strategies based on resource-use and logistics-system concepts.
Hierarchical models and the analysis of bird survey information
Sauer, J.R.; Link, W.A.
2003-01-01
Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
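The posterior summaries mentioned above (number of increasing species, precision-adjusted rankings, probability that a trend falls within an interval) are straightforward to compute once MCMC output is available. A hypothetical sketch, using fabricated posterior draws in place of a fitted hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fabricated posterior draws of yearly trend (% change per year) for three
# species, standing in for the hierarchical model's MCMC output.
posterior_trends = {
    "species_A": rng.normal(1.5, 0.6, size=4000),    # likely increasing
    "species_B": rng.normal(-0.2, 0.4, size=4000),   # roughly stable
    "species_C": rng.normal(-2.0, 0.8, size=4000),   # likely declining
}

# Posterior probability that each species is increasing (trend > 0)
p_increasing = {s: float((d > 0).mean()) for s, d in posterior_trends.items()}

# "Stability" as in the abstract: P(trend within a chosen interval, here +/-1%)
p_stable = {s: float(((d > -1) & (d < 1)).mean())
            for s, d in posterior_trends.items()}

# Expected number of increasing species = sum of per-species probabilities
expected_n_increasing = sum(p_increasing.values())

# Precision-adjusted ranking: rank species within each posterior draw,
# then average the ranks across draws (rank 1 = smallest trend)
draws = np.column_stack(list(posterior_trends.values()))
mean_ranks = draws.argsort(axis=1).argsort(axis=1).mean(axis=0) + 1
```

Because every summary is an average over draws, each automatically carries the posterior uncertainty, which is the point the abstract makes about MCMC-based derived statistics.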
A new approach for modeling generalization gradients: A case for Hierarchical Models
Directory of Open Access Journals (Sweden)
Vanbrabant, Koen; Boddez, Yannick; Verduyn, Philippe; Mestdagh, Merijn; Hermans, Dirk; Raes, Filip
2015-05-01
A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions that are imposed by repeated-measures analysis of variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a simulation study we demonstrate the dominance of hierarchical models over rANOVA. In addition, we show the lack of efficiency of Mauchly's sphericity test at sample sizes typical of generalization research, and confirm how violations of sphericity increase the probability of type I errors. A worked example of a hierarchical model is provided, with a specific emphasis on the interpretation of parameters relevant to generalization research.
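To illustrate why a hierarchical (multilevel) approach handles continuous predictors that rANOVA cannot, here is a toy two-level analysis of a simulated generalization gradient: per-subject regressions on a continuous stimulus-distance predictor, pooled at the population level. A full hierarchical fit would use MCMC or REML; this moment-based version is only a sketch, with all numbers invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated generalization gradient: each subject rates responding at several
# stimulus distances from the conditioned stimulus. Distance is a continuous
# predictor, which rANOVA cannot accommodate directly.
n_subjects = 20
distances = np.linspace(0.0, 1.0, 6)
true_slope, true_intercept, subj_sd, noise_sd = -2.0, 3.0, 0.8, 0.5

rows = []
for s in range(n_subjects):
    b0_s = true_intercept + rng.normal(0, subj_sd)   # subject random intercept
    for d in distances:
        rows.append((s, d, b0_s + true_slope * d + rng.normal(0, noise_sd)))
subj, dist, y = map(np.array, zip(*rows))

# Minimal two-level analysis: fit each subject separately, then pool the
# per-subject slopes into a population-level gradient and its uncertainty.
slopes = np.array([np.polyfit(dist[subj == s], y[subj == s], 1)[0]
                   for s in range(n_subjects)])
pooled_slope = slopes.mean()                          # population gradient
se_slope = slopes.std(ddof=1) / np.sqrt(n_subjects)   # its standard error
```

The subject-level variation is carried explicitly in the slope distribution rather than being forced into a sphericity assumption.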
Calibration of Models Using Groundwater Age (Invited)
Sanford, W. E.
2009-12-01
Water-resource managers are frequently concerned with the long-term ability of a groundwater system to deliver volumes of water for both humans and ecosystems under natural and anthropogenic stresses. Analysis of how a groundwater system responds to such stresses usually involves the construction and calibration of a numerical groundwater-flow model. The calibration procedure usually involves the use of both groundwater-level and flux observations. Water-level data are often more abundant, and thus the availability of flux data can be critical, with well discharge and base flow to streams being most often available. Lack of good flux data however is a common occurrence, especially in more arid climates where the sustainability of the water supply may be even more in question. Environmental tracers are frequently being used to estimate the “age” of a water sample, which represents the time the water has been in the subsurface since its arrival at the water table. Groundwater ages provide flux-related information and can be used successfully to help calibrate groundwater models if porosity is well constrained, especially when there is a paucity of other flux data. As several different methods of simulating groundwater age and tracer movement are possible, a review is presented here of the advantages, disadvantages, and potential pitfalls of the various numerical and tracer methods used in model calibration. The usefulness of groundwater ages for model calibration depends on the ability both to interpret a tracer so as to obtain an apparent observed age, and to use a numerical model to obtain an equivalent simulated age observation. Different levels of simplicity and assumptions accompany different methods for calculating the equivalent simulated age observation. The advantages of computational efficiency in certain methods can be offset by error associated with the underlying assumptions. Advective travel-time calculation using path-line tracking in finite
Ye, Lin; Su, Steven W
2015-01-01
Optimum Experimental Design (OED) is an information-gathering technique used in parameter estimation, which aims to minimize the variance of parameter estimates and predictions. In this paper, we further investigate an OED for MEMS accelerometer calibration based on the 9-parameter auto-calibration model. Using a linearized 9-parameter accelerometer model, we show the proposed OED is both G-optimal and rotatable, which are the desired properties for the calibration of wearable sensors, for which only simple calibration devices are available. The experimental design was carried out with a newly developed wearable health-monitoring device, and the desired experimental results were achieved.
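A 9-parameter accelerometer auto-calibration model of this kind typically comprises three biases, three scale factors, and three misalignment terms. As a hedged illustration (not the paper's design), the following sketch simulates raw readings under such a model and recovers the parameters by linear least squares; for simplicity it fits a full 3x3 matrix plus bias (12 parameters) rather than the constrained 9-parameter form.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sensor model: a_true = S @ (a_raw - b), with 3 biases in b,
# 3 scale factors on diag(S), and 3 misalignment terms off the diagonal.
S_true = np.array([[1.02,  0.000, 0.00],
                   [0.01,  0.980, 0.00],
                   [-0.02, 0.015, 1.01]])
b_true = np.array([0.05, -0.03, 0.02])

# Simulated calibration data: known unit gravity vectors at many orientations
# (as a calibration rig would provide), plus noise on the raw sensor output.
n = 200
g_ref = rng.normal(size=(n, 3))
g_ref /= np.linalg.norm(g_ref, axis=1, keepdims=True)
a_raw = g_ref @ np.linalg.inv(S_true).T + b_true + rng.normal(0, 1e-3, (n, 3))

# Linear least squares: g_ref ~ a_raw @ M.T + c, so M estimates S and the
# bias follows from c = -S @ b, i.e. b = -inv(M) @ c.
X = np.hstack([a_raw, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, g_ref, rcond=None)
M_est = coef[:3].T
b_est = -np.linalg.solve(M_est, coef[3])
```

An optimal design would choose the set of orientations to minimize the variance of these estimates; random orientations are used here purely for illustration.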
Calibration of hydrological model with programme PEST
Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca
2016-04-01
PEST is a tool based on the minimization of an objective function related to the root mean square error between model output and measurements. We use the "singular value decomposition" (SVD) section of the PEST control file together with the Tikhonov regularization method to estimate model parameters successfully. PEST can fail when the inverse problem is ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of initial parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software, and its ability to be applied to whole catchments at once, allowed the calibration to perform extremely well across a large number of sub-catchments. The parallel-computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communication to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual water-balance model of catchment hydrology that simulates discharge from rainfall, temperature, and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and result files are in XML form, which makes it easy to connect the model with other applications, such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
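PEST's core loop, minimizing a misfit objective with a Tikhonov regularization term that pulls parameters toward preferred values, can be imitated on a toy problem. The sketch below uses scipy's Nelder-Mead on an invented two-parameter recession-curve model; it is a conceptual stand-in, not an interface to PEST or HBV-light.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy forward model: a two-parameter recession curve standing in for the
# hydrological model; "measurements" are its output plus noise.
t = np.linspace(0.0, 10.0, 50)

def model(params):
    k, c = params
    return c * np.exp(-k * t)

true_params = np.array([0.3, 5.0])
measured = model(true_params) + rng.normal(0, 0.05, t.size)

# PEST-style objective: RMSE misfit plus a Tikhonov term pulling the
# parameters toward preferred (expert initial-guess) values.
prior = np.array([0.5, 4.0])
reg_weight = 1e-3

def objective(params):
    resid = model(params) - measured
    return np.sqrt(np.mean(resid ** 2)) + reg_weight * np.sum((params - prior) ** 2)

result = minimize(objective, x0=prior, method="Nelder-Mead")
k_est, c_est = result.x
```

The regularization keeps the inverse problem well-behaved when the data constrain some parameter combinations poorly, which is the role SVD and Tikhonov regularization play in the PEST setup described above.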
Hierarchical Heteroclinics in Dynamical Model of Cognitive Processes: Chunking
Afraimovich, Valentin S.; Young, Todd R.; Rabinovich, Mikhail I.
Combining the results of brain imaging and nonlinear dynamics provides a new hierarchical vision of brain network functionality that is helpful in understanding the relationship of the network to different mental tasks. Using these ideas it is possible to build adequate models for the description and prediction of different cognitive activities in which the number of variables is usually small enough for analysis. The dynamical images of different mental processes depend on their temporal organization and, as a rule, cannot be just simple attractors since cognition is characterized by transient dynamics. The mathematical image for a robust transient is a stable heteroclinic channel consisting of a chain of saddles connected by unstable separatrices. We focus here on hierarchical chunking dynamics that can represent several cognitive activities. Chunking is the dynamical phenomenon that means dividing a long information chain into shorter items. Chunking is known to be important in many processes of perception, learning, memory and cognition. We prove that in the phase space of the model that describes chunking there exists a new mathematical object — heteroclinic sequence of heteroclinic cycles — using the technique of slow-fast approximations. This new object serves as a skeleton of motions reflecting sequential features of hierarchical chunking dynamics and is an adequate image of the chunking processing.
Automated Calibration For Numerical Models Of Riverflow
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, in order to approximate the parameters that mimic the overall system behavior. An assessment of different deterministic and stochastic optimization methods is therefore undertaken to compare their robustness, computational feasibility, and global search capacity, and the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set in order to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. Minimization of the objective function by the different candidate optimization methods indicates failure in some of the gradient-based methods, such as Newton conjugate gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribiere, L-BFGS-B, truncated Newton conjugate gradient, and trust-region Newton conjugate gradient. Further ones yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for the genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming and the stochastic Bayesian inference methods produce the best optimization results. Keywords: automated calibration of a hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.
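A scaled-down analogue of the comparison described above can be run with scipy, minimizing a standard ill-conditioned test function with several of the named methods. The hydro-morphodynamic forward model is replaced here by the Rosenbrock function; everything else is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# The Rosenbrock function is a classic ill-conditioned test case, used here
# as a stand-in for the ill-posed calibration objective in the study.
def rosenbrock(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

x0 = np.array([-1.2, 1.0])
results = {}
for method in ["Nelder-Mead", "Powell", "L-BFGS-B"]:
    res = minimize(rosenbrock, x0, method=method)
    results[method] = {"x": res.x, "fun": res.fun, "nfev": res.nfev}

# All three should reach the minimum at (1, 1); the function-evaluation
# counts expose the differing costs of derivative-free vs. gradient-based
# search, which is the practical trade-off the abstract reports.
best = min(results, key=lambda m: results[m]["fun"])
```

On an expensive CFD-like forward model, `nfev` (the number of forward-model runs) would dominate the total cost, which is why the study weighs computational demand alongside convergence.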
Seepage Calibration Model and Seepage Testing Data
Energy Technology Data Exchange (ETDEWEB)
S. Finsterle
2004-09-02
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
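The cluster-size bias, and an inverse-probability correction for it, can be demonstrated with a short simulation. The detection model and parameter values below are invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Population of clusters with Poisson(3)+1 sizes; detection probability
# increases with cluster size (hypothetical saturating detection model).
n_clusters = 100_000
sizes = rng.poisson(3.0, n_clusters) + 1
p_detect = 1.0 - np.exp(-0.4 * sizes)
detected = rng.random(n_clusters) < p_detect

mean_pop = sizes.mean()
mean_sample = sizes[detected].mean()   # biased upward: big clusters seen more

# Horvitz-Thompson-style correction: weight each detected cluster's size by
# the inverse of its detection probability to recover total abundance.
ht_total = (sizes[detected] / p_detect[detected]).sum()
true_total = sizes.sum()
```

In practice `p_detect` is unknown and must itself be estimated as a function of cluster size, which is exactly what the hierarchical observation model above provides.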
A hierarchical community occurrence model for North Carolina stream fish
Midway, S.R.; Wagner, Tyler; Tracy, B.H.
2016-01-01
The southeastern USA is home to one of the richest—and most imperiled and threatened—freshwater fish assemblages in North America. For many of these rare and threatened species, conservation efforts are often limited by a lack of data. Drawing on a unique and extensive data set spanning over 20 years, we modeled occurrence probabilities of 126 stream fish species sampled throughout North Carolina, many of which occur more broadly in the southeastern USA. Specifically, we developed species-specific occurrence probabilities from hierarchical Bayesian multispecies models that were based on common land use and land cover covariates. We also used index of biotic integrity tolerance classifications as a second level in the model hierarchy; we identify this level as informative for our work, but it is flexible for future model applications. Based on the partial-pooling property of the models, we were able to generate occurrence probabilities for many imperiled and data-poor species in addition to highlighting a considerable amount of occurrence heterogeneity that supports species-specific investigations whenever possible. Our results provide critical species-level information on many threatened and imperiled species as well as information that may assist with re-evaluation of existing management strategies, such as the use of surrogate species. Finally, we highlight the use of a relatively simple hierarchical model that can easily be generalized for similar situations in which conventional models fail to provide reliable estimates for data-poor groups.
Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.
Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J
2010-12-01
Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km(2) hexagons), can increase the relevance of habitat models to multispecies
Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy
2012-01-01
Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization ... The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.
Bayesian hierarchical modeling for detecting safety signals in clinical trials.
Xia, H Amy; Ma, Haijun; Carlin, Bradley P
2011-09-01
Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.
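The "borrowing strength" behavior of such hierarchical models can be illustrated with a much simpler empirical-Bayes stand-in: fit a beta prior across adverse-event subgroups by the method of moments, then shrink each raw incidence toward the overall rate. This sketches the shrinkage idea only, not the Berry-and-Berry mixture model; all counts are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated adverse-event incidence across 40 AE types, 100 subjects each.
n_aes = 40
n_subjects = np.full(n_aes, 100)
true_rates = rng.beta(2, 48, n_aes)              # mostly ~4% incidence
events = rng.binomial(n_subjects, true_rates)
raw_rates = events / n_subjects

# Method-of-moments fit of a Beta(a, b) prior across AE types...
m, v = raw_rates.mean(), raw_rates.var()
common = m * (1.0 - m) / v - 1.0
a, b = m * common, (1.0 - m) * common

# ...then each AE's posterior mean under the beta-binomial model: rates based
# on few events are pulled toward the overall mean, moderating extreme
# findings that are most likely due merely to chance.
shrunk_rates = (events + a) / (n_subjects + a + b)

spread_raw = raw_rates.max() - raw_rates.min()
spread_shrunk = shrunk_rates.max() - shrunk_rates.min()
```

A signal-detection threshold would then be applied to the shrunken estimates (or, in the full Bayesian version, to posterior exceedance probabilities) rather than to the noisy raw rates.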
An Extended Hierarchical Trusted Model for Wireless Sensor Networks
Institute of Scientific and Technical Information of China (English)
DU Ruiying; XU Mingdi; ZHANG Huanguo
2006-01-01
Cryptography and authentication are traditional approaches for providing network security. However, they are not sufficient for solving the problem in which malicious nodes compromise a whole wireless sensor network, leading to invalid data transmission and wasted resources through malicious behaviors. This paper puts forward an extended hierarchical trusted architecture for wireless sensor networks and establishes trusted congregations within a three-tier framework. The method combines statistics and economics with encryption mechanisms to develop two trusted models, which evaluate cluster-head nodes and common sensor nodes respectively. The models form a logical trusted link from the command node to the common sensor nodes and guarantee that the network can run in secure and reliable circumstances.
Ensemble renormalization group for the random-field hierarchical model.
Decelle, Aurélien; Parisi, Giorgio; Rocchi, Jacopo
2014-03-01
The renormalization group (RG) methods are still far from being completely understood in quenched disordered systems. In order to gain insight into the nature of the phase transition of these systems, it is common to investigate simple models. In this work we study a real-space RG transformation on the Dyson hierarchical lattice with a random field, which leads to a reconstruction of the RG flow and to an evaluation of the critical exponents of the model at T=0. We show that this method gives very accurate estimations of the critical exponents by comparing our results with those obtained by some of us using an independent method.
Facial animation on an anatomy-based hierarchical face model
Zhang, Yu; Prakash, Edmond C.; Sung, Eric
2003-04-01
In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and the underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Through the presence of the skull model, our facial model gains both more accurate facial deformation and the consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular forces, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate, generating flexible and realistic facial expressions.
A Bisimulation-based Hierarchical Framework for Software Development Models
Directory of Open Access Journals (Sweden)
Ping Liang
2013-08-01
Software development models have matured since the emergence of software engineering: the waterfall model, V-model, spiral model, and so on. To ensure the successful implementation of those models, various metrics for software products and the development process have been developed alongside them, such as CMMI, software metrics, and process re-engineering. In this way the quality of software products and processes can be kept as consistent as possible, and the abstract integrity of a software product can be achieved. In reality, however, the maintenance cost of software products remains high, and grows even higher as software evolves, owing to inconsistencies introduced by changes and to the inherent errors of software products. It is better to build a robust software product that can sustain as many changes as possible. Therefore, this paper proposes a process-algebra-based hierarchical framework that extracts an abstract equivalent of the deliverable at the end of each phase of a software product from its software development models. The process-algebra equivalent of the deliverable is developed hierarchically along with the development of the software product, applying bisimulation to test-run the deliverables of phases so as to guarantee the consistency and integrity of the software development and product in a straightforwardly mathematical way. An algorithm is also given to carry out the assessment of the phase deliverables in process algebra.
C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework
Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina
2010-01-01
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an L1-regularized linear regression problem, commonly referred to as Lasso or Basis Pursuit. In this work we combine the sparsity-inducing property of the Lasso model at the individual feature level with the block-sparsity property of the Group Lasso model, where sparse groups of features are jointly encoded, obtaining a hierarchically structured sparsity pattern. This results in the Hierarchical Lasso (HiLasso), which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level, but not necessarily at the lower (inside the group) level, obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for ap...
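The HiLasso penalty combines an l1 term with a sum of group l2 norms; its proximal operator is an elementwise soft-threshold followed by a group soft-threshold, which can be plugged into proximal gradient descent. A minimal sketch on synthetic data with an invented group structure (not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Proximal operator of step * (lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2):
# elementwise soft-threshold, then group soft-threshold.
def prox_hilasso(v, groups, lam1, lam2, step):
    z = np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0)
    out = np.zeros_like(z)
    for g in groups:
        gnorm = np.linalg.norm(z[g])
        if gnorm > step * lam2:
            out[g] = z[g] * (1.0 - step * lam2 / gnorm)
    return out

# Synthetic recovery problem with hierarchical sparsity: only group 0 is
# active, and within it only two coefficients are nonzero.
n, p = 80, 30
groups = [np.arange(i, i + 10) for i in range(0, p, 10)]
x_true = np.zeros(p)
x_true[0], x_true[3] = 2.0, -1.5
A = rng.normal(size=(n, p)) / np.sqrt(n)
y = A @ x_true + rng.normal(0, 1e-3, n)

# ISTA-style proximal gradient descent on 0.5*||Ax - y||^2 + HiLasso penalty
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(p)
for _ in range(500):
    x = prox_hilasso(x - step * (A.T @ (A @ x - y)), groups,
                     lam1=0.05, lam2=0.05, step=step)

active_groups = [i for i, g in enumerate(groups)
                 if np.linalg.norm(x[g]) > 1e-3]
```

The recovered solution is group-sparse (inactive groups zeroed by the group threshold) and sparse within the active group (inactive coefficients zeroed by the l1 threshold), i.e., the hierarchical pattern the abstract describes.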
o-HETM: An Online Hierarchical Entity Topic Model for News Streams
2015-05-22
Cao et al. (Eds.): PAKDD 2015, Part I, LNAI 9077, pp. 696–707, 2015. DOI: 10.1007/978-3-319-18038-0_54
A hierarchical nest survival model integrating incomplete temporally varying covariates
Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.
2013-01-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
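The daily-survival structure described above can be illustrated with a toy logit-linear model: daily survival depends on an insect covariate, and survival over an interval is the product of the daily survival probabilities. The coefficients and trap counts below are purely hypothetical, not estimates from the study.

```python
import numpy as np

def daily_survival(beta0, beta1, insect_index):
    # Logit-linear daily nest survival as a function of a biting-insect covariate.
    # beta0 and beta1 are hypothetical; a negative beta1 encodes the abstract's
    # finding that survival decreases as insect abundance increases.
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * insect_index)))

def interval_survival(beta0, beta1, insect_series):
    # A nest survives the interval only if it survives every day in it.
    return float(np.prod(daily_survival(beta0, beta1, np.asarray(insect_series))))

insects = [0.1, 0.5, 2.0, 3.5]   # hypothetical interpolated trap counts per day
p = interval_survival(beta0=3.0, beta1=-0.8, insect_series=insects)
print(round(p, 4))
```

The paper's hierarchical model additionally imputes the missing days of this covariate with a parallel sub-model, which this sketch omits.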
About wave field modeling in hierarchic medium with fractal inclusions
Hachay, Olga; Khachay, Andrey
2014-05-01
The outworking of oil and gas deposits involves the movement of polyphase multicomponent media characterized by non-equilibrium and nonlinear rheological features. The real behavior of layered systems is defined by the complicated rheology of the moving liquids and the structural morphology of the porous media. These factors, together with synergetic effects, must be accounted for in any substantial description of the filtration processes. This allows us to suggest new methods for the control and management of complicated natural systems that can capture these effects. Our research is therefore directed to the layered system from which oil is to be extracted, which is a complicated hierarchic dynamical system with fractal inclusions. In this paper we suggest an algorithm for modeling the distribution of a 2-D seismic field in a heterogeneous medium with hierarchic inclusions. We also compare the 2-D integral formulation of the seismic field, in the frame of a local hierarchic heterogeneity, for a porous inclusion and a purely elastic inclusion in the case where the Lamé parameter is equal to zero for the inclusions and the layered structure. In that case the problems for the transverse and longitudinal waves can be treated independently; here we analyze the first case. The results can be used for choosing criteria for joint seismic methods in the study of highly complicated media. If the boundaries of an inclusion of rank k are fractal, the surface and contour integrals in the integral equations must be replaced by repeated fractional integrals of Riemann-Liouville type. Using the previously developed 3-D method of induction electromagnetic frequency-geometric monitoring, we showed the possibility of determining the physical and structural features of a hierarchic oil layer structure and of estimating its water saturation from crack inclusions. For visualization we elaborated algorithms and programs for constructing cross sections for two hierarchic structural
Towards automatic calibration of 2-dimensional flood propagation models
Directory of Open Access Journals (Sweden)
P. Fabio
2009-11-01
Full Text Available Hydraulic models for describing flood propagation are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessment, and the evaluation of flood control measures. Nowadays many models of differing complexity, in terms of both mathematical foundation and spatial dimensions, are available, and most of them are comparatively easy to operate thanks to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models, such as hydrological models or models used in ecosystem analysis. There are basically two reasons for this: first, the lack of relevant data against which the models can be calibrated, because flood events are very rarely monitored, due to the disturbances they inflict and the lack of appropriate measuring equipment in place. Secondly, the two-dimensional models in particular are computationally very demanding, which in many cases restricts the use of the available sophisticated automatic calibration procedures. This study takes a well-documented flood event of August 2002 on the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which makes automatic calibration possible, is used. The application of the parallel version of the optimiser to the model and calibration data showed that (a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and (b) equifinality of model parameterisation can also be caused by too many degrees of freedom in the calibration data, in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.
Linguistic steganography on Twitter: hierarchical language modeling with manual interaction
Wilson, Alex; Blunsom, Phil; Ker, Andrew D.
2014-02-01
This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.
Finite Population Correction for Two-Level Hierarchical Linear Models.
Lai, Mark H C; Kwok, Oi-Man; Hsiao, Yu-Yu; Cao, Qian
2017-03-16
The research literature has paid little attention to the issue of a finite population at a higher level in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in 2-level hierarchical linear models. When the finite population at Level 2 is incorrectly assumed to be infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring the finite population correction is illustrated using both a real data example and a simulation study with a random intercept model and a random slope model. Simulation results indicated that the bias in the unadjusted fixed-effect standard errors was substantial when the Level-2 sample size exceeded 10% of the Level-2 population size; the bias increased with a larger intraclass correlation, a larger number of clusters, and a larger average cluster size. We also found that the proposed adjustment produced unbiased standard errors, particularly when the number of clusters was at least 30 and the average cluster size was at least 10. We encourage researchers to consider the characteristics of the target population for their studies and adjust for finite population when appropriate. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
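The direction of the adjustment can be illustrated with the classic finite population correction factor sqrt(1 - n/N). The paper's HLM-specific adjustment is more involved, but this factor shows why treating a finite Level-2 population as infinite overstates the standard errors; the numbers below are illustrative.

```python
import math

def fpc_adjusted_se(se, n_sampled, n_population):
    # Classic finite population correction: shrink the infinite-population
    # standard error by sqrt(1 - n/N). With n = N (a census) the corrected
    # SE is zero; with n << N the correction is negligible.
    fpc = math.sqrt(1.0 - n_sampled / n_population)
    return se * fpc

# With 30 of 100 clusters sampled, the unadjusted SE overstates uncertainty:
print(fpc_adjusted_se(0.10, 30, 100))
```

This matches the abstract's observation that the bias becomes substantial once the sample exceeds roughly 10% of the population.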
A Hierarchical Model for Continuous Gesture Recognition Using Kinect
DEFF Research Database (Denmark)
Jensen, Søren Kejser; Moesgaard, Christoffer; Nielsen, Christoffer Samuel
2013-01-01
Human gesture recognition is an area which has been studied thoroughly in recent years, and close to 100% recognition rates in restricted environments have been achieved, often either with single separated gestures in the input stream or with computationally intensive systems. The results ... are unfortunately not as striking when it comes to a continuous stream of gestures. In this paper we introduce a hierarchical system for gesture recognition for use in a gaming setting, with a continuous stream of data. Layer 1 is based on Nearest Neighbor Search and layer 2 uses Hidden Markov Models. The system ...
Dynamical Properties of Potassium Ion Channels with a Hierarchical Model
Institute of Scientific and Technical Information of China (English)
ZHAN Yong; AN Hai-Long; YU Hui; ZHANG Su-Hua; HAN Ying-Rong
2006-01-01
It is well known that potassium ion channels are highly permeable to K ions, the permeation rate of a single K channel being about 10^8 ions per second. We develop a hierarchical model of potassium ion channel permeation involving ab initio quantum calculations and Brownian dynamics simulations, which can consistently explain a range of channel dynamics. The results show that the average velocity of K ions, the mean permeation time of K ions, and the permeation rate of a single channel are about 0.92 nm/ns, 4.35 ns and 2.30×10^8 ions/s, respectively.
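A quick consistency check on the quoted numbers: a mean permeation time of 4.35 ns per ion implies a single-channel rate of about 2.3×10^8 ions/s, in agreement with the stated permeation rate.

```python
# Rate as the reciprocal of the mean single-ion permeation time (4.35 ns).
mean_permeation_time_s = 4.35e-9
rate_ions_per_s = 1.0 / mean_permeation_time_s
print(f"{rate_ions_per_s:.2e}")  # on the order of 1e8 ions per second
```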
Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits
Directory of Open Access Journals (Sweden)
Leandro eWatanabe
2014-11-01
Full Text Available This paper describes a hierarchical stochastic simulation algorithm which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method.
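For contrast with the hierarchical algorithm implemented in iBioSim, a minimal flat Gillespie stochastic simulation algorithm (the baseline that flattening tools reduce models to) can be sketched as follows; the birth-death reaction rates are arbitrary illustrative values, not from any SBML model.

```python
import random

def gillespie_ssa(x, rates, stoich, t_end, seed=0):
    # Flat (non-hierarchical) Gillespie SSA. x: species counts;
    # rates: propensity functions of the state; stoich: state-change vectors.
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_end:
        props = [r(x) for r in rates]
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)          # exponential waiting time
        pick, acc = rng.uniform(0, total), 0.0
        for p, change in zip(props, stoich): # choose a reaction by propensity
            acc += p
            if pick <= acc:
                x = [xi + ci for xi, ci in zip(x, change)]
                break
        traj.append((t, list(x)))
    return traj

# Birth-death example: production at constant rate 5, first-order degradation.
traj = gillespie_ssa([0], [lambda x: 5.0, lambda x: 0.5 * x[0]],
                     [[+1], [-1]], t_end=10.0)
print(len(traj), traj[-1][1])
```

A hierarchical simulator keeps submodels separate and resolves cross-model events on the fly instead of expanding them into one large flat reaction network like this one.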
A Hierarchical Model Architecture for Enterprise Integration in Chemical Industries
Institute of Scientific and Technical Information of China (English)
华贲; 周章玉; 成思危
2001-01-01
Towards the integration of supply chain, manufacturing/production and investment decision making, this paper presents a hierarchical model architecture which contains six sub-models covering the areas of manufacturing control, production operation, design and revamp, production management, supply chain, and investment decision making. Six types of flow are classified: material, energy, information, humanware, partsware, and capital. These flows connect enterprise components/subsystems to form the system topology and logical structure. Enterprise components/subsystems are abstracted into generic elementary and composite classes. Finally, the model architecture is applied to the management system of an integrated supply chain, and suggestions are made on the usage of the model architecture, its further development, and implementation issues.
Hierarchical Model for the Evolution of Cloud Complexes
Sánchez, Nestor; Parravano, Antonio
1999-01-01
The structure of cloud complexes appears to be well described by a "tree structure" representation when the image is partitioned into "clouds". In this representation, the parent-child relationships are assigned according to containment. Based on this picture, a hierarchical model for the evolution of cloud complexes, including star formation, is constructed that follows the mass evolution of each sub-structure by computing its mass exchange (evaporation or condensation) with its parent and children, which depends on the radiation density at the interface. For the set of parameters used as a reference model, the system produces IMFs with a maximum at too high a mass (~2 M_sun), and the characteristic times for evolution seem too long. We show that these properties can be improved by adjusting model parameters. However, the emphasis here is to illustrate some general properties of this nonlinear model for the star formation process. Notwithstanding the simplifications involved, the model reveals an essential fe...
Spatial Bayesian hierarchical modelling of extreme sea states
Clancy, Colm; O'Sullivan, John; Sweeney, Conor; Dias, Frédéric; Parnell, Andrew C.
2016-11-01
A Bayesian hierarchical framework is used to model extreme sea states, incorporating a latent spatial process to more effectively capture the spatial variation of the extremes. The model is applied to a 34-year hindcast of significant wave height off the west coast of Ireland. The generalised Pareto distribution is fitted to declustered peaks over a threshold given by the 99.8th percentile of the data. Return levels of significant wave height are computed and compared against those from a model based on the commonly-used maximum likelihood inference method. The Bayesian spatial model produces smoother maps of return levels. Furthermore, this approach greatly reduces the uncertainty in the estimates, thus providing information on extremes which is more useful for practical applications.
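The peaks-over-threshold approach described above yields return levels from the fitted generalised Pareto parameters via a standard formula. A sketch with hypothetical parameter values (not those fitted in the paper):

```python
import math

def gpd_return_level(u, sigma, xi, rate_per_year, return_period_years):
    # Peaks-over-threshold return level for a generalised Pareto fit:
    #   z_T = u + (sigma/xi) * ((rate * T)^xi - 1)   for xi != 0,
    #   z_T = u + sigma * ln(rate * T)               for xi == 0,
    # where rate is the expected number of declustered exceedances per year.
    m = rate_per_year * return_period_years
    if abs(xi) < 1e-12:
        return u + sigma * math.log(m)
    return u + (sigma / xi) * (m ** xi - 1.0)

# 100-year significant wave height with a hypothetical 99.8th-percentile
# threshold of 12 m, scale 1.5 m, shape -0.1, and ~7 peaks per year:
print(round(gpd_return_level(12.0, 1.5, -0.1, 7.0, 100.0), 2))
```

In the paper these parameters themselves carry spatial structure through the latent process, so return-level maps come out smoother than pointwise maximum likelihood fits.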
Inference in HIV dynamics models via hierarchical likelihood
Commenges, D; Putter, H; Thiebaut, R
2010-01-01
HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODEs) that have no analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of the multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased, but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.
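A minimal example of the kind of ODE system involved is the classical target-cell-limited HIV model, here integrated with a simple Euler scheme; all parameter values are illustrative, not estimates from the paper.

```python
def hiv_model_euler(T0, I0, V0, lam, d, beta, delta, p, c, dt, steps):
    # Target cells T, infected cells I, free virus V, integrated by Euler:
    #   dT/dt = lam - d*T - beta*T*V
    #   dI/dt = beta*T*V - delta*I
    #   dV/dt = p*I - c*V
    T, I, V = T0, I0, V0
    for _ in range(steps):
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    return T, I, V

# Ten simulated days with illustrative parameters (basic reproduction
# number below 1, so the virus decays from its initial level):
T, I, V = hiv_model_euler(T0=100.0, I0=0.0, V0=1e-3, lam=10.0, d=0.1,
                          beta=1e-5, delta=0.5, p=100.0, c=5.0,
                          dt=0.01, steps=1000)
print(round(T, 2), round(V, 8))
```

The mixed-effects difficulty arises when parameters such as beta and delta vary randomly across patients, which is what the h-likelihood approach addresses.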
[A medical image semantic modeling based on hierarchical Bayesian networks].
Lin, Chunyi; Ma, Lihong; Yin, Junxun; Chen, Jianyu
2009-04-01
A semantic modeling approach for medical image semantic retrieval based on hierarchical Bayesian networks was proposed, tailored to the characteristics of medical images. It uses GMMs (Gaussian mixture models) to map low-level image features into object semantics with probabilities, then captures high-level semantics by fusing these object semantics with a Bayesian network, thereby building a multi-layer medical image semantic model that enables automatic image annotation and semantic retrieval using keywords at different semantic levels. To validate the method, we built a multi-level semantic model from a small set of astrocytoma MRI (magnetic resonance imaging) samples, in order to extract the semantics of astrocytoma malignancy grade. Experimental results show that this is a superior approach.
Item Response Theory Using Hierarchical Generalized Linear Models
Directory of Open Access Journals (Sweden)
Hamdollah Ravand
2015-03-01
Full Text Available Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF), and capture both local item and person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues, with applications serving as add-ons to simulation studies with a methodological focus. Although the methodological direction was necessary as a first step to show how MLMs can be utilized and extended to model item response data, the emphasis needs to shift towards providing evidence of how applications of MLMs in educational testing can deliver the benefits that have been promised. The present study uses foreign language reading comprehension data to illustrate the application of hierarchical generalized linear models to estimate person and item parameters, differential item functioning (DIF), and local person dependence in a three-level model.
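The simplest IRT model expressible as a hierarchical GLM is the Rasch (1PL) model: a logistic regression of response correctness on the difference between person ability and item difficulty.

```python
import math

def rasch_p(theta, b):
    # Rasch (1PL) model with a logit link:
    # P(correct) = logistic(person ability theta - item difficulty b).
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An examinee whose ability equals the item difficulty answers
# correctly with probability 0.5:
print(rasch_p(0.0, 0.0))
```

In the HGLM framing, theta and b become random effects at the person and item levels, which is what allows DIF and dependence structures to be added as further levels.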
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Directory of Open Access Journals (Sweden)
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
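The hierarchical (nested) logit structure referred to above computes choice probabilities from within-nest shares and an inclusive-value (logsum) term. A sketch with hypothetical utilities, nests, and scale parameter:

```python
import math

def nested_logit_probs(utilities, nests, mu):
    # Two-level nested logit: within-nest shares use utilities scaled by the
    # nest parameter mu (0 < mu <= 1); nest probabilities use the inclusive
    # value (logsum). All values below are hypothetical.
    logsums = {}
    for nest, alts in nests.items():
        logsums[nest] = mu * math.log(sum(math.exp(utilities[a] / mu) for a in alts))
    denom = sum(math.exp(v) for v in logsums.values())
    probs = {}
    for nest, alts in nests.items():
        p_nest = math.exp(logsums[nest]) / denom
        within = sum(math.exp(utilities[a] / mu) for a in alts)
        for a in alts:
            probs[a] = p_nest * math.exp(utilities[a] / mu) / within
    return probs

u = {"car": 1.0, "bus": 0.4, "metro": 0.6}
p = nested_logit_probs(u, {"auto": ["car"], "transit": ["bus", "metro"]}, mu=0.5)
print({k: round(v, 3) for k, v in p.items()})
```

Under the maximum entropy derivation in the paper, the multipliers of the entropy program play the role of these utility coefficients.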
A hierarchical model of the evolution of human brain specializations.
Barrett, H Clark
2012-06-26
The study of information-processing adaptations in the brain is controversial, in part because of disputes about the form such adaptations might take. Many psychologists assume that adaptations come in two kinds, specialized and general-purpose. Specialized mechanisms are typically thought of as innate, domain-specific, and isolated from other brain systems, whereas generalized mechanisms are developmentally plastic, domain-general, and interactive. However, if brain mechanisms evolve through processes of descent with modification, they are likely to be heterogeneous, rather than coming in just two kinds. They are likely to be hierarchically organized, with some design features widely shared across brain systems and others specific to particular processes. Also, they are likely to be largely developmentally plastic and interactive with other brain systems, rather than canalized and isolated. This article presents a hierarchical model of brain specialization, reviewing evidence for the model from evolutionary developmental biology, genetics, brain mapping, and comparative studies. Implications for the search for uniquely human traits are discussed, along with ways in which conventional views of modularity in psychology may need to be revised.
Study of hierarchical federation architecture using multi-resolution modeling
Institute of Scientific and Technical Information of China (English)
HAO Yan-ling; SHEN Dong-hui; QIAN Hua-ming; DENG Ming-hui
2004-01-01
This paper aims at solving a problem arising in complex system simulation, where a specific functional federation is coupled with other simulation systems; in other words, communication internal to the system may be received by other federates participating in the united simulation. To ensure the unitary character of the simulation system, a hierarchical federation architecture (HFA) is adopted. Considering also that, in real situations, federates in a complicated simulation system can be simplified to an extent, a multi-resolution modeling (MRM) method is introduced to implement the design of the hierarchical federation. By utilizing the multiple resolution entity (MRE) modeling approach, MREs for the federates are designed. When training simulation at a different level is required, the appropriate MREs at the corresponding layers can be invoked. The design method realizes the reusability of the simulation system, reduces simulation complexity, and lowers the system simulation cost (SC). Taking a submarine voyage training simulator (SVTS) as an instance, an HFA for a submarine is constructed in this paper, which demonstrates the feasibility of the studied approach.
A stochastic model for detecting overlapping and hierarchical community structure.
Directory of Open Access Journals (Sweden)
Xiaochun Cao
Full Text Available Community detection is a fundamental problem in the analysis of complex networks. Recently, many researchers have concentrated on the detection of overlapping communities, where a vertex may belong to more than one community. However, most current methods require the number (or the size) of the communities as a priori information, which is usually unavailable in real-world networks. Thus, a practical algorithm should not only find the overlapping community structure, but also automatically determine the number of communities. Furthermore, it is preferable if this method is able to reveal the hierarchical structure of networks as well. In this work, we first propose a generative model that employs a nonnegative matrix factorization (NMF) formulation with an l(2,1)-norm regularization term, balanced by a resolution parameter. The NMF naturally provides an overlapping community structure by assigning soft membership variables to each vertex; the l(2,1) regularization term is a group-sparsity technique that can automatically determine the number of communities by penalizing too many nonempty communities; and the resolution parameter enables us to explore the hierarchical structure of networks. Thereafter, we derive the multiplicative update rule for learning the model parameters and offer a proof of its correctness. Finally, we test our approach on a variety of synthetic and real-world networks, and compare it with some state-of-the-art algorithms. The results validate the superior performance of our new method.
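The NMF building block of the model can be sketched with the standard Lee-Seung multiplicative updates for V ≈ WH; the l(2,1) group-sparsity term and resolution parameter of the paper are omitted here, and the toy matrix is invented for illustration.

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    # Multiplicative-update NMF (Lee & Seung) minimizing ||V - W H||_F
    # with W, H >= 0. Rows of W act as soft community memberships.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two overlapping "communities": the middle row belongs to both.
V = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
W, H = nmf(V, k=2)
print(np.round(W @ H, 2))
```

The paper's l(2,1) penalty would additionally drive entire columns of W to zero, which is how the number of communities gets selected automatically.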
The Hierarchical Dirichlet Process Hidden Semi-Markov Model
Johnson, Matthew J
2012-01-01
There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the traditional HMM. However, in many settings the HDP-HMM's strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markovianity, which has been developed in the parametric setting to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration HDP-HSMM and develop posterior sampling algorithms for efficient inference in both the direct-assignment and weak-limit approximation settings. We demonstrate the utility of the model and our inference methods on synthetic data as well as experiments on a speaker diarization problem and an example of learning the patterns in Morse code.
Learning Hierarchical User Interest Models from Web Pages
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2006-01-01
We propose an algorithm for learning hierarchical user interest models from the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum: in some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so the representation reflects a user's interests more faithfully. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models, and prune a user interest tree when the number of its nodes exceeds a given value. Finally, we show experimental results from a Chinese Web site.
Multi-mode clustering model for hierarchical wireless sensor networks
Hu, Xiangdong; Li, Yongfu; Xu, Huifen
2017-03-01
The topology management, i.e., cluster maintenance, of wireless sensor networks (WSNs) is still a challenge due to their numerous nodes, diverse application scenarios, limited resources, and complex dynamics. To address this issue, a multi-mode clustering model (M2CM) is proposed in this study to maintain the clusters of hierarchical WSNs. In particular, unlike the traditional time-triggered model based on whole-network, periodic operation, the M2CM is based on local, event-triggered operations. In addition, an adaptive local maintenance algorithm is designed for broken clusters in the WSNs, using the spatial-temporal demand changes accordingly. Numerical experiments are performed using the NS2 network simulation platform. Results validate the effectiveness of the proposed model with respect to network maintenance costs, node energy consumption and transmitted data, as well as network lifetime.
Modeling evolutionary dynamics of epigenetic mutations in hierarchically organized tumors.
Directory of Open Access Journals (Sweden)
Andrea Sottoriva
2011-05-01
Full Text Available The cancer stem cell (CSC) concept is a highly debated topic in cancer research. While experimental evidence in favor of the cancer stem cell theory is apparently abundant, the results are often criticized as being difficult to interpret. An important reason for this is that most experimental data that support this model rely on transplantation studies. In this study we use a novel cellular Potts model to elucidate the dynamics of established malignancies that are driven by a small subset of CSCs. Our results demonstrate that epigenetic mutations that occur during mitosis display highly altered dynamics in CSC-driven malignancies compared to a classical, non-hierarchical model of growth. In particular, the heterogeneity observed in CSC-driven tumors is considerably higher. We speculate that this feature could be used in combination with epigenetic (methylation) sequencing studies of human malignancies to prove or refute the CSC hypothesis in established tumors without the need for transplantation. Moreover, our tumor growth simulations indicate that CSC-driven tumors display evolutionary features that can be considered beneficial during tumor progression. Besides an increased heterogeneity, they also exhibit properties that allow clones to escape from local fitness peaks. This leads to more aggressive phenotypes in the long run and makes the neoplasm more adaptable to stringent selective forces such as cancer treatment. Indeed, when therapy is applied, the clone landscape of the regrown tumor is more aggressive with respect to the primary tumor, whereas the classical model demonstrated similar patterns before and after therapy. Understanding these often counter-intuitive fundamental properties of (non-)hierarchically organized malignancies is a crucial step in validating the CSC concept as well as providing insight into the therapeutical consequences of this model.
Research and application of hierarchical model for multiple fault diagnosis
Institute of Scientific and Technical Information of China (English)
An Ruoming; Jiang Xingwei; Song Zhengji
2005-01-01
The computational complexity of multiple-fault diagnosis in complex systems has long been a puzzle. Based on the well-known Mozetic's approach, a novel hierarchical model-based diagnosis methodology is put forward to improve the efficiency of multi-fault recognition and localization. Structural abstraction and weighted fault propagation graphs are combined to build the diagnosis model. The graphs have weighted arcs with fault propagation probabilities and propagation strengths. For solving the problem of coupled faults, two diagnosis strategies are used: one is the Lagrangian relaxation and primal heuristic algorithms; the other is the method of propagation strength. Finally, an applied example shows the applicability of the approach, and experimental results are given to show the superiority of the presented technique.
Hierarchical population model with a carrying capacity distribution
Indekeu, J O
2002-01-01
A time- and space-discrete model for the growth of a rapidly saturating local biological population $N(x,t)$ is derived from a hierarchical random deposition process previously studied in statistical physics. Two biologically relevant parameters, the probabilities of birth, $B$, and of death, $D$, determine the carrying capacity $K$. Due to the randomness the population depends strongly on position, $x$, and there is a distribution of carrying capacities, $\\Pi (K)$. This distribution has self-similar character owing to the imposed hierarchy. The most probable carrying capacity and its probability are studied as a function of $B$ and $D$. The effective growth rate decreases with time, roughly as in a Verhulst process. The model is possibly applicable, for example, to bacteria forming a "towering pillar" biofilm. The bacteria divide on randomly distributed nutrient-rich regions and are exposed to random local bactericidal agent (antibiotic spray). A gradual overall temperature change away from optimal growth co...
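The "rapidly saturating" local growth described above behaves roughly like a discrete Verhulst update with a position-dependent carrying capacity $K$; a sketch with illustrative parameter values (the paper's $K$ is itself random, drawn from the hierarchical distribution $\Pi(K)$):

```python
def verhulst_step(n, r, K):
    # Discrete Verhulst (logistic) update: n -> n + r*n*(1 - n/K).
    return n + r * n * (1.0 - n / K)

def grow(n0, r, K, steps):
    # Iterate the update; for moderate r the population saturates at K.
    n = n0
    for _ in range(steps):
        n = verhulst_step(n, r, K)
    return n

# Two sites with different (randomly assigned) carrying capacities
# saturate at their respective K values:
print(round(grow(1.0, 0.5, 100.0, 60), 2), round(grow(1.0, 0.5, 25.0, 60), 2))
```

In the hierarchical deposition picture, each position $x$ would draw its own $K$ from $\Pi(K)$, producing the strong spatial dependence the abstract describes.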
Hierarchical decision modeling essays in honor of Dundar F. Kocaoglu
2016-01-01
This volume, developed in honor of Dr. Dundar F. Kocaoglu, aims to demonstrate the applications of the Hierarchical Decision Model (HDM) in different sectors and its capacity in decision analysis. It is comprised of essays from noted scholars, academics and researchers of engineering and technology management around the world. This book is organized into four parts: Technology Assessment, Strategic Planning, National Technology Planning and Decision Making Tools. Dr. Dundar F. Kocaoglu is one of the pioneers of multiple decision models using hierarchies, and creator of the HDM in decision analysis. HDM is a mission-oriented method for evaluation and/or selection among alternatives. A wide range of alternatives can be considered, including but not limited to, different technologies, projects, markets, jobs, products, cities to live in, houses to buy, apartments to rent, and schools to attend. Dr. Kocaoglu’s approach has been adopted for decision problems in many industrial sectors, including electronics rese...
Bayesian hierarchical modelling of weak lensing - the golden goal
Heavens, Alan; Jaffe, Andrew; Hoffmann, Till; Kiessling, Alina; Wandelt, Benjamin
2016-01-01
To accomplish correct Bayesian inference from weak lensing shear data requires a complete statistical description of the data. The natural framework for this is a Bayesian Hierarchical Model, which divides the chain of reasoning into component steps. Starting with a catalogue of shear estimates in tomographic bins, we build a model that allows us to sample simultaneously from the underlying tomographic shear fields and the relevant power spectra (E-mode, B-mode, and E-B, for auto- and cross-power spectra). The procedure deals easily with masked data and intrinsic alignments. Using Gibbs sampling and messenger fields, we show with simulated data that the large (over 67,000-dimensional) parameter space can be efficiently sampled and the full joint posterior probability density function for the parameters can feasibly be obtained. The method correctly recovers the underlying shear fields and all of the power spectra, including at levels well below the shot noise.
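The Gibbs-sampling pattern described above can be illustrated, at toy scale, on a two-level normal model; the group structure, sample sizes, and priors below are assumptions chosen for brevity, not the lensing model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical model with G groups:
#   y_gj ~ N(theta_g, 1),  theta_g ~ N(mu, 1),  flat prior on mu.
# Gibbs sampling alternates conditional draws of (theta_1..theta_G) and mu,
# the same alternation used (at vastly larger scale) for shear fields
# and power spectra.
G, n = 5, 50
true_theta = rng.normal(3.0, 1.0, G)
y = rng.normal(true_theta[:, None], 1.0, (G, n))

mu, mu_samples = 0.0, []
for it in range(3000):
    # theta_g | y, mu  ~  N((n*ybar_g + mu)/(n+1), 1/(n+1))
    var_t = 1.0 / (n + 1.0)
    theta = rng.normal(var_t * (n * y.mean(axis=1) + mu), np.sqrt(var_t))
    # mu | theta  ~  N(mean(theta), 1/G)
    mu = rng.normal(theta.mean(), np.sqrt(1.0 / G))
    if it >= 1000:                      # discard burn-in
        mu_samples.append(mu)

print(round(float(np.mean(mu_samples)), 2))
```

The posterior mean of the hypermean tracks the overall sample mean, as expected for this conjugate toy case.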
Influence of rainfall observation network on model calibration and application
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-01-01
Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Second, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation field. The simulated hydrographs obtained in the three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information fails with sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.
Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T
2017-07-01
Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
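A minimal sketch of the decision rule described above, assuming hypothetical loss functions and a Gaussian posterior for the safety signal (the paper's actual losses and posteriors differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of a hospital's safety signal
# (observed-minus-expected AMDE rate); in the paper these come
# from a fitted hierarchical Bayesian model.
signal = rng.normal(0.02, 0.01, 10000)

# Illustrative loss functions: cost of each regulatory action
# as a function of the true signal s (shapes are assumptions).
actions = {
    "no_action": lambda s: np.where(s > 0, 100 * s, 0.0),        # miss cost
    "warn":      lambda s: 1.0 + np.where(s > 0, 20 * s, 0.5),
    "recall":    lambda s: 5.0 + np.where(s > 0, 0.0, 2.0),
}

# Posterior (Bayes) risk = expected loss under the posterior;
# the preferred action minimizes it.
risk = {a: float(loss(signal).mean()) for a, loss in actions.items()}
best = min(risk, key=risk.get)
print(best)
```

With a posterior concentrated on a small positive signal, the intermediate action wins: inaction risks the large miss cost, while a recall's fixed cost is not justified.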
Note on the equivalence of hierarchical variational models and auxiliary deep generative models
Brümmer, Niko
2016-01-01
This note compares two recently published machine learning methods for constructing flexible, but tractable families of variational hidden-variable posteriors. The first method, called "hierarchical variational models" enriches the inference model with an extra variable, while the other, called "auxiliary deep generative models", enriches the generative model instead. We conclude that the two methods are mathematically equivalent.
Cahill, Niamh; Kemp, Andrew C.; Horton, Benjamin P.; Parnell, Andrew C.
2016-02-01
We present a Bayesian hierarchical model for reconstructing the continuous and dynamic evolution of relative sea-level (RSL) change with quantified uncertainty. The reconstruction is produced from biological (foraminifera) and geochemical (δ13C) sea-level indicators preserved in dated cores of salt-marsh sediment. Our model comprises three modules: (1) a new Bayesian transfer function (B-TF) for the calibration of biological indicators into tidal elevation, which is flexible enough to formally accommodate additional proxies; (2) an existing chronology developed using the Bchron age-depth model; and (3) an existing Errors-In-Variables integrated Gaussian process (EIV-IGP) model for estimating rates of sea-level change. Our approach is illustrated using a case study of Common Era sea-level variability from New Jersey, USA. We develop a new B-TF using foraminifera, with and without the additional δ13C proxy, and compare our results to those from a widely used weighted-averaging transfer function (WA-TF). The formal incorporation of a second proxy into the B-TF model results in smaller vertical uncertainties and improved accuracy for reconstructed RSL. The vertical uncertainty from the multi-proxy B-TF is ~28% smaller on average compared to the WA-TF. When evaluated against historic tide-gauge measurements, the multi-proxy B-TF most accurately reconstructs the RSL changes observed in the instrumental record (mean square error = 0.003 m2). The Bayesian hierarchical model provides a single, unifying framework for reconstructing and analyzing sea-level change through time. This approach is suitable for reconstructing other paleoenvironmental variables (e.g., temperature) using biological proxies.
Calibrating Car-Following Model Considering Measurement Errors
Directory of Open Access Journals (Sweden)
Chang-qiao Shao
2013-01-01
Full Text Available The car-following model has important applications in traffic and safety engineering. To enhance the accuracy of such models in predicting the behavior of individual drivers, considerable studies strive to improve model calibration technologies. However, microscopic car-following models are generally calibrated using macroscopic traffic data, ignoring measurement errors-in-variables, which leads to unreliable and erroneous conclusions. This paper aims to develop a technology to calibrate the well-known Van Aerde model. In particular, the effect of measurement errors-in-variables on the accuracy of the estimates is considered. In order to calibrate the model using microscopic data, a new parameter estimation method, named the two-step approach, is proposed. The results show that the modified Van Aerde model is, to a certain extent, more reliable than the generic model.
Improve Query Performance On Hierarchical Data. Adjacency List Model Vs. Nested Set Model
Directory of Open Access Journals (Sweden)
Cornelia Gyorödi
2016-04-01
Full Text Available Hierarchical data are found in a variety of database applications, including content management categories, forums, business organization charts, and product categories. In this paper, we examine two models for dealing with hierarchical data in relational databases, namely the adjacency list model and the nested set model. We analysed these models by executing various operations and queries in a web application for the management of categories, highlighting the results obtained during performance comparison tests. The purpose of this paper is to present the advantages and disadvantages of using the adjacency list model compared to the nested set model in a relational database integrated into an application for the management of categories, which needs to manipulate a large amount of hierarchical data.
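The nested set model's main advantage, subtree retrieval as a single range query with no recursion, can be sketched with an in-memory SQLite table; the category tree and (lft, rgt) bounds below are illustrative:

```python
import sqlite3

# Minimal nested-set example: each category stores (lft, rgt) bounds,
# so fetching a whole subtree is one range query, while the adjacency
# list model would need a recursive walk over parent_id links.
# Tree: Electronics(1,8) -> Phones(2,5) -> Smartphones(3,4); TVs(6,7)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE category (name TEXT, lft INT, rgt INT)")
con.executemany(
    "INSERT INTO category VALUES (?,?,?)",
    [("Electronics", 1, 8), ("Phones", 2, 5),
     ("Smartphones", 3, 4), ("TVs", 6, 7)],
)

# Descendants of Phones: every node whose interval lies inside (2, 5).
rows = con.execute(
    "SELECT name FROM category WHERE lft > 2 AND rgt < 5 ORDER BY lft"
).fetchall()
print([r[0] for r in rows])   # -> ['Smartphones']
```

The trade-off the paper measures follows directly: reads are cheap, but inserting a node forces updates to the (lft, rgt) bounds of many rows.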
GSMNet: A Hierarchical Graph Model for Moving Objects in Networks
Directory of Open Access Journals (Sweden)
Hengcai Zhang
2017-03-01
Full Text Available Existing data models for moving objects in networks are often limited in flexibly controlling the granularity of network representation and the cost of location updates, and do not encompass semantic information such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap left by traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. A set of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic information. We then develop a prototype system with the native graph database system Neo4j to implement the proposed GSMNet model. In the experiment, we conduct a performance evaluation using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages with regard to representing semantic information and controlling the cost of location updates.
A Bayesian hierarchical model for wind gust prediction
Friederichs, Petra; Oesting, Marco; Schlather, Martin
2014-05-01
A postprocessing method for ensemble wind gust forecasts given by a mesoscale limited-area numerical weather prediction (NWP) model is presented, which is based on extreme value theory. A process layer for the parameters of a generalized extreme value distribution (GEV) is introduced using a Bayesian hierarchical model (BHM). Incorporating the information of the COSMO-DE forecasts, the process parameters model the spatial response surfaces of the GEV parameters as Gaussian random fields. The spatial BHM provides area-wide forecasts of wind gusts in terms of a conditional GEV. It models the marginal distribution of the spatial gust process and provides not only forecasts of the conditional GEV at locations without observations, but also uncertainty information about the estimates. A disadvantage of the BHM is that it assumes conditionally independent observations. In order to incorporate the dependence between gusts at neighboring locations as well as the spatial random fields of observed and forecasted maximal wind gusts, we propose to model them jointly by a bivariate Brown-Resnick process.
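The GEV building block of such a model can be sketched at a single station with SciPy's genextreme; in the paper the GEV parameters vary spatially as Gaussian random fields, whereas here they are fixed scalars chosen for illustration:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)

# Station-level sketch: fit a GEV to simulated daily-maximum gusts (m/s).
# SciPy's shape parameter c equals minus the usual GEV shape xi.
shape_c, loc, scale = -0.1, 15.0, 3.0          # assumed "true" values
gusts = genextreme.rvs(shape_c, loc=loc, scale=scale, size=2000,
                       random_state=rng)

# Maximum-likelihood fit recovers (shape, location, scale)
c_hat, loc_hat, scale_hat = genextreme.fit(gusts)

# An illustrative high quantile (return-level-style summary)
q99 = genextreme.ppf(0.99, c_hat, loc=loc_hat, scale=scale_hat)
print(round(loc_hat, 1), round(q99, 1))
```

The hierarchical layer of the paper would replace the scalar (loc, scale, shape) with spatial fields and add priors, but the marginal GEV machinery is the same.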
Hierarchical modeling and its numerical implementation for layered thin elastic structures
Energy Technology Data Exchange (ETDEWEB)
Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)
2017-05-15
Thin elastic structures such as beam- and plate-like structures and laminates are characterized by their small thickness, which leads to classical plate and laminate theories in which the displacement fields through the thickness are assumed to be linear or higher-order polynomials. These classical theories are either insufficient to represent the complex stress variation through the thickness or may encounter an accuracy-computational cost dilemma. In order to overcome the inherent problems of classical theories, the concept of hierarchical modeling has emerged. In hierarchical modeling, hierarchical models with different model levels are selected and combined within a structure domain, in order to make the modeling error distributed as uniformly as possible throughout the problem domain. The purpose of the current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composites. To this end, the hierarchical models are constructed and the hierarchical modeling is implemented by selectively adjusting the level of the hierarchical models. In addition, the major characteristics of the hierarchical models are investigated through numerical experiments.
Impact of data quality and quantity and the calibration procedure on crop growth model calibration
Seidel, Sabine J.; Werisch, Stefan
2014-05-01
Crop growth models are a commonly used tool for impact assessment of climate variability and climate change on crop yields and water use. Process-based crop models rely on algorithms that approximate the main physiological plant processes by a set of equations containing several calibration parameters as well as basic underlying assumptions. It is well recognized that model calibration is essential to improve the accuracy and reliability of model predictions. However, model calibration and validation are often hindered by a limited quantity and quality of available data. Recent studies suggest that crop model parameters can only be derived from field experiments in which plant growth and development processes have been measured. To achieve a reliable prediction of crop growth under irrigation or drought stress, the correct characterization of the whole soil-plant-atmosphere system is essential. In this context, the accurate simulation of crop development, yield and the soil water dynamics plays an important role. In this study we aim to investigate the importance of a site- and cultivar-specific model calibration based on experimental data using the SVAT model Daisy. We investigate to which extent different data sets and different parameter estimation procedures affect yield estimates, irrigation water demand and the soil water dynamics in particular. The comprehensive experimental data were derived from an experiment conducted in Germany where five irrigation regimes were imposed on cabbage. Data collection included continuous measurements of soil tension and soil water content in two plots at three depths, weekly measurements of LAI, plant heights, leaf-N-content, stomatal conductivity, biomass partitioning, rooting depth, as well as harvested yields and duration of growing period. Three crop growth calibration strategies were compared: (1) manual calibration based on yield and duration of growing period, (2) manual calibration based on yield
Evolutionary optimization of a hierarchical object recognition model.
Schneider, Georg; Wersing, Heiko; Sendhoff, Bernhard; Körner, Edgar
2005-06-01
A major problem in designing artificial neural networks is the proper choice of the network architecture. Especially for vision networks classifying three-dimensional (3-D) objects this problem is very challenging, as these networks are necessarily large and therefore the search space for defining the needed networks is of a very high dimensionality. This strongly increases the chances of obtaining only suboptimal structures from standard optimization algorithms. We tackle this problem in two ways. First, we use biologically inspired hierarchical vision models to narrow the space of possible architectures and to reduce the dimensionality of the search space. Second, we employ evolutionary optimization techniques to determine optimal features and nonlinearities of the visual hierarchy. Here, we especially focus on higher order complex features in higher hierarchical stages. We compare two different approaches to perform an evolutionary optimization of these features. In the first setting, we directly code the features into the genome. In the second setting, in analogy to an ontogenetical development process, we suggest the new method of an indirect coding of the features via an unsupervised learning process, which is embedded into the evolutionary optimization. In both cases the processing nonlinearities are encoded directly into the genome and are thus subject to optimization. The fitness of the individuals for the evolutionary selection process is computed by measuring the network classification performance on a benchmark image database. Here, we use a nearest-neighbor classification approach, based on the hierarchical feature output. We compare the found solutions with respect to their ability to generalize. We differentiate between a first- and a second-order generalization. The first-order generalization denotes how well the vision system, after evolutionary optimization of the features and nonlinearities using a database A, can classify previously unseen test
On the unnecessary ubiquity of hierarchical linear modeling.
McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D
2017-03-01
In psychology and the behavioral sciences generally, the use of the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make a smaller number of assumptions and are interpreted identically to single-level methods with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
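The cluster-robust alternative discussed above can be sketched with a hand-rolled sandwich estimator; the simulated clustered data and effect sizes are assumptions chosen to make the clustering effect visible:

```python
import numpy as np

rng = np.random.default_rng(3)

# OLS with cluster-robust (sandwich) standard errors: a single-level
# alternative to HLM that adjusts SEs for within-cluster correlation
# without modeling random effects.
G, n = 40, 25                         # 40 clusters of 25 observations
g = np.repeat(np.arange(G), n)
u = rng.normal(0, 1, G)[g]            # shared cluster effect
x = rng.normal(0, 1, G * n)
y = 1.0 + 0.5 * x + u + rng.normal(0, 1, G * n)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Sandwich: (X'X)^-1 [sum_g X_g'e_g e_g'X_g] (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for j in range(G):
    s = X[g == j].T @ resid[g == j]
    meat += np.outer(s, s)
cluster_se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

# Naive OLS SEs that ignore clustering, for comparison
naive_se = np.sqrt(np.diag(XtX_inv) * resid.var(ddof=2))
print(cluster_se[0], naive_se[0])
```

The point estimates are interpreted exactly as in single-level regression; only the uncertainty is corrected, which is the trade-off the article weighs against HLM.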
Hierarchical Model Predictive Control for Plug-and-Play Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2012-01-01
This chapter deals with hierarchical model predictive control (MPC) of distributed systems. A three level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonom...
Identification of constitutive equation in hierarchical multiscale modelling of cup drawing process
Gawad, J.; Van Bael, A.; Eyckens, P.; Samaey, G.; Van Houtte, P.; Roose, D.
2011-08-01
In this paper we discuss extensions to a hierarchical multi-scale model (HMS) of cold sheet forming processes. The HMS model is capable of predicting changes in plastic anisotropy due to the evolution of crystallographic textures. The ALAMEL polycrystal plasticity model is employed to predict the texture evolution during plastic deformation. The same model acts as a multilevel model and provides "virtual experiments" for the calibration of an analytical constitutive law. Plastic anisotropy is described by means of the Facet method, which is able to reproduce the plastic potential in the entire strain rate space. The paper presents new strategies for identification of the Facet expression that focus on improving its accuracy in the parts of the plastic potential surface that are more extensively used by the macroscopic FE model and therefore need to be reproduced more accurately. In this work we also evaluate the applicability of identification methods that (1) rely exclusively on the plastic potential or (2) can also take into consideration the deviatoric stresses derived from the Facet expression. It is shown that both methods provide Facet expressions that correctly approximate the plastic anisotropy predicted by the multilevel ALAMEL model.
A Bayesian hierarchical model for accident and injury surveillance.
MacNab, Ying C
2003-01-01
This article presents a recent study which applies Bayesian hierarchical methodology to model and analyse accident and injury surveillance data. A hierarchical Poisson random effects spatio-temporal model is introduced, and an analysis of inter-regional variations and regional trends in hospitalisations due to motor vehicle accident injuries to boys aged 0-24 in the province of British Columbia, Canada, is presented. The objective of this article is to illustrate how the modelling technique can be implemented as part of an accident and injury surveillance and prevention system where transportation and/or health authorities may routinely examine accidents, injuries, and hospitalisations to target high-risk regions for prevention programs, to evaluate prevention strategies, and to assist in health planning and resource allocation. The innovation of the methodology is its ability to uncover and highlight important underlying structure of the data. Between 1987 and 1996, the British Columbia hospital separation registry registered 10,599 motor vehicle traffic injury related hospitalisations among boys aged 0-24 who resided in British Columbia, of which the majority (89%) occurred to boys aged 15-24. The injuries were aggregated by three age groups (0-4, 5-14, and 15-24), 20 health regions (based on place of residence), and 10 calendar years (1987 to 1996), and the corresponding mid-year population estimates were used as the 'at risk' population. An empirical Bayes inference technique using penalised quasi-likelihood estimation was implemented to model both rates and counts, with spline smoothing accommodating non-linear temporal effects. The results show that (a) crude rates and ratios at the health region level are unstable, (b) the models with spline smoothing enable us to explore possible shapes of injury trends at both the provincial level and the regional level, and (c) the fitted models provide a wealth of information about the patterns (both over space and time
Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model
Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai
2017-01-01
Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694
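The threshold-based stage of such a multiphase model can be sketched as follows; the sampling rate and the free-fall/impact thresholds are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

# Threshold stage of a multiphase fall model: a fall shows up in the
# accelerometer magnitude as free fall (well below 1 g), then an
# impact spike, then rest near 1 g.
FS = 50                        # Hz, assumed sampling rate
FREE_FALL_G, IMPACT_G = 0.4, 2.5   # assumed thresholds (in g)

def detect_fall(acc_mag):
    """Return True if a free-fall sample is followed within 1 s by an impact."""
    free = np.flatnonzero(acc_mag < FREE_FALL_G)
    for i in free:
        if np.any(acc_mag[i:i + FS] > IMPACT_G):   # 1-second lookahead
            return True
    return False

t = np.arange(0, 3, 1 / FS)
adl = 1.0 + 0.2 * np.sin(2 * np.pi * 1.5 * t)      # walking-like activity
fall = adl.copy()
fall[60:70] = 0.1                                   # free-fall phase
fall[70:73] = 3.2                                   # impact spike

print(detect_fall(adl), detect_fall(fall))          # -> False True
```

The knowledge-based stage in the paper then inspects the phase sequence (free fall, impact, rest) to resolve the ambiguous cases this simple rule would misclassify.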
Model Calibration of Exciter and PSS Using Extended Kalman Filter
Energy Technology Data Exchange (ETDEWEB)
Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu
2012-07-26
Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time-consuming, and could yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
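The recursive prediction-correction loop can be sketched on a toy first-order model standing in for the exciter dynamics; the model, noise covariances, and tuning below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(4)

# EKF joint state/parameter estimation on a toy model x' = -a*x + u:
# the unknown parameter a is appended to the state and recovered from
# noisy measurements of x via the innovation (simulation-measurement
# mismatch) at every time step.
dt, a_true, u = 0.01, 2.0, 1.0
steps = 3000

x, ys = 0.0, []                        # simulate "field test" data
for _ in range(steps):
    x += dt * (-a_true * x + u)
    ys.append(x + rng.normal(0, 0.01))

z = np.array([0.0, 0.5])               # augmented state [x, a], poor initial a
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])              # process noise (tuning assumption)
R = 1e-4                               # measurement noise variance
H = np.array([[1.0, 0.0]])

for y in ys:
    # Predict: propagate state; a follows a random walk (no dynamics)
    xk, ak = z
    z = np.array([xk + dt * (-ak * xk + u), ak])
    F = np.array([[1 - dt * ak, -dt * xk], [0.0, 1.0]])  # Jacobian
    P = F @ P @ F.T + Q
    # Correct: weight the innovation by the Kalman gain
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S
    z = z + K.flatten() * (y - z[0])
    P = (np.eye(2) - K @ H) @ P

print(round(z[1], 2))
```

The estimated parameter converges toward the true value as innovations accumulate, which is the mechanism the paper applies to exciter and PSS parameters.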
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model, GR4J, by coherently assimilating the uncertainties from the...
A note on adding and deleting edges in hierarchical log-linear models
DEFF Research Database (Denmark)
Edwards, David
2012-01-01
The operations of edge addition and deletion for hierarchical log-linear models are defined, and polynomial-time algorithms for the operations are given.
Optimum Binary Search Trees on the Hierarchical Memory Model
Thite, Shripad
2008-01-01
The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a non-uniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by m(a), where m: N -> N is the memory cost function, and the h distinct values of m model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h=2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter m for the HMM increases the combinatorial comp...
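For reference, the classic RAM-model dynamic program for an optimum BST looks as follows; the HMM variant studied above additionally weights each node access by the memory cost m(a) of its storage address, which this sketch omits:

```python
# Classic O(n^3) optimal-BST dynamic program over successful-search
# probabilities only (no unsuccessful searches), RAM cost model.
def optimal_bst_cost(p):
    """p[i] = access probability of key i; returns minimum expected cost,
    counting the root as depth 1."""
    n = len(p)
    cost = [[0.0] * n for _ in range(n)]   # cost[i][j]: optimum for keys i..j
    w = [[0.0] * n for _ in range(n)]      # w[i][j]: total weight of keys i..j
    for i in range(n):
        cost[i][i] = w[i][i] = p[i]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j]
            best = float("inf")
            for r in range(i, j + 1):      # try each key as subtree root
                left = cost[i][r - 1] if r > i else 0.0
                right = cost[r + 1][j] if r < j else 0.0
                best = min(best, left + right)
            cost[i][j] = best + w[i][j]    # every key moves one level deeper
    return cost[0][n - 1]

print(round(optimal_bst_cost([0.1, 0.2, 0.4, 0.3]), 2))   # -> 1.7
```

In the HMM setting, the "+ w[i][j]" term would instead depend on which memory level the subtree root is stored in, which is what couples tree shape to data layout.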
A Biological Hierarchical Model Based Underwater Moving Object Detection
Directory of Open Access Journals (Sweden)
Jie Shen
2014-01-01
Full Text Available Underwater moving object detection is key for many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the remarkable visual sensing abilities of animals in underwater habitats, the visual mechanisms of aquatic animals are generally regarded as the cue for establishing bionic models that are more adaptive to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their adaptation in underwater applications. Aiming to solve the problems originating from inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks. The intensity information is extracted to establish a background model which can roughly identify the object and background regions. The texture feature of each pixel in the rough object region is further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance: compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
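The block-wise intensity stage can be sketched as follows; the frame size, block size, and threshold are assumptions, and the texture refinement stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-block intensity background model: flag blocks whose mean intensity
# deviates from the stored background, giving the rough object region
# that a texture stage would then refine.
H, W, B = 64, 64, 16                  # frame and block sizes (assumed)

background = rng.normal(100, 2, (H, W))
frame = background + rng.normal(0, 2, (H, W))
frame[16:32, 32:48] += 40             # a bright "object" inside one block

def rough_object_blocks(frame, background, block=B, thresh=10.0):
    """Return (row, col) indices of blocks whose mean intensity deviates."""
    hits = []
    for r in range(0, frame.shape[0], block):
        for c in range(0, frame.shape[1], block):
            diff = abs(frame[r:r + block, c:c + block].mean()
                       - background[r:r + block, c:c + block].mean())
            if diff > thresh:
                hits.append((r // block, c // block))
    return hits

print(rough_object_blocks(frame, background))   # -> [(1, 2)]
```

Averaging over blocks rather than pixels is what gives this stage its robustness to the inhomogeneous illumination mentioned above.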
Higher-order models versus direct hierarchical models: g as superordinate or breadth factor?
Directory of Open Access Journals (Sweden)
GILLES E. GIGNAC
2008-03-01
Full Text Available Intelligence research appears to have overwhelmingly endorsed a superordinate (higher-order model conceptualization of g, in comparison to the relatively less well-known breadth conceptualization of g, as represented by the direct hierarchical model. In this paper, several similarities and distinctions between the indirect and direct hierarchical models are delineated. Based on the re-analysis of five correlation matrices, it was demonstrated via CFA that the conventional conception of g as a higher-order superordinate factor was likely not as plausible as a first-order breadth factor. The results are discussed in light of theoretical advantages of conceptualizing g as a first-order factor. Further, because the associations between group-factors and g are constrained to zero within a direct hierarchical model, previous observations of isomorphic associations between a lower-order group factor and g are questioned.
National Research Council Canada - National Science Library
Royle, J. Andrew; Dorazio, Robert M
2008-01-01
"This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical modeling in which a strict focus on probability models and parametric inference is adopted...
A hierarchical network modeling method for railway tunnels safety assessment
Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin
2017-02-01
Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected in autumn every year in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers both actual defects and the possible defect risks obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and the critical defect risks. This paper further develops accident-causation network modeling methods and can provide guidance for specific maintenance measures.
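The frequent-pattern step can be sketched with a plain Apriori-style miner: count single items, prune those below the support threshold (the Apriori property), and only then count candidate pairs. This is a minimal sketch, not the paper's improved Apriori or its RRT construction; the defect names and support threshold are hypothetical.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Minimal Apriori-style sketch: prune infrequent single items,
    then count only pairs built from the surviving frequent items."""
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in set(t))
    frequent = {i for i, c in item_counts.items() if c / n >= min_support}
    pair_counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t) & frequent), 2):
            pair_counts[pair] += 1
    return {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

# hypothetical defect-inspection records (one set of findings per record)
records = [
    {"lining_crack", "water_leakage"},
    {"lining_crack", "water_leakage", "cavity"},
    {"lining_crack", "cavity"},
    {"water_leakage"},
]
rules = frequent_pairs(records, min_support=0.5)
print(rules)
```

In the paper's setting the transactions would pair tunnel structures with observed defects, so the surviving frequent pairs are exactly the structure-defect correlations fed into the regularities table.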
Production optimisation in the petrochemical industry by hierarchical multivariate modelling
Energy Technology Data Exchange (ETDEWEB)
Andersson, Magnus; Furusjoe, Erik; Jansson, Aasa
2004-06-01
This project demonstrates the advantages of applying hierarchical multivariate modelling in the petrochemical industry in order to increase knowledge of the total process. The models indicate possible ways to optimise the process regarding the use of energy and raw material, which is directly linked to the environmental impact of the process. The refinery of Nynaes Refining AB (Goeteborg, Sweden) has acted as a demonstration site in this project. The models developed for the demonstration site resulted in: Detection of an unknown process disturbance and suggestions of possible causes; Indications on how to increase the yield in combination with energy savings; The possibility to predict product quality from on-line process measurements, making the results available at a higher frequency than customary laboratory analysis; Quantification of the gradually lowered efficiency of heat transfer in the furnace and increased fuel consumption as an effect of soot build-up on the furnace coils; Increased knowledge of the relation between production rate and the efficiency of the heat exchangers. This report is one of two reports from the project. It contains a technical discussion of the result with some degree of detail. A shorter and more easily accessible report is also available, see IVL report B1586-A.
Testing of a one dimensional model for Field II calibration
DEFF Research Database (Denmark)
Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten
2008-01-01
Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer's electro-mechanical impulse response must be included. We examine an adapted one-dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed ... to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Alguéro et al. [11]; the latter implicitly accounts for attenuation. Results show ...
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which terms of a regression model of a balance can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
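A toy version of a candidate math model search can illustrate how statistical quality metrics alone can recover the correct term combination. This sketch assumes an exhaustive subset search scored by adjusted R^2; the actual algorithm and metrics used in the paper are not specified here, and the term names `N`, `M`, `N*M` are placeholders for load terms.

```python
import numpy as np
from itertools import combinations

def search_terms(X_terms, y, names):
    """Try every subset of candidate regression terms and keep the one
    with the best adjusted R^2 -- a stand-in for the statistical
    quality metrics of a real model search algorithm."""
    n = len(y)
    best = (-np.inf, None)
    for r in range(1, len(names) + 1):
        for idx in combinations(range(len(names)), r):
            A = np.column_stack([np.ones(n)] + [X_terms[i] for i in idx])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            ss_res, ss_tot = resid @ resid, ((y - y.mean()) ** 2).sum()
            p = len(idx) + 1
            adj_r2 = 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))
            if adj_r2 > best[0]:
                best = (adj_r2, [names[i] for i in idx])
    return best

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=50), rng.normal(size=50)
y = 2 * x1 + 0.5 * x1 * x2 + rng.normal(scale=0.01, size=50)
score, terms = search_terms([x1, x2, x1 * x2], y, ["N", "M", "N*M"])
print(terms)   # expected to recover the true terms N and N*M
```

The adjusted R^2 penalty on extra terms is what lets a purely statistical criterion reject spurious regressors, mirroring the paper's observation that the recommended terms matched the physics.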
Loss Function Based Ranking in Two-Stage, Hierarchical Models
Lin, Rongheng; Louis, Thomas A.; Paddock, Susan M.; Ridgeway, Greg
2009-01-01
Performance evaluation of health services providers is burgeoning. Similarly, analyzing spatially related health information, ranking teachers and schools, and identifying differentially expressed genes are increasing in prevalence and importance. Goals include valid and efficient ranking of units for profiling and league tables, identification of excellent and poor performers and of the most differentially expressed genes, and determination of “exceedances” (how many and which unit-specific true parameters exceed a threshold). These data and inferential goals require a hierarchical Bayesian model that accounts for nesting relations and identifies both population values and random effects for unit-specific parameters. Furthermore, the Bayesian approach coupled with optimizing a loss function provides a framework for computing non-standard inferences such as ranks and histograms. Estimated ranks that minimize Squared Error Loss (SEL) between the true and estimated ranks have been investigated. The posterior mean ranks minimize SEL and are “general purpose,” relevant to a broad spectrum of ranking goals. However, other loss functions, and optimizing ranks that are tuned to application-specific goals, require identification and evaluation. For example, when the goal is to identify the relatively good (e.g., in the upper 10%) or relatively poor performers, a loss function that penalizes classification errors produces estimates that minimize the error rate. We construct loss functions that address this and other goals, developing a unified framework that facilitates generating candidate estimates, comparing approaches and producing data analytic performance summaries. We compare performance for a fully parametric, hierarchical model with Gaussian sampling distribution under Gaussian and a mixture of Gaussians prior distributions. We illustrate approaches via analysis of standardized mortality ratio data from the United States Renal Data System. Results show that SEL
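The SEL-optimal rank estimate (the posterior mean of each unit's rank) can be illustrated with simulated posterior draws: rank the units within every draw, then average the ranks across draws. The sampling model below is a hypothetical stand-in, not the hierarchical model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.0, 1.0, 2.0, 3.0])
# hypothetical posterior draws for four units' parameters
samples = true_means + rng.normal(scale=0.3, size=(4000, 4))

# rank of each unit within every posterior draw (1 = smallest)
ranks_per_draw = samples.argsort(axis=1).argsort(axis=1) + 1
# averaging over draws gives the posterior mean ranks, which minimize
# squared-error loss (SEL) between true and estimated ranks
sel_ranks = ranks_per_draw.mean(axis=0)
print(np.round(sel_ranks, 2))
```

Note that posterior mean ranks are generally non-integer; they shrink toward the middle when units are hard to distinguish, which is exactly the behavior a loss-function-based treatment of ranking makes explicit.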
The Hierarchical Sparse Selection Model of Visual Crowding
Directory of Open Access Journals (Sweden)
Wesley eChaney
2014-09-01
Full Text Available Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable – destroyed due to over-integration in early-stage visual processing – recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the gist of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g. specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding: the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.
The hierarchical sparse selection model of visual crowding.
Chaney, Wesley; Fischer, Jason; Whitney, David
2014-01-01
Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable - destroyed due to over-integration in early stage visual processing - recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the "gist" of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding-the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.
Scheibehenne, Benjamin; Pachur, Thorsten
2015-04-01
To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than do nonhierarchical, independent approaches, because the former's estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice: cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test-retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.
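The shrinkage behavior at the heart of hierarchical estimation can be sketched with a one-line empirical-Bayes estimator: each individual's estimate is pulled toward the group mean, more strongly when between-person variance is small relative to within-person noise. This is a simplified stand-in for full hierarchical Bayesian estimation, and the parameter values are hypothetical.

```python
import numpy as np

def shrink(estimates, sigma2_within):
    """Empirical-Bayes sketch of hierarchical pooling: shrink each
    individual estimate toward the group mean by a weight that depends
    on the ratio of between- to within-person variance."""
    grand = estimates.mean()
    # crude moment estimate of between-person variance
    tau2 = max(estimates.var(ddof=1) - sigma2_within, 1e-9)
    w = tau2 / (tau2 + sigma2_within)   # shrinkage weight in [0, 1]
    return grand + w * (estimates - grand)

est = np.array([0.2, 0.5, 0.8, 1.5])    # hypothetical per-person estimates
pooled = shrink(est, sigma2_within=0.3)
print(pooled)
```

When within-person noise dominates (as with strongly intercorrelated, weakly identified parameters), `w` approaches 0 and the individual estimates collapse toward the group mean, which is the "high degrees of shrinkage" the abstract warns about.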
A Bayesian Hierarchical Model for Reconstructing Sea Levels: From Raw Data to Rates of Change
Cahill, Niamh; Horton, Benjamin P; Parnell, Andrew C
2015-01-01
We present a holistic Bayesian hierarchical model for reconstructing the continuous and dynamic evolution of relative sea-level (RSL) change with fully quantified uncertainty. The reconstruction is produced from biological (foraminifera) and geochemical (δ13C) sea-level indicators preserved in dated cores of salt-marsh sediment. Our model comprises three modules: (1) a Bayesian transfer function for the calibration of foraminifera into tidal elevation, which is flexible enough to formally accommodate additional proxies (in this case bulk-sediment δ13C values); (2) a chronology developed from an existing Bchron age-depth model; and (3) an existing errors-in-variables integrated Gaussian process (EIV-IGP) model for estimating rates of sea-level change. We illustrate our approach using a case study of Common Era sea-level variability from New Jersey, U.S.A. We develop a new Bayesian transfer function (B-TF), with and without the δ13C proxy and compare our results to those from a widely...
A Method to Test Model Calibration Techniques: Preprint
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-09-01
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper also discusses the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
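The synthetic-surrogate-data idea can be sketched end to end: generate "utility bills" from a model with known parameters, calibrate against them, and then check the figures of merit. The building model here is a deliberately trivial stand-in, a grid search stands in for the calibration technique, and all names and numbers are illustrative.

```python
import numpy as np

def building_model(insulation, infiltration, temps):
    """Stand-in simulation: monthly energy use as a simple function of
    two parameters and outdoor temperature (purely illustrative)."""
    return insulation * np.maximum(18.0 - temps, 0) + infiltration

true_params = (2.0, 5.0)
temps = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
# surrogate "utility bill" data generated with known true parameters
synthetic_bills = building_model(*true_params, temps)

# toy calibration technique: grid search minimizing misfit to the bills
grid = [(a, b) for a in np.linspace(1, 3, 21) for b in np.linspace(3, 7, 21)]
fitted = min(grid, key=lambda p: ((building_model(*p, temps) - synthetic_bills) ** 2).sum())
misfit = ((building_model(*fitted, temps) - synthetic_bills) ** 2).sum()

# figures of merit 2 and 3: closure on true parameters, goodness of fit
print(fitted, misfit)
```

Because the true parameters are known by construction, "closure on the true input parameter values" becomes directly checkable, which is impossible with real buildings.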
Scale of association: hierarchical linear models and the measurement of ecological systems
Sean M. McMahon; Jeffrey M. Diez
2007-01-01
A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology
Mohanty, B.; Kathuria, D.; Katzfuss, M.
2016-12-01
Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground based sensors on the other hand provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point scale Mesonet data and coarse scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM model handles change of support problems easily while performing accurate uncertainty quantification arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising due to the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM model can be considered as a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.
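A drastically simplified sketch of the fusion idea, assuming a single location and pure precision weighting; the BHM in the abstract additionally handles spatial structure, change of support, measurement-error models and covariates, and the numbers below are hypothetical.

```python
import numpy as np

def fuse(coarse, coarse_var, point, point_var):
    """Minimal precision-weighted fusion sketch: combine a coarse
    remote-sensing soil moisture value with a point sensor value,
    weighting each by the inverse of its error variance."""
    w_c, w_p = 1.0 / coarse_var, 1.0 / point_var
    mean = (w_c * coarse + w_p * point) / (w_c + w_p)
    var = 1.0 / (w_c + w_p)          # fused uncertainty shrinks
    return mean, var

# hypothetical values: noisy coarse RS pixel vs precise Mesonet sensor
m, v = fuse(coarse=0.30, coarse_var=0.02, point=0.22, point_var=0.005)
print(round(m, 3), round(v, 4))
```

The fused estimate sits closer to the more precise point observation, and its variance is smaller than either input's, which is the basic payoff a hierarchical fusion model scales up over space and time.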
Energy Technology Data Exchange (ETDEWEB)
Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach outperforms the single model for the Guadalupe catchment, where diagnostic measures indicate multiple dominant processes. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making a single model adequate to capture it.
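The gating idea behind a mixture of experts can be sketched with two hypothetical linear experts blended by a soft gate driven by an indicator variable; this is not the HBV-based setup of the study, and the expert forms, gate shape, and variable names are assumptions.

```python
import numpy as np

def hme_predict(x, wetness):
    """Two-expert mixture sketch: a 'quickflow' expert and a 'baseflow'
    expert (both hypothetical linear stand-ins), blended by a soft gate
    driven by an indicator variable (catchment wetness)."""
    quickflow = 3.0 * x           # expert for wet, fast-responding conditions
    baseflow = 0.5 * x + 1.0      # expert for dry, slow-responding conditions
    gate = 1.0 / (1.0 + np.exp(-(wetness - 0.5) * 10))  # soft indicator
    return gate * quickflow + (1 - gate) * baseflow

x = np.array([1.0, 1.0])
wet = np.array([0.9, 0.1])        # first sample wet, second dry
print(hme_predict(x, wet))
```

Because the gate is soft, the combined model transitions smoothly between expert structures rather than switching abruptly, which is what lets a single integrated model represent multiple dominant processes.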
DEFF Research Database (Denmark)
Huang, Qian; Huang, Yue-Cai; Ko, King-Tim;
2011-01-01
dimensioning and planning. This paper investigates the computationally efficient loss performance modeling for multiservice in hierarchical heterogeneous wireless networks. A speed-sensitive call admission control (CAC) scheme is considered in our model to assign overflowed calls to appropriate tiers...
A Multilevel Secure Relation-Hierarchical Data Model for a Secure DBMS
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
A multilevel secure relation-hierarchical data model for multilevel secure databases is extended from the relation-hierarchical data model in a single-level environment in this paper. Based on the model, an upper-lower layer relational integrity is presented after we analyze and eliminate the covert channels caused by the database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation-hierarchical data model is capable of integratively storing and manipulating complicated objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers and character strings) in a multilevel secure database.
Investigating follow-up outcome change using hierarchical linear modeling.
Ogrodniczuk, J S; Piper, W E; Joyce, A S
2001-03-01
Individual change in outcome during a one-year follow-up period for 98 patients who received either interpretive or supportive psychotherapy was examined using hierarchical linear modeling (HLM). This followed a previous study that had investigated average (treatment condition) change during follow-up using traditional methods of data analysis (repeated measures ANOVA, chi-square tests). We also investigated whether two patient personality characteristics, quality of object relations (QOR) and psychological mindedness (PM), predicted individual change. HLM procedures yielded findings that were not detected using traditional methods of data analysis. New findings indicated that the rate of individual change in outcome during follow-up varied significantly among the patients. QOR was directly related to favorable individual change for supportive therapy patients, but not for patients who received interpretive therapy. The findings have implications for determining which patients will show long-term benefit following short-term supportive therapy and how to enhance it. The study also found significant associations between QOR and final outcome level.
Finite element model calibration using frequency responses with damping equalization
Abrahamsson, T. J. S.; Kammer, D. C.
2015-10-01
Model calibration is a cornerstone of the finite element verification and validation procedure, in which the credibility of the model is substantiated by positive comparison with test data. The calibration problem, in which the minimum deviation between finite element model data and experimental data is searched for, is normally characterized as being a large scale optimization problem with many model parameters to solve for and with deviation metrics that are nonlinear in these parameters. The calibrated parameters need to be found by iterative procedures, starting from initial estimates. Sometimes these procedures get trapped in local deviation function minima and do not converge to the globally optimal calibration solution that is searched for. The reason for such traps is often the multi-modality of the problem which causes eigenmode crossover problems in the iterative variation of parameter settings. This work presents a calibration formulation which gives a smooth deviation metric with a large radius of convergence to the global minimum. A damping equalization method is suggested to avoid the mode correlation and mode pairing problems that need to be solved in many other model updating procedures. By this method, the modal damping of a test data model and the finite element model is set to be the same fraction of critical modal damping. Mode pairing for mapping of experimentally found damping to the finite element model is thus not needed. The method is combined with model reduction for efficiency and employs the Levenberg-Marquardt minimizer with randomized starts to achieve the calibration solution. The performance of the calibration procedure, including a study of parameter bias and variance under noisy data conditions, is demonstrated by two numerical examples.
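The randomized-start Levenberg-Marquardt strategy can be sketched with SciPy on a toy exponential response standing in for the finite element model; the model form, start ranges, and number of starts are assumptions, and the real procedure additionally involves model reduction and the damping-equalized deviation metric.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "FE model": two parameters predicting a response curve; the
# deviation metric is the residual against synthetic "test data".
def response(p, x):
    return p[0] * np.exp(-p[1] * x)

x = np.linspace(0, 2, 20)
test_data = response([2.0, 1.5], x)   # synthetic measurements

def residual(p):
    return response(p, x) - test_data

# Levenberg-Marquardt from several randomized starts, keeping the
# lowest-cost solution to avoid local minima of the deviation metric
rng = np.random.default_rng(0)
best = min(
    (least_squares(residual, rng.uniform(0.5, 3.0, size=2), method="lm")
     for _ in range(5)),
    key=lambda r: r.cost,
)
print(np.round(best.x, 3))
```

Restarting from random initial estimates and keeping the minimum-cost result is a simple, embarrassingly parallel guard against the traps in multi-modal calibration problems that the abstract describes.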
Qian, Song S; Craig, J Kevin; Baustian, Melissa M; Rabalais, Nancy N
2009-12-01
We introduce the Bayesian hierarchical modeling approach for analyzing observational data from marine ecological studies using a data set intended for inference on the effects of bottom-water hypoxia on macrobenthic communities in the northern Gulf of Mexico off the coast of Louisiana, USA. We illustrate (1) the process of developing a model, (2) the use of the hierarchical model results for statistical inference through innovative graphical presentation, and (3) a comparison to the conventional linear modeling approach (ANOVA). Our results indicate that the Bayesian hierarchical approach is better able to detect a "treatment" effect than classical ANOVA while avoiding several arbitrary assumptions necessary for linear models, and is also more easily interpreted when presented graphically. These results suggest that the hierarchical modeling approach is a better alternative than conventional linear models and should be considered for the analysis of observational field data from marine systems.
Calibration of microscopic traffic simulation models using metaheuristic algorithms
Directory of Open Access Journals (Sweden)
Miao Yu
2017-06-01
Full Text Available This paper presents several metaheuristic algorithms for calibrating a microscopic traffic simulation model. The genetic algorithm (GA), Tabu Search (TS), and a combination of the GA and TS (i.e., warmed GA and warmed TS) are implemented and compared. A set of traffic data collected from the I-5 Freeway, Los Angeles, California, is used. Objective functions, built from flow and speed, are defined to minimize the difference between simulated and field traffic data. Several car-following parameters in VISSIM, which can significantly affect the simulation outputs, are selected for calibration. The GA, TS, and warmed GA and TS all reach a better match to the field measurements than using only the default parameters in VISSIM. Overall, TS performs very well and can be used to calibrate parameters. Combining metaheuristic algorithms clearly performs better and is therefore highly recommended for calibrating microscopic traffic simulation models.
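A minimal GA sketch of such a calibration loop, with a trivial closed-form stand-in for the traffic simulator (the study calibrates VISSIM car-following parameters against I-5 data); the population size, operators, parameter meanings and "field" values are all assumptions.

```python
import random

def simulate(p):
    """Stand-in for a traffic simulator: returns (flow, speed) as a
    simple function of two car-following-like parameters."""
    headway, reaction = p
    return 1800.0 / headway, 60.0 - 20.0 * reaction

field = simulate((1.5, 0.8))   # pretend these are field measurements

def objective(p):
    flow, speed = simulate(p)
    return (flow - field[0]) ** 2 + (speed - field[1]) ** 2

random.seed(42)
pop = [(random.uniform(1.0, 3.0), random.uniform(0.2, 1.5)) for _ in range(30)]
initial_best = min(pop, key=objective)
for _ in range(60):                     # generations
    pop.sort(key=objective)
    parents = pop[:10]                  # elitist selection
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        # averaging crossover plus Gaussian mutation
        child = tuple((x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(a, b))
        children.append(child)
    pop = parents + children
best = min(pop, key=objective)
print(round(objective(best), 2))
```

In the real setting each `objective` evaluation is a full VISSIM run, so the GA's population can be evaluated in parallel, and a "warmed" variant would seed this loop with the output of another metaheuristic.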
Generic model for line-of-sight analysis and calibration
Afik, Zvika; Shammas, A.; Schwartz, Roni; Gal, Eli
1991-04-01
Many electro-optical (EO) systems incorporate an imaging sensor and a Line of Sight (LOS) deflection mirror. At a higher system level, such as in fire control or missile homing applications, these sensors are required to measure angular target position very accurately. This work presents an approach that has been developed for the modeling and calibration of such electro-optical systems. Using a generic system which includes a mirror mounted on a two-axis LOS steering unit and an imaging sensor, a description of the mathematical model of the system is given here. This model may be used for system performance analyses as well as for developing various algorithms for the calculation of target angular position. The system model uses a number of calibration parameters such as gimbal nonorthogonality and other assembly and production errors. These are obtained from laboratory measurement results via a mathematical calibration model. We explain how the calibration model is developed from the system model. The method shown here can significantly reduce the number of computations and the look-up-table capacity needed in an operational system, as well as reducing the extent of laboratory calibrations usually required.
Hierarchical set of models to estimate soil thermal diffusivity
Arkhangelskaya, Tatiana; Lukyashchenko, Ksenia
2016-04-01
Soil thermal properties significantly affect the land-atmosphere heat exchange rates. Intra-soil heat fluxes depend both on temperature gradients and soil thermal conductivity. Soil temperature changes due to energy fluxes are determined by soil specific heat. Thermal diffusivity is equal to thermal conductivity divided by volumetric specific heat and reflects both the soil's ability to transfer heat and its ability to change temperature when heat is supplied or withdrawn. The higher the soil thermal diffusivity, the thicker the soil/ground layer in which diurnal and seasonal temperature fluctuations are registered and the smaller the temperature fluctuations at the soil surface. Thermal diffusivity vs. moisture dependencies for loams, sands and clays of the East European Plain were obtained using the unsteady-state method. Thermal diffusivity of different soils differed greatly, and for a given soil it could vary by 2, 3 or even 5 times depending on soil moisture. The shapes of thermal diffusivity vs. moisture dependencies were different: peak curves were typical for sandy soils and sigmoid curves were typical for loamy and especially for compacted soils. The lowest thermal diffusivities and the smallest range of their variability with soil moisture were obtained for clays with high humus content. A hierarchical set of models will be presented, allowing soil thermal diffusivity to be estimated from available data on soil texture, moisture, bulk density and organic carbon. When developing these models the first step was to parameterize the experimental thermal diffusivity vs. moisture dependencies with a 4-parameter function; the next step was to obtain regression formulas to estimate the function parameters from available data on basic soil properties; the last step was to evaluate the accuracy of suggested models using independent data on soil thermal diffusivity. The simplest models were based on soil bulk density and organic carbon data and provided different
Experimental technique of calibration of symmetrical air pollution models
Indian Academy of Sciences (India)
P Kumar
2005-10-01
Based on the inherent symmetry of air pollution models, a Symmetrical Air Pollution Model Index (SAPMI) has been developed to calibrate the accuracy of predictions made by such models where the initial quantity of release at the source is not known. For an exact prediction the value of SAPMI should be equal to 1. If the predicted values are overestimates then SAPMI is > 1, and if they are underestimates then SAPMI is < 1. A specific design for the layout of receptors has been suggested as a requirement for the calibration experiments. SAPMI is applicable to all variations of symmetrical air pollution dispersion models.
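As a rough illustration of how such an index behaves, a minimal sketch follows. The paper defines SAPMI over a specific symmetric receptor layout; the equal-weight ratio of totals used here is an assumption, not the paper's exact formula.

```python
def sapmi(predicted, observed):
    """Hypothetical sketch of a Symmetrical Air Pollution Model Index:
    ratio of total predicted to total observed concentration over the
    receptor layout (the paper's exact definition may differ)."""
    return sum(predicted) / sum(observed)

# SAPMI == 1: exact prediction; > 1: overestimation; < 1: underestimation
ratio = sapmi([2.0, 3.0, 5.0], [2.0, 3.0, 5.0])
```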
Calibrating the ECCO ocean general circulation model using Green's functions
Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.
2002-01-01
Green's functions provide a simple, yet effective, method to test and calibrate General-Circulation-Model(GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
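The Green's-function calibration idea reduces to linearize-and-least-squares: perturb each parameter, record the model's response (a "Green's function" column), and solve for the parameter update. The two-parameter toy "GCM" below is purely illustrative.

```python
import numpy as np

def greens_function_calibration(model, eta0, data, deta):
    """Sketch of Green's-function calibration: linearize the model about a
    baseline parameter vector eta0 by finite differences, then estimate the
    parameter correction in least squares."""
    y0 = model(eta0)
    # Each column of G is the model response to one parameter perturbation
    G = np.column_stack([
        (model(eta0 + dh * np.eye(len(eta0))[j]) - y0) / dh
        for j, dh in enumerate(deta)
    ])
    update, *_ = np.linalg.lstsq(G, data - y0, rcond=None)
    return eta0 + update

# Toy "GCM": linear in its two parameters, so one iteration recovers them.
truth = np.array([1.5, -0.5])
model = lambda eta: np.array([2 * eta[0] + eta[1], eta[0] - eta[1], eta[1]])
est = greens_function_calibration(model, np.zeros(2), model(truth), [1e-3, 1e-3])
```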
A Generic Software Framework for Data Assimilation and Model Calibration
Van Velzen, N.
2010-01-01
The accuracy of dynamic simulation models can be increased by using observations in conjunction with a data assimilation or model calibration algorithm. However, implementing such algorithms usually increases the complexity of the model software significantly. By using concepts from object oriented
Hierarchical Shrinkage Priors and Model Fitting for High-dimensional Generalized Linear Models
Yi, Nengjun; Ma, Shuangge
2013-01-01
Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:23192052
Intelligent multiagent coordination based on reinforcement hierarchical neuro-fuzzy models.
Mendoza, Leonardo Forero; Vellasco, Marley; Figueiredo, Karla
2014-12-01
This paper presents the research and development of two hybrid neuro-fuzzy models for the hierarchical coordination of multiple intelligent agents. The main objective of the models is to have multiple agents interact intelligently with each other in complex systems. We developed two new coordination models for intelligent multiagent systems, which integrate the Reinforcement Learning Hierarchical Neuro-Fuzzy model with two proposed coordination mechanisms: the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy model with a market-driven coordination mechanism (MA-RL-HNFP-MD) and the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy model with graph coordination (MA-RL-HNFP-CG). In order to evaluate the proposed models and verify the contribution of the proposed coordination mechanisms, two multiagent benchmark applications were developed: the pursuit game and robot soccer simulation. The results obtained demonstrate that the proposed coordination mechanisms greatly improve the performance of the multiagent system when compared with other strategies.
Ahn, Kuk-Hyun; Palmer, Richard; Steinschneider, Scott
2017-01-01
This study presents a regional, probabilistic framework for seasonal forecasts of extreme low summer flows in the northeastern United States conditioned on antecedent climate and hydrologic conditions. The model is developed to explore three innovations in hierarchical modeling for seasonal forecasting at ungaged sites: (1) predictive climate teleconnections are inferred directly from ocean fields instead of predefined climate indices, (2) a parsimonious modeling structure is introduced to allow climate teleconnections to vary spatially across streamflow gages, and (3) climate teleconnections and antecedent hydrologic conditions are considered jointly for regional forecast development. The proposed model is developed and calibrated in a hierarchical Bayesian framework to pool regional information across sites and enhance regionalization skill. The model is validated in a cross-validation framework along with five simpler nested formulations to test specific hypotheses embedded in the full model structure. Results indicate that each of the three innovations improve out-of-sample summer low-flow forecasts, with the greatest benefits derived from the spatially heterogeneous effect of climate teleconnections. We conclude with a discussion of possible model improvements from a better representation of antecedent hydrologic conditions at ungaged sites.
An Example Multi-Model Analysis: Calibration and Ranking
Ahlmann, M.; James, S. C.; Lowry, T. S.
2007-12-01
Modeling solute transport is a complex process governed by multiple site-specific parameters like porosity and hydraulic conductivity as well as many solute-dependent processes such as diffusion and reaction. Furthermore, it must be determined whether a steady or time-variant model is most appropriate. A problem arises because over-parameterized conceptual models may be easily calibrated to exactly reproduce measured data, even if these data contain measurement noise. During preliminary site investigation stages where available data may be scarce it is often advisable to develop multiple independent conceptual models, but the question immediately arises: which model is best? This work outlines a method for quickly calibrating and ranking multiple models using the parameter estimation code PEST in conjunction with the second-order-bias-corrected Akaike Information Criterion (AICc). The method is demonstrated using the twelve analytical solutions to the one-dimensional convective-dispersive-reactive solute transport equation as the multiple conceptual models (van Genuchten, M. Th. and W. J. Alves, 1982. Analytical solutions of the one-dimensional convective-dispersive solute transport equation, USDA ARS Technical Bulletin Number 1661. U.S. Salinity Laboratory, 4500 Glenwood Drive, Riverside, CA 92501.). Each solution is calibrated to three data sets, each comprising an increasing number of calibration points that represent increased knowledge of the modeled site (calibration points are selected from one of the analytical solutions that provides the "correct" model). The AICc is calculated after each successive calibration to the three data sets yielding model weights that are functions of the sum of the squared, weighted residuals, the number of parameters, and the number of observations (calibration data points) and ultimately indicates which model has the highest likelihood of being correct. The results illustrate how the sparser data sets can be modeled
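The AICc ranking step can be sketched directly. The Gaussian-error form of AICc computed from the sum of squared residuals is standard; the example numbers below are invented for illustration.

```python
import math

def aicc_weights(ssr_list, k_list, n):
    """Sketch of AICc-based model ranking: AICc from the sum of squared
    (weighted) residuals under a Gaussian error assumption, converted to
    Akaike weights that sum to 1."""
    aicc = [n * math.log(ssr / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
            for ssr, k in zip(ssr_list, k_list)]
    best = min(aicc)
    rel = [math.exp(-(a - best) / 2) for a in aicc]
    total = sum(rel)
    return [r / total for r in rel]  # highest weight = most likely model

# Two hypothetical models: similar fit (SSR), very different parameter counts.
w = aicc_weights(ssr_list=[1.2, 1.1], k_list=[2, 6], n=20)
```

With similar residuals, the parsimonious model wins decisively, which is exactly the over-parameterization penalty the abstract describes.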
Effects of temporal variability on HBV model calibration
Directory of Open Access Journals (Sweden)
Steven Reinaldo Rusli
2015-10-01
This study aimed to investigate the effect of temporal variability on the optimization of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, as well as the calibration performance using manual optimization and average parameter values. By applying the HBV model to the Jiangwan Catchment, whose geological features include many cracks and gaps, simulations under various schemes were developed: short, medium-length, and long temporal calibrations. The results show that, with long temporal calibration, the objective function values of the Nash-Sutcliffe efficiency coefficient (NSE), relative error (RE), root mean square error (RMSE), and high flow ratio generally deliver a preferable simulation. Although NSE and RMSE are relatively stable with different temporal scales, significant improvements to RE and the high flow ratio are seen with longer temporal calibration. It is also noted that use of average parameter values does not lead to better simulation results compared with manual optimization. With medium-length temporal calibration, manual optimization delivers the best simulation results, with NSE, RE, RMSE, and the high flow ratio being 0.5636, 0.1223, 0.9788, and 0.8547, respectively; calibration using average parameter values delivers NSE, RE, RMSE, and the high flow ratio of 0.4811, 0.4676, 1.0210, and 2.7840, respectively. Similar behavior is found with long temporal calibration, where NSE, RE, RMSE, and the high flow ratio using manual optimization are 0.5253, −0.0692, 1.0580, and 0.9800, respectively, as compared with 0.4903, 0.2248, 1.0962, and 0.5479, respectively, using average parameter values. This study shows that selection of longer periods of temporal calibration in hydrological analysis delivers better simulation in general for water balance analysis.
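The objective functions named above can be sketched in a few lines. The RE definition used here (relative volume error) is an assumption, since the abstract does not spell out its exact form.

```python
import math

def calibration_metrics(sim, obs):
    """Sketch of common hydrological calibration criteria: Nash-Sutcliffe
    efficiency (NSE), relative error (RE, here relative volume error) and
    root mean square error (RMSE)."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    nse = 1.0 - sse / sum((o - mean_obs) ** 2 for o in obs)
    rmse = math.sqrt(sse / n)
    re = (sum(sim) - sum(obs)) / sum(obs)
    return nse, re, rmse

# A perfect simulation scores NSE = 1, RE = 0, RMSE = 0.
nse, re, rmse = calibration_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```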
Calibration Against the Moon. I: A Disk-Resolved Lunar Model for Absolute Reflectance Calibration
2010-01-01
the (nearly) full Moon for calibration. The Hillier et al. empirical representation described the mean solar phase functions f(α) for observation... full Moon. Thermal emission contributes up to 2-3% of the signal at 2.26 µm, and the thermal component rapidly diminishes at shorter... ROLO chip dataset and to a sample of the ROLO imagery near full Moon. The final model was extended to the shortwave IR (0.35-2.45 µm) and is able
Calibrating Vadose Zone Models with Time-Lapse Gravity Data
DEFF Research Database (Denmark)
Christiansen, Lars; Hansen, A. B.; Looms, M. C.
2009-01-01
hydrogeological parameters. These studies focused on the saturated zone with specific yield as the most prominent target parameter. Any change in storage in the vadose zone has been considered as noise. Our modeling results show a measurable change in gravity from the vadose zone during a forced infiltration experiment on 10 m by 10 m grass land. Simulation studies show a potential for vadose zone model calibration using gravity data in conjunction with other geophysical data, e.g. cross-borehole georadar. We present early field data and calibration results from a forced infiltration experiment conducted over 30 days and discuss the potential for gravity measurements in vadose zone model parameter estimation.
National Research Council Canada - National Science Library
Allison A Vaughn; Matthew Bergman; Barry Fass-Holmes
2015-01-01
...) in the fall term of the five most recent academic years. Hierarchical linear modeling analyses showed that the predictors with the largest effect sizes were English writing programs and class level...
LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data
National Research Council Canada - National Science Library
Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A
2011-01-01
...). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data...
Higher Order Hierarchical Legendre Basis Functions for Electromagnetic Modeling
DEFF Research Database (Denmark)
Jørgensen, Erik; Volakis, John L.; Meincke, Peter
2004-01-01
This paper presents a new hierarchical basis of arbitrary order for integral equations solved with the Method of Moments (MoM). The basis is derived from orthogonal Legendre polynomials which are modified to impose continuity of vector quantities between neighboring elements while maintaining most...
Heuristics for Hierarchical Partitioning with Application to Model Checking
DEFF Research Database (Denmark)
Möller, Michael Oliver; Alur, Rajeev
2001-01-01
Given a collection of connected components, it is often desired to cluster together parts of strong correspondence, yielding a hierarchical structure. We address the automation of this process and apply heuristics to battle the combinatorial and computational complexity. We define a cost function...
Cloud-Based Model Calibration Using OpenStudio: Preprint
Energy Technology Data Exchange (ETDEWEB)
Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.
2014-03-01
OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
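The per-billing-period comparison can be sketched with the NMBE and CV(RMSE) goodness-of-fit metrics. These are a common choice for utility-bill calibration, but using them here is an assumption: the abstract only says the difference is calculated and visualized by billing period and fuel type.

```python
import math

def billing_period_fit(actual, simulated):
    """Sketch of calibration goodness-of-fit over billing periods:
    normalized mean bias error (NMBE) and coefficient of variation of the
    RMSE (CV(RMSE)), both relative to the mean actual consumption."""
    n = len(actual)
    mean_a = sum(actual) / n
    nmbe = sum(s - a for a, s in zip(actual, simulated)) / (n * mean_a)
    cvrmse = math.sqrt(sum((s - a) ** 2
                           for a, s in zip(actual, simulated)) / n) / mean_a
    return nmbe, cvrmse
```

A calibrated model drives both metrics toward zero across iterations of the parameter variations described above.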
A methodology to calibrate pedestrian walker models using multiple objectives
Campanella, M.C.; Daamen, W.; Hoogendoorn, S.P.
2012-01-01
The application of walker models to simulate real situations requires accuracy in several traffic situations. One strategy to obtain a generic model is to calibrate the parameters in several situations using multiple-objective functions in the optimization process. In this paper, we propose a general
DEFF Research Database (Denmark)
Petersen, Britta; Gernaey, Krist; Henze, Mogens;
2002-01-01
The purpose of the calibrated model determines how to approach a model calibration, e.g. which information is needed and to which level of detail the model should be calibrated. A systematic model calibration procedure was therefore defined and evaluated for a municipal–industrial wastewater trea...
Extending the Real-Time Maude Semantics of Ptolemy to Hierarchical DE Models
Bae, Kyungmin; 10.4204/EPTCS.36.3
2010-01-01
This paper extends our Real-Time Maude formalization of the semantics of flat Ptolemy II discrete-event (DE) models to hierarchical models, including modal models. This is a challenging task that requires combining synchronous fixed-point computations with hierarchical structure. The synthesis of a Real-Time Maude verification model from a Ptolemy II DE model, and the formal verification of the synthesized model in Real-Time Maude, have been integrated into Ptolemy II, enabling a model-engineering process that combines the convenience of Ptolemy II DE modeling and simulation with formal verification in Real-Time Maude.
Bai, Hao; Zhang, Xi-wen
2017-06-01
When Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, together with their complex structural relations. Chinese characters written in digital ink by non-native writers are seriously deformed, so global recognition approaches perform poorly. A progressive bottom-up approach based on hierarchical models is therefore presented. Hierarchical information includes strokes and hierarchical components, and each Chinese character is modeled as a hierarchical tree. Strokes in a Chinese character in digital ink are classified with Hidden Markov Models and concatenated into a stroke symbol sequence. Then the structure of components in the ink character is extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored. Finally, the recognition candidates are listed in descending order of score. The method is validated by testing 19,815 copies of handwritten Chinese characters written by foreign students.
Energy Technology Data Exchange (ETDEWEB)
Sumida, S. [U-shin Ltd., Tokyo (Japan); Nagamatsu, M.; Maruyama, K. [Hokkaido Institute of Technology, Sapporo (Japan); Hiramatsu, S. [Mazda Motor Corp., Hiroshima (Japan)
1997-10-01
A new modeling approach is put forward to compose the virtual prototype that is indispensable for fully computer-integrated concurrent development of automobile products. A basic concept of the hierarchical functional model is proposed as the concrete form of this new modeling technology. The model is used mainly for explaining and simulating the functions and efficiencies of both the parts and the total automobile product. All engineers engaged in the design and development of an automobile can collaborate with one another using this model. Some application examples are shown, and the usefulness of the model is demonstrated. 5 refs., 5 figs.
Error Model and Accuracy Calibration of 5-Axis Machine Tool
Directory of Open Access Journals (Sweden)
Fangyu Pan
2013-08-01
To improve machining precision and reduce geometric errors of a 5-axis machine tool, an error model and a calibration method are presented in this paper. The error model is built with the theory of multi-body systems and characteristic matrices, which establishes the relationship between the cutting tool and the workpiece in theory. Accuracy calibration is difficult to achieve, but with laser instruments (a laser interferometer and a laser tracker) the errors can be measured accurately, which benefits later compensation.
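The characteristic-matrix idea can be sketched with first-order homogeneous transforms chained along the machine's kinematic structure. The specific error values and the two-axis chain below are invented for illustration, not taken from the paper.

```python
import numpy as np

def small_error_transform(dx, dy, dz, rx, ry, rz):
    """Sketch of a characteristic (error) matrix in a multi-body-system
    formulation: a 4x4 homogeneous transform for small translational
    (dx, dy, dz) and rotational (rx, ry, rz) errors of one axis,
    linearized to first order in the angles."""
    return np.array([[1.0, -rz,  ry,  dx],
                     [ rz, 1.0, -rx,  dy],
                     [-ry,  rx, 1.0,  dz],
                     [0.0, 0.0, 0.0, 1.0]])

# Tool-to-workpiece relation: chain the error transforms of each axis
# (here two hypothetical axes) and apply the result to a tool point.
T = (small_error_transform(0.01, 0.0, 0.0, 0.0, 0.0, 1e-4)
     @ small_error_transform(0.0, 0.02, 0.0, 1e-4, 0.0, 0.0))
p = T @ np.array([100.0, 50.0, 0.0, 1.0])  # slightly displaced from nominal
```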
Hierarchical model-based predictive control of a power plant portfolio
DEFF Research Database (Denmark)
Edlund, Kristian; Bendtsen, Jan Dimon; Jørgensen, John Bagterp
2011-01-01
control” – becomes increasingly important as the ratio of renewable energy in a power system grows. As a consequence, tomorrow's “smart grids” require highly flexible and scalable control systems compared to conventional power systems. This paper proposes a hierarchical model-based predictive control design for power system portfolio control, which aims specifically at meeting these demands. The design involves a two-layer hierarchical structure with clearly defined interfaces that facilitate an object-oriented implementation approach. The same hierarchical structure is reflected in the underlying...
Modeling of germanium detector and its sourceless calibration
Directory of Open Access Journals (Sweden)
Steljić Milijana
2008-01-01
The paper describes the procedure of adapting a coaxial high-precision germanium detector to a device with numerical calibration. The procedure includes the determination of detector dimensions and establishing the corresponding model of the system. In order to achieve a successful calibration of the system without the usage of standard sources, Monte Carlo simulations were performed to determine its efficiency and pulse-height response function. A detailed Monte Carlo model was developed using the MCNP-5.0 code. The obtained results have indicated that this method represents a valuable tool for the quantitative uncertainty analysis of radiation spectrometers and gamma-ray detector calibration, thus minimizing the need for the deployment of radioactive sources.
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling.
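A minimal random-walk Metropolis sketch shows the core of Bayesian calibration: posterior draws for a parameter, from which predictions can be reported as distributions rather than point estimates. The Gaussian toy likelihood and all numbers below are assumptions for illustration.

```python
import math
import random

def metropolis_calibrate(log_post, theta0, step, n_iter, seed=0):
    """Sketch of random-walk Metropolis sampling: propose a Gaussian step,
    accept with probability min(1, posterior ratio), and collect draws."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Toy calibration target: Gaussian likelihood (sigma = 0.1) around four
# observations with mean 2.0, flat prior.
data = [1.8, 2.1, 2.2, 1.9]
log_post = lambda th: -sum((d - th) ** 2 for d in data) / (2 * 0.1 ** 2)
draws = metropolis_calibrate(log_post, 0.0, 0.2, 5000)
```

Discarding the first half as burn-in, the remaining draws concentrate around the data mean, and any model prediction pushed through them becomes a posterior predictive distribution.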
Calibration of the Crop model in the Community Land Model
Directory of Open Access Journals (Sweden)
X. Zeng
2013-01-01
Farming is using more terrestrial ground with increases in population and the expanding use of agriculture for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity and net ecosystem exchange from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper we calibrate these values in order to provide a faithful projection in terms of both plant development and net carbon exchange, using a Markov chain Monte Carlo technique.
Hierarchical Modelling of Flood Risk for Engineering Decision Analysis
DEFF Research Database (Denmark)
Custer, Rocco
Societies around the world are faced with flood risk, prompting authorities and decision makers to manage risk to protect population and assets. With climate change, urbanisation and population growth, flood risk changes constantly, requiring flood risk management strategies that are flexible and robust. Traditional risk management solutions, e.g. dike construction, are not particularly flexible, as they are difficult to adapt to changing risk. Conversely, the recent concept of integrated flood risk management, entailing a combination of several structural and non-structural risk management measures, allows identifying flexible and robust flood risk management strategies. Based on it, this thesis investigates hierarchical flood protection systems, which encompass two or more hierarchically integrated flood protection structures on different spatial scales (e.g. dikes, local flood barriers...
Calibration of hydrologic models using flow-duration curves
Westerberg, I.; Younger, P.; Guerrero, J.; Beven, K.; Seibert, J.; Halldin, S.; Xu, C.
2010-12-01
The usefulness of hydrological models depends on their skill to mimic real-world hydrology as attested by some efficiency criterion. The suitability of traditional criteria, such as the Nash-Sutcliffe efficiency, for model calibration has been much debated. Discharge data are plentiful for a few decades around the 1970’s but much less available in the last decades since the reported number of discharge stations in the world has gone down substantially from the peak in the late 1970’s. At the same time global precipitation and climate data such as TRMM and ERA-Interim, used to drive hydrological models, have become more readily available in the last 10-20 years. This mismatch of observation time periods makes traditional model calibration difficult or even impossible for basins where there are no overlapping periods of model input and evaluation data. A new calibration method is proposed here that addresses this mismatch and at the same time accounts for uncertainty in discharge data. An estimation of the discharge-data uncertainty is used as a basis to set limits of acceptability for observed flow-duration curves. These limits are then used for model calibration and evaluation within a Generalised Likelihood Uncertainty Estimation (GLUE) framework. Advantages of the new approach include less risk of bias because of epistemic (knowledge) type input-output errors (e.g. no simulated discharge for an observed flow peak because of no rain gauges in the only part of the catchment where it rained), a calibration that addresses the model performance for the whole flow regime (low, medium and high flows) simultaneously and a more realistic uncertainty estimation since discharge uncertainty is addressed. The new method is most suitable for water-balance model applications. Additional limits of acceptability for snow-routine parameters will be needed in basins with snow and frozen soils.
Modeling place field activity with hierarchical slow feature analysis
Directory of Open Access Journals (Sweden)
Fabian eSchoenfeld
2015-05-01
In this paper we present six experimental studies from the literature on hippocampal place cells and replicate their main results in a computational framework based on the principle of slowness. Each of the chosen studies first allows rodents to develop stable place field activity and then examines a distinct property of the established spatial encoding, namely adaptation to cue relocation and removal; directional firing activity in the linear track and open field; and results of morphing and stretching the overall environment. To replicate these studies we employ a hierarchical Slow Feature Analysis (SFA) network. SFA is an unsupervised learning algorithm that extracts slowly varying information from a given stream of data, and hierarchical application of SFA allows high-dimensional input such as visual images to be processed efficiently and in a biologically plausible fashion. Training data for the network are produced in ratlab, a free basic graphics engine designed to quickly set up a wide range of 3D environments mimicking real-life experimental studies, simulate a foraging rodent while recording its visual input, and train and sample a hierarchical SFA network.
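One linear SFA layer can be sketched as a generalized eigenproblem: find the projection of the signal whose temporal derivative has minimal variance. The paper's network stacks several (nonlinear) layers on visual input; this single linear layer on a toy two-source mixture is only illustrative.

```python
import numpy as np

def linear_sfa(X, n_out=1):
    """Sketch of one linear SFA layer: for centred data X (time x features),
    find the directions minimizing the variance of the temporal derivative
    relative to the variance of the signal (a generalized eigenproblem)."""
    X = X - X.mean(axis=0)
    dX = np.diff(X, axis=0)
    A = dX.T @ dX / len(dX)   # covariance of the temporal derivative
    B = X.T @ X / len(X)      # covariance of the signal
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals.real)
    return eigvecs[:, order[:n_out]].real  # slowest direction(s) first

# Toy data: a slow sine linearly mixed with a fast oscillation; SFA should
# recover the slow source direction from the mixture.
t = np.linspace(0.0, 2 * np.pi, 500)
sources = np.column_stack([np.sin(t), np.sin(40 * t)])
mixed = sources @ np.array([[1.0, 0.5], [0.3, 1.0]])
w = linear_sfa(mixed)
```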
New aerial survey and hierarchical model to estimate manatee abundance
Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.
2011-01-01
Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability
Bayesian Calibration of the Community Land Model using Surrogates
Energy Technology Data Exchange (ETDEWEB)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Sargsyan, K.; Swiler, Laura P.
2015-01-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditioned on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that accurate surrogate models can be created for CLM in most cases. The posterior distributions lead to better prediction than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters’ distributions significantly. The structural error model reveals a correlation time-scale which can potentially be used to identify physical processes that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
Bayesian calibration of the Community Land Model using surrogates
Energy Technology Data Exchange (ETDEWEB)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
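The surrogate idea in the two records above can be sketched in one dimension: fit a cheap polynomial to a handful of simulator runs, then evaluate the polynomial inside the MCMC loop. The polynomial degree, training design, and the smooth toy stand-in for CLM are all assumptions.

```python
import numpy as np

def build_surrogate(expensive_model, lo, hi, n_train=20, degree=3):
    """Sketch of a polynomial surrogate for an expensive simulator with a
    single parameter: fit once on a few runs, evaluate cheaply thereafter."""
    x = np.linspace(lo, hi, n_train)
    y = np.array([expensive_model(xi) for xi in x])
    coeffs = np.polyfit(x, y, degree)
    return lambda xi: np.polyval(coeffs, xi)

# Hypothetical stand-in for CLM: a smooth response of latent heat flux to
# one hydrological parameter (the real model is far more complex).
model = lambda p: 50.0 + 30.0 * np.tanh(2.0 * (p - 0.5))
surrogate = build_surrogate(model, 0.0, 1.0)
```

Each posterior evaluation then costs a polynomial evaluation instead of a full simulation, which is what makes the MCMC sampling described above tractable.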
Calibration of hydrological models using flow-duration curves
Directory of Open Access Journals (Sweden)
I. K. Westerberg
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested – based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments, with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method, both in calibration and prediction, in both catchments. An advantage of the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data, and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of
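The limits-of-acceptability idea on an FDC can be illustrated with a short sketch. The synthetic discharge series, the ±20% acceptability bounds, and the ten evaluation points below are hypothetical stand-ins for the uncertain observed FDC and the discharge-uncertainty limits used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily discharge series (stand-in for observed data).
q_obs = rng.lognormal(mean=1.0, sigma=0.8, size=3650)

def fdc(q):
    """Flow-duration curve: discharge sorted descending vs exceedance probability."""
    q_sorted = np.sort(q)[::-1]
    p_exc = (np.arange(1, q.size + 1) - 0.5) / q.size
    return p_exc, q_sorted

p_obs, q_curve = fdc(q_obs)

# Select evaluation points so that each carries an equal share of total flow volume
# (the "volume method" that performed best in the paper).
cum_vol = np.cumsum(q_curve) / q_curve.sum()
n_ep = 10
ep_idx = np.searchsorted(cum_vol, (np.arange(n_ep) + 0.5) / n_ep)
ep_q = q_curve[ep_idx]

# GLUE-style limits of acceptability: +/-20% here, mimicking discharge uncertainty.
lower, upper = 0.8 * ep_q, 1.2 * ep_q

def acceptable(q_sim):
    """A parameter set is behavioural if its FDC lies within all limits."""
    _, qs = fdc(q_sim)
    sim_at_ep = qs[ep_idx]
    return bool(np.all((sim_at_ep >= lower) & (sim_at_ep <= upper)))

# The observations trivially pass their own limits; a strongly biased run fails.
print(acceptable(q_obs))        # True
print(acceptable(q_obs * 1.5))  # False
```

In a real application `q_sim` would come from WASMOD or Dynamic TOPMODEL runs, and the limits would come from an estimate of discharge-data uncertainty rather than a fixed percentage.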
Embodying, calibrating and caring for a local model of obesity
DEFF Research Database (Denmark)
Winther, Jonas; Hillersdal, Line
…and technologies herein lead to the emergence of what we propose to be local models of obesity. Describing the emergence of local models of obesity, we show how a specific model is being cared for, calibrated and embodied by research staff as well as research subjects, and how interdisciplinary obesity research is an ongoing process of configuring, but also extending beyond, already established models of obesity. We argue that an articulation of such practices of local care, embodiment and calibration is crucial for the appreciation, evaluation and transferability of interdisciplinary obesity research. […] highlighted as such a problem. Within research communities, disparate explanatory models of obesity exist (Ulijaszek 2008), and some of these models of obesity are brought together in the Copenhagen-based interdisciplinary research initiative Governing Obesity (GO), with the aim of addressing the causes…
A framework for the calibration of social simulation models
Ciampaglia, Giovanni Luca
2013-01-01
Simulation with agent-based models is increasingly used in the study of complex socio-technical systems and in social simulation in general. This paradigm offers a number of attractive features, namely the possibility of modeling emergent phenomena within large populations. As a consequence, often the quantity in need of calibration may be a distribution over the population whose relation with the parameters of the model is analytically intractable. Nevertheless, we can simulate. In this paper we present a simulation-based framework for the calibration of agent-based models with distributional output based on indirect inference. We illustrate our method step by step on a model of norm emergence in an online community of peer production, using data from three large Wikipedia communities. Model fit and diagnostics are discussed.
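A minimal indirect-inference loop of the kind the abstract describes might look as follows. The toy agent model, the quantile summary statistics, and the grid search are all illustrative assumptions, not the norm-emergence model or the Wikipedia data used in the paper.

```python
import numpy as np

def agent_model(p, n=2000, seed=0):
    """Toy agent-based model: each agent's activity depends on parameter p.
    The output of interest is the *distribution* of activity over the population,
    whose relation to p is treated as analytically intractable."""
    r = np.random.default_rng(seed)
    effort = r.exponential(scale=p, size=n)
    return effort + 0.1 * r.normal(size=n) ** 2

# 'Observed' population distribution generated at an unknown true parameter.
observed = agent_model(p=1.5, seed=42)

def summary(x):
    # Auxiliary statistics for indirect inference: a few quantiles of the distribution.
    return np.quantile(x, [0.1, 0.25, 0.5, 0.75, 0.9])

s_obs = summary(observed)

# Grid-search minimisation of the distance between simulated and observed summaries
# (a gradient-free stand-in for the simulation-based calibration in the paper).
grid = np.linspace(0.5, 3.0, 51)
losses = [np.sum((summary(agent_model(p, seed=7)) - s_obs) ** 2) for p in grid]
p_hat = grid[int(np.argmin(losses))]
print(f"calibrated p = {p_hat:.2f}")
```

The key design choice is that calibration compares *summaries of distributions* rather than point outputs, which is exactly what makes the approach suitable for models with distributional output.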
Chulkov Vitaliy Olegovich; Rakhmonov Emomali Karimovich; Kas'yanov Vitaliy Fedorovich; Gusakova Elena Aleksandrovna
2012-01-01
This article deals with the infographic modeling of hierarchical management systems exposed to innovative conflicts. The authors analyze the facts that serve as conflict drivers in the construction management environment. The reasons for innovative conflicts include changes in hierarchical structures of management systems, adjustment of workers to new management conditions, changes in the ideology, etc. Conflicts under consideration may involve contradictions between requests placed by custom...
Hierarchical hybrid testability modeling and evaluation method based on information fusion
Institute of Scientific and Technical Information of China (English)
Xishan Zhang; Kaoli Huang; Pengcheng Yan; Guangyao Lian
2015-01-01
In order to meet the demand of testability analysis and evaluation for complex equipment under a small sample test in the equipment life cycle, the hierarchical hybrid testability modeling and evaluation method (HHTME), which combines the testability structure model (TSM) with the testability Bayesian networks model (TBNM), is presented. Firstly, the testability network topology of complex equipment is built by using the hierarchical hybrid testability modeling method. Secondly, the prior conditional probability distribution between network nodes is determined through expert experience. Then the Bayesian method is used to update the conditional probability distribution according to history test information, virtual simulation information and similar product information. Finally, the learned hierarchical hybrid testability model (HHTM) is used to estimate the testability of the equipment. Compared with the results of other modeling methods, the relative deviation of the HHTM is only 0.52%, and the evaluation result is the most accurate.
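The core Bayesian step, an expert prior updated with heterogeneous test evidence, can be sketched with a conjugate Beta-Binomial update for a single node. All prior values and counts below are invented for illustration; the paper elicits full conditional probability tables over a network, not a single probability.

```python
# Expert-elicited prior for one node's fault-detection probability: Beta(a, b).
a_prior, b_prior = 8.0, 2.0            # prior mean 0.8 from expert experience

# Heterogeneous evidence: (detected, missed) counts from history tests,
# virtual simulation, and a similar product (hypothetical numbers).
evidence = {"history": (18, 2), "simulation": (45, 5), "similar_product": (9, 1)}

a, b = a_prior, b_prior
for source, (det, miss) in evidence.items():
    a += det                            # conjugate Beta-Binomial update
    b += miss

post_mean = a / (a + b)
print(f"posterior detection probability = {post_mean:.3f}")
```

With a small test sample, the expert prior dominates; as evidence from the three sources accumulates, the posterior mean shifts toward the observed detection rate.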
Technical Note: Calibration and validation of geophysical observation models
Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.
2012-01-01
We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided i
Modeling nonignorable missing data processes in item calibration
Glas, Cees A.W.; Pimentel, Jonald L.
2006-01-01
In this report, it is shown that the problem of nonignorable missing data in the calibration phase for computerized adaptive testing can be handled by introducing an item response theory (IRT) model for the missing data indicator. In the first simulation study, it is shown that treating data with no
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model
Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna
2017-06-01
Evaluation of software quality is an important aspect of controlling and managing software, as such evaluation enables improvements in the software process. Software quality depends significantly on software usability. Many researchers have proposed usability models; each considers a set of usability factors, but none covers all usability aspects. Practical implementation of these models is still missing, as there is no precise definition of usability, and it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also provides a detailed comparison of the proposed model with existing usability models.
von Davier, Matthias; Haberman, Shelby J
2014-04-01
This commentary addresses the modeling and final analytical path taken, as well as the terminology used, in the paper "Hierarchical diagnostic classification models: a family of models for estimating and testing attribute hierarchies" by Templin and Bradshaw (Psychometrika, doi: 10.1007/s11336-013-9362-0, 2013). It raises several issues concerning use of cognitive diagnostic models that either assume attribute hierarchies or assume a certain form of attribute interactions. The issues raised are illustrated with examples, and references are provided for further examination.
Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.
2014-03-01
This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.
Calibration of parallel kinematics machine using generalized distance error model
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
This paper focuses on enhancing the accuracy of a parallel kinematics machine through kinematic calibration. In the calibration process, the two main difficulties are constructing a well-structured identification Jacobian matrix and measuring the end-effector position and orientation. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation using the unit virtual velocity method. A generalized distance error model is presented to avoid measuring the position and orientation directly, which is difficult. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.
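Once an identification Jacobian and a set of measured distance errors are available, the parameter identification itself reduces to a linear least-squares problem. The sketch below is generic: the random matrix stands in for the identification Jacobian built with the unit virtual velocity method, and the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linearised calibration: measured distance errors are (to first order)
# e = J @ dp, where dp holds the geometric parameter errors and J is the
# identification Jacobian.
n_meas, n_par = 40, 6
J = rng.normal(size=(n_meas, n_par))        # stand-in for the identification Jacobian
dp_true = np.array([0.02, -0.01, 0.005, 0.03, -0.015, 0.01])

# Measured distance errors with a small measurement-noise term.
e = J @ dp_true + 1e-4 * rng.normal(size=n_meas)

# Least-squares identification of the parameter errors.
dp_hat, *_ = np.linalg.lstsq(J, e, rcond=None)
print(np.round(dp_hat, 4))
```

The practical work in a real calibration lies in making J well-structured (well-conditioned) and in gathering enough independent distance measurements; the solve itself is this one line.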
Virtual Sensor for Calibration of Thermal Models of Machine Tools
Directory of Open Access Journals (Sweden)
Alexander Dementjev
2014-01-01
…strictly depends on the accuracy of these machines, but they are prone to deformation caused by their own heat. The deformation needs to be compensated in order to assure accurate production, so an adequate model of the high-dimensional thermal deformation process must be created and the parameters of this model must be evaluated. Unfortunately, such parameters are often unknown and cannot be calculated a priori. Parameter identification during real experiments is not an option for these models because of its high engineering and machine-time effort, and the installation of additional sensors to measure these parameters directly is uneconomical. Instead, an effective calibration of thermal models can be achieved by combining real and virtual measurements on a machine tool during its real operation, without installing additional sensors. In this paper, a new approach for thermal model calibration is presented. The expected results are very promising, and the approach can be recommended as an effective solution for this class of problems.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
Use of hierarchical models to analyze European trends in congenital anomaly prevalence.
Cavadino, Alana; Prieto-Merino, David; Addor, Marie-Claude; Arriola, Larraitz; Bianchi, Fabrizio; Draper, Elizabeth; Garne, Ester; Greenlees, Ruth; Haeusler, Martin; Khoshnood, Babak; Kurinczuk, Jenny; McDonnell, Bob; Nelen, Vera; O'Mahony, Mary; Randrianaivo, Hanitra; Rankin, Judith; Rissmann, Anke; Tucker, David; Verellen-Dumoulin, Christine; de Walle, Hermien; Wellesley, Diana; Morris, Joan K
2016-06-01
Surveillance of congenital anomalies is important to identify potential teratogens. Despite known associations between different anomalies, current surveillance methods examine trends within each subgroup separately. We aimed to evaluate whether hierarchical statistical methods that combine information from several subgroups simultaneously would enhance current surveillance methods, using data collected by EUROCAT, a European network of population-based congenital anomaly registries. Ten-year trends (2003 to 2012) in 18 EUROCAT registries over 11 countries were analyzed for the following groups of anomalies: neural tube defects, congenital heart defects, digestive system anomalies, and chromosomal anomalies. Hierarchical Poisson regression models that combined related subgroups together according to EUROCAT's hierarchy of subgroup coding were applied. Results from hierarchical models were compared with those from Poisson models that consider each congenital anomaly separately. Hierarchical models gave similar results to those obtained when considering each anomaly subgroup in a separate analysis. Hierarchical models that included only around three subgroups showed poor convergence and were generally found to be over-parameterized. Larger sets of anomaly subgroups were found to be too heterogeneous to group together in this way. There were no substantial differences between independent analyses of each subgroup and hierarchical models when using the EUROCAT anomaly subgroups. Considering each anomaly separately, therefore, remains an appropriate method for the detection of potential changes in prevalence by surveillance systems. Hierarchical models do, however, remain an interesting alternative method of analysis when considering the risks of specific exposures in relation to the prevalence of congenital anomalies, which could be investigated in other studies. Birth Defects Research (Part A) 106:480-10, 2016. © 2016 Wiley Periodicals, Inc.
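The non-hierarchical baseline the paper compares against, a per-subgroup Poisson trend fit, can be sketched with a small IRLS routine. The registry counts below are simulated with an assumed 3% annual increase; the model form (log link, births as offset) matches standard prevalence-trend analysis, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(4)

def poisson_trend(years, counts, births):
    """Fit log(rate) = b0 + b1 * (year - mean) by IRLS (Poisson GLM, log link,
    log(births) as offset); returns the estimated annual trend exp(b1)."""
    X = np.column_stack([np.ones_like(years, dtype=float), years - years.mean()])
    beta = np.zeros(2)
    offset = np.log(births)
    for _ in range(25):
        mu = np.exp(X @ beta + offset)
        z = X @ beta + (counts - mu) / mu          # working response
        W = mu                                      # IRLS weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return np.exp(beta[1])

years = np.arange(2003, 2013)
births = np.full(10, 50_000)
# Simulated anomaly subgroup with a 3% annual increase in prevalence.
rates = 1e-3 * 1.03 ** (years - 2003)
counts = rng.poisson(rates * births)

print(f"estimated annual trend: {poisson_trend(years, counts, births):.3f}")
```

A hierarchical version would place a shared distribution over the `b1` coefficients of related subgroups, shrinking noisy subgroup trends toward a common value.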
A Bayesian hierarchical diffusion model decomposition of performance in Approach-Avoidance Tasks.
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach-Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest.
Model calibration and validation of an impact test simulation
Energy Technology Data Exchange (ETDEWEB)
Hemez, F. M. (François M.); Wilson, A. C. (Amanda C.); Havrilla, G. N. (George N.)
2001-01-01
This paper illustrates the methodology being developed at Los Alamos National Laboratory for the validation of numerical simulations for engineering structural dynamics. The application involves the transmission of a shock wave through an assembly that consists of a steel cylinder and a layer of elastomeric (hyper-foam) material. The assembly is mounted on an impact table to generate the shock wave. The input acceleration and three output accelerations are measured. The main objective of the experiment is to develop a finite element representation of the system capable of reproducing the test data with acceptable accuracy. Foam layers of various thicknesses and several drop heights are considered during impact testing. Each experiment is replicated several times to estimate the experimental variability. Instead of focusing on the calibration of input parameters for a single configuration, the numerical model is validated for its ability to predict the response of three different configurations (various combinations of foam thickness and drop height). Design of Experiments is implemented to perform parametric and statistical variance studies. Surrogate models are developed to replace the computationally expensive numerical simulation. Variables of the finite element model are separated into calibration variables and control variables. The models are calibrated to provide numerical simulations that correctly reproduce the statistical variation of the test configurations. The calibration step also provides inference for the parameters of a high strain-rate dependent material model of the hyper-foam. After calibration, the validity of the numerical simulation is assessed through its ability to predict the response of a fourth test setup.
Evaluation of multivariate calibration models transferred between spectroscopic instruments
DEFF Research Database (Denmark)
Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas
2016-01-01
In a setting where multiple spectroscopic instruments are used for the same measurements, it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two…
Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees
Energy Technology Data Exchange (ETDEWEB)
Bremer, P; Pascucci, V; Hamann, B
2008-12-08
We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported. In particular, the resulting hierarchy is guaranteed to be of logarithmic height.
Energy Technology Data Exchange (ETDEWEB)
Korn, E L
1978-08-01
This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belong to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.
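The claim that some models are preserved by misclassification has a compact linear-algebra form: if misclassification acts independently on rows and columns, the observed table is P_obs = A P Bᵀ for stochastic matrices A and B, and an independent table P = r cᵀ stays independent because P_obs = (A r)(B c)ᵀ. The sketch below verifies this numerically; the matrices are randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_stochastic(k):
    """Random column-stochastic misclassification matrix: entry [i, j] is the
    probability that true category j is recorded as category i."""
    M = rng.uniform(size=(k, k)) + 3 * np.eye(k)   # mostly-correct classification
    return M / M.sum(axis=0)

I, J = 3, 4
r = rng.dirichlet(np.ones(I))          # row marginal
c = rng.dirichlet(np.ones(J))          # column marginal
P = np.outer(r, c)                     # independent I x J table

A, B = random_stochastic(I), random_stochastic(J)
P_obs = A @ P @ B.T                    # table after independent misclassification

# The misclassified table factors as (A r)(B c)^T, so independence is preserved.
expected = np.outer(A @ r, B @ c)
print(np.allclose(P_obs, expected))    # True
```

Models that are *not* preserved lack such a factorization, which is why tests of those models lose their nominal significance level under classification error.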
Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)
DEFF Research Database (Denmark)
Stahlhut, Carsten; Mørup, Morten; Winther, Ole;
2009-01-01
In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and electrode positions. We first present a hierarchical Bayesian framework for EEG source localization that jointly performs source and forward model reconstruction (SOFOMORE). Secondly, we evaluate the SOFOMORE model by comparison with source reconstruction methods that use fixed forward models. Simulated and real EEG data demonstrate that invoking a stochastic forward model leads to improved source estimates.
3D Modelling with Structured Light GAMMA Calibration
Directory of Open Access Journals (Sweden)
Eser Sert
2014-01-01
Structured light is one of the non-contact measurement methods used for high-resolution, high-sensitivity 3D modeling. In this method, a projector, a camera and a computer are used: the projector projects patterns generated with specific coding strategies onto the object to be modeled, the camera captures these patterns, and by processing the captured images the object is modeled in 3D. The light intensity emitted by the projector is generally not a linear function of the input signal, which causes brightness problems in the projected patterns; thus, the images received from the camera need to be gamma corrected. In this study, a gamma calibration method is proposed to overcome this problem. Test results show that the proposed calibration system improves the accuracy and quality of the 3D modeling.
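A simple power-law gamma calibration can be sketched as follows: estimate the exponent from measured versus input intensities with a log-log fit, then pre-distort the patterns by the inverse exponent. The power-law response model and the noise level are illustrative assumptions, not the calibration procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed projector response model: emitted intensity ~ input^gamma, gamma unknown.
gamma_true = 2.2
inp = np.linspace(0.05, 1.0, 50)                 # normalised input levels
measured = inp ** gamma_true * (1 + 0.01 * rng.normal(size=inp.size))

# Calibrate gamma with a log-log least-squares fit: log(out) = gamma * log(in).
gamma_hat = np.polyfit(np.log(inp), np.log(measured), 1)[0]

def correct(pattern, gamma):
    """Pre-distort a pattern so the projected result is (approximately) linear."""
    return np.clip(pattern, 0.0, 1.0) ** (1.0 / gamma)

pattern = np.linspace(0, 1, 11)
projected = correct(pattern, gamma_hat) ** gamma_true   # what the camera would see
print(f"gamma_hat = {gamma_hat:.2f}")
```

After correction, `projected` tracks `pattern` almost linearly, which is what removes the brightness artifacts from the coded patterns.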
Calibration of the hydrogeological model of the Baltic Artesian Basin
Virbulis, J.; Klints, I.; Timuhins, A.; Sennikovs, J.; Bethers, U.
2012-04-01
Let us consider the calibration issue for the Baltic Artesian Basin (BAB), a complex hydrogeological system in the southeastern Baltic with a surface area close to 0.5 million square kilometers. The model of the geological structure contains 42 layers, including aquifers and aquitards; the age of the sediments varies from the Cambrian up to the Quaternary deposits. A finite element model was developed for the calculation of the steady-state three-dimensional groundwater flow with a free surface. No-flow boundary conditions were applied on the rock bottom and the side boundaries of the BAB, while a simple hydrological model is applied on the surface. The level of lakes, rivers and the sea is fixed as a constant hydraulic head. A constant mean value of 70 mm/year was assumed as the infiltration flux elsewhere and adjusted during the automatic calibration process. Averaged long-term water extraction was applied at the water supply wells. The calibration of the hydrogeological model is one of the most important steps during model development. Knowledge about the parameters of the modeled system is often insufficient, especially for large regional models, and a lack of geometric and hydraulic conductivity data is typical. The quasi-Newton optimization method L-BFGS-B is used for the calibration of the BAB model. The model is calibrated against the available water level measurements in monitoring wells and level measurements in boreholes during their installation. As the available data are not uniformly distributed over the covered area, a weight coefficient is assigned to each borehole in order not to overestimate clusters of boreholes. The year 2000 is chosen as the reference year for the present-time scenario, and data from surrounding years are also taken into account, but with smaller weighting coefficients. The objective function to be minimized by the calibration process is the weighted sum of squared differences between observed and modeled piezometric heads.
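The optimization setup, L-BFGS-B minimizing a weighted sum of squared head residuals, can be sketched with a toy linear forward model. The two parameters, the well layout, and the weights below are invented stand-ins for the BAB model's conductivity and infiltration parameters and its borehole weighting scheme.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Toy forward model: piezometric head at n wells as a linear function of two
# calibration parameters (stand-ins for conductivity scaling and infiltration).
n_wells = 30
A = rng.normal(size=(n_wells, 2))

def heads(params):
    return 40.0 + A @ params

true_params = np.array([2.0, -1.0])
observed = heads(true_params) + 0.3 * rng.normal(size=n_wells)

# Borehole weights: down-weight clustered wells so clusters are not overestimated.
weights = rng.uniform(0.2, 1.0, size=n_wells)

def objective(params):
    r = observed - heads(params)
    return np.sum(weights * r**2)      # weighted sum of squared head residuals

res = minimize(objective, x0=np.zeros(2), method="L-BFGS-B",
               bounds=[(-10.0, 10.0), (-10.0, 10.0)])
print(np.round(res.x, 2))
```

In the real BAB calibration each objective evaluation requires a full finite-element groundwater solve, which is why a quasi-Newton method with few function evaluations is attractive.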
Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.
Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana
2016-01-01
The omnipresent need for optimisation requires constant improvements of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually done by simulating the newly developed BP under various initial conditions and "what-if" scenarios. An effectual business process simulation software (BPSS) tool is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to complex selection criteria that include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools by their technical characteristics, employing DEX and the qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.
Hierarchical ensemble of background models for PTZ-based video surveillance.
Liu, Ning; Wu, Hefeng; Lin, Liang
2015-01-01
In this paper, we study a novel hierarchical background model for intelligent video surveillance with the pan-tilt-zoom (PTZ) camera, and give rise to an integrated system consisting of three key components: background modeling, observed frame registration, and object tracking. First, we build the hierarchical background model by separating the full range of continuous focal lengths of a PTZ camera into several discrete levels and then partitioning the wide scene at each level into many partial fixed scenes. In this way, the wide scenes captured by a PTZ camera through rotation and zoom are represented by a hierarchical collection of partial fixed scenes. A new robust feature is presented for background modeling of each partial scene. Second, we locate the partial scenes corresponding to the observed frame in the hierarchical background model. Frame registration is then achieved by feature descriptor matching via fast approximate nearest neighbor search. Afterwards, foreground objects can be detected using background subtraction. Last, we configure the hierarchical background model into a framework to facilitate existing object tracking algorithms under the PTZ camera. Foreground extraction is used to assist tracking an object of interest. The tracking outputs are fed back to the PTZ controller for adjusting the camera properly so as to maintain the tracked object in the image plane. We apply our system on several challenging scenarios and achieve promising results.
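The background-subtraction step at the heart of such a system can be illustrated for a single partial fixed scene. The running-average model, the synthetic frames, and the threshold below are generic stand-ins; the paper uses its own robust feature for modeling each partial scene of the PTZ hierarchy.

```python
import numpy as np

rng = np.random.default_rng(8)

H, W = 48, 64
background_true = rng.uniform(80, 120, size=(H, W))

# Running-average background model for one partial fixed scene
# (each node of the paper's hierarchy would hold one such model).
bg = np.zeros((H, W))
alpha = 0.1
for _ in range(50):
    frame = background_true + 2 * rng.normal(size=(H, W))   # empty scene + noise
    bg = (1 - alpha) * bg + alpha * frame                    # exponential update

# A new frame with a bright foreground object in one region.
frame = background_true + 2 * rng.normal(size=(H, W))
frame[5:15, 5:15] += 60.0

# Background subtraction: pixels far from the model are foreground.
mask = np.abs(frame - bg) > 20.0
print(f"foreground pixels detected: {mask.sum()}")
```

In the full system, frame registration first selects which partial scene's background model to subtract; the foreground mask then seeds the tracker whose output steers the PTZ controller.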
Calibration of stormwater management model using flood extent data
Han, Kunyeun; Kim, YoungJoo; Kim, Byunhyun; Famiglietti, James S.; Sanders, Brett F.
2014-01-01
The Seogu (western) portion of Daegu, Korea experiences chronic urban flooding and there is a need to increase flood detention and storage to reduce flood impacts. Since the site is densely developed, use of an underground car park as a cistern has been proposed. The stormwater management model (SWMM) is applied to study alternative hydraulic designs and overall performance, and it is shown that by linking SWMM to a two-dimensional flood inundation model, SWMM parameters can be calibrated fro...
DEFF Research Database (Denmark)
Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer
2017-01-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimen…
Non-linear calibration models for near infrared spectroscopy.
Ni, Wangdong; Nørgaard, Lars; Mørup, Morten
2014-02-27
Different calibration techniques are available for spectroscopic applications that show nonlinear behavior. This comprehensive study compares different nonlinear calibration techniques: kernel PLS (KPLS), support vector machines (SVM), least-squares SVM (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods to traditional calibration is considered via ridge regression (RR). The performance of the different methods is demonstrated by practical application to three real-life near-infrared (NIR) data sets. Different aspects of the various approaches are discussed, including computational time, model interpretability, potential over-fitting when using nonlinear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. LS-SVM is also attractive due to its good predictive performance for both linear and nonlinear calibrations.
Calibration and validation of DRAINMOD to model bioretention hydrology
Brown, R. A.; Skaggs, R. W.; Hunt, W. F.
2013-04-01
Previous field studies have shown that the hydrologic performance of bioretention cells varies greatly because of factors such as underlying soil type, physiographic region, drainage configuration, surface storage volume, drainage area to bioretention surface area ratio, and media depth. To more accurately describe bioretention hydrologic response, a long-term hydrologic model that generates a water balance is needed. Some current bioretention models lack the ability to perform long-term simulations and others have never been calibrated from field monitored bioretention cells with underdrains. All peer-reviewed models lack the ability to simultaneously perform both of the following functions: (1) model an internal water storage (IWS) zone drainage configuration and (2) account for soil-water content using the soil-water characteristic curve. DRAINMOD, a widely-accepted agricultural drainage model, was used to simulate the hydrologic response of runoff entering a bioretention cell. The concepts of water movement in bioretention cells are very similar to those of agricultural fields with drainage pipes, so many bioretention design specifications corresponded directly to DRAINMOD inputs. Detailed hydrologic measurements were collected from two bioretention field sites in Nashville and Rocky Mount, North Carolina, to calibrate and test the model. Each field site had two sets of bioretention cells with varying media depths, media types, drainage configurations, underlying soil types, and surface storage volumes. After 12 months, one of these characteristics was altered - surface storage volume at Nashville and IWS zone depth at Rocky Mount. At Nashville, during the second year (post-repair period), the Nash-Sutcliffe coefficients for drainage and exfiltration/evapotranspiration (ET) both exceeded 0.8 during the calibration and validation periods. During the first year (pre-repair period), the Nash-Sutcliffe coefficients for drainage, overflow, and exfiltration
A Hierarchical Linear Model with Factor Analysis Structure at Level 2
Miyazaki, Yasuo; Frank, Kenneth A.
2006-01-01
In this article the authors develop a model that employs a factor analysis structure at Level 2 of a two-level hierarchical linear model (HLM). The model (HLM2F) imposes a structure on a deficient rank Level 2 covariance matrix [tau], and facilitates estimation of a relatively large [tau] matrix. Maximum likelihood estimators are derived via the…
DEFF Research Database (Denmark)
Mantzouni, Irene; Sørensen, Helle; O'Hara, Robert B.;
2010-01-01
and Beverton and Holt stock–recruitment (SR) models were extended by applying hierarchical methods, mixed-effects models, and Bayesian inference to incorporate the influence of these ecosystem factors on model parameters representing cod maximum reproductive rate and carrying capacity. We identified...
Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C
2015-04-01
Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data.
Model calibration criteria for estimating ecological flow characteristics
Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William; Seibert, Jan; Breuer, Lutz; Kraft, Philipp
2016-01-01
Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.
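A combined objective function of the kind found most suitable here can be sketched as a weighted sum of Nash-Sutcliffe efficiencies (NSE) on flows and on log-flows, a common way to balance high- and low-flow performance; the weight and log-offset are illustrative assumptions, not the paper's seven objective functions:

```python
# Sketch of a combined calibration objective: equally weighted NSE on flows
# and on log-flows. Weight w and offset eps are illustrative choices.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(obs, sim, w=0.5, eps=1e-6):
    """Higher is better; log-flows emphasise low-flow (ecologically relevant) fit."""
    return w * nse(obs, sim) + (1 - w) * nse(np.log(obs + eps), np.log(sim + eps))

obs = np.array([1.0, 3.0, 10.0, 4.0, 2.0, 1.5])   # observed daily flows (toy)
sim = np.array([1.2, 2.7, 9.0, 4.5, 2.1, 1.4])    # simulated flows (toy)
print(round(combined_objective(obs, sim), 3))
```

A calibration routine would maximise `combined_objective` over the model parameters; extending the sum with further flow-characteristic terms follows the same pattern.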
Calibration process of highly parameterized semi-distributed hydrological model
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, this is a complex process that is not researched enough. Calibration is the procedure of determining the parameters of a model that are not known well enough. The input and output variables and the mathematical model expressions are known; only some parameters are unknown, and these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller no way to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter-estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by an expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than leaving the procedure to the selected optimisation algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial, and forest areas. This step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group
Robust Real-Time Music Transcription with a Compositional Hierarchical Model
Pesek, Matevž; Leonardis, Aleš; Marolt, Matija
2017-01-01
The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data, transparency, which enables insights into the learned representation, as well as robustness and speed which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers based on statistical co-occurrences as the driving force of the learning process. In the paper, we present the model’s structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model’s performance for the multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks. PMID:28046074
Identifying Spatially Variable Sensitivity of Model Predictions and Calibrations
McKenna, S. A.; Hart, D. B.
2005-12-01
Stochastic inverse modeling provides an ensemble of stochastic property fields, each calibrated to measured steady-state and transient head data. These calibrated fields are used as input for predictions of other processes (e.g., contaminant transport, advective travel time). Use of the entire ensemble of fields transfers spatial uncertainty in hydraulic properties to uncertainty in the predicted performance measures. A sampling-based sensitivity coefficient is proposed to determine the sensitivity of the performance measures to the uncertain values of hydraulic properties at every cell in the model domain. The basis of this sensitivity coefficient is the Spearman rank correlation coefficient. Sampling-based sensitivity coefficients are demonstrated using a recent set of transmissivity (T) fields created through a stochastic inverse calibration process for the Culebra dolomite in the vicinity of the WIPP site in southeastern New Mexico. The stochastic inverse models were created using a unique approach to condition a geologically-based conceptual model of T to measured T values via a multiGaussian residual field. This field is calibrated to both steady-state and transient head data collected over an 11 year period. Maps of these sensitivity coefficients provide a means of identifying the locations in the study area to which both the value of the model calibration objective function and the predicted travel times to a regulatory boundary are most sensitive to the T and head values. These locations can be targeted for deployment of additional long-term monitoring resources. Comparison of areas where the calibration objective function and the travel time have high sensitivity shows that these are not necessarily coincident with regions of high uncertainty. The sampling-based sensitivity coefficients are compared to analytically derived sensitivity coefficients at the 99 pilot point locations. Results of the sensitivity mapping exercise are being used in combination
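The proposed sampling-based coefficient can be illustrated with a toy ensemble: for each cell, compute the Spearman rank correlation between the cell's property value and the predicted performance measure across realizations. The field and "travel time" below are synthetic stand-ins, not the Culebra transmissivity fields:

```python
# Sketch of a sampling-based sensitivity coefficient: Spearman rank correlation
# between an uncertain property at each cell (toy log-T) and a predicted
# performance measure, computed across an ensemble of fields. All data are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_fields, n_cells = 200, 25
logT = rng.normal(size=(n_fields, n_cells))          # ensemble of property fields
# Toy performance measure: "travel time" dominated by cells 0-4, plus noise
travel_time = -logT[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_fields)

sens = np.array([spearmanr(logT[:, j], travel_time)[0]   # rank correlation per cell
                 for j in range(n_cells)])
print("most sensitive cell:", int(np.argmax(np.abs(sens))))
```

Mapping `|sens|` back onto the model grid highlights where additional monitoring would most reduce predictive uncertainty, which is the use the abstract describes.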
Nimon, Kim
2012-01-01
Using state achievement data that are openly accessible, this paper demonstrates the application of hierarchical linear modeling within the context of career technical education research. Three prominent approaches to analyzing clustered data (i.e., modeling aggregated data, modeling disaggregated data, modeling hierarchical data) are discussed…
Vathsangam, Harshvardhan; Emken, B Adar; Schroeder, E Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S
2013-12-01
Walking is a commonly available activity for maintaining a healthy lifestyle. Accurately tracking and measuring the calories expended during walking can improve user feedback and intervention measures. Inertial sensors are a promising measurement tool for this purpose. An important aspect of mapping inertial sensor data to energy expenditure is the question of normalizing across physiological parameters. Common approaches such as weight scaling require validation for each new population. An alternative is a hierarchical approach that models subject-specific parameters at one level and cross-subject parameters, connected by physiological variables, at a higher level. In this paper, we evaluate an inertial sensor-based hierarchical model to measure energy expenditure across a target population. We first determine the optimal movement and physiological feature set to represent the data; periodicity-based features are the more accurate (statistically significant). We then compare the hierarchical model with a subject-specific regression model and with weight-exponent-scaled models. Subject-specific models perform significantly better than the scaled models at all exponent scales, whereas the hierarchical model performed worse than both. However, using an informed prior from the hierarchical model produces errors similar to those of a subject-specific model trained on large amounts of data. Hierarchical modeling is thus a promising technique for generalized energy expenditure prediction across a target population in a clinical setting.
User Demand Aware Grid Scheduling Model with Hierarchical Load Balancing
Directory of Open Access Journals (Sweden)
P. Suresh
2013-01-01
Grid computing is a collection of computational and data resources, providing the means to support both computation-intensive and data-intensive applications. In order to improve the overall performance and efficient utilization of the resources, an efficient load-balanced scheduling algorithm has to be implemented. The scheduling approach also needs to consider user demand to improve user satisfaction. This paper proposes a dynamic hierarchical load balancing approach which considers the load of each resource and performs load balancing. It minimizes the response time of the jobs and improves the utilization of the resources in a grid environment. By considering the user demand of the jobs, the scheduling algorithm also improves user satisfaction. The experimental results show the improvement of the proposed load balancing method.
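A minimal sketch of the two-level idea, assuming a least-loaded policy at both the global (cluster) and local (resource) levels; the classes, loads, and job costs below are illustrative, not the paper's algorithm:

```python
# Sketch of two-level (hierarchical) load-balanced scheduling: a global
# scheduler picks the least-loaded cluster, a local scheduler picks the
# least-loaded resource within it. Loads and job costs are illustrative.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    load: float = 0.0

@dataclass
class Cluster:
    name: str
    resources: list

    @property
    def load(self):
        # cluster load = average load of its resources
        return sum(r.load for r in self.resources) / len(self.resources)

def schedule(clusters, job_cost):
    cluster = min(clusters, key=lambda c: c.load)            # global level
    resource = min(cluster.resources, key=lambda r: r.load)  # local level
    resource.load += job_cost
    return cluster.name, resource.name

clusters = [Cluster("A", [Resource("a1"), Resource("a2")]),
            Cluster("B", [Resource("b1", 2.0), Resource("b2")])]
print(schedule(clusters, 1.0))  # -> ('A', 'a1')
```

A user-demand-aware variant would additionally weight the `min` keys by job deadline or priority before dispatching.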
Two Error Models for Calibrating SCARA Robots based on the MDH Model
Directory of Open Access Journals (Sweden)
Li Xiaolong
2017-01-01
This paper describes the process of using two error models for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots based on the modified Denavit-Hartenberg (MDH) model, with the aim of improving the robot's accuracy. One of the error models is the position error model, which uses robot position errors with respect to an accurate robot base frame built before the measurement commenced. The other is the distance error model, which uses only the robot's moving distance to calculate errors. Because calibration requires the end-effector to be accurately measured, a laser tracker was used to measure the robot position and distance errors. After calibrating the robot, the end-effector locations were measured again, compensating with the error-model parameters obtained from the calibration. The finding is that the robot's accuracy improved greatly after compensation with the calibrated parameters.
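To illustrate the contrast between the two error measures, a toy 2-link planar arm can stand in for the SCARA: position errors compare end-effector locations in a base frame, while distance errors compare only the moving distances between poses. All link lengths and joint angles below are made up, not from the paper:

```python
# Illustrative contrast between position errors (need a base frame) and
# distance errors (frame-independent) for a toy 2-link planar arm.
import numpy as np

def fk(theta1, theta2, l1=0.35, l2=0.25):
    """Planar 2-link forward kinematics: end-effector (x, y)."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])

# "Actual" robot has a small link-length error versus the nominal model
poses = [(0.1, 0.5), (0.8, -0.3), (1.2, 0.9)]
nominal = [fk(*q) for q in poses]
actual = [fk(*q, l1=0.352) for q in poses]       # 2 mm error on link 1

pos_err = [np.linalg.norm(a - n) for a, n in zip(actual, nominal)]
dist_err = [abs(np.linalg.norm(actual[i + 1] - actual[i])
                - np.linalg.norm(nominal[i + 1] - nominal[i]))
            for i in range(len(poses) - 1)]
print(pos_err, dist_err)
```

A calibration based on either measure would adjust the kinematic parameters (here `l1`) to drive the chosen error toward zero.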
Hydrological model calibration for enhancing global flood forecast skill
Hirpa, Feyera A.; Beck, Hylke E.; Salamon, Peter; Thielen-del Pozo, Jutta
2016-04-01
Early warning systems play a key role in flood risk reduction, and their effectiveness is directly linked to streamflow forecast skill. The skill of a streamflow forecast is affected by several factors; among them are (i) model errors due to incomplete representation of physical processes and inaccurate parameterization, (ii) uncertainty in the model initial conditions, and (iii) errors in the meteorological forcing. In macro-scale (continental or global) modeling, it is common practice to use a priori parameter estimates over large river basins or wider regions, resulting in suboptimal streamflow estimates. The aim of this work is to improve the flood forecast skill of the Global Flood Awareness System (GloFAS; www.globalfloods.eu), a grid-based forecasting system that produces flood forecasts up to 30 days ahead, through calibration of the distributed hydrological model parameters. We use a combination of in-situ and satellite-based streamflow data for automatic calibration using a multi-objective genetic algorithm. We will present the calibrated global parameter maps and report the forecast skill improvements achieved. Furthermore, we discuss current challenges and future opportunities with regard to global-scale early flood warning systems.
Design of Experiments, Model Calibration and Data Assimilation
Energy Technology Data Exchange (ETDEWEB)
Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-07-30
This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
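The space-filling designs mentioned above can be sketched with a Latin hypercube sample; the dimension count, run count, and parameter bounds below are illustrative:

```python
# Sketch of a space-filling design for selecting emulator training runs:
# a Latin hypercube sample over a 3-D input space, scaled to toy bounds.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=20)                       # 20 runs in [0, 1)^3
design = qmc.scale(unit, [0.1, 300.0, 1e-4],      # lower bounds per input
                         [0.9, 600.0, 1e-2])      # upper bounds per input

# Latin hypercube property: each 1-D projection places exactly one run
# in each of the 20 equal-width strata
strata = np.sort((unit[:, 0] * 20).astype(int))
print(strata)
```

Each row of `design` is one computer-model run; the emulator (e.g. a kriging model) is then fit to the outputs of these runs.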
Effect of length of the observed dataset on the calibration of a distributed hydrological model
Cui, X.; Sun, W.; Teng, J.; Song, H.; Yao, X.
2015-05-01
Calibration of hydrological models in ungauged basins is currently a major research topic in hydrology. In addition to the traditional method of parameter regionalization, using discontinuous flow observations to calibrate hydrological models has gradually become popular in recent years. In this study, the possibility of using a limited number of river discharge measurements to calibrate a distributed hydrological model, the Soil and Water Assessment Tool (SWAT), was explored. The influence of the quantity of discharge measurements on model calibration in the upper Heihe Basin was analysed. Calibration using only one year of daily discharge measurements was compared with calibration using three years of discharge data. The results showed that the parameter values derived from calibration using one year's data could achieve model performance similar to calibration using three years' data, indicating that limited discharge data can be used to calibrate the SWAT model effectively in poorly gauged basins.
Toman, Blaza; Nelson, Michael A; Lippa, Katrice A
2016-01-01
Chemical purity assessment using quantitative (1)H-nuclear magnetic resonance spectroscopy is a method based on ratio references of mass and signal intensity of the analyte species to that of chemical standards of known purity. As such, it is an example of a calculation using a known measurement equation with multiple inputs. Though multiple samples are often analyzed during purity evaluations in order to assess measurement repeatability, the uncertainty evaluation must also account for contributions from inputs to the measurement equation. Furthermore, there may be other uncertainty components inherent in the experimental design, such as independent implementation of multiple calibration standards. As such, the uncertainty evaluation is not purely bottom up (based on the measurement equation) or top down (based on the experimental design), but inherently contains elements of both. This hybrid form of uncertainty analysis is readily implemented with Bayesian statistical analysis. In this article we describe this type of analysis in detail and illustrate it using data from an evaluation of chemical purity and its uncertainty for a folic acid material.
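The ratio-based measurement equation behind qHNMR purity assessment, with input uncertainties propagated by simple Monte Carlo, can be sketched as follows; all numerical values are illustrative, not the folic acid data:

```python
# Sketch of the qHNMR purity measurement equation with Monte Carlo propagation
# of input uncertainties: P_a = (I_a/I_s)(N_s/N_a)(M_a/M_s)(m_s/m_a)P_s.
# All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Inputs drawn as normal(mean, standard uncertainty)
I_ratio = rng.normal(0.9039, 0.0015, n)   # analyte/standard signal intensity ratio
m_std   = rng.normal(10.012, 0.005, n)    # mg of standard weighed
m_ana   = rng.normal(9.874, 0.005, n)     # mg of analyte weighed
P_std   = rng.normal(0.9995, 0.0002, n)   # mass-fraction purity of standard
M_ana, M_std = 441.40, 204.22             # molar masses, g/mol (illustrative)
N_ana, N_std = 2, 1                       # protons contributing to each signal

purity = I_ratio * (N_std / N_ana) * (M_ana / M_std) * (m_std / m_ana) * P_std
print(f"purity = {purity.mean():.4f} +/- {purity.std():.4f}")
```

The spread of `purity` is a bottom-up uncertainty from the measurement equation; the hybrid analysis described in the abstract would additionally model between-standard and repeatability effects in a Bayesian hierarchy.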
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
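The decomposition idea can be sketched on a toy example: each subsystem's reliability is obtained from its own continuous-time Markov model, and the subsystem reliabilities are then combined at the system level. Here the combination is a simple series product, a much simpler coupling than the aggregate-state passing the thesis develops; all failure rates are illustrative:

```python
# Sketch of hierarchical reliability evaluation: solve each subsystem's
# small CTMC separately, then combine subsystem reliabilities (series system).
import numpy as np
from scipy.linalg import expm

def subsystem_reliability(lam, t):
    """2-component parallel subsystem as a 3-state CTMC:
    states = (both up, one up, failed); 'failed' is absorbing."""
    Q = np.array([[-2 * lam,  2 * lam,  0.0],
                  [     0.0,     -lam,  lam],
                  [     0.0,      0.0,  0.0]])
    p0 = np.array([1.0, 0.0, 0.0])          # start with both components up
    p_t = p0 @ expm(Q * t)                  # transient state probabilities
    return 1.0 - p_t[2]                     # P(not yet absorbed in 'failed')

t = 100.0                                   # mission time (hours)
R_subsystems = [subsystem_reliability(lam, t) for lam in (1e-4, 2e-4, 5e-5)]
R_system = float(np.prod(R_subsystems))     # series combination of subsystems
print(f"system reliability: {R_system:.6f}")
```

Solving three 3-state models and multiplying is far cheaper than building the full 27-state product chain, which is the motivation for the decomposition.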
Methane emission modeling with MCMC calibration for a boreal peatland
Raivonen, Maarit; Smolander, Sampo; Susiluoto, Jouni; Backman, Leif; Li, Xuefei; Markkanen, Tiina; Kleinen, Thomas; Makela, Jarmo; Aalto, Tuula; Rinne, Janne; Brovkin, Victor; Vesala, Timo
2016-04-01
Natural wetlands, particularly peatlands of the boreal latitudes, are a significant source of methane (CH4). At the moment, the emission estimates are highly uncertain. These natural emissions respond to climatic variability, so it is necessary to understand their dynamics in order to be able to predict how they affect the greenhouse gas balance in the future. We have developed a model of CH4 production, oxidation, and transport in boreal peatlands. It simulates production of CH4 as a proportion of anaerobic peat respiration; transport of CH4 and oxygen between the soil and the atmosphere via diffusion in aerenchymatous plants and in peat pores (water- and air-filled); ebullition; and oxidation of CH4 by methanotrophic microbes. Ultimately, we aim to add the model functionality to global climate models such as JSBACH (Reick et al., 2013), the land surface scheme of the MPI Earth System Model. We tested the model with methane fluxes measured using the eddy covariance technique at the Siikaneva site, an oligotrophic boreal fen in southern Finland (61°49' N, 24°11' E), over the years 2005-2011. To give the model estimates regional reliability, we calibrated the model using the Markov chain Monte Carlo (MCMC) technique. Although the simulations and the research are still ongoing, the preliminary results from the MCMC calibration are very promising considering that the model is still at a relatively early stage. We will present the model and its dynamics as well as results from the MCMC calibration and the comparison with the Siikaneva flux data.
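The MCMC calibration step can be sketched with a minimal random-walk Metropolis sampler fitting a single toy parameter to synthetic flux data; the model, prior, and proposal width are illustrative assumptions, not the peatland model itself:

```python
# Minimal random-walk Metropolis sketch of MCMC calibration: tune a single
# parameter (a toy CH4 production fraction) against synthetic "flux" data.
import numpy as np

rng = np.random.default_rng(0)
respiration = rng.uniform(5.0, 15.0, 50)            # toy anaerobic respiration
true_frac, sigma = 0.3, 0.2
obs_flux = true_frac * respiration + rng.normal(0, sigma, 50)

def log_post(frac):
    if not 0.0 < frac < 1.0:                        # uniform prior on (0, 1)
        return -np.inf
    resid = obs_flux - frac * respiration           # Gaussian likelihood
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

chain, frac = [], 0.5                               # start away from the truth
lp = log_post(frac)
for _ in range(5000):
    prop = frac + rng.normal(0, 0.02)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        frac, lp = prop, lp_prop
    chain.append(frac)

posterior = np.array(chain[1000:])                  # discard burn-in
print(f"posterior mean: {posterior.mean():.3f}")
```

The posterior spread quantifies the parameter uncertainty that the regional calibration is after; the real application replaces the one-line model with the full production-transport-oxidation scheme.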
Osei, Frank B.; Osei, F.B.; Duker, Alfred A.; Stein, A.
2011-01-01
This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint
Measuring Service Quality in Higher Education: Development of a Hierarchical Model (HESQUAL)
Teeroovengadum, Viraiyan; Kamalanabhan, T. J.; Seebaluck, Ashley Keshwar
2016-01-01
Purpose: This paper aims to develop and empirically test a hierarchical model for measuring service quality in higher education. Design/methodology/approach: The first phase of the study consisted of qualitative research methods and a comprehensive literature review, which allowed the development of a conceptual model comprising 53 service quality…
Augmenting Visual Analysis in Single-Case Research with Hierarchical Linear Modeling
Davis, Dawn H.; Gagne, Phill; Fredrick, Laura D.; Alberto, Paul A.; Waugh, Rebecca E.; Haardorfer, Regine
2013-01-01
The purpose of this article is to demonstrate how hierarchical linear modeling (HLM) can be used to enhance visual analysis of single-case research (SCR) designs. First, the authors demonstrated the use of growth modeling via HLM to augment visual analysis of a sophisticated single-case study. Data were used from a delayed multiple baseline…
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Missing Data Treatments at the Second Level of Hierarchical Linear Models
St. Clair, Suzanne W.
2011-01-01
The current study evaluated the performance of traditional versus modern missing data treatments (MDTs) in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included (a) the number of Level-2 variables with missing…
Osei, Frank B.; Duker, Alfred A.; Stein, Alfred
2011-01-01
This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint
The Hierarchical Trend Model for property valuation and local price indices
M.K. Francke; G.A. Vos
2002-01-01
This paper presents a hierarchical trend model (HTM) for selling prices of houses, addressing three main problems: the spatial and temporal dependence of selling prices and the dependency of price index changes on housing quality. In this model the general price trend, cluster-level price trends, an
An improved calibration technique for wind tunnel model attitude sensors
Tripp, John S.; Wong, Douglas T.; Finley, Tom D.; Tcheng, Ping
1993-01-01
Aerodynamic wind tunnel tests at NASA Langley Research Center (LaRC) require accurate measurement of model attitude. Inertial accelerometer packages have been the primary sensor used to measure model attitude to an accuracy of +/- 0.01 deg as required for aerodynamic research. The calibration parameters of the accelerometer package are currently obtained from a seven-point tumble test using a simplified empirical approximation. The inaccuracy due to the approximation exceeds the accuracy requirement as the misalignment angle between the package axis and the model body axis increases beyond 1.4 deg. This paper presents the exact solution derived from the coordinate transformation to eliminate inaccuracy caused by the approximation. In addition, a new calibration procedure is developed in which the data taken from the seven-point tumble test is fit to the exact solution by means of a least-squares estimation procedure. Validation tests indicate that the new calibration procedure provides +/- 0.005-deg accuracy over large package misalignments, which is not possible with the current procedure.
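The exact-solution idea in this record can be sketched in a few lines. Expanding sin(θ + φ) makes a single-axis accelerometer model linear in three coefficients, so an ordinary least-squares fit recovers scale, misalignment, and bias with no small-angle approximation. The model, angle grid, and parameter values below are illustrative assumptions, not LaRC's actual seven-point procedure:

```python
import numpy as np

# Hypothetical single-axis accelerometer tumble test: at pitch angle theta the
# sensor reads scale*sin(theta + misalign) + bias.  Expanding the sine makes
# the model linear in (A, B, bias) with A = scale*cos(misalign) and
# B = scale*sin(misalign), so least squares recovers all three parameters
# exactly, without any small-misalignment approximation.
def calibrate(theta, reading):
    X = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    (A, B, bias), *_ = np.linalg.lstsq(X, reading, rcond=None)
    scale = np.hypot(A, B)
    misalign = np.arctan2(B, A)
    return scale, misalign, bias

# Seven-point tumble test (angles in radians) with known true parameters.
theta = np.linspace(0.0, 2 * np.pi, 7, endpoint=False)
true_scale, true_misalign, true_bias = 1.02, np.deg2rad(2.0), 0.05
reading = true_scale * np.sin(theta + true_misalign) + true_bias
scale, misalign, bias = calibrate(theta, reading)
```

Because the expansion is exact, a 2-deg misalignment is recovered as accurately as a 0.1-deg one, which is the point of replacing the empirical approximation.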
Multiobjective Automatic Parameter Calibration of a Hydrological Model
Directory of Open Access Journals (Sweden)
Donghwi Jung
2017-03-01
This study proposes variable balancing approaches for the exploration (diversification) and exploitation (intensification) of the non-dominated sorting genetic algorithm-II (NSGA-II) with simulated binary crossover (SBX) and polynomial mutation (PM) in the multiobjective automatic parameter calibration of a lumped hydrological model, the HYMOD model. Two objectives, minimizing the percent bias and minimizing three peak flow differences, are considered in the calibration of the six parameters of the model. The proposed balancing approaches, which migrate the focus between exploration and exploitation over generations by varying the crossover and mutation distribution indices of SBX and PM, respectively, are compared with traditional static balancing approaches (in which the two indices are fixed during optimization) in a benchmark hydrological calibration problem for the Leaf River (1950 km²) near Collins, Mississippi. Three performance metrics (solution quality, spacing, and convergence) are used to quantify and compare the quality of the Pareto solutions obtained by the two balancing approaches. The variable balancing approaches that migrate the focus of exploration and exploitation differently for SBX and PM outperformed the other methods.
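The mechanism this record describes, migrating from exploration to exploitation by varying the SBX distribution index over generations, can be sketched directly. The linear ramp and all numeric values below are illustrative choices, not the schedules used in the cited study:

```python
import random

# SBX for one real-valued gene: a small distribution index eta spreads
# offspring far from the parents (exploration); a large eta keeps them close
# (exploitation).
def sbx_pair(p1, p2, eta):
    u = random.random()
    beta = (2*u)**(1/(eta+1)) if u <= 0.5 else (1/(2*(1-u)))**(1/(eta+1))
    c1 = 0.5*((1+beta)*p1 + (1-beta)*p2)
    c2 = 0.5*((1-beta)*p1 + (1+beta)*p2)
    return c1, c2

# Illustrative variable-balancing schedule: ramp eta up over the run so early
# generations explore and late generations exploit.
def eta_schedule(gen, max_gen, eta_start=2.0, eta_end=20.0):
    frac = gen / max_gen
    return eta_start + frac*(eta_end - eta_start)

random.seed(1)
mid = 1.5  # midpoint of parents 1.0 and 2.0
spread_early = [abs(sbx_pair(1.0, 2.0, eta_schedule(0, 100))[0] - mid)
                for _ in range(2000)]
spread_late = [abs(sbx_pair(1.0, 2.0, eta_schedule(100, 100))[0] - mid)
               for _ in range(2000)]
# On average, offspring land closer to the parent midpoint late in the run.
```

The same ramp can be applied to the PM distribution index; the study's point is that moving these indices independently for SBX and PM outperforms keeping them fixed.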
Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain
Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises
2015-01-01
Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156
Terhorst, Lauren; Beck, Kelly Battle; McKeon, Ashlee B; Graham, Kristin M; Ye, Feifei; Shiffman, Saul
2017-08-01
Ecological momentary assessment (EMA) methods collect real-time data in real-world environments, allowing physical medicine and rehabilitation researchers to examine objective outcome data while reducing bias from retrospective recall. The statistical analysis of EMA data is directly related to the research question and the temporal design of the study. Hierarchical linear modeling, which accounts for multiple observations from the same participant, is a particularly useful approach to analyzing EMA data. The objective of this paper was to introduce the process of conducting hierarchical linear modeling analyses with EMA data. This is accomplished using exemplars from recent physical medicine and rehabilitation literature.
Ogle, Kiona; Ryan, Edmund; Dijkstra, Feike A.; Pendall, Elise
2016-12-01
Nonsteady state chambers are often employed to measure soil CO2 fluxes. CO2 concentrations (C) in the headspace are sampled at different times (t), and fluxes (f) are calculated from regressions of C versus t based on a limited number of observations. Variability in the data can lead to poor fits and unreliable f estimates; groups with too few observations or poor fits are often discarded, resulting in "missing" f values. We solve these problems by fitting linear (steady state) and nonlinear (nonsteady state, diffusion based) models of C versus t, within a hierarchical Bayesian framework. Data are from the Prairie Heating and CO2 Enrichment study that manipulated atmospheric CO2, temperature, soil moisture, and vegetation. CO2 was collected from static chambers biweekly during five growing seasons, resulting in >12,000 samples and >3100 groups and associated fluxes. We compare f estimates based on nonhierarchical and hierarchical Bayesian (B versus HB) versions of the linear and diffusion-based (L versus D) models, resulting in four different models (BL, BD, HBL, and HBD). Three models fit the data exceptionally well (R2 ≥ 0.98), but the BD model was inferior (R2 = 0.87). The nonhierarchical models (BL and BD) produced highly uncertain f estimates (wide 95% credible intervals), whereas the hierarchical models (HBL and HBD) produced very precise estimates. Of the hierarchical versions, the linear model (HBL) underestimated f by 33% relative to the nonsteady state model (HBD). The hierarchical models offer improvements upon traditional nonhierarchical approaches to estimating f, and we provide example code for the models.
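The benefit the record attributes to the hierarchical linear model (precise per-group flux estimates despite few observations per chamber) comes from partial pooling. The sketch below is a minimal empirical-Bayes stand-in for that behavior, not the authors' full HBL model, and all data are synthetic:

```python
import numpy as np

# Each "chamber group" has only 4 CO2 samples, so per-group OLS slopes (the
# fluxes f) are noisy.  Shrinking each slope toward the across-group mean in
# proportion to its sampling variance is the essence of partial pooling.
rng = np.random.default_rng(0)
t = np.array([0., 15., 30., 45.])            # sampling times (min)
true_flux = rng.normal(2.0, 0.3, size=50)    # per-group true slopes
slopes, slope_vars = [], []
for f in true_flux:
    C = 400 + f*t + rng.normal(0, 8.0, size=t.size)  # headspace CO2 (ppm)
    X = np.column_stack([t, np.ones_like(t)])
    beta, res, *_ = np.linalg.lstsq(X, C, rcond=None)
    sigma2 = res[0] / (t.size - 2)
    slopes.append(beta[0])
    slope_vars.append(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
slopes, slope_vars = np.array(slopes), np.array(slope_vars)

# Empirical-Bayes shrinkage: weight each group slope against the grand mean.
mu = slopes.mean()
tau2 = max(slopes.var() - slope_vars.mean(), 1e-6)  # between-group variance
w = tau2 / (tau2 + slope_vars)
shrunk = w*slopes + (1 - w)*mu
```

Groups with noisier fits get pulled harder toward the grand mean, which is why the hierarchical versions in the study produce much narrower credible intervals than the nonhierarchical ones.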
Hosoda, Kazufumi; Tsuda, Soichiro; Kadowaki, Kohmei; Nakamura, Yutaka; Nakano, Tadashi; Ishii, Kojiro
2016-02-01
Understanding ecosystem dynamics is crucial as contemporary human societies face ecosystem degradation. One of the challenges that needs to be recognized is the complex hierarchical dynamics. Conventional dynamic models in ecology often represent only the population level and have yet to include the dynamics of the sub-organism level, which makes an ecosystem a complex adaptive system that shows characteristic behaviors such as resilience and regime shifts. The neglect of the sub-organism level in conventional dynamic models is likely because integrating multiple hierarchical levels makes the models unnecessarily complex unless supporting experimental data are present. Now that large amounts of molecular and ecological data are increasingly accessible in microbial experimental ecosystems, it is worthwhile to tackle the questions of their complex hierarchical dynamics. Here, we propose an approach that combines microbial experimental ecosystems and a hierarchical dynamic model named the population-reaction model. We present a simple microbial experimental ecosystem as an example and show how the system can be analyzed by a population-reaction model. We also show that population-reaction models can be applied to various ecological concepts, such as predator-prey interactions, climate change, evolution, and stability of diversity. Our approach will reveal a path to the general understanding of various ecosystems and organisms. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
A Privacy Data-Oriented Hierarchical MapReduce Programming Model
Directory of Open Access Journals (Sweden)
Haiwen Han
2013-08-01
To realize privacy data protection efficiently in hybrid cloud services, a multi-cluster MapReduce programming model based on a hierarchical control architecture (the Hierarchical MapReduce Model, HMR) is presented. Under this architecture, the control center in the private cloud isolates and places data among the private cloud and public clouds according to their privacy characteristics. Then, to perform the corresponding distributed parallel computation correctly under the multi-cluster mode, which differs from the conventional single-cluster mode, a three-stage Map-Reduce-GlobalReduce scheduling process is designed. By confining computation on privacy data to the private cloud while outsourcing as much computation on non-privacy data as possible to public clouds, HMR achieves both security and low cost.
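The three-stage flow the record describes can be shown with a toy word count: each cluster runs an ordinary map and a local reduce on its own partition, and only the local aggregates travel to the control center for the global reduce. Names and data below are illustrative, not the HMR implementation:

```python
from collections import Counter

def map_phase(records):
    """Map: emit (word, 1) pairs from each record."""
    for rec in records:
        for word in rec.split():
            yield word, 1

def local_reduce(mapped):
    """Reduce within one cluster: aggregate counts locally."""
    agg = Counter()
    for key, val in mapped:
        agg[key] += val
    return agg

def global_reduce(local_results):
    """GlobalReduce at the private-cloud control center: merge aggregates."""
    total = Counter()
    for agg in local_results:
        total += agg
    return total

# Raw sensitive records never leave the private cluster; only aggregates do.
private_cluster = ["secret payroll data", "payroll report"]
public_cluster = ["public report", "public data feed"]
locals_ = [local_reduce(map_phase(private_cluster)),
           local_reduce(map_phase(public_cluster))]
counts = global_reduce(locals_)
```

The privacy property rests on the fact that only the outputs of `local_reduce`, not the raw records, cross the private/public boundary.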
Fuzzy hierarchical model for risk assessment principles, concepts, and practical applications
Chan, Hing Kai
2013-01-01
Risk management is often complicated by situational uncertainties and the subjective preferences of decision makers. Fuzzy Hierarchical Model for Risk Assessment introduces a fuzzy-based hierarchical approach to solve risk management problems considering both qualitative and quantitative criteria to tackle imprecise information. This approach is illustrated through number of case studies using examples from the food, fashion and electronics sectors to cover a range of applications including supply chain management, green product design and green initiatives. These practical examples explore how this method can be adapted and fine tuned to fit other industries as well. Supported by an extensive literature review, Fuzzy Hierarchical Model for Risk Assessment comprehensively introduces a new method for project managers across all industries as well as researchers in risk management.
Sensor Network Data Fault Detection using Hierarchical Bayesian Space-Time Modeling
Ni, Kevin; Pottie, G J
2009-01-01
We present a new application of hierarchical Bayesian space-time (HBST) modeling: data fault detection in sensor networks primarily used in environmental monitoring situations. To show the effectiveness of HBST modeling, we develop a rudimentary tagging system to mark data that does not fit with given models. Using this, we compare HBST modeling against first order linear autoregressive (AR) modeling, which is a commonly used alternative due to its simplicity. We show that while HBST is mo...
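The first-order autoregressive baseline this record compares HBST against is straightforward to sketch: fit x[t] on x[t-1] by least squares and tag samples whose one-step-ahead residuals are outliers. The signal model, threshold k = 4, and injected spike below are illustrative assumptions, not the paper's sensor data:

```python
import numpy as np

def ar1_fault_tags(x, k=4.0):
    """Fit x[t] ~ a*x[t-1] + c, then flag residuals beyond k std devs."""
    X = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    (a, c), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    resid = x[1:] - (a*x[:-1] + c)
    sigma = resid.std()
    tags = np.zeros(len(x), dtype=bool)
    tags[1:] = np.abs(resid) > k*sigma
    return tags

rng = np.random.default_rng(3)
x = np.empty(200)
x[0] = 20.0
for t in range(1, 200):              # slowly varying, temperature-like signal
    x[t] = 0.95*x[t-1] + 1.0 + rng.normal(0, 0.1)
x[120] += 15.0                       # injected sensor fault (spike)
tags = ar1_fault_tags(x)
```

The spike is flagged both where it occurs and at the following sample (whose prediction is based on the faulty value), which illustrates a known weakness of the AR baseline relative to a space-time model that can borrow information from neighboring sensors.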
DEFF Research Database (Denmark)
Øjelund, Henrik; Sadegh, Payman
2000-01-01
Local function approximations concern fitting low-order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation. In this work, constraints are introduced to ensure the conformity of the estimates to a given global structure. Hierarchical models are then utilized as a tool to accommodate global model uncertainties via parametric variabilities within the structure. The global parameters and their associated uncertainties are estimated simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time-variant dynamic system under a prior linear time-invariant structure, where local regression fails as a result of high dimensionality.
Calibration of a numerical ionospheric model with EISCAT observations
Directory of Open Access Journals (Sweden)
P.-L. Blelly
A set of EISCAT UHF and VHF observations is used for calibrating a coupled fluid-kinetic model of the ionosphere. The data gathered in the period 1200-2400 UT on 24 March 1995 had various intervals of interest for such a calibration. The magnetospheric activity was very low during the afternoon, allowing for a proper examination of a case of quiet ionospheric conditions. The radars entered the auroral oval just after 1900 UT: a series of dynamic events, probably associated with rapidly moving auroral arcs, was observed until after 2200 UT. No attempt was made to model the dynamical behaviour during the 1900-2200 UT period. In contrast, the period 2200-2400 UT was characterised by quite steady precipitation; this latter period was therefore chosen for calibrating the model during precipitation events. The adjustment of the model on the four primary parameters observed by the radars (namely the electron concentration and temperature and the ion temperature and velocity) needed external inputs (solar fluxes and magnetic activity index) and adjustments of a neutral atmospheric model in order to reach good agreement. It is shown that for the quiet ionosphere, only slight adjustments of the neutral atmosphere models are needed. In contrast, matching the observations during the precipitation event requires strong departures from the model, both for atomic oxygen and hydrogen. However, it is argued that this could well be the result of inadequately representing the vibrational states of N2 during precipitation events, and that these factors are to be considered only as ad hoc corrections.
Directory of Open Access Journals (Sweden)
Chulkov Vitaliy Olegovich
2012-12-01
This article deals with the infographic modeling of hierarchical management systems exposed to innovative conflicts. The authors analyze the factors that serve as conflict drivers in the construction management environment. The reasons for innovative conflicts include changes in the hierarchical structures of management systems, adjustment of workers to new management conditions, changes in ideology, etc. The conflicts under consideration may involve contradictions between customer requests and the legislation, risks that originate from such contradictions, conflicts arising from failures to comply with accepted standards of conduct, etc. One of the main objectives of the theory of hierarchical structures is to develop a model capable of projecting potential innovative conflicts. The models described in the paper reflect dynamic changes in the patterns of external impacts within the conflict area. The simplest model element is a monad, or an indivisible set of characteristics of participants at the pre-set level; interaction between two monads forms a diad. Modeling of situations that involve different numbers of monads, diads, resources and impacts can improve the methods used to control and manage hierarchical structures in the construction industry. However, in the absence of mathematical models employed to simulate conflict-related events, processes and situations, any research into, projection of, and management of interpersonal and group-to-group conflicts are to be performed in the legal environment.
HIERARCHICAL METHODOLOGY FOR MODELING HYDROGEN STORAGE SYSTEMS PART II: DETAILED MODELS
Energy Technology Data Exchange (ETDEWEB)
Hardy, B; Donald L. Anton, D
2008-12-22
There is significant interest in hydrogen storage systems that employ a media which either adsorbs, absorbs or reacts with hydrogen in a nearly reversible manner. In any media based storage system the rate of hydrogen uptake and the system capacity is governed by a number of complex, coupled physical processes. To design and evaluate such storage systems, a comprehensive methodology was developed, consisting of a hierarchical sequence of models that range from scoping calculations to numerical models that couple reaction kinetics with heat and mass transfer for both the hydrogen charging and discharging phases. The scoping models were presented in Part I [1] of this two part series of papers. This paper describes a detailed numerical model that integrates the phenomena occurring when hydrogen is charged and discharged. A specific application of the methodology is made to a system using NaAlH{sub 4} as the storage media.
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System levelling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least-squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models in the adjustment, which in turn calls for improving the stochastic models of the measurement noise. The determination of the stochastic model of the observables in the combined adjustment of heterogeneous height types is therefore the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least-squares adjustment of ellipsoidal, orthometric, and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Secondly, two different statistical models are presented to illustrate the theory. The first directly uses the errors-in-variables as a priori covariance matrices, and the second analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating the geoid error model.
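The core idea of variance component estimation can be shown in miniature. The sketch below is a greatly simplified fixed-point iteration in the spirit of the method, not the paper's MINQUE machinery: two groups of observations of the same unknown height carry different, unknown noise levels, and we alternate between a weighted estimate of the height and re-estimating each group's variance factor from its residuals:

```python
import numpy as np

# Two heterogeneous height types observing a common value (synthetic data):
rng = np.random.default_rng(7)
g1 = 10.0 + rng.normal(0, 0.5, 400)   # e.g. GPS-derived heights (noisier)
g2 = 10.0 + rng.normal(0, 0.1, 400)   # e.g. levelled heights (precise)

v1 = v2 = 1.0                          # initial variance factors
for _ in range(50):
    w1, w2 = 1/v1, 1/v2                # weights from current variance factors
    mu = (w1*g1.sum() + w2*g2.sum()) / (w1*len(g1) + w2*len(g2))
    v1 = np.mean((g1 - mu)**2)         # re-estimate group variance factors
    v2 = np.mean((g2 - mu)**2)
```

The iteration converges to variance factors near the true group variances (0.25 and 0.01), so the precise height type dominates the combined estimate, which is exactly the calibration effect the paper seeks for heterogeneous height data.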
Directory of Open Access Journals (Sweden)
Brodjol Sutijo Supri Ulama
2012-01-01
Problem statement: Household expenditure analysis is in high demand by governments for formulating policy. Since household data are viewed as a hierarchical structure, with households nested in their regional residence, which varies across regions, a contextual welfare analysis is needed. This study proposes to develop a hierarchical model for estimating household expenditure, in an attempt to measure the effect of regional diversity by taking into account district characteristics and household attributes using a Bayesian approach. Approach: Because the variation of the household expenditure data is captured by the three-parameter Log-Normal (LN3) distribution, the model was developed based on the LN3 distribution. The data used in this study were household expenditure data from Central Java, Indonesia. Since the data were unbalanced, and hierarchical models using a classical approach work well only for balanced data, the estimation was done using a Bayesian method with MCMC and Gibbs sampling. Results: The hierarchical Bayesian model based on the LN3 distribution could be implemented to explain the variation of household expenditure using district characteristics and household attributes. Conclusion: The model shows that district characteristics, which include the demographic and economic conditions of districts and the availability of public facilities strongly associated with the dimensions of the human development index (economic, education, and health), do affect household expenditure through household attributes.
Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study.
Rijsdijk, Frühling V; Vernon, P A; Boomsma, Dorret I
2002-05-01
Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard Progressive test score, collected in 194 18-year-old Dutch twin pairs. We investigated whether first-order group factors possess genetic and environmental variance independent of the higher-order general factor and whether the hierarchical structure is significant for all sources of variance. A hierarchical model with the 3 Cohen group-factors (verbal comprehension, perceptual organisation and freedom-from-distractibility) and a higher-order g factor showed the best fit to the phenotypic data and to additive genetic influences (A), whereas the unique environmental source of variance (E) could be modeled by a single general factor and specifics. There was no evidence for common environmental influences. The covariation among the WAIS group factors and the covariation between the group factors and the Raven is predominantly influenced by a second-order genetic factor and strongly support the notion of a biological basis of g.
Calibrating the Abaqus Crushable Foam Material Model using UNM Data
Energy Technology Data Exchange (ETDEWEB)
Schembri, Philip E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lewis, Matthew W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-02-27
Triaxial test data from the University of New Mexico and uniaxial test data from W-14 are used to calibrate the Abaqus crushable foam material model to represent the syntactic foam, comprised of an APO-BMI matrix and carbon microballoons, used in the W76. The material model is an elasto-plasticity model in which the yield strength depends on pressure. Both the elastic properties and the yield stress are estimated by fitting a line to the elastic region of each test response. The model parameters are fit to the data (in a non-rigorous way) to provide both a conservative and a non-conservative material model. The model is verified to perform as intended by comparing the values of pressure and shear stress at yield, as well as the shear and volumetric stress-strain response, to the test data.
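The basic calibration step described here (fit a line to the elastic region, then locate yield as the departure from that line) can be sketched as follows. The synthetic stress-strain record and all numbers below are illustrative, not the UNM or W-14 data:

```python
import numpy as np

# Synthetic uniaxial response: linear-elastic ramp followed by a crush plateau.
strain = np.linspace(0.0, 0.10, 101)
E_true, plateau = 50.0, 2.0                       # modulus and plateau (MPa)
stress = np.minimum(E_true*strain, plateau)

# Fit a line to the assumed elastic region to estimate the modulus.
elastic = strain <= 0.02
E_fit = np.polyfit(strain[elastic], stress[elastic], 1)[0]

# Yield stress: first point departing from the elastic line by a tolerance.
departure = stress - E_fit*strain
i_yield = np.argmax(departure < -0.06)
sigma_y = stress[i_yield]
```

The departure tolerance (0.06 MPa here) is a judgment call on real, noisy data, which is consistent with the report's caveat that the fit is performed in a non-rigorous way.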
A Hierarchical Bayesian Model to Predict Self-Thinning Line for Chinese Fir in Southern China.
Directory of Open Access Journals (Sweden)
Xiongqing Zhang
Self-thinning is a dynamic equilibrium between forest growth and mortality at full site occupancy. Parameters of self-thinning lines are often confounded by differences across various stand and site conditions. To overcome the problem of hierarchical and repeated-measures data, we used a hierarchical Bayesian method to estimate the self-thinning line. The results showed that the self-thinning line for Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantations was not sensitive to the initial planting density. The uncertainty of the model predictions was mostly due to within-subject variability. The simulation precision of the hierarchical Bayesian method was better than that of the stochastic frontier function (SFF). The hierarchical Bayesian method provided a reasonable explanation of the impact of other variables (site quality, soil type, aspect, etc.) on the self-thinning line, and gave us the posterior distributions of the parameters of the self-thinning line. Research on the self-thinning relationship can benefit from the use of the hierarchical Bayesian method.
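For contrast with the hierarchical Bayesian fit, the classical non-hierarchical baseline for a self-thinning line is a pooled log-linear regression, ln(N) = a - b·ln(D). The sketch below shows only that pooled fit on synthetic data; the paper's point is that a hierarchical model handles plot-level variation that this fit ignores:

```python
import numpy as np

# Synthetic stand data: stem density N versus quadratic mean diameter D,
# generated from a known self-thinning line with small noise.
rng = np.random.default_rng(11)
logD = rng.uniform(np.log(5), np.log(40), 120)
a_true, b_true = 12.0, 1.6
logN = a_true - b_true*logD + rng.normal(0, 0.05, 120)

# Pooled OLS fit of ln(N) = a - b*ln(D), ignoring plot structure.
X = np.column_stack([np.ones_like(logD), -logD])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, logN, rcond=None)
```

A hierarchical version would instead give each plot its own (a, b) drawn from a common distribution, which is what lets site quality, soil type, and aspect enter the model as described above.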
Global cross-calibration of Landsat spectral mixture models
Sousa, Daniel
2016-01-01
Data continuity for the Landsat program relies on accurate cross-calibration among sensors. The Landsat 8 OLI has been shown to exhibit superior performance to the sensors on Landsats 4-7 with respect to radiometric calibration, signal to noise, and geolocation. However, improvements to the positioning of the spectral response functions on the OLI have resulted in known biases for commonly used spectral indices because the new band responses integrate absorption features differently from previous Landsat sensors. The objective of this analysis is to quantify the impact of these changes on linear spectral mixture models that use imagery collected by different Landsat sensors. The 2013 underflight of Landsat 7 and 8 provides an opportunity to cross calibrate the spectral mixing spaces of the ETM+ and OLI sensors using near-simultaneous acquisitions from a wide variety of land cover types worldwide. We use 80,910,343 pairs of OLI and ETM+ spectra to characterize the OLI spectral mixing space and perform a cross-...
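The linear spectral mixture model underlying this cross-calibration can be shown in miniature: each pixel spectrum is modeled as a fraction-weighted sum of endmember spectra, and fractions are recovered by least squares with a sum-to-one constraint appended as an extra pseudo-observation. The 6-band endmember reflectances below are made up for illustration, not actual OLI or ETM+ values:

```python
import numpy as np

# Columns are endmember spectra: vegetation, substrate, dark/shadow.
E = np.array([[0.05, 0.08, 0.06, 0.45, 0.30, 0.20],
              [0.15, 0.18, 0.22, 0.28, 0.35, 0.40],
              [0.02, 0.02, 0.01, 0.01, 0.01, 0.01]]).T

def unmix(pixel, E, w=100.0):
    """Least-squares endmember fractions with a sum-to-one pseudo-row."""
    A = np.vstack([E, w*np.ones(E.shape[1])])
    b = np.append(pixel, w)          # heavy weight w enforces sum(frac) ~ 1
    frac, *_ = np.linalg.lstsq(A, b, rcond=None)
    return frac

truth = np.array([0.6, 0.3, 0.1])
pixel = E @ truth                    # noiseless synthetic mixed pixel
frac = unmix(pixel, E)
```

Cross-calibrating two sensors then amounts to checking whether the same scene yields consistent fractions when unmixed with each sensor's band responses, which is what the near-simultaneous underflight acquisitions make possible.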
Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents
Energy Technology Data Exchange (ETDEWEB)
Sanyal, Jibonananda [ORNL]; New, Joshua Ryan [ORNL]; Edwards, Richard [ORNL]; Parker, Lynne Edwards [ORNL]
2014-01-01
Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.
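The surrogate-based calibration pattern this record describes can be illustrated in miniature: presample an expensive simulator, fit a cheap model ("agent") to the samples, then calibrate by searching the cheap model instead of the simulator. The quadratic stand-in simulator and surrogate form below are illustrative assumptions, not EnergyPlus or Autotune:

```python
import numpy as np

def expensive_simulator(p):
    """Stand-in for a simulation run that would normally take minutes."""
    return 3.0 + 2.0*p + 0.5*p*p

# "Supercomputer" phase: a parametric sweep of simulator runs.
train_p = np.linspace(0.0, 4.0, 9)
train_y = np.array([expensive_simulator(p) for p in train_p])

# Train the cheap surrogate agent (here, just a quadratic fit).
coeffs = np.polyfit(train_p, train_y, 2)

# Calibration phase: match an observed value using only the surrogate.
measured = 11.0                       # observed building energy use
grid = np.linspace(0.0, 4.0, 4001)
p_cal = grid[np.argmin((np.polyval(coeffs, grid) - measured)**2)]
```

Every calibration query hits the fitted polynomial rather than the simulator, which is the "fraction of the time" payoff the abstract refers to; the real system replaces the quadratic with machine-learned agents over thousands of parameters.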
A Linear Viscoelastic Model Calibration of Sylgard 184.
Energy Technology Data Exchange (ETDEWEB)
Long, Kevin Nicholas; Brown, Judith Alice
2017-04-01
We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer for use both in Sierra / Solid Mechanics via the Universal Polymer Model as well as in Sierra / Structural Dynamics (Salinas) for use as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40 and 20% respectively are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.
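A linear viscoelastic material model of the kind calibrated here is typically represented by a generalized-Maxwell (Prony series) relaxation modulus, G(t) = G_inf + Σ g_i·exp(-t/τ_i). The coefficients below are illustrative placeholders, not the calibrated Sylgard 184 values:

```python
import numpy as np

def relaxation_modulus(t, g_inf, terms):
    """Prony series: equilibrium modulus plus decaying Maxwell branches."""
    t = np.asarray(t, dtype=float)
    return g_inf + sum(g*np.exp(-t/tau) for g, tau in terms)

g_inf = 0.4                                     # long-time modulus (MPa)
terms = [(0.5, 1e-3), (0.3, 1e-1), (0.2, 10.0)]  # (g_i, tau_i in seconds)

G0 = relaxation_modulus(0.0, g_inf, terms)      # instantaneous modulus
G_late = relaxation_modulus(1e4, g_inf, terms)  # fully relaxed modulus
```

Calibration against a frequency-domain master curve, as described above, amounts to choosing the (g_i, τ_i) pairs so the model's storage and loss moduli match the oscillatory-shear data across the reduced frequency range.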
Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling
Denson, Nida; Seltzer, Michael H.
2011-01-01
The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
An accessible method for implementing hierarchical models with spatio-temporal abundance data
Ross, Beth E.; Hooten, Melvin B.; Koons, David N.
2012-01-01
A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many assumptions must nevertheless be overcome to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the necessary quantitative background needed to implement them, or because of the computational demands of using Markov Chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.
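One piece of the model described here, accounting for excess zeros in count data, can be isolated in a small sketch. INLA fits the full spatio-temporal hierarchical model; the code below only shows a zero-inflated Poisson likelihood maximized on a grid, with synthetic data and illustrative parameter values:

```python
import numpy as np
from math import lgamma

# Synthetic counts: Poisson(4) observations, with structural zeros injected
# at rate 0.3 -- exactly a zero-inflated Poisson (ZIP) process.
rng = np.random.default_rng(5)
pi_true, lam_true, n = 0.3, 4.0, 2000
counts = rng.poisson(lam_true, n)
counts[rng.random(n) < pi_true] = 0

logfact = np.array([lgamma(k + 1) for k in counts])  # log(y!) per observation

def zip_loglik(pi, lam, y, logfact):
    """ZIP log-likelihood: zeros come from inflation or the Poisson itself."""
    p0 = pi + (1 - pi)*np.exp(-lam)
    pois = np.log(1 - pi) + y*np.log(lam) - lam - logfact
    return np.where(y == 0, np.log(p0), pois).sum()

# Crude grid-search MLE for (pi, lambda).
pis = np.linspace(0.05, 0.6, 56)
lams = np.linspace(2.0, 6.0, 81)
ll = np.array([[zip_loglik(p, l, counts, logfact) for l in lams] for p in pis])
i, j = np.unravel_index(ll.argmax(), ll.shape)
pi_hat, lam_hat = pis[i], lams[j]
```

In the full hierarchical model, π and λ would additionally carry spatially and temporally structured random effects, which is the part INLA approximates without MCMC.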
The Hierarchical Factor Model of ADHD: Invariant across Age and National Groupings?
Toplak, Maggie E.; Sorge, Geoff B.; Flora, David B.; Chen, Wai; Banaschewski, Tobias; Buitelaar, Jan; Ebstein, Richard; Eisenberg, Jacques; Franke, Barbara; Gill, Michael; Miranda, Ana; Oades, Robert D.; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph; Sonuga-Barke, Edmund; Steinhausen, Hans-Christoph; Thompson, Margaret; Tannock, Rosemary; Asherson, Philip; Faraone, Stephen V.
2012-01-01
Objective: To examine the factor structure of attention-deficit/hyperactivity disorder (ADHD) in a clinical sample of 1,373 children and adolescents with ADHD and their 1,772 unselected siblings recruited from different countries across a large age range. Hierarchical and correlated factor analytic models were compared separately in the ADHD and…
Raykov, Tenko
2011-01-01
Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…
Putwain, Dave; Deveney, Carolyn
2009-01-01
The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…
2010-01-01
can also refer to hierarchical parameterization transcending any scale, such as mesoscopic to continuum levels. Such a multiscale modeling paradigm ... particularly suited for systems defined by long-chain polymers with relatively short persistence lengths, or systems that are entropically driven ... mechanics. Thus, we introduce a universal framework through a finer-trains-coarser multiscale paradigm, which effectively defines coarse-grain
Michou, Aikaterini; Vansteenkiste, Maarten; Mouratidis, Athanasios; Lens, Willy
2014-01-01
Background: The hierarchical model of achievement motivation presumes that achievement goals channel the achievement motives of need for achievement and fear of failure towards motivational outcomes. Yet, less is known about whether autonomous and controlling reasons underlying the pursuit of achievement goals can serve as additional pathways between…
Lam, Terence Yuk Ping; Lau, Kwok Chi
2014-01-01
This study uses hierarchical linear modeling to examine the influence of a range of factors on the science performances of Hong Kong students in PISA 2006. Hong Kong has been consistently ranked highly in international science assessments, such as Programme for International Student Assessment and Trends in International Mathematics and Science…
Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling
Denson, Nida; Seltzer, Michael H.
2011-01-01
The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
Rocconi, Louis M.
2013-01-01
This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…
Rademaker, A.R.; Minnen, A. van; Ebberink, F.; Zuiden, M. van; Geuze, E.
2012-01-01
Background: As of yet, no collective agreement has been reached regarding the precise factor structure of posttraumatic stress disorder (PTSD). Several alternative factor models have been proposed in recent decades. Objective: The current study examined the fit of a hierarchical adaptation of the…
Multi-Organ Contribution to the Metabolic Plasma Profile Using Hierarchical Modelling.
Directory of Open Access Journals (Sweden)
Frida Torell
Hierarchical modelling was applied in order to identify the organs that contribute to the levels of metabolites in plasma. Plasma and organ samples from gut, kidney, liver, muscle and pancreas were obtained from mice. The samples were analysed using gas chromatography time-of-flight mass spectrometry (GC TOF-MS) at the Swedish Metabolomics Centre, Umeå University, Sweden. The multivariate analysis was performed by means of principal component analysis (PCA) and orthogonal projections to latent structures (OPLS). The main goal of this study was to investigate how each organ contributes to the metabolic plasma profile. This was performed using hierarchical modelling. Each organ was found to have a unique metabolic profile. The hierarchical modelling showed that the gut, kidney and liver demonstrated the greatest contribution to the metabolic pattern of plasma. For example, we found that metabolites were absorbed in the gut and transported to the plasma, that the kidneys excrete branched chain amino acids (BCAAs), and that fatty acids are transported in the plasma to the muscles and liver. Lactic acid was also found to be transported from the pancreas to plasma. The results indicated that hierarchical modelling can be utilized to identify the organ contribution of unknown metabolites to the metabolic profile of plasma.
Hierarchical linear modeling of longitudinal pedigree data for genetic association analysis
DEFF Research Database (Denmark)
Tan, Qihua; B Hjelmborg, Jacob V; Thomassen, Mads;
2014-01-01
on the mean level of a phenotype, they are not sufficiently straightforward to handle the kinship correlation on the time-dependent trajectories of a phenotype. We introduce a 2-level hierarchical linear model to separately assess the genetic associations with the mean level and the rate of change...
A developmental model of hierarchical stage structure in objective moral judgements
J. Boom; P.C.M. Molenaar
1989-01-01
A hierarchical structural model of moral judgment is proposed in which an S is characterized as occupying a particular moral stage. During development, the S's characteristic stage progresses along a latent, ordered dimension in an age-dependent way. Evaluation of prototypic statements representativ
Schermelleh-Engel, Karin; Keith, Nina; Moosbrugger, Helfried; Hodapp, Volker
2004-01-01
An extension of latent state-trait (LST) theory to hierarchical LST models is presented. In hierarchical LST models, the covariances between 2 or more latent traits are explained by a general 3rd-order factor, and the covariances between latent state residuals pertaining to different traits measured on the same measurement occasion are explained…
Calibration of Conceptual Rainfall-Runoff Models Using Global Optimization
Directory of Open Access Journals (Sweden)
Chao Zhang
2015-01-01
Parameter optimization for conceptual rainfall-runoff (CRR) models has long been a difficult problem in hydrology, since watershed hydrological models are high-dimensional and nonlinear, with multimodal, nonconvex response surfaces and strongly interrelated parameters. In the research presented here, the shuffled complex evolution (SCE-UA) global optimization method was used to calibrate the Xinanjiang (XAJ) model. We defined ideal data and applied the method to observed data. Our results show that, in the case of ideal data, the data length did not affect parameter optimization for the hydrological model. If the objective function was selected appropriately, the proposed method found the true parameter values. In the case of observed data, we applied the technique to different lengths of data (1, 2, and 3 years) and compared the results with ideal data. We found that errors in the data and model structure lead to significant uncertainties in the parameter optimization.
Hierarchical Multiscale Modeling of Macromolecules and their Assemblies.
Ortoleva, P; Singharoy, A; Pankavich, S
2013-04-28
Soft materials (e.g., enveloped viruses, liposomes, membranes and supercooled liquids) simultaneously deform or display collective behaviors, while undergoing atomic scale vibrations and collisions. While the multiple space-time character of such systems often makes traditional molecular dynamics simulation impractical, a multiscale approach has been presented that allows for long-time simulation with atomic detail based on the co-evolution of slowly-varying order parameters (OPs) with the quasi-equilibrium probability density of atomic configurations. However, this approach breaks down when the structural change is extreme, or when nearest-neighbor connectivity of atoms is not maintained. In the current study, a self-consistent approach is presented wherein OPs and a reference structure co-evolve slowly to yield long-time simulation for dynamical soft-matter phenomena such as structural transitions and self-assembly. The development begins with the Liouville equation for N classical atoms and an ansatz on the form of the associated N-atom probability density. Multiscale techniques are used to derive Langevin equations for the coupled OP-configurational dynamics. The net result is a set of equations for the coupled stochastic dynamics of the OPs and centers of mass of the subsystems that constitute a soft material body. The theory is based on an all-atom methodology and an interatomic force field, and therefore enables calibration-free simulations of soft matter, such as macromolecular assemblies.
Dynamic calibration of agent-based models using data assimilation.
Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S
2016-04-01
A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.
Efficiency of Evolutionary Algorithms for Calibration of Watershed Models
Ahmadi, M.; Arabi, M.
2009-12-01
Since the promulgation of the Clean Water Act in the U.S. and similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision-making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales are an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at a stream location may result in degradation of model predictions for sediments and/or nutrients at the same location or other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (NSGA-II) were reconciled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination, and the Nash-Sutcliffe efficiency coefficient, for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. Results indicated that, while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes…
An Exactly Soluble Hierarchical Clustering Model Inverse Cascades, Self-Similarity, and Scaling
Gabrielov, A; Turcotte, D L
1999-01-01
We show how clustering as a general hierarchical dynamical process proceeds via a sequence of inverse cascades to produce self-similar scaling, as an intermediate asymptotic, which then truncates at the largest spatial scales. We show how this model can provide a general explanation for the behavior of several models that have been described as "self-organized critical," including forest-fire, sandpile, and slider-block models.
Lee Chun Chang; Hui-Yu Lin
2012-01-01
Housing data are of a nested nature as houses are nested in a village, a town, or a county. This study thus applies HLM (hierarchical linear modelling) in an empirical study by adding neighborhood characteristic variables into the model for consideration. Using the housing data of 31 neighborhoods in the Taipei area as analysis samples and three HLM sub-models, this study discusses the impact of neighborhood characteristics on house prices. The empirical results indicate that the impact of va...
A first-order dynamical model of hierarchical triple stars and its application
Xu, Xingbo; Fu, Yanning
2015-01-01
For most hierarchical triple stars, the classical double two-body model of zeroth-order cannot describe the motions of the components under the current observational accuracy. In this paper, Marchal's first-order analytical solution is implemented and a more efficient simplified version is applied to real triple stars. The results show that, for most triple stars, the proposed first-order model is preferable to the zeroth-order model either in fitting observational data or in predicting component positions.
Hierarchical Web Page Classification Based on a Topic Model and Neighboring Pages Integration
Sriurai, Wongkot; Meesad, Phayung; Haruechaiyasak, Choochart
2010-01-01
Most Web page classification models typically apply the bag of words (BOW) model to represent the feature space. The original BOW representation, however, is unable to recognize semantic relationships between terms. One possible solution is to apply the topic model approach based on the Latent Dirichlet Allocation algorithm to cluster the term features into a set of latent topics. Terms assigned into the same topic are semantically related. In this paper, we propose a novel hierarchical class...
Hierarchical multi-scale modeling of texture induced plastic anisotropy in sheet forming
Gawad, J.; van Bael, Albert; Eyckens, P.; Samaey, G.; Van Houtte, P.; Roose, D.
2013-01-01
In this paper we present a Hierarchical Multi-Scale (HMS) model of coupled evolutions of crystallographic texture and plastic anisotropy in plastic forming of polycrystalline metallic alloys. The model exploits the Finite Element formulation to describe the macroscopic deformation of the material. Anisotropy of the plastic properties is derived from a physics-based polycrystalline plasticity micro-scale model by means of virtual experiments. The homogenized micro-scale stress response given b...
Calibration of the simulation model of the VINCY cyclotron magnet
Directory of Open Access Journals (Sweden)
Ćirković Saša
2002-01-01
The MERMAID program will be used to isochronise the nominal magnetic field of the VINCY Cyclotron. This program simulates the response, i.e. calculates the magnetic field, of a previously defined model of a magnet. The accuracy of 3D field calculation depends on the density of the grid points in the simulation model grid. The size of the VINCY Cyclotron and the maximum number of grid points in the XY plane limited by MERMAID define the maximum obtainable accuracy of field calculations. Comparisons of the field simulated with maximum obtainable accuracy with the magnetic field measured in the first phase of the VINCY Cyclotron magnetic field measurements campaign have shown that the difference between these two fields is not as small as required. A further decrease of the difference between these fields is obtained by calibration of the simulation model, i.e. by adjusting the current through the main coils in the simulation model.
Calibration of the simulation model of the Vincy cyclotron magnet
Cirkovic, S; Vorozhtsov, A S; Vorozhtsov, S B
2002-01-01
The MERMAID program will be used to isochronise the nominal magnetic field of the VINCY Cyclotron. This program simulates the response, i.e. calculates the magnetic field, of a previously defined model of a magnet. The accuracy of 3D field calculation depends on the density of the grid points in the simulation model grid. The size of the VINCY Cyclotron and the maximum number of grid points in the XY plane limited by MERMAID define the maximum obtainable accuracy of field calculations. Comparisons of the field simulated with maximum obtainable accuracy with the magnetic field measured in the first phase of the VINCY Cyclotron magnetic field measurements campaign have shown that the difference between these two fields is not as small as required. A further decrease of the difference between these fields is obtained by calibration of the simulation model, i.e. by adjusting the current through the main coils in the simulation model.
New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization
Energy Technology Data Exchange (ETDEWEB)
2015-09-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
Directory of Open Access Journals (Sweden)
J. P. Werner
2015-03-01
Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
Directory of Open Access Journals (Sweden)
J. P. Werner
2014-12-01
Reconstructions of late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements on tree rings, ice cores, and varved lake sediments. Considerable advances may be achievable if time-uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches to accounting for time uncertainty are generally limited to repeating the reconstruction using each of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, while a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age-model probabilities decreases uncertainty in the climate reconstruction, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
A Hierarchical Latent Stochastic Differential Equation Model for Affective Dynamics
Oravecz, Zita; Tuerlinckx, Francis; Vandekerckhove, Joachim
2011-01-01
In this article a continuous-time stochastic model (the Ornstein-Uhlenbeck process) is presented to model the perpetually altering states of the core affect, which is a 2-dimensional concept underlying all our affective experiences. The process model that we propose can account for the temporal changes in core affect on the latent level. The key…
Xu, Lei; Johnson, Timothy D.; Nichols, Thomas E.; Nee, Derek E.
2010-01-01
The aim of this work is to develop a spatial model for multi-subject fMRI data. There has been extensive work on univariate modeling of each voxel for single and multi-subject data, some work on spatial modeling of single-subject data, and some recent work on spatial modeling of multi-subject data. However, there has been no work on spatial models that explicitly account for inter-subject variability in activation locations. In this work, we use the idea of activation centers and model the inter-subject variability in activation locations directly. Our model is specified in a Bayesian hierarchical framework which allows us to draw inferences at all levels: the population level, the individual level and the voxel level. We use Gaussian mixtures for the probability that an individual has a particular activation. This helps answer an important question which is not addressed by any of the previous methods: what proportion of subjects had significant activity in a given region? Our approach incorporates the unknown number of mixture components into the model as a parameter whose posterior distribution is estimated by reversible jump Markov chain Monte Carlo. We demonstrate our method with an fMRI study of resolving proactive interference and show dramatically better precision of localization with our method relative to the standard mass-univariate method. Although we are motivated by fMRI data, this model could easily be modified to handle other types of imaging data. PMID:19210732
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization, by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.
Energy Technology Data Exchange (ETDEWEB)
Park, Byoung [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multi-mechanism deformation (M-D) salt constitutive model, using daily data of actual wellhead pressure and the oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used for the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The A2 value obtained experimentally from BC salt and the K0 value of Waste Isolation Pilot Plant (WIPP) salt are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and the salt drawdown skins surrounding each SPR cavern have been determined through a number of back-fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns raised at the BC site will be explained from this model in a follow-up report.
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called 'Equal Load Sharing' (ELS) hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically arranged and hierarchically organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses incorporating tension/torsion coupling are naturally found. Once one or more elements break, the competition between the geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires at the different hierarchical levels of the strand, determines a no longer homogeneous stress redistribution among the surviving wires, whose fate is hence governed by a 'Hierarchical Load Sharing' criterion.
A Genetic Algorithm for the Calibration of a Micro-Simulation Model
Espinosa, Omar Baqueiro
2012-01-01
This paper describes the process followed to calibrate a micro-simulation model for the Altmark region in Germany and a Derbyshire region in the UK. The calibration process is performed in three main steps: first, a subset of input and output variables to use for the calibration process is selected from the complete parameter space in the model; second, the calibration process is performed using a genetic algorithm calibration approach; finally, a comparison between the real data and the data obtained from the best fit model is done to verify the accuracy of the model.
Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis
Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.
2014-01-01
The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.
Differential Evolution algorithm applied to FSW model calibration
Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.
2014-03-01
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
The Evolution of Galaxy Clustering in Hierarchical Models
1999-01-01
The main ingredients of recent semi-analytic models of galaxy formation are summarised. We present predictions for the galaxy clustering properties of a well specified LCDM model whose parameters are constrained by observed local galaxy properties. We present preliminary predictions for evolution of clustering that can be probed with deep pencil beam surveys.
Understanding Prairie Fen Hydrology - a Hierarchical Multi-Scale Groundwater Modeling Approach
Sampath, P.; Liao, H.; Abbas, H.; Ma, L.; Li, S.
2012-12-01
Prairie fens provide critical habitat to more than 50 rare species and significantly contribute to the biodiversity of the upper Great Lakes region. The sustainability of these globally unique ecosystems, however, requires that they be fed by a steady supply of pristine, calcareous groundwater. Understanding the hydrology that supports the existence of such fens is essential in preserving these valuable habitats. This research uses process-based multi-scale groundwater modeling for this purpose. Two fen-sites, MacCready Fen and Ives Road Fen, in Southern Michigan were systematically studied. A hierarchy of nested steady-state models was built for each fen-site to capture the system's dynamics at spatial scales ranging from the regional groundwater-shed to the local fens. The models utilize high-resolution Digital Elevation Models (DEM), National Hydrologic Datasets (NHD), a recently-assembled water-well database, and results from a state-wide groundwater mapping project to represent the complex hydro-geological and stress framework. The modeling system simulates both shallow glacial and deep bedrock aquifers as well as the interaction between surface water and groundwater. Aquifer heterogeneities were explicitly simulated with multi-scale transition probability geo-statistics. A two-way hydraulic head feedback mechanism was set up between the nested models, such that the parent models provided boundary conditions to the child models, and in turn the child models provided local information to the parent models. A hierarchical mass budget analysis was performed to estimate the seepage fluxes at the surface water/groundwater interfaces and to assess the relative importance of the processes at multiple scales that contribute water to the fens. The models were calibrated using observed base-flows at stream gauging stations and/or static water levels at wells. Three-dimensional particle tracking was used to predict the sources of water to the fens. We observed from the
A Hierarchical Multiobjective Routing Model for MPLS Networks with Two Service Classes
Craveirinha, José; Girão-Silva, Rita; Clímaco, João; Martins, Lúcia
This work presents a model for multiobjective routing in MPLS networks formulated within a hierarchical network-wide optimization framework, with two classes of services, namely QoS and Best Effort (BE) services. The routing model uses alternative routing and hierarchical optimization with two optimization levels, including fairness objectives. Another feature of the model is the use of an approximate stochastic representation of the traffic flows in the network, based on the concept of effective bandwidth. The theoretical foundations of a heuristic strategy for finding “good” compromise solutions to the very complex bi-level routing optimization problem, based on a conjecture concerning the definition of marginal implied costs for QoS flows and BE flows, are described. The main features of a first version of this heuristic, based on a bi-objective shortest path model, and some preliminary results for a benchmark network are also presented.
Leung, K M; Elashoff, R M; Rees, K S; Hasan, M M; Legorreta, A P
1998-03-01
The purpose of this study was to identify factors related to pregnancy and childbirth that might be predictive of a patient's length of stay after delivery and to model variations in length of stay. California hospital discharge data on maternity patients (n = 499,912) were analyzed. Hierarchical linear modeling was used to adjust for patient case mix and hospital characteristics and to account for the dependence of outcome variables within hospitals. Substantial variation in length of stay among patients was observed. The variation was mainly attributed to delivery type (vaginal or cesarean section), the patient's clinical risk factors, and severity of complications (if any). Furthermore, hospitals differed significantly in maternity lengths of stay even after adjustment for patient case mix. Developing risk-adjusted models for length of stay is a complex process but is essential for understanding variation. The hierarchical linear model approach described here represents a more efficient and appropriate way of studying interhospital variations than the traditional regression approach.
Directory of Open Access Journals (Sweden)
Nasim Nickbakhsh
2017-03-01
The Grid is a distributed system that dynamically shares non-homogeneous resources at a vast scale. The manner of resource discovery strongly influences the efficiency and quality of the system's functionality. The “Bitmap” model is based on a hierarchical, informed search that generates less traffic and fewer messages than other methods in this respect. The proposed method builds on this hierarchical, informed search model and enhances the Bitmap method with the objectives of reducing traffic, reducing the load of resource-management processing, reducing the number of messages generated by resource discovery, and increasing the speed of resource access. The proposed method and the Bitmap method were simulated with the Arena tool. The proposed model is abbreviated as RNTL.
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme of different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model allowing for modeling most hierarchical networks. We use methods based on linear programming to design the hierarchical networks. Thus, a brief introduction to the various linear programming based methods is included. The thesis is thus suitable as a foundation for study of design of hierarchical networks. The major contribution of the thesis consists of seven papers which are included in the appendix. The papers address hierarchical network design and/or ring network design.
Bayesian Hierarchical Random Intercept Model Based on Three Parameter Gamma Distribution
Wirawati, Ika; Iriawan, Nur; Irhamah
2017-06-01
Hierarchical data structures are common throughout many areas of research. Previously, the existence of this type of data was often overlooked in analyses. The appropriate statistical analysis for this type of data is the hierarchical linear model (HLM). This article focuses only on the random intercept model (RIM), a subclass of HLM. This model assumes that the intercepts of the models at the lowest level vary among those models, while their slopes are fixed. The differences among intercepts are suspected to be affected by some variables at the upper level. These intercepts, therefore, are regressed against those upper-level variables as predictors. This paper demonstrates the proposed two-level RIM by modeling per capita household expenditure in Maluku Utara, with five characteristics at the first level and three characteristics of districts/cities at the second level. The per capita household expenditure data at the first level are captured by the three-parameter Gamma distribution. The model is therefore rather complex, owing to the interaction of many parameters representing the hierarchical structure and the distribution pattern of the data. To simplify parameter estimation, a computational Bayesian method coupled with a Markov chain Monte Carlo (MCMC) algorithm and its Gibbs sampling is employed.
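A minimal Gibbs-sampling sketch of a two-level random intercept model: district intercepts are drawn given the data and hyper-parameters, then the upper-level regression coefficients are drawn given the intercepts. For brevity this uses a Gaussian likelihood in place of the paper's three-parameter Gamma, fixes the variance components, and uses entirely synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-level data: households (level 1) nested in districts (level 2).
J, n = 8, 30                                   # districts, households per district
z = rng.normal(0, 1, J)                        # one district-level predictor
true_int = 2.0 + 1.5 * z + rng.normal(0, 0.3, J)
y = true_int[:, None] + rng.normal(0, 1.0, (J, n))

# Gibbs sampler for the random intercepts b_j and hyper-coefficients (g0, g1),
# with variances fixed at their true values to keep the sketch short.
sig2, tau2 = 1.0, 0.09
b = y.mean(axis=1).copy()
g0, g1 = 0.0, 0.0
draws = []
for it in range(3000):
    # b_j | rest ~ Normal: conjugate update combining data mean and hyper-mean
    prec = n / sig2 + 1 / tau2
    mean = (n * y.mean(axis=1) / sig2 + (g0 + g1 * z) / tau2) / prec
    b = mean + rng.normal(0, np.sqrt(1 / prec), J)
    # (g0, g1) | b ~ Normal: Bayesian linear regression of intercepts on z
    X = np.column_stack([np.ones(J), z])
    V = np.linalg.inv(X.T @ X / tau2)
    m = V @ (X.T @ b / tau2)
    g0, g1 = rng.multivariate_normal(m, V)
    if it >= 1000:                             # discard burn-in
        draws.append((g0, g1))

g0_hat, g1_hat = np.mean(draws, axis=0)
```

Replacing the Gaussian level-1 likelihood with the three-parameter Gamma would make the intercept update non-conjugate, which is where a Metropolis-within-Gibbs step would enter in the paper's setting.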
A Solvatochromic Model Calibrates Nitriles’ Vibrational Frequencies to Electrostatic Fields
Bagchi, Sayan; Fried, Stephen D.; Boxer, Steven G.
2012-01-01
Electrostatic interactions provide a primary connection between a protein’s three-dimensional structure and its function. Infrared (IR) probes are useful because vibrational frequencies of certain chemical groups, such as nitriles, are linearly sensitive to local electrostatic field, and can serve as a molecular electric field meter. IR spectroscopy has been used to study electrostatic changes or fluctuations in proteins, but measured peak frequencies have not been previously mapped to total electric fields, because of the absence of a field-frequency calibration and the complication of local chemical effects such as H-bonds. We report a solvatochromic model that provides a means to assess the H-bonding status of aromatic nitrile vibrational probes, and calibrates their vibrational frequencies to electrostatic field. The analysis involves correlations between the nitrile’s IR frequency and its 13C chemical shift, whose observation is facilitated by a robust method for introducing isotopes into aromatic nitriles. The method is tested on the model protein Ribonuclease S (RNase S) containing a labeled p-CN-Phe near the active site. Comparison of the measurements in RNase S against solvatochromic data gives an estimate of the average total electrostatic field at this location. The value determined agrees quantitatively with MD simulations, suggesting broader potential for the use of IR probes in the study of protein electrostatics. PMID:22694663
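The field-frequency calibration amounts to fitting a linear Stark relation and then inverting it to use the probe as a "field meter"; a minimal sketch follows. All frequencies and fields below are invented illustration values, not the paper's measurements.

```python
import numpy as np

# Hypothetical solvatochromic calibration data: nitrile peak frequency (cm^-1)
# in solvents whose average electric field at the probe (MV/cm) comes from
# simulation.  Values are illustrative only.
field = np.array([-5.0, -10.0, -20.0, -35.0, -50.0])        # MV/cm
freq  = np.array([2235.5, 2235.2, 2234.6, 2233.7, 2232.8])  # cm^-1

# Linear Stark model: freq = freq0 + rate * field
A = np.column_stack([np.ones_like(field), field])
(freq0, rate), *_ = np.linalg.lstsq(A, freq, rcond=None)

def field_from_freq(nu):
    """Invert the calibration line to read a field off a measured frequency."""
    return (nu - freq0) / rate

# A hypothetical protein measurement mapped to an average field estimate.
est = field_from_freq(2234.0)
```

The complication the paper addresses is that H-bonded probes fall off this line; the IR/13C-shift correlation is what sorts H-bonded from non-H-bonded populations before the line is applied.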
A joint calibration model for combining predictive distributions
Directory of Open Access Journals (Sweden)
Patrizia Agati
2013-05-01
In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predictive skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation, and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
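In the simplest special case, where the calibration function asserts unbiased, independent sources with correct scale, the product of Gaussian predictive pdfs collapses to a precision-weighted combination; a sketch under that assumption:

```python
import numpy as np

# Two ensemble members issue Gaussian predictive pdfs for the same quantity
# (illustrative numbers, e.g. forecast temperatures in degrees C).
mus   = np.array([20.0, 24.0])   # predictive means
sig2s = np.array([4.0, 1.0])     # predictive variances

# Treating the pdfs as data with likelihood proportional to their product
# (identity calibration function: no bias, no rescaling, no correlation)
# yields a Gaussian posterior with precision-weighted mean.
prec  = 1.0 / sig2s
mu_c  = float(np.sum(prec * mus) / np.sum(prec))
var_c = float(1.0 / np.sum(prec))
```

The paper's contribution is precisely to relax this identity assumption: estimated bias, scale, and correlation parameters reshape the product before the update, so miscalibrated or redundant sources are down-weighted.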
Non-linear calibration models for near infrared spectroscopy
DEFF Research Database (Denmark)
Ni, Wangdong; Nørgaard, Lars; Mørup, Morten
2014-01-01
Different calibration techniques are available for spectroscopic applications that show nonlinear behavior. This comprehensive study compares different nonlinear calibration techniques: kernel PLS (KPLS), support vector machines (SVM), least-squares SVM (LS-SVM), relev...
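A minimal sketch contrasting a linear calibration with a kernel method on a nonlinear response. Kernel ridge regression is used here as a close, easily self-contained relative of LS-SVM; the spectra-to-property data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D calibration problem with a nonlinear response, standing in
# for a nonlinear spectrum-to-concentration relationship.
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.05, 40)

def rbf(A, B, gamma=20.0):
    """Radial basis function (Gaussian) kernel matrix."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lam I)^-1 y
lam = 1e-3
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
pred_krr = K @ alpha

# Ordinary linear calibration for comparison.
A = np.column_stack([np.ones(len(X)), X[:, 0]])
pred_lin = A @ np.linalg.lstsq(A, y, rcond=None)[0]

mse_krr = float(np.mean((pred_krr - y) ** 2))
mse_lin = float(np.mean((pred_lin - y) ** 2))
```

In a real comparison of the kind the abstract describes, the kernel width and regularization would be chosen by cross-validation and the methods scored on held-out samples rather than on the training fit shown here.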
How useful are stream level observations for model calibration?
Seibert, Jan; Vis, Marc; Pool, Sandra
2014-05-01
Streamflow estimation in ungauged basins is especially challenging in data-scarce regions, and it might be reasonable to take at least a few measurements. Recent studies demonstrated that a few streamflow measurements, representing data that could be collected with limited effort in an ungauged basin, might suffice to constrain runoff models for simulations in ungauged basins. While these previous studies assumed that a few streamflow measurements were taken at different points in time over one year, it would obviously also be reasonable to measure stream levels. Several approaches could be used in practice for such stream level observations: water level loggers have become less expensive and easier to install and can be used to obtain continuous stream level time series; stream levels will in the near future be increasingly available from satellite remote sensing, resulting in evenly spaced time series; community-based approaches (e.g., crowdhydrology.org), finally, can offer level observations at irregular time intervals. Here we present a study in which a catchment runoff model (the HBV model) was calibrated for gauged basins in Switzerland assuming that only a subset of the data was available. We pretended that only stream level observations at different time intervals, representing the temporal resolution of the different observation approaches mentioned above, and a small number of streamflow observations were available. The model, which was calibrated based on these data subsets, was then evaluated on the full observed streamflow record. Our results indicate that stream level data alone can already provide surprisingly good model simulation results, which can be further improved by combination with a single streamflow observation. The surprisingly good results with only stream level time series can be explained by the relatively high precipitation in the studied catchments. Constructing a hypothetical catchment with reduced precipitation resulted in poorer
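Because stream levels constrain discharge only up to an unknown monotone stage-discharge relation, a rank-based objective is a natural choice for level-only calibration. The sketch below uses a toy linear-reservoir model in place of HBV and a Spearman rank correlation objective; both are illustrative assumptions, not the study's exact setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def bucket_model(k, rain):
    """Toy linear-reservoir runoff model (stand-in for HBV): Q = k * storage."""
    s, q = 0.0, []
    for p in rain:
        s += p
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

def rank(a):
    return np.argsort(np.argsort(a)).astype(float)

def spearman(a, b):
    """Spearman rank correlation (no ties in this synthetic data)."""
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

rain = rng.exponential(2.0, 200)
q_true = bucket_model(0.3, rain)
levels = q_true ** 0.6          # unknown monotone stage-discharge relation

# Calibrate the recession parameter k against levels only: rank correlation
# is invariant to the monotone transform, so no rating curve is needed.
ks = np.linspace(0.05, 0.9, 50)
scores = [spearman(bucket_model(k, rain), levels) for k in ks]
k_best = float(ks[int(np.argmax(scores))])
```

A single streamflow observation, as in the study, would then fix the scale that the rank objective leaves undetermined.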
Su, Chiu-Wen; Ming-Fang Yen, Amy; Lai, Hongmin; Chen, Hsiu-Hsi; Chen, Sam Li-Sheng
2017-07-28
Background: The accuracy of a prediction model for periodontal disease using the community periodontal index (CPI) has been assessed with the area under the receiver operating characteristic (AUROC) curve, but how the calibration of the CPI, as measured by general dentists trained by periodontists in a large epidemiological study, affects the performance of a prediction model built upon it has not yet been researched. Methods: We conducted a two-stage design, first proposing a validation study to calibrate the CPI between a senior periodontal specialist and the trained general dentists who measured CPIs in the main study of a nationwide survey. A Bayesian hierarchical logistic regression model was applied to estimate the non-updated and updated clinical weights used for building up risk scores. How the calibrated CPI affected the performance of the updated prediction model was quantified by comparing the AUROC curves of the original and the updated model. Results: The estimates regarding the calibration of the CPI obtained from the validation study were 66% and 85% for sensitivity and specificity, respectively. After updating, the clinical weights of each predictor were inflated, and the risk score for the highest risk category was elevated from 434 to 630. This update improved the AUROC performance of the two corresponding prediction models from 62.6% (95% CI: 61.7%-63.6%) for the non-updated model to 68.9% (95% CI: 68.0%-69.6%) for the updated one, reaching a statistically significant difference (P periodontal disease as measured by the calibrated CPI derived from a large epidemiological survey.
The high redshift galaxy population in hierarchical galaxy formation models
Kitzbichler, M G; Kitzbichler, Manfred G.; White, Simon D. M.
2006-01-01
We compare observations of the high redshift galaxy population to the predictions of the galaxy formation model of Croton et al. (2006). This model, implemented on the Millennium Simulation of the concordance LCDM cosmogony, introduces "radio mode" feedback from the central galaxies of groups and clusters in order to obtain quantitative agreement with the luminosity, colour, morphology and clustering properties of the low redshift galaxy population. Here we compare the predictions of this same model to the observed counts and redshift distributions of faint galaxies, as well as to their inferred luminosity and mass functions out to redshift 5. With the exception of the mass functions, all these properties are sensitive to modelling of dust obscuration. A simple but plausible treatment gives moderately good agreement with most of the data, although the predicted abundance of relatively massive (~M*) galaxies appears systematically high at high redshift, suggesting that such galaxies assemble earlier in this model.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
… the kernel function, which depends on the application and the model user. This research uses the most popular kernel function, the radial basis … an important role in the nation's economy. Unfortunately, the system's reliability is declining due to the aging components of the network [Grier…] … kernel function. Gaussian Bayesian kernel models have become very popular recently and were extended and applied to a number of classification problems.
Building hierarchical models of avian distributions for the State of Georgia
Howell, J.E.; Peterson, J.T.; Conroy, M.J.
2008-01-01
To predict the distributions of breeding birds in the state of Georgia, USA, we built hierarchical models consisting of 4 levels of nested mapping units of decreasing area: 90,000 ha, 3,600 ha, 144 ha, and 5.76 ha. We used the Partners in Flight database of point counts to generate presence and absence data at locations across the state of Georgia for 9 avian species: Acadian flycatcher (Empidonax virescens), brown-headed nuthatch (Sitta pusilla), Carolina wren (Thryothorus ludovicianus), indigo bunting (Passerina cyanea), northern cardinal (Cardinalis cardinalis), prairie warbler (Dendroica discolor), yellow-billed cuckoo (Coccyzus americanus), white-eyed vireo (Vireo griseus), and wood thrush (Hylocichla mustelina). At each location, we estimated hierarchical-level-specific habitat measurements using the Georgia GAP Analysis 18-class land cover and other Geographic Information System sources. We created candidate, species-specific occupancy models based on previously reported relationships, and fit these using Markov chain Monte Carlo procedures implemented in OpenBUGS. We then created a confidence model set for each species based on Akaike's Information Criterion. We found hierarchical habitat relationships for all species. Three-fold cross-validation estimates of model accuracy indicated an average overall correct classification rate of 60.5%. Comparisons with existing Georgia GAP Analysis models indicated that our models were more accurate overall. Our results provide guidance to wildlife scientists and managers seeking to predict avian occurrence as a function of local- and landscape-level habitat attributes.
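The model-set construction via Akaike's Information Criterion can be sketched with a small self-contained example: two logistic presence/absence models are fit by Newton-Raphson and compared by AIC. The covariates and data are synthetic, not the Partners in Flight data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic presence/absence data driven by one habitat covariate x1;
# x2 is irrelevant noise, so AIC should generally penalize its inclusion.
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * x1)))
y = (rng.random(n) < p).astype(float)

def fit_logistic(X, y, iters=50):
    """Newton-Raphson for logistic regression; returns coefficients and AIC."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ b))
        W = mu * (1 - mu)
        b = b + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    mu = 1 / (1 + np.exp(-X @ b))
    ll = float(np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu)))
    return b, 2 * X.shape[1] - 2 * ll   # AIC = 2k - 2 log-likelihood

X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x1, x2])
b1, aic1 = fit_logistic(X1, y)
_, aic2 = fit_logistic(X2, y)
```

The paper's occupancy models add hierarchical structure (covariates measured at four nested spatial scales) and are fit by MCMC rather than maximum likelihood, but the AIC-based ranking of candidate models follows the same logic.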
Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K
2009-04-01
Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
Chen, Yongsheng; Persaud, Bhagwant
2014-09-01
Crash modification factors (CMFs) for road safety treatments are developed as multiplicative factors that are used to reflect the expected changes in safety performance associated with changes in highway design and/or the traffic control features. However, current CMFs have methodological drawbacks. For example, variability with application circumstance is not well understood, and, just as importantly, correlation is not addressed when several CMFs are applied multiplicatively. These issues can be addressed by developing safety performance functions (SPFs) with components of crash modification functions (CM-Functions), an approach that includes all CMF-related variables, along with others, while capturing quantitative and other effects of factors and accounting for cross-factor correlations. CM-Functions can capture the safety impact of factors through a continuous and quantitative approach, avoiding the problematic categorical analysis that is often used to capture CMF variability. There are two formulations to develop such SPFs with CM-Function components: fully specified models and hierarchical models. Based on sample datasets from two Canadian cities, both approaches are investigated in this paper. While both model formulations yielded promising results and reasonable CM-Functions, the hierarchical model was found to be more suitable in retaining homogeneity of first-level SPFs, while addressing CM-Functions in sub-level modeling. In addition, hierarchical models better capture the correlations between different impact factors.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
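An S-shaped probability weighting function of the kind discussed above, applied separately to priors and likelihoods as in the factorial design, might be sketched as follows; the Tversky-Kahneman one-parameter form is used here as one common member of the proposed general class.

```python
import numpy as np

def tk_weight(p, gamma):
    """Tversky-Kahneman probability weighting; inverted-S shape for gamma < 1
    (small probabilities overweighted, large ones underweighted)."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

def distorted_posterior(prior, like_h, like_not, gamma_prior, gamma_like):
    """Bayesian belief revision with separately distorted priors and
    likelihoods, mirroring a two (priors) by two (likelihoods) design."""
    wp = tk_weight(prior, gamma_prior)
    wl_h = tk_weight(like_h, gamma_like)
    wl_n = tk_weight(like_not, gamma_like)
    return wp * wl_h / (wp * wl_h + (1 - wp) * wl_n)

# With gamma = 1 there is no distortion and the exact posterior is recovered.
exact = 0.7 * 0.8 / (0.7 * 0.8 + 0.3 * 0.2)
```

In a hierarchical Bayesian treatment, each participant's gamma parameters would be sampled from weakly informative group-level priors, which is the individual-differences structure the abstract argues for.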
Directory of Open Access Journals (Sweden)
Fidel Ernesto Castro Morales
2016-03-01
Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed, and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, while allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
Directory of Open Access Journals (Sweden)
Dan WU
2009-06-01
The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.
Jeong, Sungmoon; Lee, Minho
2012-01-01
This paper presents an adaptive object recognition model based on incremental feature representation and a hierarchical feature classifier that offers plasticity to accommodate additional input data and reduces the problem of forgetting previously learned information. The incremental feature representation method applies adaptive prototype generation with a cortex-like mechanism to conventional feature representation to enable an incremental reflection of various object characteristics, such as feature dimensions, in the learning process. A feature classifier based on a hierarchical generative model recognizes various objects with varying feature dimensions during the learning process. Experimental results show that the adaptive object recognition model successfully recognizes single and multiple object classes with enhanced stability and flexibility.
Design of Experiments for Factor Hierarchization in Complex Structure Modelling
Directory of Open Access Journals (Sweden)
C. Kasmi
2013-07-01
Modelling the power-grid network is of fundamental interest for analysing the conducted propagation of unintentional and intentional electromagnetic interferences. The propagation is indeed highly influenced by the channel behaviour. In this paper, we investigate the effects of appliances and of the position of cables in a low-voltage network. First, the power-grid architecture is described. Then, the principle of Experimental Design is recalled. Next, the methodology is applied to power-grid modelling. Finally, we propose an analysis of the statistical moments of the experimental design results. Several outcomes are provided to describe the effects induced by parameter variability on the conducted propagation of spurious compromising emanations.
Sigma-model soliton intersections from exceptional calibrations
Portugues, R
2002-01-01
A first-order `BPS' equation is obtained for 1/8 supersymmetric intersections of soliton-membranes (lumps) of supersymmetric (4+1)-dimensional massless sigma models, and a special non-singular solution is found that preserves 1/4 supersymmetry. For 4-dimensional hyper-Kähler target spaces ($HK_4$) the BPS equation is shown to be the low-energy limit of the equation for a Cayley-calibrated 4-surface in $\mathbb{E}^4 \times HK_4$. Similar first-order equations are found for stationary intersections of Q-lump-membranes of the massive sigma model, but now generic solutions preserve either 1/8 supersymmetry or no supersymmetry, depending on the time orientation.
A hierarchical Bayes error correction model to explain dynamic effects
D. Fok (Dennis); C. Horváth (Csilla); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)
2004-01-01
For promotional planning and market segmentation it is important to understand the short-run and long-run effects of the marketing mix on category and brand sales. In this paper we put forward a sales response model to explain the differences in short-run and long-run effects of promotions
Models to relate species to environment: a hierarchical statistical approach
Jamil, T.
2012-01-01
In the last two decades, the interest of community ecologists in trait-based approaches has grown dramatically and these approaches have been increasingly applied to explain and predict response of species to environmental conditions. A variety of modelling techniques are available. The dominant
Directory of Open Access Journals (Sweden)
Moritz Boos
2016-05-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model’s success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
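The distorted subjective probabilities above are commonly modeled with a one-parameter (inverted) S-shaped weighting function. The sketch below uses the Tversky-Kahneman form as an illustration, plugged into a simple urn-style Bayesian update; the paper does not specify this exact function, so treat both the form and the separate prior/likelihood distortion parameters as assumptions.

```python
def weight(p, gamma):
    """Tversky-Kahneman one-parameter probability weighting:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    gamma < 1 gives the classic inverted-S shape (small p
    overweighted, large p underweighted); gamma = 1 is the identity."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def distorted_posterior(prior, likelihoods, gamma_prior, gamma_lik):
    """Bayesian belief revision with separately distorted prior and
    likelihood -- one hypothetical way to capture individual
    differences in probability distortion."""
    num = [weight(pr, gamma_prior) * weight(li, gamma_lik)
           for pr, li in zip(prior, likelihoods)]
    z = sum(num)
    return [n / z for n in num]

# identity weighting (gamma = 1) reproduces the undistorted posterior
post = distorted_posterior([0.5, 0.5], [0.8, 0.2], 1.0, 1.0)
```

In a hierarchical treatment, each participant's `gamma` values would be drawn from weakly informative group-level priors, as the abstract suggests.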
Energy Technology Data Exchange (ETDEWEB)
Makeechev, V.A. [Industrial Power Company, Krasnopresnenskaya Naberejnaya 12, 123610 Moscow (Russian Federation); Soukhanov, O.A. [Energy Systems Institute, 1 st Yamskogo Polya Street 15, 125040 Moscow (Russian Federation); Sharov, Y.V. [Moscow Power Engineering Institute, Krasnokazarmennaya Street 14, 111250 Moscow (Russian Federation)
2008-07-15
This paper presents foundations of the optimization method intended for solution of power systems operation problems and based on the principles of functional modeling (FM). This paper also presents several types of hierarchical FM algorithms for economic dispatch in these systems derived from this method. According to the FM method, a power system is represented by a hierarchical model consisting of systems of equations of lower (subsystem) levels and a higher-level system of connection equations (SCE), in which only boundary variables of subsystems are present. Solution of the optimization problem in accordance with the FM method consists of the following operations: (1) solution of the optimization problem for each subsystem (values of boundary variables for subsystems should be determined on the higher level of the model); (2) calculation of the functional characteristic (FC) of each subsystem, pertaining to the state of the subsystem on the current iteration (these two steps are carried out on the lower level of the model); (3) formation and solution of the higher-level system of equations (SCE), which gives values of boundary and supplementary boundary variables on the current iteration. The key elements in the general structure of the FM method are the FCs of subsystems, which represent them on the higher level of the model as "black boxes". An important advantage of hierarchical FM algorithms is that results obtained with them on each iteration are identical to those of corresponding basic one-level algorithms. (author)
A New Perspective for the Calibration of Computational Predictor Models.
Energy Technology Data Exchange (ETDEWEB)
Crespo, Luis Guillermo
2014-11-01
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
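The IPM idea can be illustrated on a one-parameter model. The sketch below is our own illustration, not the authors' formulation: for the model y = theta * x it finds the tightest parameter interval whose propagated band contains all observations, which is exactly the "minimal spread containing all observations" criterion in miniature.

```python
def interval_predictor(xs, ys):
    """Minimal Interval Predictor Model sketch for y = theta * x
    (assuming x > 0): the tightest parameter interval containing all
    observations is [min_i y_i/x_i, max_i y_i/x_i]; propagating that
    interval through the model yields an interval-valued response."""
    ratios = [y / x for x, y in zip(xs, ys)]
    lo, hi = min(ratios), max(ratios)

    def band(x):
        # interval-valued prediction at input x
        return (lo * x, hi * x)

    return (lo, hi), band

(lo, hi), band = interval_predictor([1.0, 2.0, 4.0], [1.1, 1.8, 4.4])
# by construction every observation lies inside band(x_i)
```

An RPM would instead place a probability distribution over theta, chosen so the predicted response distribution matches the scatter of the observations.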
Calibration of Predictor Models Using Multiple Validation Experiments
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it reflects the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
Experiments in Error Propagation within Hierarchal Combat Models
2015-09-01
and variances of Blue MTTK, Red MTTK, and P[Blue Wins] by Experimental Design are statistically different (Wackerly, Mendenhall III and Schaeffer...2008). Although the data is not normally distributed, the t-test is robust to non-normality (Wackerly, Mendenhall III and Schaeffer 2008). There is...this is handled by transforming the predicted values with a natural logarithm (Wackerly, Mendenhall III and Schaeffer 2008). The model considers
Hierarchical Models for Batteries: Overview with Some Case Studies
Energy Technology Data Exchange (ETDEWEB)
Pannala, Sreekanth [ORNL; Mukherjee, Partha P [ORNL; Allu, Srikanth [ORNL; Nanda, Jagjit [ORNL; Martha, Surendra K [ORNL; Dudney, Nancy J [ORNL; Turner, John A [ORNL
2012-01-01
Batteries are complex multiscale systems and a hierarchy of models has been employed to study different aspects of batteries at different resolutions. For the electrochemistry and charge transport, the models span from electric circuits, single-particle, pseudo-2D, detailed 3D, and microstructure-resolved at the continuum scales, and various techniques such as molecular dynamics and density functional theory to resolve the atomistic structure. Similar analogies exist for the thermal, mechanical, and electrical aspects of the batteries. We have been recently working on the development of a unified formulation for the continuum scales across the electrode-electrolyte-electrode system, using a rigorous volume averaging approach typical of multiphase formulations. This formulation accounts for any spatio-temporal variation of the different properties such as electrode/void volume fractions and anisotropic conductivities. In this talk the following will be presented: the background and the hierarchy of models that need to be integrated into a battery modeling framework to carry out predictive simulations; our recent work on the unified 3D formulation addressing the missing links in the multiscale description of the batteries; our work on microstructure-resolved simulations for diffusion processes; upscaling of quantities of interest to construct closures for the 3D continuum description; and sample results for a standard carbon/spinel cell, compared to experimental data. Finally, the infrastructure we are building to bring together components with different physics operating at different resolutions will be presented. The presentation will also include details about how this generalized approach can be applied to other electrochemical storage systems such as supercapacitors, Li-air batteries, and lithium batteries with 3D architectures.
Bello, Nora M; Steibel, Juan P; Tempelman, Robert J
2010-06-01
Bivariate mixed effects models are often used to jointly infer upon covariance matrices for both random effects (u) and residuals (e) between two different phenotypes in order to investigate the architecture of their relationship. However, these (co)variances themselves may additionally depend upon covariates as well as additional sets of exchangeable random effects that facilitate borrowing of strength across a large number of clusters. We propose a hierarchical Bayesian extension of the classical bivariate mixed effects model by embedding additional levels of mixed effects modeling of reparameterizations of u-level and e-level (co)variances between two traits. These parameters are based upon a recently popularized square-root-free Cholesky decomposition and are readily interpretable, each conveniently facilitating a generalized linear model characterization. Using Markov Chain Monte Carlo methods, we validate our model based on a simulation study and apply it to a joint analysis of milk yield and calving interval phenotypes in Michigan dairy cows. This analysis indicates that the e-level relationship between the two traits is highly heterogeneous across herds and depends upon systematic herd management factors.
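The square-root-free Cholesky reparameterization mentioned above can be written out explicitly for a 2x2 (co)variance. The sketch below is our illustration of the decomposition itself, not the authors' MCMC sampler: Sigma = L D L^T with unit lower-triangular L, yielding a variance, a regression slope, and a conditional variance, each unconstrained enough to be given its own generalized linear model.

```python
def ldl_2x2(s11, s12, s22):
    """Square-root-free (LDL^T) Cholesky of a 2x2 covariance
    Sigma = [[s11, s12], [s12, s22]] = L D L^T, with
    L = [[1, 0], [b, 1]] and D = diag(d1, d2).
    b is the regression of trait 2 on trait 1, and d2 is the
    conditional (residual) variance of trait 2 given trait 1."""
    d1 = s11
    b = s12 / s11
    d2 = s22 - b * s12
    return d1, b, d2

def rebuild(d1, b, d2):
    """Inverse map: reassemble (s11, s12, s22) from the LDL^T
    parameters, showing the reparameterization loses nothing."""
    return d1, b * d1, d2 + b * b * d1

d1, b, d2 = ldl_2x2(4.0, 2.0, 3.0)   # b = 0.5, d2 = 2.0
```

Because b is unconstrained and d1, d2 only need to be positive, log-linear or linear models on these parameters (e.g., herd-level covariate effects on b and log d2) keep Sigma positive definite automatically, which is the practical appeal noted in the abstract.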
Wolf, J.
2002-01-01
To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model
Hierarchical Model Predictive Control for Sustainable Building Automation
Directory of Open Access Journals (Sweden)
Barbara Mayer
2017-02-01
A hierarchical model predictive controller (HMPC) is proposed for flexible and sustainable building automation. The implications of a building automation system for sustainability are defined, and model predictive control is introduced as an ideal tool to cover all requirements. The HMPC is presented as a development suitable for the optimization of modern buildings, as well as retrofitting. The performance and flexibility of the HMPC are demonstrated by simulation studies of a modern office building, and the perfect interaction with future smart grids is shown.
MacCann, Carolyn; Joseph, Dana L; Newman, Daniel A; Roberts, Richard D
2014-04-01
This article examines the status of emotional intelligence (EI) within the structure of human cognitive abilities. To evaluate whether EI is a 2nd-stratum factor of intelligence, data were fit to a series of structural models involving 3 indicators each for fluid intelligence, crystallized intelligence, quantitative reasoning, visual processing, and broad retrieval ability, as well as 2 indicators each for emotion perception, emotion understanding, and emotion management. Unidimensional, multidimensional, hierarchical, and bifactor solutions were estimated in a sample of 688 college and community college students. Results suggest adequate fit for 2 models: (a) an oblique 8-factor model (with 5 traditional cognitive ability factors and 3 EI factors) and (b) a hierarchical solution (with cognitive g at the highest level and EI representing a 2nd-stratum factor that loads onto g at λ = .80). The acceptable relative fit of the hierarchical model confirms the notion that EI is a group factor of cognitive ability, marking the expression of intelligence in the emotion domain. The discussion proposes a possible expansion of Cattell-Horn-Carroll theory to include EI as a 2nd-stratum factor of similar standing to factors such as fluid intelligence and visual processing.
Aging through hierarchical coalescence in the East model
Faggionato, A; Roberto, C; Toninelli, C
2010-01-01
We rigorously analyze the low temperature non-equilibrium dynamics of the East model, a special example of a one dimensional oriented kinetically constrained particle model, when the initial distribution is different from the reversible one and for times much smaller than the global relaxation time. This setting has been intensively studied in the physics literature to analyze the slow dynamics which follows a sudden quench from the liquid to the glass phase. In the limit of zero temperature (i.e. a vanishing density of vacancies) and for initial distributions such that the vacancies form a renewal process we prove that the density of vacancies, the persistence function and the two-time autocorrelation function behave as staircase functions with several plateaux. Furthermore the two-time autocorrelation function displays an aging behavior. We also provide a sharp description of the statistics of the domain length as a function of time, a domain being the interval between two consecutive vacancies. When the in...
Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.
Directory of Open Access Journals (Sweden)
Gregor Moenke
Important biological processes like cell signalling and gene expression have noisy components and are very complex at the same time. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011), which is formulated in observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics in dependence on cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.
[Determinants of malnutrition in a low-income population: hierarchical analytical model].
Olinto, M T; Victora, C G; Barros, F C; Tomasi, E
1993-01-01
To investigate the determinants of malnutrition among low-income children, the effects of socioeconomic, environmental, reproductive, morbidity, child care, birthweight and breastfeeding variables on stunting and wasting were studied. All 354 children below two years of age living in two urban slum areas of Pelotas, southern Brazil, were included. The multivariate analyses took into account the hierarchical structure of the risk factors for each type of deficit. Variables selected as significant on a given level of the model were considered as risk factors, even if their statistical significance was subsequently lost when hierarchically inferior variables were included. The final model for stunting included the variables education and presence of the father, maternal education and employment, birthweight and age. For wasting, the variables selected were the number of household appliances, birth interval, housing conditions, borough, birthweight, age, gender and previous hospitalizations.
Cerrolaza, Juan J; Villanueva, Arantxa; Cabeza, Rafael
2012-03-01
The accurate segmentation of subcortical brain structures in magnetic resonance (MR) images is of crucial importance in the interdisciplinary field of medical imaging. Although statistical approaches such as active shape models (ASMs) have proven to be particularly useful in the modeling of multiobject shapes, they are inefficient when facing challenging problems. Based on the wavelet transform, the fully generic multiresolution framework presented in this paper allows us to decompose the interobject relationships into different levels of detail. The aim of this hierarchical decomposition is twofold: to efficiently characterize the relationships between objects, and their particular localities. Experiments performed on an eight-object structure defined in axial cross-sectional MR brain images show that the new hierarchical segmentation significantly improves the accuracy of the segmentation, while exhibiting a remarkable robustness with respect to the size of the training set.
Noma, Hisashi; Matsui, Shigeyuki
2013-05-20
The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than parametric prior distributions, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
On hierarchical models for visual recognition and learning of objects, scenes, and activities
Spehr, Jens
2015-01-01
In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore inference approaches for fast and robust detection are presented. These new approaches combine the idea of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition the book shows the use for detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...
Root zone water quality model (RZWQM2): Model use, calibration and validation
Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.
2012-01-01
The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.
Heuristics for Hierarchical Partitioning with Application to Model Checking
DEFF Research Database (Denmark)
Möller, Michael Oliver; Alur, Rajeev
2001-01-01
We propose a cost function that captures the quality of a structure relative to the connections and favors shallow structures with a low degree of branching. Finding a structure with minimal cost is NP-complete. We present a greedy polynomial-time algorithm that approximates good solutions incrementally by local evaluation of a heuristic function. We argue for a heuristic function based on four criteria: the number of enclosed connections, the number of components, the number of touched connections and the depth of the structure. We report on an application in the context of formal verification, where our algorithm serves as a preprocessor for a temporal scaling technique, called “Next” heuristic [2]. The latter is applicable in reachability analysis and is included in a recent version of the Mocha model checking tool. We demonstrate performance and benefits of our method and use an asynchronous parity computer and an opinion poll protocol as case studies.
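A toy version of such a greedy partitioning heuristic can be sketched as follows. The cost function here is a simplified stand-in of our own devising: it keeps only a crossing-connection term and a size term (the paper's four criteria also count touched connections and structure depth), but the greedy local-improvement loop has the same shape.

```python
from itertools import combinations

def cost(groups, edges, beta=0.3):
    """Toy heuristic cost: connections crossing group boundaries are
    penalized, and a quadratic size term discourages collapsing
    everything into one block (our simplification of the paper's
    four-criteria heuristic)."""
    def group_of(n):
        return next(i for i, g in enumerate(groups) if n in g)
    crossing = sum(1 for u, v in edges if group_of(u) != group_of(v))
    return crossing + beta * sum(len(g) ** 2 for g in groups)

def greedy_partition(nodes, edges):
    """Greedy agglomeration: start from singletons, repeatedly apply
    the merge that lowers the cost most, and stop at a local minimum
    (a polynomial-time approximation, as in the abstract)."""
    groups = [{n} for n in nodes]
    while True:
        best = None
        for a, b in combinations(range(len(groups)), 2):
            merged = [g for i, g in enumerate(groups) if i not in (a, b)]
            merged.append(groups[a] | groups[b])
            c = cost(merged, edges)
            if c < cost(groups, edges) and (best is None or c < best[0]):
                best = (c, merged)
        if best is None:
            return groups
        groups = best[1]

# two connected pairs end up grouped together
parts = greedy_partition([1, 2, 3, 4], [(1, 2), (3, 4)])
```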
A hierarchical lattice spring model to simulate the mechanics of 2-D materials-based composites
Directory of Open Access Journals (Sweden)
Lucas Brely
2015-07-01
In the field of engineering materials, strength and toughness are typically two mutually exclusive properties. Structural biological materials such as bone, tendon or dentin have resolved this conflict and show unprecedented damage tolerance, toughness and strength levels. The common feature of these materials is their hierarchical heterogeneous structure, which contributes to increased energy dissipation before failure, occurring at different scale levels. These structural properties are the key to exceptional bioinspired material mechanical properties, in particular for nanocomposites. Here, we develop a numerical model in order to simulate the mechanisms involved in damage progression and energy dissipation at different size scales in nano- and macro-composites, which depend both on the heterogeneity of the material and on the type of hierarchical structure. Both these aspects have been incorporated into a 2-dimensional model based on a Lattice Spring Model, accounting for geometrical nonlinearities and including statistically-based fracture phenomena. The model has been validated by comparing numerical results to continuum and fracture mechanics results as well as finite element simulations, and then employed to study how structural aspects impact hierarchical composite material properties. Results obtained with the numerical code highlight the dependence of stress distributions on matrix properties and reinforcement dispersion, geometry and properties, and how failure of sacrificial elements is directly involved in the damage tolerance of the material. Thanks to the rapidly developing field of nanocomposite manufacture, it is already possible to artificially create materials with multi-scale hierarchical reinforcements. The developed code could be a valuable support in the design and optimization of these advanced materials, drawing inspiration from and going beyond biological materials with exceptional mechanical properties.
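The role of statistically distributed failure thresholds in damage tolerance can be illustrated with a much simpler 1-D relative of the lattice spring model: the equal-load-sharing fiber bundle. This is our illustration, not the authors' 2-D code, but it shows the same mechanism of sacrificial failure and load redistribution.

```python
def bundle_strength(thresholds):
    """Equal-load-sharing fiber bundle, a drastically simplified 1-D
    analogue of a lattice spring model: under total load F each of
    the k surviving elements carries F/k; an element whose threshold
    is exceeded fails and its share is redistributed. The peak
    sustainable load is max over k of k times the k-th largest
    threshold."""
    t = sorted(thresholds, reverse=True)
    return max((k + 1) * th for k, th in enumerate(t))

# with thresholds 2 and 1 the weak element fails first at F = 2
# (each then carries 1), after which the strong one alone still
# holds F = 2: failure is progressive rather than catastrophic
peak = bundle_strength([2.0, 1.0])
```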
Joint hierarchical models for sparsely sampled high-dimensional LiDAR and forest variables
Finley, Andrew O.; Banerjee, Sudipto; Zhou, Yuzhen; Cook, Bruce D; Babcock, Chad
2016-01-01
Recent advancements in remote sensing technology, specifically Light Detection and Ranging (LiDAR) sensors, provide the data needed to quantify forest characteristics at a fine spatial resolution over large geographic domains. From an inferential standpoint, there is interest in prediction and interpolation of the often sparsely sampled and spatially misaligned LiDAR signals and forest variables. We propose a fully process-based Bayesian hierarchical model for above ground biomass (AGB) and L...
A Hierarchical Slicing Tool Model
Institute of Scientific and Technical Information of China (English)
谭毅; 朱平; 李必信; 郑国梁
2001-01-01
Most of the traditional methods of slicing are based on dependence graph. But constructing dependence graph for object oriented programs directly is very complicated. The design and implementation of a hierarchical slicing tool model are described. By constructing the package level dependence graph, class level dependence graph, method level dependence graph and statement level dependence graph, package level slice, class level slice, method level slice and program slice are obtained step by step.
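Once a dependence graph at any of these levels is built, computing a slice reduces to reverse reachability over dependence edges. The sketch below is illustrative only (not the described tool), using hypothetical statement labels:

```python
from collections import defaultdict

def backward_slice(edges, criterion):
    """Backward slice on a statement-level dependence graph: every
    node from which the slicing criterion is reachable along
    data/control dependence edges belongs to the slice."""
    preds = defaultdict(set)
    for src, dst in edges:          # edge (src, dst): dst depends on src
        preds[dst].add(src)
    slice_, work = {criterion}, [criterion]
    while work:
        n = work.pop()
        for p in preds[n]:
            if p not in slice_:
                slice_.add(p)
                work.append(p)
    return slice_

# hypothetical program: s1: x=1  s2: y=2  s3: z=x+1
#                       s4: print(z)  s5: print(y)
deps = [("s1", "s3"), ("s3", "s4"), ("s2", "s5")]
sl = backward_slice(deps, "s4")     # s2 and s5 are sliced away
```

The hierarchical scheme in the abstract applies the same reachability idea at package, class, and method granularity before descending to statements.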
Jansen, P.G.W.
2003-01-01
Using hierarchical linear modeling the author investigated temporal trends in the predictive validity of an assessment center for career advancement (measured as salary growth) over a 13-year period, for a sample of 456 academic graduates. Using year of entry and tenure as controls, linear and quadratic properties of individual salary curves could be predicted by the assessment center dimensions. The validity of the (clinical) overall assessment rating for persons with tenure of at least 12 y...
Julia sets and complex singularities in diamond-like hierarchical Potts models
Institute of Scientific and Technical Information of China (English)
QIAO Jianyong
2005-01-01
We study the phase transition of the Potts model on diamond-like hierarchical lattices. It is shown that the set of the complex singularities is the Julia set of a rational mapping. An interesting problem is how these singularities are continued to the complex plane. In this paper, by the method of complex dynamics, we give a complete description of the connectivity of the set of the complex singularities.
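For the diamond hierarchical lattice, the renormalization map of the q-state Potts Boltzmann variable z = e^K is often quoted as z -> ((z^2 + q - 1)/(2z + q - 2))^2 (series decimation of two bonds, then two parallel branches). Assuming that form, which we take from the related renormalization literature rather than from this paper, a crude basin test sketches how the Julia set arises as the boundary between basins:

```python
def attractor(z, q, iters=200, tol=1e-6, big=1e8):
    """Iterate the diamond-lattice Potts renormalization map
    z -> ((z^2 + q - 1) / (2z + q - 2))^2 and report the basin the
    orbit falls into. The Julia set of this rational map is the
    boundary between basins, where the complex singularities of the
    free energy accumulate."""
    w = complex(z)
    for _ in range(iters):
        if abs(w - 1) < tol:
            return "z=1"        # high-temperature fixed point
        if abs(w) > big:
            return "infinity"   # strong-coupling (ordered) basin
        denom = 2 * w + q - 2
        if denom == 0:
            return "undecided"
        w = ((w * w + q - 1) / denom) ** 2
    return "undecided"

phase_a = attractor(1.05, 2)    # near the disordered fixed point
phase_b = attractor(10.0, 2)    # strong coupling, ordered basin
```

For the Ising case q = 2 the map reduces to z -> ((z^2 + 1)/(2z))^2; scanning `attractor` over a grid of complex z and coloring by basin traces out the Julia set numerically.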
Chung-Chang Lee
2009-01-01
This paper uses hierarchical linear modeling (HLM) to explore the influence of satisfaction with public facilities on both individual residential and overall (or regional) levels on housing prices. The empirical results indicate that the average housing prices between local cities and counties exhibit significant variance. At the macro level, the explanatory power of the variable "convenience of life" on the average housing prices of all counties and cities reaches the 5% significance level...
Guo, Qiang; Rajewski, Daniel; Takle, Eugene; Ganapathysubramanian, Baskar
2016-01-01
Current wind turbine simulations successfully use turbulence generating tools for modeling behavior. However, they lack the ability to reproduce variabilities in wind dynamics and inherent stochastic structures (like temporal and spatial coherences, sporadic bursts, high shear regions). This necessitates a more realistic parameterization of the wind that encodes location, topography, diurnal, seasonal, and stochastic effects. In this work, we develop a hierarchical temporal and spatial deco...
Calibration of a modified Sierra Model 235 slotted cascade impactor
Energy Technology Data Exchange (ETDEWEB)
Knuth, R.H.
1979-07-01
For measurements of ore dust in uranium concentrating mills, a Sierra Model 235 slotted cascade impactor was calibrated at a flow rate of 0.21 m³/min, using solid monodisperse particles and an impaction surface of Whatman No. 41 filter paper soaked in mineral oil. The reduction from the impactor's design flow rate of 1.13 m³/min (40 cfm) to 0.21 m³/min (7.5 cfm) increased the stage cut-off diameters by an average factor of 2.3, a necessary adjustment because of the anticipated large particle sizes of ore dust. The underestimation of mass median diameters, often caused by the rebound and reentrainment of solid particles from dry impaction surfaces, was virtually eliminated by using the oiled Whatman No. 41 impaction surface. Observations of satisfactory performance in the laboratory were verified by tests of the impactor in ore mills.
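The average factor of 2.3 is consistent with Stokes-number scaling: with fixed stage geometry, the jet velocity is proportional to the flow rate Q, and a constant collection Stokes number implies d50 proportional to Q^(-1/2). A quick consistency check (ours, not part of the calibration report, and neglecting slip correction):

```python
import math

def cutoff_scale(q_design, q_operated):
    """Stokes-number scaling for a fixed-geometry impactor stage:
    the stage Stokes number at collection goes as d50^2 * U with jet
    velocity U proportional to flow rate Q, so the cut-off diameter
    d50 scales as Q**-0.5."""
    return math.sqrt(q_design / q_operated)

# reducing the flow from 1.13 to 0.21 m^3/min should raise each
# stage cut-off by about sqrt(1.13/0.21), close to the reported 2.3
factor = cutoff_scale(1.13, 0.21)
```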
Cluster based hierarchical resource searching model in P2P network
Institute of Scientific and Technical Information of China (English)
Yang Ruijuan; Liu Jian; Tian Jingwen
2007-01-01
For the problem of the large network load generated by the Gnutella resource-searching model in Peer-to-Peer (P2P) networks, an improved model to decrease the network expense is proposed, which establishes a cluster in the P2P network, auto-organizes logical layers, and applies a hybrid mechanism of directional searching and flooding. The performance analysis and simulation results show that the proposed hierarchical searching model effectively reduces the generated message load and that its searching-response time performance is nearly as good as that of the Gnutella model.
The Case for A Hierarchal System Model for Linux Clusters
Energy Technology Data Exchange (ETDEWEB)
Seager, M; Gorda, B
2009-06-05
The computer industry today is no longer driven, as it was in the 40s, 50s and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership-class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (order 10s of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which are approximately doubling with every generation, are funded by investments from multiple markets: enterprise, desktops, games, embedded and specialized devices. Over and above these base technology investments are investments for critical technology elements such as microprocessor, chipset and memory ASIC components. Investments for these components are spread across the same markets as the base semiconductor process investments. These second-tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier 3 technology elements include networking (SAN, WAN and LAN), interconnects and large scalable SMP designs. Above these, in tier 4, are relatively small investments necessary to build very large, scalable high-end or Leadership-class systems. Primary among these are the specialized network designs of vertically integrated systems, etc.
Institute of Scientific and Technical Information of China (English)
WU; Jianhua; WANG; Zhaohui
2009-01-01
Digital libraries are complex systems, and this complexity makes their evaluation difficult. This paper proposes a hierarchical model to address the problem, organizing the entangled issues into a clearly layered structure. First, digital libraries (DLs hereafter) are classified by their scope of operation into 5 groups in ascending order: mini DLs, small DLs, medium DLs, large DLs, and huge DLs. Then, according to the characteristics of DLs at different operational scopes and levels of sophistication, they are further grouped into unitary DLs, union DLs and hybrid DLs. Based on this structure, a hierarchical model for digital library evaluation is introduced, which evaluates DLs within a hierarchical scheme using varying criteria matched to their specific level of operational complexity, i.e. at the micro-level, medium-level, and/or macro-level. Based on a careful examination and analysis of the current literature on DL evaluation systems, an experiment is conducted using the DL evaluation model and its criteria for unitary DLs at the micro-level. The main results of this evaluation experiment, along with the evaluation indicators and relevant issues of major concern for DLs at the medium-level and macro-level, are presented at some length.
Wildhaber, Mark L.; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.; Dey, Rima
2017-01-01
We present a hierarchical series of spatially decreasing and temporally increasing models to evaluate the uncertainty in the atmosphere-ocean global climate model (AOGCM) and the regional climate model (RCM) relative to the uncertainty in the somatic growth of the endangered pallid sturgeon (Scaphirhynchus albus). For effects on fish populations of riverine ecosystems, climate output simulated by coarse-resolution AOGCMs and RCMs must be downscaled to basins, to river hydrology, and on to population response. One needs to transfer the information from these climate simulations down to the individual scale in a way that minimizes extrapolation and can account for spatio-temporal variability in the intervening stages. The goal is a framework to determine whether, given uncertainties in the climate models and the biological response, meaningful inference can still be made. The non-linear downscaling of climate information to the river scale requires that one realistically account for spatial and temporal variability across scales. Our downscaling procedure includes the use of fixed/calibrated hydrological flow and temperature models coupled with a stochastically parameterized sturgeon bioenergetics model. We show that, although there is a large amount of uncertainty associated with both the climate model output and the fish growth process, one can establish significant differences in fish growth distributions between models, and between future and current climates for a given model.
Combined calibration and sensitivity analysis for a water quality model of the Biebrza River, Poland
Perk, van der M.; Bierkens, M.F.P.
1995-01-01
A study was performed to quantify the error in results of a water quality model of the Biebrza River, Poland, due to uncertainties in calibrated model parameters. The procedure used in this study combines calibration and sensitivity analysis. Finally, the model was validated to test its predictive capability.
Hydrological processes and model representation: impact of soft data on calibration
J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda
2015-01-01
Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...
Fast Ninomiya–Victoir calibration of the double-mean-reverting model
DEFF Research Database (Denmark)
Bayer, Christian; Gatheral, Jim; Karlsmark, Morten
2013-01-01
We consider the three-factor double mean reverting (DMR) option pricing model of Gatheral [Consistent Modelling of SPX and VIX Options, 2008], a model which can be successfully calibrated to both VIX options and SPX options simultaneously. One drawback of this model is that calibration may be slow...
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Energy Technology Data Exchange (ETDEWEB)
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries lie along mesh boundaries.
Simultaneous Semi-Distributed Model Calibration Guided by ...
Modelling approaches that transfer hydrologically-relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment unit, and these units average 60 km2 in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question "How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?" We should be able to apply the same parameterizations to assessment units with common HL codes if 1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and 2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests the ability to use HL codes to inform and share model parameters across watersheds in the Pacific Northwest. EPA's Western Ecology Division has published and is refining a framework for defining landscapes.
Finite element model calibration of a nonlinear perforated plate
Ehrhardt, David A.; Allen, Matthew S.; Beberniss, Timothy J.; Neild, Simon A.
2017-03-01
This paper presents a case study in which the finite element model for a curved circular plate is calibrated to reproduce both the linear and nonlinear dynamic response measured from two nominally identical samples. The linear dynamic response is described with the linear natural frequencies and mode shapes identified with a roving hammer test. Due to the uncertainty in the stiffness characteristics from the manufactured perforations, the linear natural frequencies are used to update the effective modulus of elasticity of the full order finite element model (FEM). The nonlinear dynamic response is described with nonlinear normal modes (NNMs) measured using force appropriation and high speed 3D digital image correlation (3D-DIC). The measured NNMs are used to update the boundary conditions of the full order FEM through comparison with NNMs calculated from a nonlinear reduced order model (NLROM). This comparison revealed that the nonlinear behavior could not be captured without accounting for the small curvature of the plate from manufacturing as confirmed in literature. So, 3D-DIC was also used to identify the initial static curvature of each plate and the resulting curvature was included in the full order FEM. The updated models are then used to understand how the stress distribution changes at large response amplitudes providing a possible explanation of failures observed during testing.
Using Runoff Data to Calibrate the Community Land Model
Ray, J.; Hou, Z.; Huang, M.; Swiler, L.
2014-12-01
We present a statistical method for calibrating the Community Land Model (CLM) using streamflow observations collected between 1999 and 2008 at the outlets of two river basins from the Model Parameter Estimation Experiment (MOPEX): the Oostanaula River at Resaca, GA, and the Walnut River at Winfield, KS. The observed streamflow shows variability over a large range of time scales, none of which significantly dominates the others; consequently, the time series appears noisy and is difficult to use directly in model parameter estimation without significant filtering. We perform a multi-resolution wavelet decomposition of the observed streamflow and use the wavelet power coefficients (WPC) as the tuning data. We construct a mapping (a surrogate model) between WPC and three hydrological parameters of the CLM using a training set of 256 CLM runs. The dependence of WPC on the parameters is complex and cannot be captured by a surrogate unless the parameter combinations yield physically plausible model predictions, i.e., those that are skillful when compared to observations. Retaining only the top quartile of the runs ensures skillfulness, as measured by the RMS error between observations and CLM predictions. This "screening" of the training data yields a region (the "valid" region) in the parameter space where accurate surrogate models can be created. We construct a classifier for the "valid" region and, in conjunction with the surrogate models for WPC, pose a Bayesian inverse problem for the three hydrological parameters. The inverse problem is solved using an adaptive Markov chain Monte Carlo (MCMC) method to construct a three-dimensional posterior distribution for the hydrological parameters. Posterior predictive tests using the surrogate model reveal that the posterior distribution is more predictive than the nominal values of the parameters, which are used as defaults in the current version of CLM. The effectiveness of the inversion is then validated by
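The wavelet-power tuning data described above can be illustrated with a minimal Haar decomposition. The signal, the number of levels, and the per-level power summary below are our own illustrative choices, not the authors' implementation (which may use a different wavelet family):

```python
import math

def haar_level(signal):
    """One Haar analysis step: pairwise averages (approximation) and
    pairwise differences (detail), with orthonormal 1/sqrt(2) scaling."""
    n = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(n)]
    detail = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(n)]
    return approx, detail

def wavelet_power(signal, levels):
    """Mean squared detail coefficient per level: a compact summary of
    variability at dyadic time scales, usable as calibration targets."""
    power = []
    for _ in range(levels):
        signal, detail = haar_level(signal)
        power.append(sum(d * d for d in detail) / len(detail))
    return power

# Synthetic "streamflow": a slow seasonal cycle plus a fast oscillation.
flow = [10 + 5 * math.sin(2 * math.pi * t / 64) + math.sin(2 * math.pi * t / 4)
        for t in range(256)]
print(wavelet_power(flow, 4))  # per-level power coefficients (WPC-style tuning data)
```

Summaries like these are far smoother functions of model parameters than the raw noisy time series, which is what makes them practical calibration targets.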
Anderson, Daniel
2012-01-01
This manuscript provides an overview of hierarchical linear modeling (HLM), as part of a series of papers covering topics relevant to consumers of educational research. HLM is tremendously flexible, allowing researchers to specify relations across multiple "levels" of the educational system (e.g., students, classrooms, schools, etc.).
Hou, Fujun
2016-01-01
This paper describes how market competitiveness evaluations concerning mechanical equipment can be made in multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with some required qualifications, and that the alternatives are pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM is used in this study based on an analysis of its desirable traits. Illustration and comparison show that PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might apply the findings of this paper through PCbHDM.
DEFF Research Database (Denmark)
Mishnaevsky, Leon; Dai, Gaoming
2014-01-01
Hybrid and hierarchical polymer composites represent a promising group of materials for engineering applications. In this paper, computational studies of the strength and damage resistance of hybrid and hierarchical composites are reviewed. The reserves of composite improvement are explored by using computational micromechanical models. It is shown that while glass/carbon fiber hybrid composites clearly demonstrate higher stiffness and lower weight with increasing carbon content, they can have lower strength compared with usual glass fiber polymer composites. Secondary nanoreinforcement can drastically increase the fatigue lifetime of composites. In particular, composites with the nanoplatelets localized in the fiber/matrix interface layer (fiber sizing) ensure a much higher fatigue lifetime than those with the nanoplatelets in the matrix.
Xu, Lizhen; Paterson, Andrew D; Xu, Wei
2017-04-01
Motivated by the multivariate nature of microbiome data with hierarchical taxonomic clusters, counts that are often skewed and zero inflated, and repeated measures, we propose a Bayesian latent variable methodology to jointly model multiple operational taxonomic units within a single taxonomic cluster. This novel method can incorporate both negative binomial and zero-inflated negative binomial responses, and can account for serial and familial correlations. We develop a Markov chain Monte Carlo algorithm that is built on a data augmentation scheme using Pólya-Gamma random variables. Hierarchical centering and parameter expansion techniques are also used to improve the convergence of the Markov chain. We evaluate the performance of our proposed method through extensive simulations. We also apply our method to a human microbiome study.
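One building block of such a model, the zero-inflated negative binomial likelihood, can be sketched as follows. The parameterization and values are illustrative assumptions, and the authors' full Pólya-Gamma MCMC is not reproduced here:

```python
import math

def nb_logpmf(y, mu, r):
    """Negative binomial log-pmf with mean mu and dispersion r
    (variance mu + mu^2 / r)."""
    p = r / (r + mu)
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(p) + y * math.log(1.0 - p))

def zinb_loglik(counts, mu, r, pi):
    """Zero-inflated NB log-likelihood: a zero arises either from the
    point mass at zero (probability pi) or from the NB component."""
    ll = 0.0
    for y in counts:
        if y == 0:
            ll += math.log(pi + (1.0 - pi) * math.exp(nb_logpmf(0, mu, r)))
        else:
            ll += math.log(1.0 - pi) + nb_logpmf(y, mu, r)
    return ll

data = [0, 0, 0, 0, 0, 0, 1, 2]  # zero-heavy counts for one hypothetical OTU
print(zinb_loglik(data, mu=2.0, r=1.0, pi=0.5))
print(zinb_loglik(data, mu=2.0, r=1.0, pi=0.0))  # no zero inflation fits worse here
```

The hierarchical model in the abstract layers taxon-level random effects and serial/familial correlation on top of this per-observation likelihood.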
Li, Ben; Li, Yunxiao; Qin, Zhaohui S
2017-06-01
Modern high-throughput biotechnologies such as microarray and next-generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, hence the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool for analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available and should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrate the superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
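The shrinkage effect and its over-correction risk can be seen in a minimal normal-normal hierarchical sketch (all numbers are illustrative; this is not the adaptiveHM method itself):

```python
def shrink(estimates, se, tau2):
    """Posterior means for y_i ~ N(theta_i, se^2), theta_i ~ N(m, tau2):
    each raw estimate is pulled toward the grand mean m, with weight
    tau2 / (tau2 + se^2) left on the feature's own data."""
    m = sum(estimates) / len(estimates)
    w = tau2 / (tau2 + se ** 2)
    return [m + w * (y - m) for y in estimates]

raw = [0.1, 0.2, -0.1, 0.0, 3.0]       # last feature: a genuinely strong signal
shrunk = shrink(raw, se=0.5, tau2=0.25)
print(shrunk[-1])  # 3.0 is pulled to ~1.82: useful pooling for noisy features,
                   # but over-correction if the outlying effect is real
```

This is exactly the tension the abstract describes: pooling stabilizes the many weak features while distorting the few strong ones, which historical data can help disentangle.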
Gas turbine engine prognostics using Bayesian hierarchical models: A variational approach
Zaidan, Martha A.; Mills, Andrew R.; Harrison, Robert F.; Fleming, Peter J.
2016-03-01
Prognostics is an emerging requirement of modern health monitoring that aims to increase the fidelity of failure-time predictions by the appropriate use of sensory and reliability information. In the aerospace industry it is a key technology to reduce life-cycle costs, improve reliability and asset availability for a diverse fleet of gas turbine engines. In this work, a Bayesian hierarchical model is selected to utilise fleet data from multiple assets to perform probabilistic estimation of remaining useful life (RUL) for civil aerospace gas turbine engines. The hierarchical formulation allows Bayesian updates of an individual predictive model to be made, based upon data received asynchronously from a fleet of assets with different in-service lives and for the entry of new assets into the fleet. In this paper, variational inference is applied to the hierarchical formulation to overcome the computational and convergence concerns that are raised by the numerical sampling techniques needed for inference in the original formulation. The algorithm is tested on synthetic data, where the quality of approximation is shown to be satisfactory with respect to prediction performance, computational speed, and ease of use. A case study of in-service gas turbine engine data demonstrates the value of integrating fleet data for accurately predicting degradation trajectories of assets.
Hierarchical analytical and simulation modelling of human-machine systems with interference
Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.
2017-01-01
The article considers the principles of building an analytical and simulation model of the human operator and of industrial control system hardware and software. E-networks, an extension of Petri nets, are used as the mathematical apparatus. This approach allows simulating complex parallel distributed processes in human-machine systems. A structural-hierarchical approach is used to build the mathematical model of the human operator: the upper level is represented by a logical-dynamic decision-making model based on E-networks, while the lower level reflects the psychophysiological characteristics of the human operator.
Critical behavior of Gaussian model on diamond-type hierarchical lattices
Institute of Scientific and Technical Information of China (English)
孔祥木; 李崧
1999-01-01
It is proposed that the Gaussian-type distribution constant b_{q_i} in the Gaussian model depends on the coordination number q_i of site i, and that the relation b_{q_i}/b_{q_j} = q_i/q_j holds among the b_{q_i}. The Gaussian model is then studied on a family of diamond-type hierarchical (DH) lattices by the decimation real-space renormalization group combined with a spin-rescaling method. It is found that the magnetic property of the Gaussian model belongs to the same universality class, and that the critical point K* and the critical exponent ν are given by K* = b_{q_i}/q_i and ν = 1/2, respectively.
Hierarchical Colored Timed Petri Nets for Maintenance Process Modeling of Civil Aircraft
Institute of Scientific and Technical Information of China (English)
FU Cheng-cheng; SUN You-chao; LU Zhong
2008-01-01
A civil aircraft maintenance process simulation model is an effective method for analyzing the maintainability of a civil aircraft. First, we present Hierarchical Colored Timed Petri Nets for maintenance process modeling of civil aircraft. Then, we expound a general method for modeling civil aircraft maintenance activities, determine the maintenance level of decomposition, and propose methods for describing the logical relations between maintenance activities based on Petri Nets. Finally, the multi-level timed Colored Petri Net modeling and simulation procedure is illustrated with a maintenance example: a burst landing-gear tire on a certain type of aircraft. The feasibility of the method is demonstrated by this example.
Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.
2012-12-01
Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally
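The response-mapping idea, correcting a cheap coarse model so it can stand in for the fine model during calibration, can be sketched as follows. The two models and the linear correction are hypothetical stand-ins, not the groundwater models of the study:

```python
def fine_model(k):
    """Stand-in for the expensive high-fidelity model (hypothetical response)."""
    return 100.0 / (1.0 + k) + 3.0

def coarse_model(k):
    """Cheap lower-fidelity surrogate of the same hypothetical system."""
    return 50.0 / (1.0 + k)

def fit_linear_map(train_ks):
    """Least-squares fit of a*coarse(k) + b to fine(k): the step that maps
    the surrogate response onto the original model response."""
    xs = [coarse_model(k) for k in train_ks]
    ys = [fine_model(k) for k in train_ks]
    n = len(train_ks)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_linear_map([0.5, 1.0, 2.0, 4.0])   # a few expensive fine-model runs
target = fine_model(2.0)                      # "observation" at the true k = 2
# Calibrate on the corrected surrogate only; no further fine-model runs needed.
k_hat = min((0.01 * i for i in range(1001)),
            key=lambda k: abs(a * coarse_model(k) + b - target))
print(a, b, k_hat)  # the corrected surrogate recovers k close to 2.0
```

In practice the fidelity/efficiency trade-off the abstract discusses shows up in how well such a correction transfers across the parameter range; here it is exact by construction.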
Aggett, Graeme; Spies, Ryan; Szfranski, Bill; Hahn, Claudia; Weil, Page
2016-04-01
An otherwise adequate forecasting model may not perform well if it is inadequately calibrated. Model calibration is often constrained by the lack of adequate calibration data, especially for small river basins with high spatial rainfall variability. Rainfall/snow station networks may not be dense enough to accurately estimate catchment rainfall/SWE. High discharges during flood events are subject to significant error due to the difficulty of flow gauging. Dynamic changes in catchment conditions (e.g., urbanization; losses in karstic systems) invariably introduce non-homogeneity into the water level and flow data. This presentation will highlight some of the challenges in reliably calibrating U.S. National Weather Service operational flood forecast models, emphasizing the various challenges in different physiographic/climatic domains. It will also highlight the benefit of using various data visualization techniques to convey information about model calibration to operational forecasters so they may understand the influence of the calibration on model performance under various conditions.
Analysis of household data on influenza epidemic with Bayesian hierarchical model.
Hsu, C Y; Yen, A M F; Chen, L S; Chen, H H
2015-03-01
Data used for modelling the household transmission of infectious diseases such as influenza have an inherent multilevel structure and correlated observations, which make the widely used conventional infectious disease transmission models (including the Greenwood model and the Reed-Frost model) not directly applicable within the context of a household (due to the crowded domestic conditions or socioeconomic status of the household). Thus, at the household level, the effects of individual-level factors, such as vaccination, may be confounded or modified in some way. We propose a Bayesian hierarchical random-effects model (with random intercepts and random slopes) in the context of the generalised linear model to capture heterogeneity and variation at the individual, generation, and household levels. It was applied to empirical surveillance data on an influenza epidemic in Taiwan. The parameters of interest were estimated by using the Markov chain Monte Carlo method in conjunction with Bayesian directed acyclic graphical models. Comparisons between models were made using the deviance information criterion. Based on the result of the random-slope Bayesian hierarchical method in the context of the Reed-Frost transmission model, the regression coefficient for the protective effect of vaccination varied statistically significantly from household to household. This heterogeneity was robust to the use of different prior distributions (including non-informative, sceptical, and enthusiastic ones). By integrating out the uncertainty in the parameters of the posterior distribution, the predictive distribution was computed to forecast the number of influenza cases, allowing for a random household effect.
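The household heterogeneity targeted by the random-slope model can be illustrated with a chain-binomial (Reed-Frost) simulation in which the per-contact transmission probability carries a household-specific random effect; all parameter values below are invented for illustration:

```python
import math
import random

def reed_frost(n_sus, n_inf, p):
    """Chain-binomial household epidemic: a susceptible escapes one infective
    with probability (1 - p); infection prob is 1 - (1 - p)^I per generation."""
    total = n_inf
    while n_inf > 0 and n_sus > 0:
        p_inf = 1.0 - (1.0 - p) ** n_inf
        new = sum(1 for _ in range(n_sus) if random.random() < p_inf)
        n_sus -= new
        total += new
        n_inf = new
    return total

random.seed(1)
sizes = []
for _ in range(1000):                      # 1000 hypothetical 4-person households
    effect = random.gauss(-1.0, 0.7)       # household-specific random effect
    p = 1.0 / (1.0 + math.exp(-(-1.0 + effect)))  # per-contact transmission prob
    sizes.append(reed_frost(n_sus=3, n_inf=1, p=p))
print(sum(sizes) / len(sizes))  # mean final outbreak size across households
```

In the paper's fitted model the analogous household-level variation sits on the vaccination coefficient rather than the baseline transmission log-odds, but the mechanism, a random effect inside the chain-binomial likelihood, is the same.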
Hierarchical graphs for better annotations of rule-based models of biochemical systems
Energy Technology Data Exchange (ETDEWEB)
Hu, Bin [Los Alamos National Laboratory; Hlavacek, William [Los Alamos National Laboratory
2009-01-01
In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.
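A minimal sketch of such a hierarchical annotation, a component tree layered over flat BNGL-style components, might look as follows (the Lck domain names follow the example in the abstract; the data structure and function are our illustration, not part of BNGL):

```python
# Hypothetical hierarchical annotation of the protein tyrosine kinase Lck:
# internal nodes group subcomponents; leaves are the flat components that
# BNGL-style rules would actually reference.
lck = {
    "name": "Lck",
    "children": [
        {"name": "SH3", "children": []},
        {"name": "SH2", "children": []},
        {"name": "kinase_domain", "children": [
            {"name": "activation_loop", "children": [
                {"name": "Y394", "children": []}]}]},
    ],
}

def leaf_components(node, path=()):
    """Flatten the hierarchy into full paths, one per leaf component."""
    path = path + (node["name"],)
    if not node["children"]:
        yield path
    for child in node["children"]:
        yield from leaf_components(child, path)

for path in leaf_components(lck):
    print(".".join(path))
```

The annotation layer costs nothing at simulation time: rules still see only the flattened leaves, while the tree documents how components are structurally organized.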
DEFF Research Database (Denmark)
Kristensen, Anders Ringgaard; Søllested, Thomas Algot
2004-01-01
Several replacement models have been presented in the literature. In other application areas, such as dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements, such as multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software, that have hardly been implemented in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Energy Technology Data Exchange (ETDEWEB)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David; Thompson, Sandra E.
2016-09-17
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust when more calibration references than strictly required are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
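The baseline weighting scheme described above can be illustrated with a minimal sketch: a weighted least-squares fit in which each calibration reference carries an assumed reliability weight, so that a less trustworthy reference pulls the solution less. All data, weights, and the linear model form are hypothetical, not taken from the paper.

```python
# Illustrative weighted least-squares calibration: fit y = theta0 + theta1*x
# to several "reference" measurements, weighting each by assumed reliability.
def weighted_linear_fit(xs, ys, ws):
    # Solve the 2x2 weighted normal equations in closed form.
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = sw * sxx - sx * sx
    theta0 = (sxx * sy - sx * sxy) / det
    theta1 = (sw * sxy - sx * sy) / det
    return theta0, theta1

# Two consistent references and one inconsistent one, down-weighted.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 9.0]        # third reference disagrees with the first two
ws = [1.0, 1.0, 0.1]        # baseline weighting by assumed reliability
t0, t1 = weighted_linear_fit(xs, ys, ws)
```

Down-weighting the third reference keeps the fitted slope near the value implied by the two consistent references, which is the intent of weighting references by reliability.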
2013-01-01
This paper proposes a hierarchical Bayesian framework for modeling the life cycle of marine exploited fish with a spatial perspective. The application was developed for a nursery-dependent fish species, the common sole (Solea solea), on the Eastern Channel population (Western Europe). The approach combined processes of different natures and various sources of observations within an integrated framework for life-cycle modeling: (1) outputs of an individual-based model for larval drift and surv...
NSLS-II: Nonlinear Model Calibration for Synchrotrons
Energy Technology Data Exchange (ETDEWEB)
Bengtsson, J.
2010-10-08
This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ≈1 × 10^-5 for 1024 turns (to calibrate the linear optics) and ≈1 × 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ≈0.1, and since the transverse damping time is ≈20 ms, i.e., ≈4,000 turns, there is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For, e.g., N = 256 and 5% noise we obtain δν ≈ 1 × 10^-5. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing: for example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al
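The kind of frequency estimation discussed above can be sketched with a toy example: estimate the tune of a noisy turn-by-turn signal by locating the DFT peak and refining it by parabolic interpolation of the spectrum magnitude. This is a generic illustration, not the NAFF-style analysis used at LEAR; the tune value and noise level are hypothetical.

```python
import cmath
import math
import random

# Estimate the dominant normalized frequency (tune) of N turns of data.
def estimate_tune(x):
    n = len(x)
    mag = []
    for k in range(n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mag.append(abs(s))
    k = max(range(1, n // 2 - 1), key=lambda i: mag[i])
    # Parabolic interpolation around the peak bin refines beyond 1/N.
    a, b, c = mag[k - 1], mag[k], mag[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) / n

random.seed(0)
nu_true = 0.205                     # hypothetical betatron tune
n = 256
data = [math.sin(2 * math.pi * nu_true * t) + 0.05 * random.gauss(0, 1)
        for t in range(n)]
nu_est = estimate_tune(data)
```

Even this naive estimator resolves the tune well below the raw bin spacing of 1/256 ≈ 0.004, which is why noise, not resolution, dominates the achievable accuracy.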
Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES
Directory of Open Access Journals (Sweden)
Peng Han
2014-01-01
Full Text Available The large-scale adoption of electric vehicles (EVs) and hybrid renewable energy systems (HRESs), together with increasing loads, will bring significant challenges to the microgrid. A methodology to model microgrids with high EV and HRES penetration is the key to EV adoption assessment and optimized HRES deployment. However, given the complex interactions of a microgrid containing massive numbers of EVs and HRESs, no previous single modelling approach is sufficient. Therefore, this paper proposes a methodology named the Hierarchical Agent-based Integrated Modelling Approach (HAIMA). By effectively integrating agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the assessment of the proposed model and of the impact of EV adoption is achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment, and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.
Calibration of a distributed hydrology and land surface model using energy flux measurements
DEFF Research Database (Denmark)
Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.
2016-01-01
In this study we develop and test a calibration approach on a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measure...
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
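The generative process that HDDM fits can be sketched in a few lines of pure Python (this simulates the drift-diffusion process itself; it is not the HDDM API, and all parameter values are illustrative): evidence starts between two boundaries and drifts with rate v plus Gaussian noise until a boundary is crossed, yielding a choice and a response time.

```python
import math
import random

# Simulate one drift-diffusion trial: start at z, boundaries at 0 and a.
def simulate_ddm(v, a, z, dt=0.001, sigma=1.0, rng=random):
    x, t = z, 0.0
    sqdt = math.sqrt(dt)
    while 0.0 < x < a:
        x += v * dt + sigma * sqdt * rng.gauss(0, 1)
        t += dt
    return (1 if x >= a else 0), t      # (choice, response time in seconds)

random.seed(1)
trials = [simulate_ddm(v=1.0, a=2.0, z=1.0) for _ in range(500)]
upper = sum(c for c, _ in trials) / len(trials)
# With positive drift and an unbiased start, theory gives
# P(upper) = (1 - exp(-2*v*z)) / (1 - exp(-2*v*a)), about 0.88 here.
```

Hierarchical estimation then treats each subject's (v, a, z) as draws from group-level distributions, which is what lets HDDM recover parameters from fewer trials per subject.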
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python
Directory of Open Access Journals (Sweden)
Thomas V Wiecki
2013-08-01
Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs
Branco, N S; de Sousa, J Ricardo; Ghosh, Angsula
2008-03-01
Using a real-space renormalization-group approximation, we study the anisotropic quantum Heisenberg model on hierarchical lattices, with interactions following aperiodic sequences. Three different sequences are considered, with relevant and irrelevant fluctuations, according to the Harris-Luck criterion. The phase diagram is discussed as a function of the anisotropy parameter Delta (such that Delta=0 and 1 correspond to the isotropic Heisenberg and Ising models, respectively). We find three different types of phase diagrams, with general characteristics: the isotropic Heisenberg plane is always an invariant one (as expected by symmetry arguments) and the critical behavior of the anisotropic Heisenberg model is governed by fixed points on the Ising-model plane. Our results for the isotropic Heisenberg model show that the relevance or irrelevance of aperiodic models, when compared to their uniform counterpart, is as predicted by the Harris-Luck criterion. A low-temperature renormalization-group procedure was applied to the classical isotropic Heisenberg model in two-dimensional hierarchical lattices: the relevance criterion is obtained, again in accordance with the Harris-Luck criterion.
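The role of fixed points in a real-space RG can be illustrated with the simplest related example, a generic Migdal-Kadanoff-style recursion for the uniform Ising limit on a diamond hierarchical lattice (an illustrative stand-in, not the recursion derived in the paper): K' = ln cosh(2K), whose nontrivial fixed point K* separates the paramagnetic and ferromagnetic phases.

```python
import math

# One RG step for the Ising coupling K on a diamond hierarchical lattice.
def rg_step(K):
    return math.log(math.cosh(2.0 * K))

# The nontrivial fixed point is unstable under direct iteration
# (the flow runs away from it), so locate it by bisection of g(K) = K' - K.
def critical_coupling(lo=0.1, hi=1.5, iters=80):
    g = lambda K: rg_step(K) - K
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

K_star = critical_coupling()    # unstable fixed point, roughly 0.61
```

Couplings above K* flow to the ferromagnetic sink and couplings below it flow to the disordered one, which is how critical behavior is read off from the recursion.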
Modelling and calibration of a ring-shaped electrostatic meter
Energy Technology Data Exchange (ETDEWEB)
Zhang Jianyong [University of Teesside, Middlesbrough TS1 3BA (United Kingdom); Zhou Bin; Xu Chuanlong; Wang Shimin, E-mail: zhoubinde1980@gmail.co [Southeast University, Sipailou 2, Nanjing 210096 (China)
2009-02-01
Ring-shaped electrostatic flow meters can provide very useful information on pneumatically transported air-solids mixtures. Meters of this type are popular for measuring and controlling the pulverized coal flow distribution among conveyors leading to burners in coal-fired power stations, and they have also been used for research purposes, e.g. for the investigation of the electrification mechanism of air-solids two-phase flow. In this paper, the finite element method (FEM) is employed to analyze the characteristics of ring-shaped electrostatic meters, and a mathematical model has been developed to express the relationship between the meter's voltage output and the motion of charged particles in the sensing volume. The theoretical analysis and the test results using a belt rig demonstrate that the output of the meter depends upon many parameters, including the characteristics of the conditioning circuitry, the particle velocity vector, the amount and rate of change of the charge carried by particles, and the locations of particles. This paper also introduces a method to optimize the theoretical model via calibration.
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
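The MRR idea above can be sketched concretely: fit a predetermined parametric model, then augment it with a portion λ of a nonparametric (kernel) fit to its residuals. The data, the linear parametric form, the Gaussian kernel, and the bandwidth/λ values are all illustrative choices, not those of Mays, Birch, and Starnes.

```python
import math

# Step 1: ordinary least-squares fit of a simple parametric model (a line).
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return lambda x: b0 + b1 * x

# Step 2: Nadaraya-Watson kernel smoother for the residuals.
def kernel_smooth(xs, rs, h):
    def f(x):
        w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
        return sum(wi * ri for wi, ri in zip(w, rs)) / sum(w)
    return f

xs = [0.1 * i for i in range(31)]           # deterministic toy data
ys = [math.sin(x) for x in xs]              # truth is nonlinear
para = linear_fit(xs, ys)
resid = [y - para(x) for x, y in zip(xs, ys)]
nonpara = kernel_smooth(xs, resid, h=0.15)
lam = 1.0                                    # mixing parameter in [0, 1]
mrr = lambda x: para(x) + lam * nonpara(x)   # model robust fit
```

The mixing parameter trades off reliance on the assumed parametric form (λ = 0) against the residual smoother (λ = 1), which is what reduces dependence on choosing the right parametric family.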
Improved Calibration of Near-Infrared Spectra by Using Ensembles of Neural Network Models
Ukil, A.; Bernasconi, J.; Braendle, H.; Buijs, H.; Bonenfant, S.
2015-01-01
Infrared (IR) or near-infrared (NIR) spectroscopy is a method used to identify a compound or to analyze the composition of a material. Calibration of NIR spectra refers to the use of the spectra as multivariate descriptors to predict concentrations of the constituents. To build a calibration model, state-of-the-art software predominantly uses linear regression techniques. For nonlinear calibration problems, neural network-based models have proved to be an interesting alternative. In this paper, we propo...
In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...
A hierarchical model for probabilistic independent component analysis of multi-subject fMRI studies.
Guo, Ying; Tang, Li
2013-12-01
An important goal in fMRI studies is to decompose the observed series of brain images to identify and characterize underlying brain functional networks. Independent component analysis (ICA) has been shown to be a powerful computational tool for this purpose. Classic ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix. Existing group ICA methods generally concatenate observed fMRI data across subjects on the temporal domain and then decompose multi-subject data in a similar manner to single-subject ICA. The major limitation of existing methods is that they ignore between-subject variability in spatial distributions of brain functional networks in group ICA. In this article, we propose a new hierarchical probabilistic group ICA method to formally model subject-specific effects in both temporal and spatial domains when decomposing multi-subject fMRI data. The proposed method provides model-based estimation of brain functional networks at both the population and subject level. An important advantage of the hierarchical model is that it provides a formal statistical framework to investigate similarities and differences in brain functional networks across subjects, for example, subjects with mental disorders or neurodegenerative diseases such as Parkinson's as compared to normal subjects. We develop an EM algorithm for model estimation where both the E-step and M-step have explicit forms. We compare the performance of the proposed hierarchical model with that of two popular group ICA methods via simulation studies. We illustrate our method with application to an fMRI study of Zen meditation.
Directory of Open Access Journals (Sweden)
Andrew Cron
Full Text Available Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a
Directory of Open Access Journals (Sweden)
X. Chen
2013-09-01
Full Text Available A hierarchical Bayesian model for forecasting regional summer rainfall and streamflow one season ahead using exogenous climate variables for East Central China is presented. The model provides estimates of the posterior forecast probability distribution for 12 rainfall and 2 streamflow stations, accounting for parameter uncertainty and cross-site correlation. The model has a multilevel structure in which regression coefficients are modeled from a common multivariate normal distribution, which results in partial pooling of information across multiple stations and better representation of parameter and posterior distribution uncertainty. The covariance structure of the residuals across stations is explicitly modeled. Model performance is tested under leave-10-out cross-validation. Frequentist and Bayesian performance metrics used include the Receiver Operating Characteristic, Reduction of Error, Coefficient of Efficiency, Rank Probability Skill Score, and coverage by posterior credible intervals. The ability of the model to reliably forecast regional summer rainfall and streamflow a season ahead offers potential for developing adaptive water risk management strategies.
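The partial pooling that a multilevel structure induces can be sketched with the classic normal-normal shrinkage formula: each station mean is pulled toward the regional mean, with the pull set by the within-station variance s2 and the between-station variance tau2. The station data and variance values below are illustrative, not from the study.

```python
# Partial pooling of station means toward a common regional mean.
def partial_pool(station_data, s2, tau2):
    all_vals = [v for vals in station_data for v in vals]
    mu = sum(all_vals) / len(all_vals)          # regional (grand) mean
    est = []
    for vals in station_data:
        n = len(vals)
        ybar = sum(vals) / n
        # Precision weighting: more data per station means less shrinkage.
        w = (n / s2) / (n / s2 + 1.0 / tau2)
        est.append(w * ybar + (1.0 - w) * mu)   # shrink toward mu
    return mu, est

stations = [[10.0, 12.0], [20.0, 22.0, 21.0], [30.0]]  # hypothetical data
mu, est = partial_pool(stations, s2=4.0, tau2=9.0)
```

Stations with few observations (like the third) borrow the most strength from the region, which is why hierarchical models stabilize estimates at poorly sampled sites.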
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of local and global models that are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of using both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
Multi-scale hierarchical approach for parametric mapping: assessment on multi-compartmental models.
Rizzo, G; Turkheimer, F E; Bertoldo, A
2013-02-15
This paper investigates a new hierarchical method to apply basis functions to mono- and multi-compartmental models (Hierarchical-Basis Function Method, H-BFM) at the voxel level. This method identifies the parameters of the compartmental model in its non-linearized version, integrating information derived at the region of interest (ROI) level by segmenting the cerebral volume based on anatomical definition or functional clustering. We present the results obtained by using a two-tissue, four-rate-constant model with two different tracers ([(11)C]FLB457 and [carbonyl-(11)C]WAY100635), one of the most complex models used in receptor studies, especially at the voxel level. H-BFM is robust, and its application to both [(11)C]FLB457 and [carbonyl-(11)C]WAY100635 allows accurate and precise parameter estimates, good quality parametric maps, and a low percentage of voxels outside physiological bounds. The method is thus a promising approach for PET quantification using compartmental modeling at the voxel level. In particular, different from other proposed approaches, this method can also be used when linearization of the model is not appropriate. We expect that applying it to clinical data will generate reliable parametric maps. Copyright © 2012 Elsevier Inc. All rights reserved.
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the
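The computational point made above, that a hierarchical autoregressive error model needs no covariance-matrix inverse or determinant, can be sketched for the AR(1) case: the exact Gaussian log-likelihood of a residual vector is evaluated by whitening, term by term. Parameter values and the synthetic residuals are illustrative, not from the paper.

```python
import math
import random

# Exact AR(1) Gaussian log-likelihood via whitening: no matrix algebra.
def ar1_loglik(r, phi, sig2):
    # sig2 is the innovation variance; r[0] uses the stationary variance.
    s1 = sig2 / (1.0 - phi * phi)
    ll = -0.5 * (math.log(2 * math.pi * s1) + r[0] ** 2 / s1)
    for t in range(1, len(r)):
        e = r[t] - phi * r[t - 1]       # whitened (serially independent) term
        ll += -0.5 * (math.log(2 * math.pi * sig2) + e * e / sig2)
    return ll

# Synthetic AR(1) residuals with phi = 0.8.
random.seed(2)
phi_true, sig2 = 0.8, 1.0
r = [random.gauss(0, math.sqrt(sig2 / (1 - phi_true ** 2)))]
for _ in range(199):
    r.append(phi_true * r[-1] + random.gauss(0, math.sqrt(sig2)))
```

Because each whitened term is independent, the cost is linear in the data length, which matters when the likelihood must be evaluated at every step of a trans-dimensional sampler.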
Directory of Open Access Journals (Sweden)
Shiwei Lu
2017-01-01
Full Text Available The introduction of the Huff model is of critical significance in many fields, including urban transport, optimal location planning, economics, and business analysis. Moreover, parameter calibration is a crucial procedure before using the model. Previous studies have paid much attention to calibrating spatial interaction models for human mobility research. However, is using all sampling locations always the better solution for model calibration? We use active tracking data of over 16 million cell phones in Shenzhen, a metropolitan city in China, to evaluate the calibration accuracy of the Huff model. Specifically, we choose five business areas in this city as destinations and then randomly select a fixed number of cell phone towers to calibrate the parameters in this spatial interaction model. We vary the selected number of cell phone towers in multiples of 30 until we reach the total number of towers with flows to the five destinations. We apply the least-squares method for model calibration. The distribution of the final sum of squared errors between the observed flows and the estimated flows indicates that using all sampling locations is not always better for the outcomes of this spatial interaction model. Instead, fewer sampling locations with a higher volume of trips can improve the calibration results. Finally, we discuss implications of this finding and suggest an approach to achieve a high-accuracy model calibration solution.
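The least-squares calibration described above can be sketched on a toy Huff model: the probability of a trip from origin i to destination j is P_ij = S_j^α d_ij^(-β), normalised over destinations, and (α, β) are chosen to minimise the sum of squared errors against observed flow shares. Attractiveness values, distances, and the parameter grid are all hypothetical.

```python
# Huff model choice probabilities for one origin.
def huff_probs(S, d_i, alpha, beta):
    u = [Sj ** alpha * dij ** (-beta) for Sj, dij in zip(S, d_i)]
    tot = sum(u)
    return [x / tot for x in u]

# Sum of squared errors between observed and modelled flow shares.
def sse(S, D, flows, alpha, beta):
    err = 0.0
    for d_i, f_i in zip(D, flows):
        for p, f in zip(huff_probs(S, d_i, alpha, beta), f_i):
            err += (p - f) ** 2
    return err

S = [5.0, 3.0, 8.0]                                       # attractiveness
D = [[1.0, 2.0, 4.0], [3.0, 1.0, 2.0], [2.0, 3.0, 1.0]]   # distances
alpha_t, beta_t = 1.0, 2.0                                # "true" parameters
flows = [huff_probs(S, d_i, alpha_t, beta_t) for d_i in D]  # observed shares

# Brute-force grid search over (alpha, beta), in place of a solver.
grid = [0.5 + 0.25 * k for k in range(9)]                 # 0.5 .. 2.5
best = min(((a, b) for a in grid for b in grid),
           key=lambda ab: sse(S, D, flows, ab[0], ab[1]))
```

With noise-free synthetic shares the grid search recovers the generating parameters exactly; with real flows, the quality of the minimum depends on which origins are sampled, which is the question the paper studies.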
Contextual Hierarchical Part-Driven Conditional Random Field Model for Object Category Detection
Directory of Open Access Journals (Sweden)
Lizhen Wu
2012-01-01
Full Text Available Even though several promising approaches have been proposed in the literature, generic category-level object detection is still challenging due to high intraclass variability and ambiguity in the appearance among different object instances. From the view of constructing object models, the balance between flexibility and discrimination must be taken into consideration. Motivated by these demands, we propose a novel contextual hierarchical part-driven conditional random field (CRF) model, which is based not only on individual object part appearance but also simultaneously models contextual interactions of the parts. By using a latent two-layer hierarchical formulation of labels and a weighted neighborhood structure, the model can effectively encode the dependencies among object parts. Meanwhile, beta-stable local features are introduced as observed data to ensure the discriminative power and robustness of the part description. The object category detection problem can be solved in a probabilistic framework using a supervised learning method based on maximum a posteriori (MAP) estimation. The benefits of the proposed model are demonstrated on a standard dataset and satellite images.
A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.
Directory of Open Access Journals (Sweden)
Guillaume Bal
Full Text Available Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding the ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts, as it does not disentangle seasonality and long term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) a simulated example, and (ii) application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach fits the data better and, contrary to the linear regression, does not exhibit forecasting bias in long term trends. This new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around the forecasts. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
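The seasonal building block used above can be sketched on its own: a sinusoidal annual cycle fitted to daily water temperature by linear least squares on a sin/cos basis. With daily sampling over one full year the basis functions are exactly orthogonal, so the coefficients have closed forms. The data are synthetic and the mean/amplitude/phase values are illustrative.

```python
import math

n = 365
omega = 2 * math.pi / n
mu, A, phase = 12.0, 7.5, 1.0            # hypothetical mean, amplitude, phase
T = [mu + A * math.sin(omega * t + phase) for t in range(n)]

# Closed-form least-squares coefficients (discrete orthogonality over a
# full period: sums of sin, cos, and sin*cos vanish; sums of sin^2 = n/2).
mu_hat = sum(T) / n
a_hat = 2.0 / n * sum(T[t] * math.sin(omega * t) for t in range(n))
b_hat = 2.0 / n * sum(T[t] * math.cos(omega * t) for t in range(n))
A_hat = math.hypot(a_hat, b_hat)         # recovered amplitude
phase_hat = math.atan2(b_hat, a_hat)     # recovered phase
```

The hierarchical model in the paper goes further by letting the mean and amplitude drift over the years, which is what separates the long-term trend from the seasonal cycle.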
A hierarchical statistical model for estimating population properties of quantitative genes
Directory of Open Access Journals (Sweden)
Wu Rongling
2002-06-01
Full Text Available Abstract Background Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follows Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available and genes also display population properties. Results In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite sample properties of the model. Conclusions A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
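The EM machinery invoked above can be sketched in its simplest form: progeny phenotypes modelled as a two-component normal mixture (two major-gene genotype classes), with the E-step computing posterior genotype probabilities and the M-step updating weighted means and the mixing proportion. All numbers are illustrative, and the component variance is held fixed for brevity.

```python
import math
import random

def normpdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# EM for a two-component normal mixture with known common sd s.
def em_mixture(x, m1, m2, s=1.0, p=0.5, iters=50):
    for _ in range(iters):
        # E-step: posterior probability each value came from component 2.
        g = [p * normpdf(xi, m2, s) /
             (p * normpdf(xi, m2, s) + (1 - p) * normpdf(xi, m1, s))
             for xi in x]
        # M-step: weighted means and mixing proportion.
        w2 = sum(g)
        m2 = sum(gi * xi for gi, xi in zip(g, x)) / w2
        m1 = sum((1 - gi) * xi for gi, xi in zip(g, x)) / (len(x) - w2)
        p = w2 / len(x)
    return m1, m2, p

random.seed(3)
x = ([random.gauss(0.0, 1.0) for _ in range(200)] +
     [random.gauss(5.0, 1.0) for _ in range(200)])
m1, m2, p = em_mixture(x, m1=1.0, m2=4.0)
```

In the paper's setting the mixing proportion is further constrained by the genetic segregation and transmission model across generations, rather than left free as here.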
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping … the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks. … The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme …
Directory of Open Access Journals (Sweden)
Roland Y.H. Silitonga
2013-01-01
Full Text Available The Indonesian palm oil industry has the largest market share in the world, but still faces problems in strengthening its competitiveness. These problems lie in the industry chains, in government regulation and policy as the meso environment, and in macroeconomic conditions. Therefore these three elements should be considered when analyzing how to improve competitiveness; here, the governmental element is expected to create a conducive environment. This paper presents a conceptual model of industry competitiveness using a hierarchical multilevel system approach. The hierarchical multilevel system approach is used to accommodate the complexity of industrial relations and the government's position as the meso environment. The first step in developing the model is to define the relevant system. The second is to formulate the output of the model, competitiveness, in the form of indicators. Then the relevant system, with competitiveness as its output, is built into a conceptual model using a hierarchical multilevel system. The conceptual model is then discussed to see whether it can explain the relevant system, and its potential for development into a mathematical model.
Global cross-calibration of Landsat spectral mixture models
de Sousa, Daniel; Small, Christopher
2016-01-01
Data continuity for the Landsat program relies on accurate cross-calibration among sensors. The Landsat 8 OLI has been shown to exhibit superior performance to the sensors on Landsats 4-7 with respect to radiometric calibration, signal to noise, and geolocation. However, improvements to the positioning of the spectral response functions on the OLI have resulted in known biases for commonly used spectral indices because the new band responses integrate absorption features differently from prev...
CSIR Research Space (South Africa)
Singh, A
2011-05-01
Full Text Available The accuracy of the calibration model for the single and double integrating sphere systems is compared for a white light system. A calibration model is created from a matrix of samples with known absorption and reduced scattering coefficients...
Support of the Generic Framework programme : calibration of groundwater flow models
Stroet, Chris C.B.M. te; Minnema, Benny
2003-01-01
This report supports the “Generic Framework programme”, which consists of a series of projects to create a standard for the modelling processes used in water management. The topic of support is model calibration. TNO-NITG is elaborating the calibration of groundwater
Effect of calibration data length on performance and optimal parameters of hydrological model
Directory of Open Access Journals (Sweden)
Chuan-Zhe LI
2010-12-01
Full Text Available In order to assess the effects of calibration data length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (in some catchments the data are non-continuous and fragmentary), we used non-continuous calibration periods to obtain more independent streamflow data for SIMHYD model calibration. Nash-Sutcliffe efficiency (NSE) and percentage water balance error (WBE) are used as performance measures. The Particle Swarm Optimization (PSO) method is used to calibrate the rainfall-runoff models. Data lengths ranging from 1 year to 10 years, randomly sampled, were used to study the impact of calibration data length. 55 relatively unimpaired catchments all over Australia with daily precipitation, potential evapotranspiration (PET), and streamflow data are tested to obtain more general conclusions. The results show that longer calibration data do not necessarily result in better model performance. In general, 8 years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require less calibration data to achieve good performance and stable parameter values. The model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful and interesting implications for the efficient use of limited observation data in hydrological model calibration across different climatic catchments.
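Both performance measures used above have simple closed forms; a minimal sketch in plain Python (the flow series is a toy example, not catchment data):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual sum of
    squares to the variance of the observations; 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def wbe(obs, sim):
    """Percentage water balance error: relative bias of total simulated flow."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

obs = [1.0, 3.0, 2.0, 5.0, 4.0]   # toy observed flows
sim = [1.1, 3.1, 2.1, 5.1, 4.1]   # toy simulated flows with constant bias
score = nse(obs, sim)
bias = wbe(obs, sim)
```

In a PSO calibration these two quantities would serve as the objective functions evaluated for each candidate parameter set.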
Directory of Open Access Journals (Sweden)
Mitja Morgut
2012-01-01
Full Text Available The numerical predictions of the cavitating flow around two model scale propellers in uniform inflow are presented and discussed. The simulations are carried out using a commercial CFD solver. The homogeneous model is used, and the influence of three widespread mass transfer models on the accuracy of the numerical predictions is evaluated. The mass transfer models in question share the common feature of employing empirical coefficients to adjust the mass transfer rate from water to vapour and back, which can affect the stability and accuracy of the predictions. Thus, for a fair and congruent comparison, the empirical coefficients of the different mass transfer models are first properly calibrated using an optimization strategy. The numerical results obtained with the three different calibrated mass transfer models are very similar to each other for the two selected model scale propellers. Nevertheless, a tendency to overestimate the cavity extension is observed, and consequently the thrust is not properly predicted in the most severe operational conditions.
A model of shape memory materials with hierarchical twinning: Statics and dynamics
Energy Technology Data Exchange (ETDEWEB)
Saxena, A.; Bishop, A.R. [Los Alamos National Lab., NM (United States); Shenoy, S.R. [International Center for Theoretical Physics, Trieste (Italy); Wu, Y.; Lookman, T. [Western Ontario Univ., London, Ontario (Canada). Dept. of Applied Mathematics
1995-07-01
We consider a model of shape memory material in which hierarchical twinning near the habit plane (austenite-martensite interface) is a new and crucial ingredient. The model includes (1) a triple-well potential (φ model) in local shear strain, (2) strain gradient terms up to second order in strain and fourth order in gradient, and (3) all symmetry-allowed compositional-fluctuation-induced strain gradient terms. The last term favors hierarchy, which enables communication between macroscopic (cm) and microscopic (Å) regions, essential for shape memory. Hierarchy also stabilizes tweed formation (critical pattern of twins). External stress or pressure (pattern) modulates the spacing of domain walls. Therefore the "pattern" is encoded in the modulated hierarchical variation of the depth and width of the twins. This hierarchy of length scales provides a hierarchy of time scales and thus the possibility of non-exponential decay. The four processes of the complete shape memory cycle -- write, record, erase and recall -- are explained within this model. Preliminary results based on 2D Langevin dynamics are shown for tweed and hierarchy formation.
Clustering dynamic textures with the hierarchical EM algorithm for modeling video.
Mumtaz, Adeel; Coviello, Emanuele; Lanckriet, Gert R G; Chan, Antoni B
2013-07-01
Dynamic texture (DT) is a probabilistic generative model, defined over space and time, that represents a video as the output of a linear dynamical system (LDS). The DT model has been applied to a wide variety of computer vision problems, such as motion segmentation, motion classification, and video registration. In this paper, we derive a new algorithm for clustering DT models that is based on the hierarchical EM algorithm. The proposed clustering algorithm is capable of both clustering DTs and learning novel DT cluster centers that are representative of the cluster members in a manner that is consistent with the underlying generative probabilistic model of the DT. We also derive an efficient recursive algorithm for sensitivity analysis of the discrete-time Kalman smoothing filter, which is used as the basis for computing expectations in the E-step of the HEM algorithm. Finally, we demonstrate the efficacy of the clustering algorithm on several applications in motion analysis, including hierarchical motion clustering, semantic motion annotation, and learning bag-of-systems (BoS) codebooks for dynamic texture recognition.
Ghanbari, J; Naghdabadi, R
2009-07-22
We have used a hierarchical multiscale modeling scheme for the analysis of cortical bone, considering it as a nanocomposite. This scheme consists of the definition of two boundary value problems, one for the macroscale and another for the microscale. The coupling between these scales is done by using the homogenization technique. At every material point at which the constitutive model is needed, a microscale boundary value problem is defined using a macroscopic kinematical quantity and solved. Using the described scheme, we have studied the elastic properties of cortical bone considering its nanoscale microstructural constituents with various mineral volume fractions. Since the microstructure of bone consists of mineral platelets of nanometer size embedded in a protein matrix, it is similar to the microstructure of soft matrix nanocomposites reinforced with hard nanostructures. Considering a representative volume element (RVE) of the microstructure of bone as the microscale problem in our hierarchical multiscale modeling scheme, the global behavior of bone is obtained under various macroscopic loading conditions. This scheme may be suitable for modeling arbitrary bone geometries subjected to a variety of loading conditions. Using the presented method, mechanical properties of cortical bone, including elastic moduli and Poisson's ratios in two major directions and the shear modulus, are obtained for different mineral volume fractions.
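As a much simpler stand-in for the RVE-based homogenization described above, the classical Voigt and Reuss bounds already show how strongly the mineral volume fraction controls the effective stiffness; the moduli below (about 100 GPa for the mineral, 1 GPa for the protein matrix) are assumed illustrative values, not the paper's inputs:

```python
def voigt_reuss(f_mineral, e_mineral=100.0, e_protein=1.0):
    """Elementary homogenization bounds for a two-phase composite (GPa):
    Voigt assumes uniform strain (arithmetic mixture of moduli),
    Reuss assumes uniform stress (harmonic mixture of moduli)."""
    f = f_mineral
    e_voigt = f * e_mineral + (1.0 - f) * e_protein
    e_reuss = 1.0 / (f / e_mineral + (1.0 - f) / e_protein)
    return e_voigt, e_reuss

# 40% mineral volume fraction, a plausible value for cortical bone
ev, er = voigt_reuss(0.4)
```

The homogenized modulus of the actual RVE lies between these bounds; the large gap between them is precisely why solving a full microscale boundary value problem, as in the paper, is worthwhile.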
Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.
2009-01-01
The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.
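The eigenvalue predictor mentioned above is derived from the transition probability matrix of the two-state Markov switching model; for a 2x2 stochastic matrix the non-unit eigenvalue has a closed form, sketched here with illustrative switching probabilities (not the paper's estimates):

```python
import math

def eigenvalues(a, b):
    """Eigenvalues of the 2x2 transition matrix P = [[1-a, a], [b, 1-b]]
    of a two-state Markov chain, via the characteristic polynomial
    lambda^2 - tr(P)*lambda + det(P) = 0."""
    tr = (1.0 - a) + (1.0 - b)
    det = (1.0 - a) * (1.0 - b) - a * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Illustrative switching probabilities between the two behavioural states
lam1, lam2 = eigenvalues(0.2, 0.3)
# lam1 is always 1 (rows sum to one); lam2 = 1 - a - b, whose magnitude
# measures how persistent the hidden behavioural regimes are.
```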
Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance
Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.
2010-01-01
Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
Institute of Scientific and Technical Information of China (English)
SHIMIZU N; OKADOME H; WADA D; KIMURA T; OHTSUBO K
2008-01-01
Chemometric amylose modeling for global calibration, using whole grain near infrared transmittance spectra and sample selection, was used in an artificial neural network (ANN) to assess the global and local models generated, based on samples of newly bred Indica, Japonica and rice. Global sample sets had a wide range of sample variation for amylose content (0 to 25.9%). The local sample set, the Japonica sample, had relatively low amylose content and a narrow sample variation (amylose: 12.3% to 21.0%). For sample selection the CENTER algorithm was applied to generate calibration, validation and stop sample sets. Spectral preprocessing was found to reduce the optimum number of partial least squares (PLS) components for amylose content and thus enhance the robustness of the local calibration. The best model was found to be an ANN global calibration with spectral preprocessing; the next was a PLS global calibration using standard spectra. These results pose the question whether an ANN algorithm with spectral preprocessing could be developed for global and local calibration models or whether PLS without spectral preprocessing should be developed for global calibration models. We suggest that global calibration models incorporating an ANN may be used as a universal calibration model.
A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs
DEFF Research Database (Denmark)
Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard
2016-01-01
Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs...... for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on...
Hierarchical competition models with the Allee effect II: the case of immigration.
Assas, Laila; Dennis, Brian; Elaydi, Saber; Kwessi, Eddy; Livadiotis, George
2015-01-01
This is part II of an earlier paper that dealt with hierarchical models with the Allee effect but with no immigration. In this paper, we greatly simplify the proofs in part I and provide a proof of the global dynamics of the non-hyperbolic cases that were previously conjectured. Then, we show how immigration to one of the species or to both would drastically change the dynamics of the system. It is shown that if the level of immigration to one or to both species is above a specified level, then there will be no extinction region where both species go to extinction.
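The qualitative effect of immigration can be illustrated with a one-species map with a strong Allee effect; the Beverton-Holt-type form and parameter values below are an illustrative stand-in, not the paper's two-species equations:

```python
def allee_map(x, r=3.0, h=0.0):
    """One iteration of a Beverton-Holt-type map with a strong Allee
    effect plus a constant immigration term h. With h = 0 the map has
    a stable extinction state, an unstable Allee threshold, and a
    stable positive equilibrium."""
    return r * x * x / (1.0 + x * x) + h

def iterate(x0, h, n=200):
    x = x0
    for _ in range(n):
        x = allee_map(x, h=h)
    return x

no_rescue = iterate(0.1, h=0.0)   # start below the Allee threshold
rescued = iterate(0.1, h=0.5)     # same start, with immigration
```

Without immigration, an orbit starting below the Allee threshold collapses to extinction; a sufficiently large constant immigration term removes the extinction region, mirroring the effect the paper proves for the hierarchical two-species system.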
Critical behavior of the Ising model on a hierarchical lattice with aperiodic interactions
Pinho, S. T. R.; Haddad, T. A. S.; Salinas, S. R.
We write the exact renormalization-group recursion relations for nearest-neighbor ferromagnetic Ising models on Migdal-Kadanoff hierarchical lattices with a distribution of aperiodic exchange interactions according to a class of substitutional sequences. For small geometric fluctuations, the critical behavior is unchanged with respect to the uniform case. For large fluctuations, as in the case of the Rudin-Shapiro sequence, the uniform fixed point in the parameter space cannot be reached from any physical initial conditions. We derive a criterion to check the relevance of the geometric fluctuations.
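For the simplest case of uniform interactions, the recursion on the b = 2 diamond hierarchical lattice can be iterated numerically; a minimal sketch (the aperiodic case studied in the paper replaces the single coupling K by a sequence of couplings distributed along the bonds):

```python
import math

def rg_step(k):
    """Exact decimation for the nearest-neighbour Ising model on the
    b = 2 diamond hierarchical lattice: two bonds in series compose as
    tanh(K_eff) = tanh(K)^2, and the two parallel branches add."""
    return 2.0 * math.atanh(math.tanh(k) ** 2)

# Locate the unstable (critical) fixed point K* of K' = rg_step(K)
# by bisection: below K* the coupling flows to 0, above it to infinity.
lo, hi = 0.5, 0.7
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rg_step(mid) < mid:
        lo = mid
    else:
        hi = mid
k_star = 0.5 * (lo + hi)
```

The bisection converges to the unstable fixed point K* ≈ 0.61, the critical coupling of the uniform model against which the relevance of the geometric fluctuations is judged.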
Assessing the Graphical and Algorithmic Structure of Hierarchical Coloured Petri Net Models
Directory of Open Access Journals (Sweden)
George Benwell
1994-11-01
Full Text Available Petri nets, as a modelling formalism, are utilised for the analysis of processes, whether for explicit understanding, database design or business process re-engineering. The formalism, however, can be represented on a virtual continuum from highly graphical to largely algorithmic. The use and understanding of the formalism will, in part, therefore depend on the resultant complexity and power of the representation and, on the graphical or algorithmic preference of the user. This paper develops a metric which will indicate the graphical or algorithmic tendency of hierarchical coloured Petri nets.
Triviality of hierarchical O(N) spin model in four dimensions with large N
Watanabe, H
2003-01-01
The renormalization group transformation for the hierarchical O(N) spin model in four dimensions is studied by means of characteristic functions of single-site measures, and convergence of the critical trajectory to the Gaussian fixed point is shown for sufficiently large N. In the strong coupling regime, the trajectory is controlled with the help of the exactly solved O(∞) trajectory, while, in the weak coupling regime, convergence to the Gaussian fixed point is shown by power decay of the effective coupling constant.
DEFF Research Database (Denmark)
Mishnaevsky, Leon
2014-01-01
, with modified, hybrid or nanomodified structures. In this project, we seek to explore the potential of hybrid (carbon/glass), nanoreinforced and hierarchical composites (with secondary CNT, graphene or nanoclay reinforcement) as future materials for highly reliable large wind turbines. Using 3D multiscale...... computational models of the composites, we study the effect of hybrid structure and of nanomodifications on the strength, lifetime and service properties of the materials (see Figure 1). As a result, a series of recommendations toward the improvement of composites for structural applications under long term...
Platonova, Elena A; Hernandez, S Robert; Shewchuk, Richard M; Leddy, Kelly M
2006-01-01
This study examines how perceptions of organizational culture influence organizational outcomes, specifically, individual employee job satisfaction. The study was conducted in the health care industry in the United States. It examined the data on employee perceptions of job attributes, organizational culture, and job satisfaction, collected by Press Ganey Associates from 88 hospitals across the country in 2002-2003. Hierarchical linear modeling was used to test how organizational culture affects individual employee job satisfaction. Results indicated that some dimensions of organizational culture, specifically, job security and performance recognition, play a role in improving employee job satisfaction.
Voith, Laura A; Brondino, Michael J
2017-09-01
Due to high prevalence rates and deleterious effects on individuals, families, and communities, intimate partner violence (IPV) is a significant public health problem. Because IPV occurs in the context of communities and neighborhoods, research must examine the broader environment in addition to individual-level factors to successfully facilitate behavior change. Drawing from the Social Determinants of Health framework and Social Disorganization Theory, neighborhood predictors of IPV were tested using hierarchical linear modeling. Results indicated that concentrated disadvantage and female-to-male partner violence were robust predictors of women's IPV victimization. Implications for theory, practice, and policy, and future research are discussed. © Society for Community Research and Action 2017.
Energy Technology Data Exchange (ETDEWEB)
Luscher, Darby J.
2010-04-01
All materials are heterogeneous at various scales of observation. The influence of material heterogeneity on nonuniform response and microstructure evolution can have profound impact on continuum thermomechanical response at macroscopic “engineering” scales. In many cases, it is necessary to treat this behavior as a multiscale process thus integrating the physical understanding of material behavior at various physical (length and time) scales in order to more accurately predict the thermomechanical response of materials as their microstructure evolves. The intent of the dissertation is to provide a formal framework for multiscale hierarchical homogenization to be used in developing constitutive models.
Locally self-similar phase diagram of the disordered Potts model on the hierarchical lattice.
Anglès d'Auriac, J-Ch; Iglói, Ferenc
2013-02-01
We study the critical behavior of the random q-state Potts model in the large-q limit on the diamond hierarchical lattice with an effective dimensionality d(eff)>2. By varying the temperature and the strength of the frustration the system has a phase transition line between the paramagnetic and the ferromagnetic phases which is controlled by four different fixed points. According to our renormalization group study the phase boundary in the vicinity of the multicritical point is self-similar; it is well represented by a logarithmic spiral. We expect an infinite number of reentrances in the thermodynamic limit; consequently one cannot define standard thermodynamic phases in this region.
Institute of Scientific and Technical Information of China (English)
LIU Hu; TIAN Yongliang; ZHANG Chaoying; YIN Jiao; SUN Yijie
2012-01-01
In order to take requirements for commercial operations or military missions into better consideration in new flight vehicle design, a tri-hierarchical task classification model of "design for operation" is proposed, which takes basic man-object interaction task, complex collaborative operation and large-scale joint operation into account. The corresponding general architecture of evaluation criteria is also depicted. Then a virtual simulation-based approach to implement the evaluations at three hierarchy levels is mainly analyzed with a detailed example, which validates the feasibility and effectiveness of the evaluation architecture. Finally, extending the virtual simulation architecture from design to operation training is discussed.
Nadeem, Khurram; Moore, Jeffrey E; Zhang, Ying; Chipman, Hugh
2016-07-01
Stochastic versions of Gompertz, Ricker, and various other dynamics models play a fundamental role in quantifying strength of density dependence and studying long-term dynamics of wildlife populations. These models are frequently estimated using time series of abundance estimates that are inevitably subject to observation error and missing data. This issue can be addressed with a state-space modeling framework that jointly estimates the observed data model and the underlying stochastic population dynamics (SPD) model. In cases where abundance data are from multiple locations with a smaller spatial resolution (e.g., from mark-recapture and distance sampling studies), models are conventionally fitted to spatially pooled estimates of yearly abundances. Here, we demonstrate that a spatial version of SPD models can be directly estimated from short time series of spatially referenced distance sampling data in a unified hierarchical state-space modeling framework that also allows for spatial variance (covariance) in population growth. We also show that a full range of likelihood based inference, including estimability diagnostics and model selection, is feasible in this class of models using a data cloning algorithm. We further show through simulation experiments that the hierarchical state-space framework introduced herein efficiently captures the underlying dynamical parameters and spatial abundance distribution. We apply our methodology by analyzing a time series of line-transect distance sampling data for fin whales (Balaenoptera physalus) off the U.S. west coast. Although there were only seven surveys conducted during the study time frame, 1991-2014, our analysis detected presence of strong density regulation and provided reliable estimates of fin whale densities. In summary, we show that the integrative framework developed herein allows ecologists to better infer key population characteristics such as presence of density regulation and spatial variability in a
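The process layer of such state-space models can be sketched for the Gompertz case; parameter values below are illustrative, and the paper's full model adds an observation layer and spatial covariance on top of this dynamic:

```python
import random

def simulate_gompertz(n, a=1.0, b=0.8, sigma=0.1, x0=2.0, seed=42):
    """Simulate log-abundance under the stochastic Gompertz model
    x[t+1] = a + b*x[t] + e[t], with e ~ N(0, sigma^2). |b| < 1 gives
    density regulation, with stationary mean a / (1 - b)."""
    rng = random.Random(seed)
    x = [x0]
    for _ in range(n - 1):
        x.append(a + b * x[-1] + rng.gauss(0.0, sigma))
    return x

traj = simulate_gompertz(5000)
mean_x = sum(traj) / len(traj)   # should settle near a/(1-b) = 5.0
```

In the state-space setting, only noisy, possibly incomplete observations of x[t] are available, and the dynamical parameters a, b and sigma are estimated jointly with the observation model.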
Directory of Open Access Journals (Sweden)
C Elizabeth McCarron
Full Text Available BACKGROUND: Bayesian hierarchical models have been proposed to combine evidence from different types of study designs. However, when combining evidence from randomised and non-randomised controlled studies, imbalances in patient characteristics between study arms may bias the results. The objective of this study was to assess the performance of a proposed Bayesian approach to adjust for imbalances in patient level covariates when combining evidence from both types of study designs. METHODOLOGY/PRINCIPAL FINDINGS: Simulation techniques, in which the truth is known, were used to generate sets of data for randomised and non-randomised studies. Covariate imbalances between study arms were introduced in the non-randomised studies. The performance of the Bayesian hierarchical model adjusted for imbalances was assessed in terms of bias. The data were also modelled using three other Bayesian approaches for synthesising evidence from randomised and non-randomised studies. The simulations considered six scenarios aimed at assessing the sensitivity of the results to changes in the impact of the imbalances and the relative number and size of studies of each type. For all six scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave results that were unbiased and closest to the true value compared to the other models. CONCLUSIONS/SIGNIFICANCE: Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, the proposed hierarchical Bayesian method adjusted for differences in patient characteristics between study arms may facilitate the optimal use of all available evidence leading to unbiased results compared to unadjusted analyses.
Transport Simulation Model Calibration with Two-Step Cluster Analysis Procedure
Directory of Open Access Journals (Sweden)
Zenina Nadezda
2015-12-01
Full Text Available The calibration results of a transport simulation model depend on the selected parameters and their values. The aim of the present paper is to calibrate a transport simulation model by a two-step cluster analysis procedure to improve the reliability of simulation model results. Two global parameters have been considered: headway and simulation step. Normal, uniform and exponential headway generation models have been selected for headway. Applying the two-step cluster analysis procedure to calibration has allowed reducing the time needed to select the simulation step and the headway generation model values.
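The three headway generation models named above can be compared directly by sampling; a minimal sketch with an assumed mean headway of 2 s (illustrative, not the calibrated values):

```python
import random

def headway(model, mean=2.0, rng=random):
    """Draw one vehicle headway (seconds) from one of three candidate
    generation models, all parameterized to share the same mean."""
    if model == "normal":
        return max(0.0, rng.gauss(mean, mean / 4))   # truncated at zero
    if model == "uniform":
        return rng.uniform(0.0, 2.0 * mean)
    if model == "exponential":
        return rng.expovariate(1.0 / mean)
    raise ValueError(model)

rng = random.Random(0)
samples = {m: [headway(m, rng=rng) for _ in range(20000)]
           for m in ("normal", "uniform", "exponential")}
means = {m: sum(v) / len(v) for m, v in samples.items()}
```

Although the three models share a mean, their variances and shapes differ, which is why the choice of generation model affects the calibrated simulation output.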
Hu, Shihao; Jiang, Haodan; Xia, Zhenhai; Gao, Xiaosheng
2010-09-01
With unique hierarchical fibrillar structures on their feet, gecko lizards can walk on vertical walls or even ceilings. Recent experiments have shown that strong binding along the shear direction and easy lifting in the normal direction can be achieved by forming unidirectional carbon nanotube array with laterally distributed tips similar to gecko's feet. In this study, a multiscale modeling approach was developed to analyze friction and adhesion behaviors of this hierarchical fibrillar system. Vertically aligned carbon nanotube array with laterally distributed segments at the end was simulated by coarse grained molecular dynamics. The effects of the laterally distributed segments on friction and adhesion strengths were analyzed, and further adopted as cohesive laws used in finite element analysis at device scale. The results show that the laterally distributed segments play an essential role in achieving high force anisotropy between normal and shear directions in the adhesives. Finite element analysis reveals a new friction-enhanced adhesion mechanism of the carbon nanotube array, which also exists in gecko adhesive system. The multiscale modeling provides an approach to bridge the microlevel structures of the carbon nanotube array with its macrolevel adhesive behaviors, and the predictions from this modeling give an insight into the mechanisms of gecko-mimicking dry adhesives.
Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T
2009-07-09
Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
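The mass-balance idea underneath such mixing models is easiest to see in the two-source, one-isotope case; the δ13C values below are illustrative placeholders rather than the wolf data, and the hierarchical model in the paper adds priors and multi-level variance components on top of this balance:

```python
def mixing_proportion(d_consumer, d_source1, d_source2, frac=0.0):
    """Two-source, one-isotope mass-balance mixing model: solve for the
    fraction p of source 1 in the diet from
    d_consumer = p*(d_source1 + frac) + (1-p)*(d_source2 + frac),
    where frac is a trophic fractionation (discrimination) offset."""
    return (d_consumer - (d_source2 + frac)) / (d_source1 - d_source2)

# A consumer at -18 per mil, with sources at -13 (marine) and -23
# (terrestrial), implies a 50/50 diet under zero fractionation.
p_marine = mixing_proportion(-18.0, -13.0, -23.0)
```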
Zhu, L; Carlin, B P
Bayes and empirical Bayes methods have proven effective in smoothing crude maps of disease risk, eliminating the instability of estimates in low-population areas while maintaining overall geographic trends and patterns. Recent work extends these methods to the analysis of areal data which are spatially misaligned, that is, involving variables (typically counts or rates) which are aggregated over differing sets of regional boundaries. The addition of a temporal aspect complicates matters further, since now the misalignment can arise either within a given time point, or across time points (as when the regional boundaries themselves evolve over time). Hierarchical Bayesian methods (implemented via modern Markov chain Monte Carlo computing methods) enable the fitting of such models, but a formal comparison of their fit is hampered by their large size and often improper prior specifications. In this paper, we accomplish this comparison using the deviance information criterion (DIC), a recently proposed generalization of the Akaike information criterion (AIC) designed for complex hierarchical model settings like ours. We investigate the use of the delta method for obtaining an approximate variance estimate for DIC, in order to attach significance to apparent differences between models. We illustrate our approach using a spatially misaligned data set relating a measure of traffic density to paediatric asthma hospitalizations in San Diego County, California.
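DIC itself is cheap to compute from MCMC output; a minimal sketch with toy deviance samples (not the asthma data):

```python
def dic(deviances, deviance_at_mean):
    """Deviance information criterion from posterior samples:
    DIC = Dbar + pD, where Dbar is the posterior mean deviance and
    pD = Dbar - D(theta_bar) is the effective number of parameters
    (deviance evaluated at the posterior mean of the parameters)."""
    d_bar = sum(deviances) / len(deviances)
    p_d = d_bar - deviance_at_mean
    return d_bar + p_d, p_d

# Toy posterior deviance samples and the deviance at the posterior mean
dev_samples = [102.3, 99.8, 101.1, 100.4, 100.9]
dic_value, p_d = dic(dev_samples, deviance_at_mean=98.9)
```

Lower DIC indicates a better trade-off between fit and complexity; the delta-method variance estimate discussed in the paper is what allows apparent DIC differences between models to be tested for significance.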
Evolutionary-Hierarchical Bases of the Formation of Cluster Model of Innovation Economic Development
Directory of Open Access Journals (Sweden)
Yuliya Vladimirovna Dubrovskaya
2016-10-01
Full Text Available The functioning of a modern economic system is based on the interaction of objects at different hierarchical levels. The study of innovation processes that accounts for the mutual influence of these economic actors' activities therefore becomes important. The paper examines the evolutionary basis for the formation of models of innovation development using micro- and macroeconomic analysis. Most concepts recognize that, despite the large number of diverse models, coordinating the relations between economic agents is crucial for successful innovation development. Based on the results of the evolutionary-hierarchical analysis, the authors identify key phases in the development of forms of cooperation between business, science, and government in the domestic economy. This serves as the starting point for characterizing the interaction in cluster models of innovation development of the economy. Considerable expectations for the improvement of the national innovation system are connected with the development of cluster and network structures. The main objective of government authorities is to form mechanisms and institutions that foster cooperation between cluster members. The article argues that clusters cannot become factors of national economic growth without being an effective tool for interaction between the actors of regional innovation systems.
Gotelli, Nicholas J.; Dorazio, Robert M.; Ellison, Aaron M.; Grossman, Gary D.
2010-01-01
Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
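The bootstrap test described above can be sketched in miniature: under the null model, each sampling period's counts are drawn from a single stationary (pooled) species abundance distribution with the period totals fixed, and the observed heterogeneity of species-specific trends is compared against that null distribution. The counts matrix, test statistic, and replicate count here are invented for illustration and this simple version ignores the detection-probability correction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical counts matrix: rows = species, columns = time-ordered periods.
counts = np.array([
    [30, 25, 20, 12,  8],   # declining species
    [ 5, 10, 15, 22, 30],   # increasing species
    [12, 11, 13, 12, 11],   # roughly stable species
])

def trend_heterogeneity(mat):
    """Variance across species of the slope of log relative abundance."""
    t = np.arange(mat.shape[1])
    rel = (mat + 0.5) / mat.sum(axis=0)          # +0.5 avoids log(0)
    slopes = [np.polyfit(t, np.log(r), 1)[0] for r in rel]
    return np.var(slopes)

obs = trend_heterogeneity(counts)

# Null model: period totals fixed, species sampled from the pooled
# (stationary) relative abundance distribution.
pooled = counts.sum(axis=1) / counts.sum()
totals = counts.sum(axis=0)
null = []
for _ in range(999):
    sim = np.column_stack([rng.multinomial(n, pooled) for n in totals])
    null.append(trend_heterogeneity(sim))

p_value = (1 + sum(v >= obs for v in null)) / 1000.0
print(f"observed heterogeneity={obs:.3f}, p={p_value:.3f}")
```

A small p-value indicates that species trends are more heterogeneous than random sampling from a stationary assemblage would produce, which is the pattern the authors report for both of their long-term datasets.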
2012-02-01
of-the-ground model (Frankenstein and Koenig, 2004), and a sixteen-parameter Gridded Surface Subsurface Hydrologic Analysis (GSSHA) (Downer and... efficient global minimization. Journal of Optimization Theory and its Applications, 76(3), 501-521. Frankenstein, S., and G. Koenig. 2004. Fast All... Frankenstein, and C. W. Downer. 2009. Efficient Levenberg-Marquardt Method Based Model Independent Calibration. Environmental Modelling & Software (24