WorldWideScience

Sample records for hierarchical chain model

  1. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

Vol. 18, No. 8 (2009), pp. 1830-1843. ISSN 1057-7149. R&D Projects: GA ČR GA102/08/0593. EU Projects: European Commission (XE) 507752 - MUSCLE. Institutional research plan: CEZ:AV0Z10750506. Keywords: classification * texture analysis * segmentation * hierarchical image models * Markov process. Subject RIV: BD - Theory of Information. Impact factor: 2.848, year: 2009. http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  2. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
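
    The interacting-chains idea (several samplers run in parallel with occasional state exchanges) can be illustrated with a minimal parallel-tempering-style sketch in Python. This is a generic illustration on a toy bimodal target, not the authors' trans-dimensional geoacoustic sampler; the target density, temperature ladder, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # toy bimodal target standing in for a multimodal geoacoustic posterior (assumption)
    return np.logaddexp(-0.5 * ((x - 3.0) / 0.5) ** 2, -0.5 * ((x + 3.0) / 0.5) ** 2)

temps = np.array([1.0, 2.0, 4.0, 8.0])       # temperature ladder (assumed)
x = rng.normal(size=temps.size)              # one state per chain
n_iter, step = 5000, 1.0
samples = []

for it in range(n_iter):
    # within-chain Metropolis updates at each temperature
    for k, T in enumerate(temps):
        prop = x[k] + step * rng.normal()
        if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
            x[k] = prop
    # interaction: propose swapping the states of a random adjacent pair of chains
    k = rng.integers(temps.size - 1)
    dlog = (log_target(x[k]) - log_target(x[k + 1])) * (1.0 / temps[k + 1] - 1.0 / temps[k])
    if np.log(rng.random()) < dlog:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])                     # keep only the T = 1 chain

print("posterior mean (T=1 chain):", np.mean(samples[1000:]))
```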

  3. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  4. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

  5. Hierarchical Bass model

    International Nuclear Information System (INIS)

    Tashiro, Tohru

    2014-01-01

We propose a new model of product diffusion that includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people who do (not) possess the product. This effect is absent from the Bass model. As an application, we fit the model to iPod sales data and obtain better agreement than with the Bass model.

  6. Hierarchical Bass model

    Science.gov (United States)

    Tashiro, Tohru

    2014-03-01

We propose a new model of product diffusion that includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people who do (not) possess the product. This effect is absent from the Bass model. As an application, we fit the model to iPod sales data and obtain better agreement than with the Bass model.
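
    For reference, the baseline that both records above extend is the classical Bass diffusion model. The sketch below fits its closed-form cumulative adoption curve to a synthetic sales series; the parameter values, the synthetic data standing in for the iPod sales, and the use of scipy's curve_fit are assumptions for illustration, and the memory (hierarchical) extension proposed by the author is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, p, q, m):
    """Closed-form cumulative adoptions of the classical Bass model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# synthetic quarterly sales standing in for the iPod data used in the paper (assumption)
rng = np.random.default_rng(1)
t = np.arange(1, 25, dtype=float)
true_p, true_q, true_m = 0.01, 0.35, 300.0
cum = bass_cumulative(t, true_p, true_q, true_m)
sales = np.diff(np.concatenate(([0.0], cum))) + rng.normal(0, 2.0, t.size)

# fit p (innovation), q (imitation), m (market size) to the cumulative sales
(p_hat, q_hat, m_hat), _ = curve_fit(
    bass_cumulative, t, np.cumsum(sales), p0=[0.02, 0.2, 200.0]
)
print(f"p={p_hat:.3f}  q={q_hat:.3f}  m={m_hat:.1f}")
```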

  7. Hierarchical Semantic Model of Geovideo

    Directory of Open Access Journals (Sweden)

    XIE Xiao

    2015-05-01

Public security incidents are becoming increasingly challenging because of their new features, including multi-scale mobility, multistage dynamic evolution, and spatiotemporal concurrency and uncertainty in complex urban environments. However, existing video models, which were designed for independent archiving or local analysis of surveillance video, seriously inhibit emergency response to these urgent requirements. Aiming at an explicit representation of the change mechanism in video, this paper proposes a novel hierarchical geovideo semantic model using UML. The model is characterized by a hierarchical representation of both data structure and semantics based on three change-oriented domains (feature domain, process domain and event domain) instead of an overall semantic description of the video stream; it combines geographical semantics and video content semantics in support of global semantic association between multiple geovideo data. Public security incidents captured by video surveillance are examined as an example to illustrate the validity of the model.

  8. Hierarchical Planning Methodology for a Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Virna ORTIZ-ARAYA

    2012-01-01

Hierarchical production planning is a widely used methodology for real-world capacitated production planning systems, with the aim of establishing different decision-making levels for the planning issues over the time horizon considered. This paper presents a hierarchical approach proposed for a company that produces reusable shopping bags in Chile and Perú, to determine the optimal allocation of resources at the tactical level as well as over the most immediate planning horizon to meet customer demand in the coming weeks. Starting from an aggregate production planning model, the aggregate decisions are disaggregated into refined decisions at two levels, using a pair of optimization models that impose appropriate constraints to keep the plan coherent with the production system. The main features of the hierarchical solution approach are presented.

  9. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
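
    The top-down diagnosis recommended above (inspect the contextual, Level-2 predictors first, then the individual, Level-1 predictors) can be illustrated with ordinary variance inflation factors computed separately for each level. The sketch below uses plain NumPy on synthetic data; the group sizes, correlation strength, and the use of VIF as the diagnostic are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of a predictor matrix X."""
    X = np.column_stack([np.ones(len(X)), X])          # add intercept
    out = []
    for j in range(1, X.shape[1]):
        y, Z = X[:, j], np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
# level-2 (contextual) predictors: 30 groups, two strongly correlated covariates
w1 = rng.normal(size=30)
W = np.column_stack([w1, 0.9 * w1 + 0.1 * rng.normal(size=30)])
# level-1 (individual) predictors: 20 individuals per group, uncorrelated covariates
X1 = rng.normal(size=(30 * 20, 2))

print("level-2 VIFs:", vif(W).round(2))    # examined first in the top-down approach
print("level-1 VIFs:", vif(X1).round(2))
```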

  10. Hierarchical modeling of active materials

    International Nuclear Information System (INIS)

    Taya, Minoru

    2003-01-01

Intelligent (or smart) materials are increasingly becoming key materials for use in actuators and sensors. If an intelligent material is used as a sensor, it can be embedded in a variety of structures, functioning as a health monitoring system that extends their life with high reliability. If an intelligent material is used as the active material in an actuator, it plays a key role in producing the dynamic movement of the actuator under a set of stimuli. This talk covers two different active materials in actuators: (1) a piezoelectric laminate with an FGM microstructure, and (2) a ferromagnetic shape memory alloy (FSMA). The advantage of the FGM piezo laminate is enhanced fatigue life while maintaining large bending displacement, while that of the FSMA is fast actuation with large force and stroke capability. Hierarchical modeling of these active materials is a key design step in optimizing their microstructures to enhance their performance. I will briefly discuss hierarchical modeling of the two active materials. For the FGM piezo laminate, we use both a micromechanical model and laminate theory, while for the FSMA, modeling that interfaces nano-structure, microstructure and macro-behavior is discussed. (author)

  11. What are hierarchical models and how do we analyze them?

    Science.gov (United States)

    Royle, Andy

    2016-01-01

In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
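
    As a concrete illustration of the site-occupancy model mentioned above, the sketch below simulates detection histories and maximizes the marginal likelihood, in which the latent occupancy state is summed out of each site's contribution. The simulated occupancy and detection probabilities and the logit parameterization are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit

rng = np.random.default_rng(0)
n_sites, n_visits = 200, 4
psi_true, p_true = 0.6, 0.4                      # assumed simulation values
z = rng.random(n_sites) < psi_true               # latent occupancy state per site
y = rng.binomial(n_visits, p_true * z)           # number of detections per site

def neg_log_lik(theta):
    psi, p = expit(theta)                        # parameters on the logit scale
    lik_occupied = psi * p**y * (1 - p)**(n_visits - y)
    lik_empty = (1 - psi) * (y == 0)             # all-zero histories are the only option if unoccupied
    return -np.sum(np.log(lik_occupied + lik_empty))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print("psi_hat, p_hat =", expit(fit.x).round(3))
```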

  12. Logistic chain modelling

    NARCIS (Netherlands)

    Slats, P.A.; Bhola, B.; Evers, J.J.M.; Dijkhuizen, G.

    1995-01-01

    Logistic chain modelling is very important in improving the overall performance of the total logistic chain. Logistic models provide support for a large range of applications, such as analysing bottlenecks, improving customer service, configuring new logistic chains and adapting existing chains to

  13. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  14. Classification using Hierarchical Naive Bayes models

    DEFF Research Database (Denmark)

    Langseth, Helge; Dyhre Nielsen, Thomas

    2006-01-01

    Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing set of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe......, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...

  15. Hierarchical modeling and analysis for spatial data

    CERN Document Server

    Banerjee, Sudipto; Gelfand, Alan E

    2003-01-01

Among the many uses of hierarchical models, their application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and dat

  16. Quantum Ising model on hierarchical structures

    International Nuclear Information System (INIS)

    Lin Zhifang; Tao Ruibao.

    1989-11-01

A quantum Ising chain with both the exchange couplings and the transverse fields arranged in a hierarchical way is considered. Exact analytical results for the critical line and energy gap are obtained. It is shown that when R_1 ≠ R_2, where R_1 and R_2 are the hierarchical parameters for the exchange couplings and the transverse fields, respectively, the system undergoes a phase transition in a different universality class from the pure quantum Ising chain with R_1 = R_2 = 1. On the other hand, when R_1 = R_2 = R, there exists a critical value R_c dependent on the furcating number of the hierarchy. For R > R_c, the system is shown to exhibit an Ising-like critical point with the same critical behaviour as in the pure case, while for R < R_c the system belongs to another universality class. (author). 19 refs, 2 figs

  17. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

This paper presents a hierarchical model based on discrete-event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete-event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of a subsystem of the global robotic system. Since Petri models can be simplified to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets. Discrete-event systems are a pragmatic tool for modelling industrial systems, which is why Petri nets are used here for system modelling. To highlight the timing behaviour, the timed Petri model of the transport stream is divided into hierarchical levels and its sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to examine its timing: from the transport and transmission times obtained by spot measurements, graphs showing the average time for the transport activity are obtained, using the parameter sets of the finished products individually.

  18. Learning with hierarchical-deep models.

    Science.gov (United States)

    Salakhutdinov, Ruslan; Tenenbaum, Joshua B; Torralba, Antonio

    2013-08-01

We introduce HD (or “Hierarchical-Deep”) models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian (HB) models. Specifically, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a deep Boltzmann machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.

  19. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
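
    The replicate-sampling idea described above can be sketched for a toy Gaussian random-intercept model: for each draw of the hyperparameters, the latent intercepts are integrated out by Monte Carlo averaging of the conditional likelihood, and the resulting marginalized deviances give mDIC. The stand-in "posterior draws", the model, and the number of replicates are assumptions; in practice the draws would come from the MCMC output of a Bayesian package.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# toy random-intercept data: y_ij = mu + b_j + eps_ij (assumption)
n_groups, n_per = 20, 10
mu, tau, sigma = 1.0, 0.8, 1.0
b = rng.normal(0, tau, n_groups)
y = mu + b[:, None] + rng.normal(0, sigma, (n_groups, n_per))

# stand-in for MCMC draws of the hyperparameters (mu, tau, sigma)
S = 400
draws = np.column_stack([
    rng.normal(mu, 0.05, S),
    np.abs(rng.normal(tau, 0.05, S)),
    np.abs(rng.normal(sigma, 0.05, S)),
])

def marg_loglik(mu_s, tau_s, sigma_s, n_rep=200):
    """Estimate log m(y | mu, tau, sigma) by averaging over replicate draws of the b_j."""
    b_rep = rng.normal(0, tau_s, (n_rep, n_groups))                # replicate latent variables
    ll = norm.logpdf(y[None], mu_s + b_rep[:, :, None], sigma_s).sum(axis=2)
    # average the conditional likelihood over replicates, per group, on the log scale
    return (np.logaddexp.reduce(ll, axis=0) - np.log(n_rep)).sum()

dev = np.array([-2.0 * marg_loglik(*d) for d in draws])            # marginalized deviances
dev_at_mean = -2.0 * marg_loglik(*draws.mean(axis=0))
mdic = 2.0 * dev.mean() - dev_at_mean                              # DIC = 2*mean(D) - D(theta_bar)
print("mDIC ≈", round(mdic, 1))
```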

  20. A hierarchical model for ordinal matrix factorization

    DEFF Research Database (Denmark)

    Paquet, Ulrich; Thomson, Blaise; Winther, Ole

    2012-01-01

    This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based...

  1. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features including the appearance context features and interaction context features to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts including scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to the event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts in each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.

  2. Hierarchical Bayesian Models of Subtask Learning

    Science.gov (United States)

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  3. Hierarchical models in the brain.

    Directory of Open Access Journals (Sweden)

    Karl Friston

    2008-11-01

This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.

  4. Topic Modeling of Hierarchical Corpora /

    OpenAIRE

    Kim, Do-kyum

    2014-01-01

    The sizes of modern digital libraries have grown beyond our capacity to comprehend manually. Thus we need new tools to help us in organizing and browsing large corpora of text that do not require manually examining each document. To this end, machine learning researchers have developed topic models, statistical learning algorithms for automatic comprehension of large collections of text. Topic models provide both global and local views of a corpus; they discover topics that run through the co...

  5. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms for finding the optimum supply plan using an economic criterion, together with a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms for finding the optimum supply plan were developed and formulated using an economic criterion and the model for evaluating the probability of failure-free operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability was presented.

  6. AN INTEGER PROGRAMMING MODEL FOR HIERARCHICAL WORKFORCE

    Directory of Open Access Journals (Sweden)

    BANU SUNGUR

    2013-06-01

The model presented in this paper is based on the model developed by Billionnet for the hierarchical workforce problem. In Billionnet's model, the weekly working hours of the workers are not taken into consideration when determining their weekly costs. In our model, the weekly cost per worker is reduced in proportion to the working hours per week. Our model is illustrated on Billionnet's example. The models in question are compared and evaluated on the basis of the results obtained from the example problem. A reduction in total cost is achieved by the proposed model.
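
    A minimal integer-programming sketch of a hierarchical workforce, in which a higher-qualified worker may cover the demand of any lower level and the weekly cost per worker scales with the weekly working hours, is given below. The demands, hours, availabilities, and costs are made-up numbers, and the formulation is only in the spirit of the abstract, not Billionnet's or the authors' exact model; it assumes the PuLP package is available.

```python
# pip install pulp  (assumed available)
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, value

# worker types ordered by qualification: type 1 can cover demand of levels 1..3, etc. (assumption)
types = [1, 2, 3]
weekly_demand = {1: 4, 2: 6, 3: 10}        # workers required per qualification level
available = {1: 10, 2: 8, 3: 6}            # headcount available per type
hours = {1: 40, 2: 36, 3: 32}              # weekly working hours per type
cost_per_hour = {1: 30, 2: 22, 3: 15}
weekly_cost = {t: hours[t] * cost_per_hour[t] for t in types}   # cost scales with hours

# x[i, j] = number of workers of type i assigned to cover demand of level j >= i
prob = LpProblem("hierarchical_workforce", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", lowBound=0, cat=LpInteger)
     for i in types for j in types if j >= i}

prob += lpSum(weekly_cost[i] * x[i, j] for (i, j) in x)          # total weekly cost
for j in types:                                                  # cover each demand level
    prob += lpSum(x[i, j] for i in types if i <= j) >= weekly_demand[j]
for i in types:                                                  # respect available headcount
    prob += lpSum(x[i, j] for j in types if j >= i) <= available[i]

prob.solve()
print("total cost:", value(prob.objective))
for (i, j), var in sorted(x.items()):
    if var.value():
        print(f"type {i} covering level {j}: {int(var.value())}")
```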

  7. Internet advertising effectiveness by using hierarchical model

    OpenAIRE

    RAHMANI, Samaneh

    2015-01-01

Abstract: This paper examines internet advertising effectiveness using a hierarchical model. Today the Internet is an important channel in marketing and advertising, largely because of its ability to reduce costs and to give people access to online services [1]. Advertisers can also easily reach a multitude of users and communicate with them at low cost [9]. On the other hand, compared to traditional advertising, interne...

  8. A Hierarchical Agency Model of Deposit Insurance

    OpenAIRE

    Jonathan Carroll; Shino Takayama

    2010-01-01

This paper develops a hierarchical agency model of deposit insurance. The main purpose is to undertake a game theoretic analysis of the consequences of deposit insurance schemes and their effects on monitoring incentives for banks. Using this simple framework, we analyze both risk-independent and risk-dependent premium schemes along with reserve requirement constraints. The results provide policymakers with not only a better understanding of the effects of deposit insurance on welfare and th...

  9. Academic Education Chain Operation Model

    OpenAIRE

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model can be used to develop an Academic Chain Operation Reference Model.

  10. Hierarchic modeling of heat exchanger thermal hydraulics

    International Nuclear Information System (INIS)

    Horvat, A.; Koncar, B.

    2002-01-01

Volume Averaging Technique (VAT) is employed in order to model the heat exchanger cross-flow as a porous media flow. As the averaging of the transport equations leads to a closure problem, separate relations are introduced to model the interphase momentum and heat transfer between the fluid flow and the solid structure. Hierarchic modeling is used to calculate the local drag coefficient C_d as a function of the Reynolds number Re_h. For that purpose a separate model of the REV is built and DNS of the flow through the REV is performed. The local values of the heat transfer coefficient h are obtained from the available literature. The geometry of the simulation domain and the boundary conditions follow the geometry of the experimental test section used at U.C.L.A. The calculated temperature fields reveal that the geometry with the denser pin-fin arrangement (HX1) heats the fluid flow faster. The temperature field in the HX2 exhibits the formation of a thermal boundary layer between the pin-fins, which has a significant role in the overall thermal performance of the heat exchanger. Although the presented discrepancies of the whole-section drag coefficient C_d are large, we believe that hierarchic modeling is an appropriate strategy for the calculation of complex transport phenomena in heat exchanger geometries. (author)

  11. Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.

    Science.gov (United States)

    Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J

    2010-12-01

    Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km(2) hexagons), can increase the relevance of habitat models to multispecies

  12. Galactic chemical evolution in hierarchical formation models

    Science.gov (United States)

    Arrigoni, Matias

    2010-10-01

The chemical properties and abundance ratios of galaxies provide important information about their formation histories. Galactic chemical evolution has been modelled in detail within the monolithic collapse scenario. These models have successfully described the abundance distributions in our Galaxy and other spiral discs, as well as the trends of metallicity and abundance ratios observed in early-type galaxies. In the last three decades, however, the paradigm of hierarchical assembly in a Cold Dark Matter (CDM) cosmology has revised the picture of how structure in the Universe forms and evolves. In this scenario, galaxies form when gas radiatively cools and condenses inside dark matter haloes, which themselves follow dissipationless gravitational collapse. The CDM picture has been successful at predicting many observed properties of galaxies (for example, the luminosity and stellar mass function of galaxies, color-magnitude or star formation rate vs. stellar mass distributions, relative numbers of early and late-type galaxies, gas fractions and size distributions of spiral galaxies, and the global star formation history), though many potential problems and open questions remain. It is therefore interesting to see whether chemical evolution models, when implemented within this modern cosmological context, are able to correctly predict the observed chemical properties of galaxies. With the advent of more powerful telescopes and detectors, precise observations of chemical abundances and abundance ratios in various phases (stellar, ISM, ICM) offer the opportunity to obtain strong constraints on galaxy formation histories and the physics that shapes them. However, in order to take advantage of these observations, it is necessary to implement detailed modeling of chemical evolution into a modern cosmological model of hierarchical assembly.

  13. Academic Education Chain Operation Model

    NARCIS (Netherlands)

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model

  14. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

Background: The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. Results: A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion: This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
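
    The base classifiers used by HensBC (Bayesian classifiers built on Markov chain models of the sequence) can be sketched as follows: a first-order Markov chain over amino acids is fitted per class, and a query sequence is assigned to the class with the highest posterior score. The toy sequences and class names are assumptions, and the recursive hierarchical ensemble construction itself is not reproduced.

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"          # the 20 amino acids
IDX = {a: i for i, a in enumerate(ALPHABET)}

def fit_markov(seqs, pseudo=1.0):
    """First-order Markov chain: initial and transition log-probabilities with pseudocounts."""
    init = np.full(20, pseudo)
    trans = np.full((20, 20), pseudo)
    for s in seqs:
        init[IDX[s[0]]] += 1
        for a, b in zip(s, s[1:]):
            trans[IDX[a], IDX[b]] += 1
    return np.log(init / init.sum()), np.log(trans / trans.sum(axis=1, keepdims=True))

def log_score(model, s):
    init, trans = model
    return init[IDX[s[0]]] + sum(trans[IDX[a], IDX[b]] for a, b in zip(s, s[1:]))

# toy training data standing in for, e.g., cytoplasmic vs. outer-membrane proteins (assumption)
rng = np.random.default_rng(0)
train = {
    "cytoplasm": ["".join(rng.choice(list("ACDEG"), 50)) for _ in range(30)],
    "membrane":  ["".join(rng.choice(list("FILVW"), 50)) for _ in range(30)],
}
models = {c: fit_markov(seqs) for c, seqs in train.items()}
priors = {c: np.log(len(s) / 60) for c, s in train.items()}

query = "".join(rng.choice(list("FILVW"), 50))
pred = max(models, key=lambda c: priors[c] + log_score(models[c], query))
print("predicted location:", pred)
```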

  15. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
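
    Hierarchical (blockwise) multiple regression of the kind used in this study can be sketched by entering one block of predictors, then a second block, and reporting the increment in R². The synthetic variables below merely stand in for the effectuation and theory-of-planned-behaviour blocks; the variable roles, sample size, and effect sizes are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
# block 1: two placeholder "effectuation" dimensions; block 2: three placeholder
# "theory of planned behaviour" elements, deliberately correlated with block 1
block1 = rng.normal(size=(n, 2))
block2 = 0.5 * block1[:, [0, 0, 1]] + rng.normal(size=(n, 3))
y = 0.3 * block1[:, 0] + 0.8 * block2[:, 1] + rng.normal(size=n)   # criterion variable

def r_squared(X):
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

r2_step1 = r_squared(block1)                              # block 1 entered first
r2_step2 = r_squared(np.column_stack([block1, block2]))   # block 2 added on top
print(f"step 1 R^2 = {r2_step1:.3f}")
print(f"step 2 R^2 = {r2_step2:.3f}, Delta R^2 = {r2_step2 - r2_step1:.3f}")
```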

  16. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    Science.gov (United States)

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  17. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group

  18. A hierarchical stochastic model for bistable perception.

    Science.gov (United States)

    Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby

    2017-11-01

    Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to
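
    The core mechanism of the hierarchical Brownian model, an activity difference between two competing populations that drifts and diffuses until it crosses a threshold and the percept switches, can be illustrated with a short simulation. The drift, noise, and threshold values are assumptions, and the stable/unstable state hierarchy of the full HBM is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, noise, drift, threshold = 0.01, 1.0, 0.2, 2.0   # assumed parameters
n_steps = 200_000

x = 0.0                # activity difference between the two populations
percept = +1           # current percept (sign of the dominant population)
dwell, dwell_times = 0.0, []

for _ in range(n_steps):
    # drift pushes activity toward the currently suppressed percept (adaptation-like)
    x += -drift * percept * dt + noise * np.sqrt(dt) * rng.normal()
    dwell += dt
    if percept * x < -threshold:     # crossing the opposite threshold switches the percept
        percept = -percept
        dwell_times.append(dwell)
        dwell = 0.0

print(f"{len(dwell_times)} switches, mean dwell time = {np.mean(dwell_times):.2f}")
```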

  19. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
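
    The BMA-tree bookkeeping described above can be illustrated numerically for two uncertain components (say, geological architecture and boundary condition): within each branch the law of total variance gives a branch mean and variance, and the same decomposition applied across branches separates within-model from between-model variance. All predictions, variances, and model probabilities below are made-up numbers, not values from the study.

```python
import numpy as np

# two uncertain components: geological architecture (rows) x boundary condition (columns)
mean = np.array([[1.0, 1.4],
                 [2.0, 2.6]])                 # model predictions (made-up)
var_within = np.array([[0.10, 0.12],
                       [0.20, 0.15]])         # within-model (parameter) variances (made-up)
p_bc = np.array([0.6, 0.4])                   # P(boundary condition | architecture)
p_arch = np.array([0.7, 0.3])                 # P(architecture)

# level 1: average over boundary conditions within each architecture branch
mean_arch = mean @ p_bc
var_arch = var_within @ p_bc + ((mean - mean_arch[:, None]) ** 2) @ p_bc

# level 2: average over architectures
mean_total = mean_arch @ p_arch
within = var_arch @ p_arch                             # expected within-branch variance
between = ((mean_arch - mean_total) ** 2) @ p_arch     # between-architecture variance
print(f"prediction {mean_total:.3f}, within {within:.3f}, "
      f"between {between:.3f}, total {within + between:.3f}")
```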

  20. Bayesian hierarchical modelling of North Atlantic windiness

    Science.gov (United States)

    Vanem, E.; Breivik, O. N.

    2013-03-01

Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause or contributing factor to maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea and there is a need for stochastic models that can describe the variability in both space and time at various scales of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular, pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  1. Bayesian hierarchical modelling of North Atlantic windiness

    Directory of Open Access Journals (Sweden)

    E. Vanem

    2013-03-01

Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause or contributing factor to maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea and there is a need for stochastic models that can describe the variability in both space and time at various scales of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular, pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  2. Microscale and nanoscale hierarchical structured mesh films with superhydrophobic and superoleophilic properties induced by long-chain fatty acids

    International Nuclear Information System (INIS)

    Wang Shutao; Song Yanlin; Jiang Lei

    2007-01-01

Inspired by the lotus effect, we fabricate new microscale and nanoscale hierarchical structured copper mesh films by simple electrochemical deposition. After modification with a long-chain fatty acid monolayer, these films show superhydrophobic and superoleophilic properties, which could be used for the effective separation of oil and water. The length of the fatty acid chain strongly influences the surface wettability of the as-prepared films. It is confirmed that the cooperative effect of the hierarchical structure of the copper film and the nature of the long-chain fatty acid contributes to this unique surface wettability

  3. Modeling methodology for supply chain synthesis and disruption analysis

    Science.gov (United States)

    Wu, Teresa; Blackhurst, Jennifer

    2004-11-01

The concept of an integrated or synthesized supply chain is a strategy for managing today's globalized and customer-driven supply chains in order to better meet customer demands. Synthesizing individual entities into an integrated supply chain can be a challenging task due to a variety of factors including conflicting objectives, mismatched incentives and constraints of the individual entities. Furthermore, understanding the effects of disruptions occurring at any point in the system is difficult when working toward synthesizing supply chain operations. Therefore, the goal of this research is to present a modeling methodology to manage the synthesis of a supply chain by linking hierarchical levels of the system and to model and analyze disruptions in the integrated supply chain. The contribution of this research is threefold: (1) supply chain systems can be modeled hierarchically; (2) the performance of the synthesized supply chain system can be evaluated quantitatively; and (3) reachability analysis is used to evaluate the system performance and verify whether a specific state is reachable, allowing the user to understand the extent of the effects of a disruption.
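
    The reachability analysis mentioned in point (3) can be illustrated with a breadth-first search over an explicit state graph of disruption scenarios. The states and transitions below are made-up and stand in for the authors' hierarchical supply chain model.

```python
from collections import deque

# made-up state graph of a small supply chain: state -> states reachable in one step
transitions = {
    "all_ok":             ["supplier_down", "all_ok"],
    "supplier_down":      ["buffer_drawdown", "supplier_recovered"],
    "supplier_recovered": ["all_ok"],
    "buffer_drawdown":    ["stockout", "supplier_recovered"],
    "stockout":           ["backorder"],
    "backorder":          ["all_ok"],
}

def reachable(start, goal):
    """Breadth-first search: can the system reach `goal` from `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("all_ok", "stockout"))                 # True: a disruption can propagate to a stockout
print(reachable("supplier_recovered", "backorder"))    # True: via all_ok -> supplier_down -> ...
```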

  4. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies that want to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models obtained by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster the data into churner and nonchurner groups and also to filter out unrepresentative data or outliers. The clustered data are then used as inputs to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.

  5. The Revised Hierarchical Model: A critical review and assessment

    OpenAIRE

    Kroll, Judith F.; van Hell, Janet G.; Tokowicz, Natasha; Green, David W.

    2010-01-01

    Brysbaert and Duyck (2009) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on nonselective access in bilingual word recognition. In this brief response, we first review the history of the Revised Hierarchical Model (RHM), consider the set of issues that it was proposed to address, and then evaluate the evidence that supp...

  6. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  7. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  8. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    .... Exploring these new developments, Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology, Second Edition provides an up-to-date, cohesive account of the full range of Bayesian disease mapping methods and applications...

  9. Random walk hierarchy measure: What is more hierarchical, a chain, a tree or a star?

    Science.gov (United States)

    Czégel, Dániel; Palla, Gergely

    2015-01-01

Signs of hierarchy are prevalent in a wide range of systems in nature and society. One of the key problems is quantifying the importance of hierarchical organisation in the structure of the network representing the interactions or connections between the fundamental units of the studied system. Although a number of notable methods are already available, the vast majority of them treat all directed acyclic graphs as already maximally hierarchical. Here we propose a hierarchy measure based on random walks on the network. The novelty of our approach is that directed trees corresponding to multi-level pyramidal structures obtain higher hierarchy scores than directed chains and directed stars. Furthermore, in the thermodynamic limit the hierarchy measure of regular trees converges to a well-defined limit depending only on the branching number. When applied to real networks, our method is computationally very effective, as the result can be evaluated with arbitrary precision by subsequent multiplications of the transition matrix describing the random walk process. In addition, tests on real-world networks provided very intuitive results, e.g., the trophic levels obtained from our approach on a food web were highly consistent with former results from ecology. PMID:26657012

  10. Random walk hierarchy measure: What is more hierarchical, a chain, a tree or a star?

    Science.gov (United States)

    Czégel, Dániel; Palla, Gergely

    2015-12-10

Signs of hierarchy are prevalent in a wide range of systems in nature and society. One of the key problems is quantifying the importance of hierarchical organisation in the structure of the network representing the interactions or connections between the fundamental units of the studied system. Although a number of notable methods are already available, the vast majority of them treat all directed acyclic graphs as already maximally hierarchical. Here we propose a hierarchy measure based on random walks on the network. The novelty of our approach is that directed trees corresponding to multi-level pyramidal structures obtain higher hierarchy scores than directed chains and directed stars. Furthermore, in the thermodynamic limit the hierarchy measure of regular trees converges to a well-defined limit depending only on the branching number. When applied to real networks, our method is computationally very effective, as the result can be evaluated with arbitrary precision by subsequent multiplications of the transition matrix describing the random walk process. In addition, tests on real-world networks provided very intuitive results, e.g., the trophic levels obtained from our approach on a food web were highly consistent with former results from ecology.
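
    The computational machinery referred to above (building the transition matrix of a random walk and evaluating walker occupation by repeated multiplication) can be sketched on three small directed graphs: a chain, a star, and a binary tree. This is only an illustration of the mechanics, with a restart term added to keep the walk well defined; it is not the specific hierarchy measure defined in the paper and does not reproduce its scores.

```python
import numpy as np

def transition_matrix(edges, n, restart=0.15):
    """Row-stochastic matrix of a random walk on a directed graph with uniform restarts."""
    P = np.zeros((n, n))
    for a, b in edges:
        P[a, b] = 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    # nodes without out-edges jump uniformly; others follow their out-edges uniformly
    P = np.divide(P, row_sums, out=np.full_like(P, 1.0 / n), where=row_sums > 0)
    return (1 - restart) * P + restart / n

def occupation(P, iters=200):
    """Walker occupation obtained by repeated multiplication with the transition matrix."""
    v = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        v = v @ P
    return v

graphs = {
    "chain": ([(i, i + 1) for i in range(6)], 7),
    "star":  ([(0, i) for i in range(1, 7)], 7),
    "tree":  ([(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)], 7),
}
for name, (edges, n) in graphs.items():
    v = occupation(transition_matrix(edges, n))
    print(f"{name:5s} occupation of node 0: {v[0]:.3f}, of node 6: {v[-1]:.3f}")
```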

  11. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Fuzzy hierarchical model for risk assessment principles, concepts, and practical applications

    CERN Document Server

    Chan, Hing Kai

    2013-01-01

    Risk management is often complicated by situational uncertainties and the subjective preferences of decision makers. Fuzzy Hierarchical Model for Risk Assessment introduces a fuzzy-based hierarchical approach to solve risk management problems considering both qualitative and quantitative criteria to tackle imprecise information.   This approach is illustrated through number of case studies using examples from the food, fashion and electronics sectors to cover a range of applications including supply chain management, green product design and green initiatives. These practical examples explore how this method can be adapted and fine tuned to fit other industries as well.   Supported by an extensive literature review, Fuzzy Hierarchical Model for Risk Assessment  comprehensively introduces a new method for project managers across all industries as well as researchers in risk management.

  13. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

  14. Dynamic modeling of presence of occupants using inhomogeneous Markov chains

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff; Iversen, Anne; Madsen, Henrik

    2014-01-01

    ... on time of day, and by use of a filter of the observations it is able to capture per-employee sequence dynamics. Simulations using this method are compared with simulations using homogeneous Markov chains and show far better ability to reproduce key properties of the data. The method is based on inhomogeneous Markov chains where the transition probabilities are estimated using generalized linear models with polynomials, B-splines, and a filter of past observations as inputs. For treating the dispersion of the data series, a hierarchical model structure is used where one model is for low presence...
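
    As a rough illustration of the kind of simulation such a model enables, the hedged sketch below draws occupant presence from a two-state Markov chain whose transition probabilities vary with time of day. The logistic arrival and departure profiles and all parameter values are invented placeholders, not the GLM/B-spline estimates of the paper.

        # Sketch: simulating presence with an inhomogeneous two-state Markov chain
        # whose transition probabilities depend on time of day. The logistic
        # profiles and their parameters are assumptions for illustration only.
        import math
        import random

        def p_arrive(hour):
            # Probability of switching absent -> present in a 1-hour step (assumed shape).
            return 1.0 / (1.0 + math.exp(-(hour - 8.0)))   # rises around 08:00

        def p_leave(hour):
            # Probability of switching present -> absent in a 1-hour step (assumed shape).
            return 1.0 / (1.0 + math.exp(-(hour - 17.0)))  # rises around 17:00

        def simulate_day(seed=0):
            random.seed(seed)
            present, trace = 0, []
            for hour in range(24):
                if present == 0:
                    present = 1 if random.random() < p_arrive(hour) else 0
                else:
                    present = 0 if random.random() < p_leave(hour) else 1
                trace.append(present)
            return trace

        print(simulate_day())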

  15. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    Science.gov (United States)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We tested the Basel 2006 fluid-induced seismic case study to prove that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
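
    A minimal sketch of the rate model the abstract describes, assuming a seismicity rate of the form mu + a_fb * (injection rate) and drawing event counts from a nonhomogeneous Poisson process. The parameter names a_fb and mu, their values, and the injection profile are invented for illustration.

        # Sketch: induced event counts from a nonhomogeneous Poisson process whose
        # rate is proportional to the injected fluid rate, plus a background term.
        # All numbers below are invented placeholders.
        import numpy as np

        rng = np.random.default_rng(1)

        dt = 1.0                                                     # time step [days]
        injection = np.array([0, 500, 800, 800, 400, 0, 0], float)   # m^3/day (assumed)
        a_fb = 2e-3                                                  # events per m^3 injected (assumed)
        mu = 0.1                                                     # background rate [events/day] (assumed)

        rate = mu + a_fb * injection          # seismicity rate lambda(t)
        counts = rng.poisson(rate * dt)       # one simulated catalogue of daily counts

        print("expected events/day :", rate)
        print("simulated events/day:", counts)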

  16. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  17. Hierarchical modeling of molecular energies using a deep neural network

    Science.gov (United States)

    Lubbers, Nicholas; Smith, Justin S.; Barros, Kipton

    2018-06-01

    We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves the state-of-the-art performance on a dataset of 131k ground state organic molecules and predicts energies with 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.

  18. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Science.gov (United States)

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  19. A HIERARCHICAL SET OF MODELS FOR SPECIES RESPONSE ANALYSIS

    NARCIS (Netherlands)

    HUISMAN, J; OLFF, H; FRESCO, LFM

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  20. A hierarchical set of models for species response analysis

    NARCIS (Netherlands)

    Huisman, J.; Olff, H.; Fresco, L.F.M.

    1993-01-01

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  1. The Revised Hierarchical Model: A critical review and assessment

    NARCIS (Netherlands)

    Kroll, J.F.; Hell, J.G. van; Tokowicz, N.; Green, D.W.

    2010-01-01

    Brysbaert and Duyck (this issue) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on non-selective access in bilingual word

  2. A hierarchical model exhibiting the Kosterlitz-Thouless fixed point

    International Nuclear Information System (INIS)

    Marchetti, D.H.U.; Perez, J.F.

    1985-01-01

    A hierarchical model for 2-d Coulomb gases displaying a stable line of fixed points describing the Kosterlitz-Thouless phase transition is constructed. For Coulomb gases corresponding to Z_N models these fixed points are stable for an intermediate temperature interval. (Author) [pt

  3. Hierarchical graphs for rule-based modeling of biochemical systems

    Directory of Open Access Journals (Sweden)

    Hu Bin

    2011-02-01

    Full Text Available Abstract Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for

  4. A Hierarchal Risk Assessment Model Using the Evidential Reasoning Rule

    Directory of Open Access Journals (Sweden)

    Xiaoxiao Ji

    2017-02-01

    Full Text Available This paper aims to develop a hierarchical risk assessment model using the newly-developed evidential reasoning (ER) rule, which constitutes a generic conjunctive probabilistic reasoning process. In this paper, we first provide a brief introduction to the basics of the ER rule and emphasize the strengths for representing and aggregating uncertain information from multiple experts and sources. Further, we discuss the key steps of developing the hierarchical risk assessment framework systematically, including (1) formulation of risk assessment hierarchy; (2) representation of both qualitative and quantitative information; (3) elicitation of attribute weights and information reliabilities; (4) aggregation of assessment information using the ER rule; and (5) quantification and ranking of risks using utility-based transformation. The proposed hierarchical risk assessment framework can potentially be implemented to various complex and uncertain systems. A case study on the fire/explosion risk assessment of marine vessels demonstrates the applicability of the proposed risk assessment model.
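
    To make steps (3)-(5) concrete, here is a deliberately simplified sketch that combines belief degrees with a plain weighted average and ranks risk with a utility-based score. It is not the full evidential reasoning rule (which also treats source reliability and unassigned belief), and every number is invented.

        # Sketch: simplified two-level aggregation and utility-based ranking in the
        # spirit of steps (3)-(5). Beliefs are combined with a plain weighted
        # average, NOT the full ER rule; all weights, beliefs and utilities are toy.
        import numpy as np

        grades_utility = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # risk utility per grade

        # Belief degrees of three basic risk attributes over the five grades.
        beliefs = np.array([[0.1, 0.2, 0.4, 0.2, 0.1],
                            [0.0, 0.1, 0.3, 0.4, 0.2],
                            [0.3, 0.3, 0.2, 0.1, 0.1]])
        weights = np.array([0.5, 0.3, 0.2])                        # attribute weights (sum to 1)

        aggregated = weights @ beliefs                             # combined belief distribution
        risk_score = aggregated @ grades_utility                   # utility-based risk index

        print("aggregated beliefs:", np.round(aggregated, 3))
        print("risk score:", round(float(risk_score), 3))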

  5. Conceptual hierarchical modeling to describe wetland plant community organization

    Science.gov (United States)

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh / sedge meadows status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.

  6. Use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio

    Directory of Open Access Journals (Sweden)

    Fidel Ernesto Castro Morales

    2016-03-01

    Full Text Available Abstract Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within up to 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, even allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
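
    For readers unfamiliar with the sampling step, the sketch below runs a random-walk Metropolis sampler on a simple log-log (allometric) regression with simulated data and vague priors. It only illustrates the kind of Markov chain Monte Carlo computation mentioned above and is not the authors' hierarchical model with confounders.

        # Sketch: random-walk Metropolis for a toy allometric (log-log) regression.
        # Data are simulated and the priors are assumptions; not the paper's model.
        import numpy as np

        rng = np.random.default_rng(0)

        logw = np.log(rng.uniform(2500, 4000, size=26))        # toy log birth weights
        x = logw - logw.mean()                                 # centring improves mixing
        y = 0.75 * x + 5.5 + rng.normal(0, 0.1, size=26)       # assumed "true" exponent 0.75

        def log_post(a, b, log_sigma):
            sigma = np.exp(log_sigma)
            resid = y - (a + b * x)
            loglik = -len(y) * np.log(sigma) - 0.5 * np.sum(resid**2) / sigma**2
            logprior = -0.5 * (a**2 + b**2) / 100.0            # vague N(0, 10^2) priors (assumed)
            return loglik + logprior

        theta = np.array([0.0, 1.0, 0.0])                      # start for (a, b, log_sigma)
        lp = log_post(*theta)
        draws = []
        for _ in range(20000):
            prop = theta + rng.normal(0, 0.05, size=3)         # random-walk proposal
            lp_prop = log_post(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept/reject
                theta, lp = prop, lp_prop
            draws.append(theta.copy())

        draws = np.array(draws)[10000:]                        # discard burn-in
        print("posterior mean of the exponent (should be near 0.75):",
              round(float(draws[:, 1].mean()), 3))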

  7. Control of discrete event systems modeled as hierarchical state machines

    Science.gov (United States)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.

  8. The fishing industry - toward supply chain modelling

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Nielsen, Jette; Larsen, Erling P.

    Mathematical models for simulating and optimizing supply chain aspects such as distribution planning and optimal use of raw materials are widely used. However, modelling based on a holistic chain view is less studied, and food-related aspects such as quality and shelf life issues enforce additional requirements onto the chains. In this paper, we consider the supply chain structure of the Danish fishing industry and illustrate the potential of using mathematical models to identify quality and value-adding activities. This is a first step toward innovative supply chain modelling aimed to identify benefits for actors along chains in the fishing industry.

  9. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    ... values alone are propagated through layers of combat models, the final results will likely be biased, and risk underestimated. An air-to-air engagement... (Thesis by Salih Ilaslan, June 2016; Thesis Advisor: Thomas W. Lucas; Second Reader: Jeffrey ...)

  10. Supply chain strategies, issues and models

    CERN Document Server

    Ramanathan, Ramakrishnan

    2014-01-01

    In the 21st century, supply chain operations and relationships among supply chain partners have become highly challenging, necessitating new approaches, e.g., the development of new models. Supply Chain Strategies, Issues and Models discusses supply chain issues and models with examples from actual industrial cases. Expert authors with a wide spectrum of knowledge working in various areas of supply chain management from various geographical locations offer refreshing, novel and insightful ideas and address possible solutions using established theories and models. Supply Chain Strategies, Issues and Models features studies that have used mathematical modeling, statistical analyses and also descriptive qualitative studies. The chapters cover many relevant themes related to supply chains and logistics including supply chain complexity, information sharing, quality (six sigma), electronic Kanbans, inventory models, scheduling, purchasing and contracts. To facilitate easy reading, the chapters that deal with suppl...

  11. Metastable states in the hierarchical Dyson model drive parallel processing in the hierarchical Hopfield network

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Guerra, Francesco; Galluzzi, Andrea; Tantari, Daniele; Tavani, Flavia

    2015-01-01

    In this paper, we introduce and investigate the statistical mechanics of hierarchical neural networks. First, we approach these systems à la Mattis, by thinking of the Dyson model as a single-pattern hierarchical neural network. We also discuss the stability of different retrievable states as predicted by the related self-consistencies obtained both from a mean-field bound and from a bound that bypasses the mean-field limitation. The latter is worked out by properly reabsorbing the magnetization fluctuations related to higher levels of the hierarchy into effective fields for the lower levels. Remarkably, mixing Amit's ansatz technique for selecting candidate-retrievable states with the interpolation procedure for solving for the free energy of these states, we prove that, due to gauge symmetry, the Dyson model accomplishes both serial and parallel processing. We extend this scenario to multiple stored patterns by implementing the Hebb prescription for learning within the couplings. This results in Hopfield-like networks constrained on a hierarchical topology, for which, by restricting to the low-storage regime where the number of patterns grows at most logarithmically with the number of neurons, we prove the existence of the thermodynamic limit for the free energy, and we give an explicit expression of its mean-field bound and of its related improved bound. The resulting self-consistencies for the Mattis magnetizations, which act as order parameters, are studied, and the stability of solutions is analyzed to get a picture of the overall retrieval capabilities of the system according to both mean-field and non-mean-field scenarios. Our main finding is that embedding the Hebbian rule on a hierarchical topology allows the network to accomplish both serial and parallel processing. By tuning the level of fast noise affecting it or triggering the decay of the interactions with the distance among neurons, the system may switch from sequential retrieval to

  12. Hierarchical Models of the Nearshore Complex System

    National Research Council Canada - National Science Library

    Werner, Brad

    2004-01-01

    .... This grant was termination funding for the Werner group, specifically aimed at finishing up and publishing research related to synoptic imaging of near shore bathymetry, testing models for beach cusp...

  13. Hybrid modeling and empirical analysis of automobile supply chain network

    Science.gov (United States)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on a connection mechanism in which nodes automatically select upstream and downstream agents, a simulation model of the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating agent-based modeling (ABM) and discrete modeling on a GIS-based map. First, the model's soundness is verified by analyzing the consistency of sales and of changes in various agent parameters between the simulation model and a real automobile supply chain. Second, using complex network theory, the hierarchical structures of the model and the relationships among networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficient, and degree distribution; this verifies that the model is both a typical scale-free network and a small-world network. Finally, the dynamics of the model are analyzed from the perspective of complex adaptive systems (CAS). The chaotic state of the simulation system is verified, which indicates that the system has typical nonlinear characteristics. The model not only macroscopically illustrates the dynamic evolution of the complex networks of an automobile supply chain but also microscopically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical experience for supply chain analysis at auto companies.
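
    The network characteristics named above (mean distance, mean clustering coefficient, degree distribution) can be computed along the following lines; the sketch uses networkx on a toy scale-free graph standing in for the simulated supply chain network, which is not reproduced here.

        # Sketch: computing the characteristic network parameters mentioned in the
        # abstract on a toy scale-free graph (a stand-in, not the authors' network).
        import networkx as nx

        G = nx.barabasi_albert_graph(200, 2, seed=42)    # toy scale-free stand-in

        print("mean distance  :", round(nx.average_shortest_path_length(G), 3))
        print("mean clustering:", round(nx.average_clustering(G), 3))

        degrees = [d for _, d in G.degree()]
        hist = nx.degree_histogram(G)                    # count of nodes with degree k
        print("max degree     :", max(degrees))
        print("degree distribution (k: count):",
              {k: c for k, c in enumerate(hist) if c > 0})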

  14. Hierarchical and coupling model of factors influencing vessel traffic flow.

    Directory of Open Access Journals (Sweden)

    Zhao Liu

    Full Text Available Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, the data obtained from Port of Tianjin is used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors; the coupling model is efficient in analyzing factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in waterway transportation system.

  15. Hierarchical and coupling model of factors influencing vessel traffic flow.

    Science.gov (United States)

    Liu, Zhao; Liu, Jingxian; Li, Huanhuan; Li, Zongzhi; Tan, Zhirong; Liu, Ryan Wen; Liu, Yi

    2017-01-01

    Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, the data obtained from Port of Tianjin is used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors; the coupling model is efficient in analyzing factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in waterway transportation system.

  16. Petascale Hierarchical Modeling VIA Parallel Execution

    Energy Technology Data Exchange (ETDEWEB)

    Gelman, Andrew [Principal Investigator

    2014-04-14

    The research allows more effective model building. By allowing researchers to fit complex models to large datasets in a scalable manner, our algorithms and software enable more effective scientific research. In the new area of “big data,” it is often necessary to fit “big models” to adjust for systematic differences between sample and population. For this task, scalable and efficient model-fitting tools are needed, and these have been achieved with our new Hamiltonian Monte Carlo algorithm, the no-U-turn sampler, and our new C++ program, Stan. In layman’s terms, our research enables researchers to create improved mathematical models for large and complex systems.
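
    The partial pooling that such hierarchical "big models" perform can be seen in the simplest normal-normal setting, sketched below with known variances so the posterior means have a closed form. This shows only the shrinkage idea, not Stan or the no-U-turn sampler, and the group estimates, standard errors and between-group scale tau are toy values.

        # Sketch: closed-form partial pooling in a normal-normal hierarchical model
        # with known variances. Purely illustrative; all numbers are toy values.
        import numpy as np

        group_means = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])  # observed effects
        group_se = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])   # their std. errors
        tau = 5.0    # assumed between-group standard deviation (fixed here, sampled in a full model)

        mu = np.average(group_means, weights=1 / (group_se**2 + tau**2))  # population mean estimate
        w = tau**2 / (tau**2 + group_se**2)                               # pooling weights
        shrunk = w * group_means + (1 - w) * mu                           # partially pooled estimates

        for raw, post in zip(group_means, shrunk):
            print(f"raw {raw:6.1f}  ->  partially pooled {post:6.1f}")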

  17. Hierarchical Modelling of Flood Risk for Engineering Decision Analysis

    DEFF Research Database (Denmark)

    Custer, Rocco

    protection structures in the hierarchical flood protection system - is identified. To optimise the design of protection structures, fragility and vulnerability models must allow for consideration of decision alternatives. While such vulnerability models are available for large protection structures (e...... systems, as well as the implementation of the flood risk analysis methodology and the vulnerability modelling approach are illustrated with an example application. In summary, the present thesis provides a characterisation of hierarchical flood protection systems as well as several methodologies to model...... and robust. Traditional risk management solutions, e.g. dike construction, are not particularly flexible, as they are difficult to adapt to changing risk. Conversely, the recent concept of integrated flood risk management, entailing a combination of several structural and non-structural risk management...

  18. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed targeting different abstract modules such as transaction, engine, computation, control and storage. The previously separate modules for power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  19. Fully probabilistic design of hierarchical Bayesian models

    Czech Academy of Sciences Publication Activity Database

    Quinn, A.; Kárný, Miroslav; Guy, Tatiana Valentine

    2016-01-01

    Roč. 369, č. 1 (2016), s. 532-547 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Fully probabilistic design * Ideal distribution * Minimum cross-entropy principle * Bayesian conditioning * Kullback-Leibler divergence * Bayesian nonparametric modelling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/karny-0463052.pdf

  20. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, on the other hand by natural variations in power production e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology

  1. Coordinated supply chain dynamic production planning model

    Science.gov (United States)

    Chandra, Charu; Grabis, Janis

    2001-10-01

    Coordination of different and often contradicting interests of individual supply chain members is one of the important issues in supply chain management because the individual members can not succeed without success of the supply chain and vice versa. This paper investigates a supply chain dynamic production planning problem with emphasis on coordination. A planning problem is formally described using a supply chain kernel, which defines supply chain configuration, management policies, available resources and objectives both at supply chain or macro and supply chain member or micro levels. The coordinated model is solved in order to balance decisions made at the macro and micro levels and members' profitability is used as the coordination criterion. The coordinated model is used to determine inventory levels and production capacity across the supply chain. Application of the coordinated model distributes costs burden uniformly among supply chain members and preserves overall efficiency of the supply chain. Influence of the demand series uncertainty is investigated. The production planning model is a part of the integrated supply chain decision modeling system, which is shared among the supply chain members across the Internet.

  2. Hierarchical Model Predictive Control for Plug-and-Play Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2012-01-01

    This chapter deals with hierarchical model predictive control (MPC) of distributed systems. A three level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, on the other hand by natural variations in power production e.g. from wind turbines. The proposed method can also be applied to supply chain management systems, where the challenge is to balance demand and supply, using a number of storages each with a maximal...

  3. Introduction to Hierarchical Bayesian Modeling for Ecological Data

    CERN Document Server

    Parent, Eric

    2012-01-01

    Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden states variables. When fitting the models to data, the authors gradually present the concepts a

  4. A hierarchical spatiotemporal analog forecasting model for count data.

    Science.gov (United States)

    McDermott, Patrick L; Wikle, Christopher K; Millspaugh, Joshua

    2018-01-01

    Analog forecasting is a mechanism-free nonlinear method that forecasts a system forward in time by examining how past states deemed similar to the current state moved forward. Previous applications of analog forecasting have been successful at producing robust forecasts for a variety of ecological and physical processes, but the method has typically been presented as an empirical or heuristic procedure, rather than as a formal statistical model. The methodology presented here extends the model-based analog method of McDermott and Wikle (Environmetrics, 27, 2016, 70) by placing analog forecasting within a fully hierarchical statistical framework that can accommodate count observations. Using a Bayesian approach, the hierarchical analog model is able to quantify rigorously the uncertainty associated with forecasts. Forecasting waterfowl settling patterns in the northwestern United States and Canada is conducted by applying the hierarchical analog model to a breeding population survey dataset. Sea surface temperature (SST) in the Pacific Ocean is used to help identify potential analogs for the waterfowl settling patterns.
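
    The bare analog idea, finding the past states most similar to the current one and forecasting with an average of what followed them, can be sketched as follows on a synthetic series; this is the mechanism-free core only, not the paper's hierarchical Bayesian count model.

        # Sketch: one-step analog forecast from the k nearest delay-embedded analogs.
        # The series is synthetic; this is not the paper's statistical model.
        import numpy as np

        rng = np.random.default_rng(3)
        series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)

        def analog_forecast(series, embed=5, k=10):
            """Forecast the next value from the k nearest delay-embedded analogs."""
            # Library of past states (delay vectors) and the value that followed each.
            states = np.array([series[i:i + embed] for i in range(len(series) - embed)])
            successors = series[embed:]
            current = series[-embed:]                      # the state we forecast from
            dist = np.linalg.norm(states - current, axis=1)
            nearest = np.argsort(dist)[:k]                 # indices of the closest analogs
            return successors[nearest].mean()

        print("analog forecast of the next value:", round(float(analog_forecast(series)), 3))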

  5. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
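
    The general idea of stabilising a large-scale covariance estimate can be illustrated with plain linear shrinkage toward a diagonal target, as sketched below; this is not the paper's Bayesian hierarchical model, and the shrinkage weight is fixed by hand rather than estimated.

        # Sketch: linear shrinkage of a noisy sample covariance toward a diagonal
        # target, to illustrate regularisation in the p >> n ("overfitting") regime.
        # The shrinkage weight is an assumption, not an estimate.
        import numpy as np

        rng = np.random.default_rng(7)
        p, n = 100, 40                        # many variables, few samples
        X = rng.normal(size=(n, p))

        S = np.cov(X, rowvar=False)           # noisy sample covariance (rank-deficient)
        target = np.diag(np.diag(S))          # structured target: diagonal of S
        lam = 0.5                             # shrinkage weight (assumed)
        S_shrunk = (1 - lam) * S + lam * target

        print("condition number, sample:", f"{np.linalg.cond(S):.2e}")
        print("condition number, shrunk:", f"{np.linalg.cond(S_shrunk):.2e}")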

  6. The fish industry - toward supply chain modelling

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Nielsen, Jette; Larsen, Erling

    2010-01-01

    Mathematical models for simulating and optimizing aspects of supply chains such as distribution, planning, and optimal handling of raw materials are widely used. However, modeling based on a holistic chain view including several or all supply chain agents is less studied, and food-related aspects such as quality and shelf-life issues enforce additional requirements onto the chains. In this article, we consider the supply chain structure of the fish industry. We discuss and illustrate the potential of using mathematical models to identify quality and value-adding activities. The article provides a first step toward innovative supply chain modeling aimed to identify benefits for all agents along chains in the fish industry.

  7. Hierarchical composites: Analysis of damage evolution based on fiber bundle model

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon

    2011-01-01

    A computational model of multiscale composites is developed on the basis of the fiber bundle model with the hierarchical load sharing rule, and employed to study the effect of the microstructures of hierarchical composites on their damage resistance. Two types of hierarchical materials were consi...
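
    A minimal fiber bundle simulation with equal load sharing is sketched below to convey the basic model; the hierarchical load-sharing rule used in the paper (redistribution within branches of a hierarchy) is not reproduced here, and the uniform strength thresholds are an assumption.

        # Sketch: fiber bundle model with equal load sharing and uniform strength
        # thresholds. Illustrative only; NOT the hierarchical load-sharing rule.
        import numpy as np

        rng = np.random.default_rng(11)
        N = 10_000
        thresholds = rng.uniform(0.0, 1.0, size=N)         # fiber strength thresholds

        def surviving_fraction(load_per_fiber):
            """Iterate load redistribution until no further fibers break."""
            alive = np.ones(N, dtype=bool)
            while alive.any():
                stress = load_per_fiber * N / alive.sum()  # equal sharing among survivors
                newly_broken = alive & (thresholds < stress)
                if not newly_broken.any():
                    break
                alive &= ~newly_broken
            return alive.sum() / N

        for load in (0.10, 0.20, 0.24, 0.26):
            print(f"load {load:.2f} per fiber -> surviving fraction {surviving_fraction(load):.3f}")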

  8. Hierarchical modeling of cluster size in wildlife surveys

    Science.gov (United States)

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
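
    The cluster-size bias described above is easy to reproduce in a quick simulation: when detection probability increases with cluster size, the mean cluster size in the detected sample exceeds the population mean. The Poisson size distribution and logistic detection function below are invented for illustration.

        # Sketch: size-biased detection inflates the sample mean cluster size.
        # The size distribution and detection curve are assumptions.
        import numpy as np

        rng = np.random.default_rng(5)

        sizes = 1 + rng.poisson(2.0, size=100_000)          # true cluster sizes (>= 1)
        p_detect = 1 / (1 + np.exp(-(sizes - 3)))           # detectability grows with size
        detected = rng.uniform(size=sizes.size) < p_detect

        print("population mean cluster size:", round(float(sizes.mean()), 3))            # ~3.0
        print("sample mean cluster size    :", round(float(sizes[detected].mean()), 3))  # biased upward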

  9. Do means-end chains exist? Experimental tests of their hierarchicity, automatic spreading activation, directionality, and self-relevance

    DEFF Research Database (Denmark)

    Scholderer, Joachim; Grunert, Klaus G.

    2004-01-01

    Despite its popularity in consumer research, means-end chain (MEC) theory suffers from problems of unconfirmed validity. Theoretically, MECs can be cast as associative networks with a three-layered structure that should exhibit four properties: hierarchicity, automatic spreading activation, bidirectionality, and self-relevance. The predictions were tested in altogether six experiments, using the same basic methodology. Two sessions were held with each participant. In a pilot session, a set of conventional MEC representations was elicited from each participant using the laddering technique. From...

  10. A hierarchical community occurrence model for North Carolina stream fish

    Science.gov (United States)

    Midway, S.R.; Wagner, Tyler; Tracy, B.H.

    2016-01-01

    The southeastern USA is home to one of the richest—and most imperiled and threatened—freshwater fish assemblages in North America. For many of these rare and threatened species, conservation efforts are often limited by a lack of data. Drawing on a unique and extensive data set spanning over 20 years, we modeled occurrence probabilities of 126 stream fish species sampled throughout North Carolina, many of which occur more broadly in the southeastern USA. Specifically, we developed species-specific occurrence probabilities from hierarchical Bayesian multispecies models that were based on common land use and land cover covariates. We also used index of biotic integrity tolerance classifications as a second level in the model hierarchy; we identify this level as informative for our work, but it is flexible for future model applications. Based on the partial-pooling property of the models, we were able to generate occurrence probabilities for many imperiled and data-poor species in addition to highlighting a considerable amount of occurrence heterogeneity that supports species-specific investigations whenever possible. Our results provide critical species-level information on many threatened and imperiled species as well as information that may assist with re-evaluation of existing management strategies, such as the use of surrogate species. Finally, we highlight the use of a relatively simple hierarchical model that can easily be generalized for similar situations in which conventional models fail to provide reliable estimates for data-poor groups.

  11. Linguistic steganography on Twitter: hierarchical language modeling with manual interaction

    Science.gov (United States)

    Wilson, Alex; Blunsom, Phil; Ker, Andrew D.

    2014-02-01

    This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.

  12. Hierarchical Swarm Model: A New Approach to Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2010-01-01

    Full Text Available This paper presents a novel optimization model called hierarchical swarm optimization (HSO), which simulates the natural hierarchical complex system from where more complex intelligence can emerge for complex problem solving. This proposed model is intended to suggest ways that the performance of HSO-based algorithms on complex optimization problems can be significantly improved. This performance improvement is obtained by constructing the HSO hierarchies, which means that an agent in a higher level swarm can be composed of swarms of other agents from lower level and different swarms of different levels evolve on different spatiotemporal scale. A novel optimization algorithm (named PS2O), based on the HSO model, is instantiated and tested to illustrate the ideas of HSO model clearly. Experiments were conducted on a set of 17 benchmark optimization problems including both continuous and discrete cases. The results demonstrate remarkable performance of the PS2O algorithm on all chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms.

  13. Modelling green and lean supply chains

    DEFF Research Database (Denmark)

    Govindan, Kannan; Carvalho, Helena; Azevedo, Susana G.

    2017-01-01

    This manuscript proposes a model to support decision making and to help managers identify the best set of green and lean supply chain management practices to improve their eco-efficiency. To attain this objective, a mathematical model based on eco-efficiency concepts is suggested to overcome...... a strategic framework to support the design of eco-efficient supply chains....

  14. The Realized Hierarchical Archimedean Copula in Risk Modelling

    Directory of Open Access Journals (Sweden)

    Ostap Okhrin

    2017-06-01

    Full Text Available This paper introduces the concept of the realized hierarchical Archimedean copula (rHAC). The proposed approach inherits the ability of the copula to capture the dependencies among financial time series, and combines it with additional information contained in high-frequency data. The considered model does not suffer from the curse of dimensionality, and is able to accurately predict high-dimensional distributions. This flexibility is obtained by using a hierarchical structure in the copula. The time variability of the model is provided by daily forecasts of the realized correlation matrix, which is used to estimate the structure and the parameters of the rHAC. Extensive simulation studies show the validity of the estimator based on this realized correlation matrix, and its performance, in comparison to the benchmark models. The application of the estimator to one-day-ahead Value at Risk (VaR) prediction using high-frequency data exhibits good forecasting properties for a multivariate portfolio.

  15. Business Modeling - Supply Chain Management

    OpenAIRE

    Abdillah, Leon

    2017-01-01

    BM-SCM consists of: 1) Introduction, 2) Basic Concepts, 3) Inventory Management, 4) Forecasting, 5) Material Requirements, 6) Transportation Management, 7) Vendor Management, 8) Warehouse Management, 9) Cross Docking, 10) Third Party Logistics (3PLs), 11) IT in Supply Chain, and 12) Presentations.

  16. Learning Hierarchical User Interest Models from Web Pages

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We propose an algorithm for learning hierarchical user interest models from the Web pages users have browsed. In this algorithm, a user's interests are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum; in some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so the representation reflects users' interests more accurately. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models, and prune a user interest tree when the number of its nodes exceeds a given value. Finally, we show experimental results on a Chinese Web site.

  17. A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk

    Directory of Open Access Journals (Sweden)

    Lewei Duan

    2013-01-01

    Full Text Available A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending upon their frequency and size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.

  18. Modeling evolutionary dynamics of epigenetic mutations in hierarchically organized tumors.

    Directory of Open Access Journals (Sweden)

    Andrea Sottoriva

    2011-05-01

    Full Text Available The cancer stem cell (CSC) concept is a highly debated topic in cancer research. While experimental evidence in favor of the cancer stem cell theory is apparently abundant, the results are often criticized as being difficult to interpret. An important reason for this is that most experimental data that support this model rely on transplantation studies. In this study we use a novel cellular Potts model to elucidate the dynamics of established malignancies that are driven by a small subset of CSCs. Our results demonstrate that epigenetic mutations that occur during mitosis display highly altered dynamics in CSC-driven malignancies compared to a classical, non-hierarchical model of growth. In particular, the heterogeneity observed in CSC-driven tumors is considerably higher. We speculate that this feature could be used in combination with epigenetic (methylation) sequencing studies of human malignancies to prove or refute the CSC hypothesis in established tumors without the need for transplantation. Moreover our tumor growth simulations indicate that CSC-driven tumors display evolutionary features that can be considered beneficial during tumor progression. Besides an increased heterogeneity they also exhibit properties that allow the escape of clones from local fitness peaks. This leads to more aggressive phenotypes in the long run and makes the neoplasm more adaptable to stringent selective forces such as cancer treatment. Indeed when therapy is applied the clone landscape of the regrown tumor is more aggressive with respect to the primary tumor, whereas the classical model demonstrated similar patterns before and after therapy. Understanding these often counter-intuitive fundamental properties of (non-hierarchically organized) malignancies is a crucial step in validating the CSC concept as well as providing insight into the therapeutical consequences of this model.

  19. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  20. Tractography segmentation using a hierarchical Dirichlet processes mixture model.

    Science.gov (United States)

    Wang, Xiaogang; Grimson, W Eric L; Westin, Carl-Fredrik

    2011-01-01

    In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is automatically learned driven by data with a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Hierarchical decision modeling essays in honor of Dundar F. Kocaoglu

    CERN Document Server

    2016-01-01

    This volume, developed in honor of Dr. Dundar F. Kocaoglu, aims to demonstrate the applications of the Hierarchical Decision Model (HDM) in different sectors and its capacity in decision analysis. It is comprised of essays from noted scholars, academics and researchers of engineering and technology management around the world. This book is organized into four parts: Technology Assessment, Strategic Planning, National Technology Planning and Decision Making Tools. Dr. Dundar F. Kocaoglu is one of the pioneers of multiple decision models using hierarchies, and creator of the HDM in decision analysis. HDM is a mission-oriented method for evaluation and/or selection among alternatives. A wide range of alternatives can be considered, including but not limited to, different technologies, projects, markets, jobs, products, cities to live in, houses to buy, apartments to rent, and schools to attend. Dr. Kocaoglu’s approach has been adopted for decision problems in many industrial sectors, including electronics rese...

  2. TYPE Ia SUPERNOVA LIGHT CURVE INFERENCE: HIERARCHICAL MODELS IN THE OPTICAL AND NEAR-INFRARED

    International Nuclear Information System (INIS)

    Mandel, Kaisey S.; Narayan, Gautham; Kirshner, Robert P.

    2011-01-01

    We have constructed a comprehensive statistical model for Type Ia supernova (SN Ia) light curves spanning optical through near-infrared (NIR) data. A hierarchical framework coherently models multiple random and uncertain effects, including intrinsic supernova (SN) light curve covariances, dust extinction and reddening, and distances. An improved BAYESN Markov Chain Monte Carlo code computes probabilistic inferences for the hierarchical model by sampling the global probability density of parameters describing individual SNe and the population. We have applied this hierarchical model to optical and NIR data of 127 SNe Ia from PAIRITEL, CfA3, Carnegie Supernova Project, and the literature. We find an apparent population correlation between the host galaxy extinction A_V and the ratio of total-to-selective dust absorption R_V. For SNe with low dust extinction, values of R_V ≈ 2.5-2.9 are found, while at high extinctions, A_V ≳ 1, low values of R_V < 2 are favored. The NIR luminosities are excellent standard candles and are less sensitive to dust extinction. They exhibit low correlation with optical peak luminosities, and thus provide independent information on distances. The combination of NIR and optical data constrains the dust extinction and improves the predictive precision of individual SN Ia distances by about 60%. Using cross-validation, we estimate an rms distance modulus prediction error of 0.11 mag for SNe with optical and NIR data versus 0.15 mag for SNe with optical data alone. Continued study of SNe Ia in the NIR is important for improving their utility as precise and accurate cosmological distance indicators.

  3. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar; Halldorsson, Benedikt; Hrafnkelsson, Birgir; Jonsson, Sigurjon

    2018-01-01

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.

  4. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar

    2018-04-17

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.

  5. Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.

    Science.gov (United States)

    Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T

    2017-07-01

    Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
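
    The decision rule summarized above, choosing the action whose posterior expected loss (Bayes risk) is smallest, can be sketched as follows. The posterior samples and the loss functions are hypothetical placeholders, not the authors' specifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of a hospital's safety signal
# (observed minus expected AMDE rate).
signal_samples = rng.normal(loc=0.8, scale=0.5, size=5000)

# Hypothetical loss functions: cost of each regulatory action as a
# function of the realized safety signal s.
losses = {
    "no_action":   lambda s: 10.0 * np.maximum(s, 0.0),        # harm if the signal is real
    "investigate": lambda s: 2.0 + np.abs(s),                  # fixed cost plus residual harm
    "recall":      lambda s: 6.0 + 5.0 * np.maximum(-s, 0.0),  # costly if the signal is spurious
}

# Posterior (Bayes) risk = expected loss over the posterior of the signal;
# the preferred action is the one with the smallest Bayes risk.
bayes_risk = {a: float(np.mean(loss(signal_samples))) for a, loss in losses.items()}
print(bayes_risk, "-> preferred action:", min(bayes_risk, key=bayes_risk.get))
```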

  6. Hierarchical self-assembly of PDMA-b-PS chains into granular nanoparticles: genesis and fate.

    Science.gov (United States)

    Bianchi, Alberto; Mauri, Michele; Bonetti, Simone; Koynov, Kaloian; Kappl, Michael; Lieberwirth, Ingo; Butt, Hans-Jürgen; Simonutti, Roberto

    2014-12-01

    The hierarchical self-assembly of an amphiphilic block copolymer, poly(N,N-dimethylacrylamide)-block-polystyrene with a very short hydrophilic block (PDMA10-b-PS62), into large granular nanoparticles is reported. While these nanoparticles are stable in water, their disaggregation can be induced either mechanically (i.e., by applying a force via the tip of the cantilever of an atomic force microscope (AFM)) or by partial hydrolysis of the acrylamide groups. AFM force spectroscopy images show the rupture of the particle as a combination of collapse and flow, while scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images of partly hydrolyzed nanoparticles provide a clear picture of the granular structure. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  8. Model Checking Structured Infinite Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid

    2008-01-01

    In the past, probabilistic model checking has mostly been restricted to finite-state models. This thesis explores the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special

  9. GSMNet: A Hierarchical Graph Model for Moving Objects in Networks

    Directory of Open Access Journals (Sweden)

    Hengcai Zhang

    2017-03-01

    Full Text Available Existing data models for moving objects in networks are often limited in flexibly controlling the granularity of network representation and the cost of location updates, and do not encompass semantic information such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap of traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. A set of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic information. Then, we develop a prototype system with the native graph database system Neo4J to implement the proposed GSMNet model. In the experiment, we conduct the performance evaluation using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages with regard to representing semantic information and controlling the cost of location updates.

  10. Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study

    NARCIS (Netherlands)

    Rijsdijk, F.V.; Vernon, P.A.; Boomsma, D.I.

    2002-01-01

    Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard

  11. Providing hierarchical approach for measuring supply chain performance using AHP and DEMATEL methodologies

    Directory of Open Access Journals (Sweden)

    Ali Najmi

    2010-06-01

    Full Text Available Measuring the performance of a supply chain is normally a function of various parameters. Such a problem often involves a multiple criteria decision making (MCDM) problem where different criteria need to be defined and calculated properly. During the past two decades, the analytic hierarchy process (AHP) and DEMATEL have been some of the most popular MCDM approaches for prioritizing various attributes. This paper uses a new methodology which combines AHP and DEMATEL to rank various parameters affecting the performance of the supply chain. DEMATEL is used for understanding the relationships between the comparison metrics and AHP is used for the integration to provide a value for the overall performance.
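
    As a concrete illustration of the AHP step, the sketch below derives priority weights from a pairwise comparison matrix using the common geometric-mean approximation to the principal eigenvector. The metrics and judgments are hypothetical, and the DEMATEL part of the combined methodology is not shown here.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])   # row geometric means
    return gm / gm.sum()

# Hypothetical pairwise comparison of three supply chain performance metrics
# (cost, delivery reliability, flexibility) on Saaty's 1-9 scale.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))   # roughly [0.65, 0.23, 0.12]
```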

  12. Security Modeling on the Supply Chain Networks

    Directory of Open Access Journals (Sweden)

    Marn-Ling Shing

    2007-10-01

    Full Text Available In order to keep the price down, a purchaser sends out the request for quotation to a group of suppliers in a supply chain network. The purchaser will then choose a supplier with the best combination of price and quality. A potential supplier will try to collect the related information about other suppliers so he/she can offer the best bid to the purchaser. Therefore, confidentiality becomes an important consideration for the design of a supply chain network. Chen et al. have proposed the application of the Bell-LaPadula model in the design of a secured supply chain network. In the Bell-LaPadula model, a subject can be in one of different security clearances and an object can be in one of various security classifications. All the possible combinations of (Security Clearance, Classification) pairs in the Bell-LaPadula model can be thought of as different states in the Markov Chain model. This paper extends the work done by Chen et al., provides more details on the Markov Chain model and illustrates how to use it to monitor the security state transition in the supply chain network.
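
    A minimal sketch of the monitoring idea follows: (Security Clearance, Classification) pairs are treated as Markov states and the state distribution is propagated through a transition matrix. The states and probabilities are illustrative and are not taken from Chen et al. or from this paper.

```python
import numpy as np

# Hypothetical security states and transition probabilities.
states = ["(low, public)", "(low, confidential)", "(high, confidential)", "(high, secret)"]
P = np.array([
    [0.85, 0.10, 0.05, 0.00],
    [0.05, 0.80, 0.10, 0.05],
    [0.00, 0.10, 0.80, 0.10],
    [0.00, 0.00, 0.15, 0.85],
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution

# Distribution over security states after 10 transitions, starting from state 0.
dist = np.zeros(len(states))
dist[0] = 1.0
for _ in range(10):
    dist = dist @ P
print(dict(zip(states, np.round(dist, 3))))
```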

  13. MODELING THE RED SEQUENCE: HIERARCHICAL GROWTH YET SLOW LUMINOSITY EVOLUTION

    International Nuclear Information System (INIS)

    Skelton, Rosalind E.; Bell, Eric F.; Somerville, Rachel S.

    2012-01-01

    We explore the effects of mergers on the evolution of massive early-type galaxies by modeling the evolution of their stellar populations in a hierarchical context. We investigate how a realistic red sequence population set up by z ∼ 1 evolves under different assumptions for the merger and star formation histories, comparing changes in color, luminosity, and mass. The purely passive fading of existing red sequence galaxies, with no further mergers or star formation, results in dramatic changes at the bright end of the luminosity function and color-magnitude relation. Without mergers there is too much evolution in luminosity at a fixed space density compared to observations. The change in color and magnitude at a fixed mass resembles that of a passively evolving population that formed relatively recently, at z ∼ 2. Mergers among the red sequence population ('dry mergers') occurring after z = 1 build up mass, counteracting the fading of the existing stellar populations to give smaller changes in both color and luminosity for massive galaxies. By allowing some galaxies to migrate from the blue cloud onto the red sequence after z = 1 through gas-rich mergers, younger stellar populations are added to the red sequence. This manifestation of the progenitor bias increases the scatter in age and results in even smaller changes in color and luminosity between z = 1 and z = 0 at a fixed mass. The resultant evolution appears much slower, resembling the passive evolution of a population that formed at high redshift (z ∼ 3-5), and is in closer agreement with observations. We conclude that measurements of the luminosity and color evolution alone are not sufficient to distinguish between the purely passive evolution of an old population and cosmologically motivated hierarchical growth, although these scenarios have very different implications for the mass growth of early-type galaxies over the last half of cosmic history.

  14. Hierarchical modeling and its numerical implementation for layered thin elastic structures

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)

    2017-05-15

    Thin elastic structures such as beam- and plate-like structures and laminates are characterized by their small thickness, which leads to classical plate and laminate theories in which the displacement fields through the thickness are assumed to be linear or higher-order polynomials. These classical theories are either insufficient to represent the complex stress variation through the thickness or may encounter the accuracy-computational cost dilemma. In order to overcome the inherent problems of classical theories, the concept of hierarchical modeling has emerged. In hierarchical modeling, hierarchical models with different model levels are selected and combined within a structure domain, so that the modeling error is distributed as uniformly as possible throughout the problem domain. The purpose of the current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composites. For this goal, the hierarchical models are constructed and the hierarchical modeling is implemented by selectively adjusting the level of the hierarchical models. In addition, the major characteristics of the hierarchical models are investigated through numerical experiments.

  15. Bayesian Hierarchical Random Effects Models in Forensic Science

    Directory of Open Access Journals (Sweden)

    Colin G. G. Aitken

    2018-04-01

    Full Text Available Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package and references to SAILR are made as appropriate.

  16. Bayesian Hierarchical Random Effects Models in Forensic Science.

    Science.gov (United States)

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package and references to SAILR are made as appropriate.

  17. Renormalization group analysis of a simple hierarchical fermion model

    International Nuclear Information System (INIS)

    Dorlas, T.C.

    1991-01-01

    A simple hierarchical fermion model is constructed which gives rise to an exact renormalization transformation in a 2-dimensional parameter space. The behaviour of this transformation is studied. It has two hyperbolic fixed points for which the existence of a global critical line is proven. The asymptotic behaviour of the transformation is used to prove the existence of the thermodynamic limit in a certain domain in parameter space. Also the existence of a continuum limit for these theories is investigated using information about the asymptotic renormalization behaviour. It turns out that the 'trivial' fixed point gives rise to a two-parameter family of continuum limits corresponding to that part of parameter space where the renormalization trajectories originate at this fixed point. Although the model is not very realistic, it serves as a simple example of the application of the renormalization group to proving the existence of the thermodynamic limit and the continuum limit of lattice models. Moreover, it illustrates possible complications that can arise in global renormalization group behaviour, and that might also be present in other models where no global analysis of the renormalization transformation has yet been achieved. (orig.)

  18. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  19. Chain binomial models and binomial autoregressive processes.

    Science.gov (United States)

    Weiss, Christian H; Pollett, Philip K

    2012-09-01

    We establish a connection between a class of chain-binomial models of use in ecology and epidemiology and binomial autoregressive (AR) processes. New results are obtained for the latter, including expressions for the lag-conditional distribution and related quantities. We focus on two types of chain-binomial model, extinction-colonization and colonization-extinction models, and present two approaches to parameter estimation. The asymptotic distributions of the resulting estimators are studied, as well as their finite-sample performance, and we give an application to real data. A connection is made with standard AR models, which also has implications for parameter estimation. © 2011, The International Biometric Society.
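
    A small simulation sketch of an extinction-colonization chain-binomial process of the general kind discussed here is given below. The parameterization is generic and not necessarily the one analyzed in the paper.

```python
import numpy as np

def simulate_chain_binomial(n_patches, x0, p_survive, p_colonize, steps, seed=0):
    """Simulate occupied-patch counts in a simple extinction-colonization model.

    Each occupied patch survives independently with probability p_survive, and
    each empty patch is colonized with probability p_colonize times the
    occupied fraction (a generic illustration only).
    """
    rng = np.random.default_rng(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        survivors = rng.binomial(x, p_survive)
        colonized = rng.binomial(n_patches - x, p_colonize * x / n_patches)
        x = survivors + colonized
        path.append(x)
    return path

print(simulate_chain_binomial(n_patches=50, x0=10, p_survive=0.9,
                              p_colonize=0.3, steps=20))
```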

  20. Modelling Lean and Green Supply Chain

    Science.gov (United States)

    Duarte, Susana Carla Vieira Lino Medina

    The success of an organization depends on the effective control of its supply chain. It is important to recognize new opportunities for organization and its supply chain. In the last few years the approach to lean, agile, resilient and green supply chain paradigms has been addressed in the scientific literature. Research in this field shows that the integration of these concepts revealed some contradictions among so many paradigms. This thesis is mainly focused on the lean and green approaches. Thirteen different management frameworks, embodied in awards, standards and tools were studied to understand if they could contribute for the modelling process of a lean and green approach. The study reveals a number of categories that are common in most management frameworks, providing adequate conditions for a lean and green supply chain transformation. A conceptual framework for the evaluation of a lean and green organization's supply chain was proposed. The framework considers six key criteria, namely, leadership, people, strategic planning, stakeholders, processes and results. It was proposed an assessment method considering a criteria score for each criterion. The purpose is to understand how lean and green supply chain can be compatible, using principles, practices, techniques or tools (i.e. elements) that support both, a lean and a green approach, in all key criteria. A case study in the automotive upstream supply chain was performed to understand more deeply if the elements proposed for the conceptual framework could be implemented in a real-scenario. Based on the conceptual framework and the case study, a roadmap to achieve a lean-green transformation is presented. The proposed roadmap revealed its contribution to the understanding on how and when an organization's supply chain should apply the lean and green elements. This study is relevant to practice, as it may assist managers in the adoption of a lean and green supply chain approach, giving insights for the

  1. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…

  2. A Bayesian model for binary Markov chains

    Directory of Open Access Journals (Sweden)

    Belkheir Essebbar

    2004-02-01

    Full Text Available This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on the Jeffreys prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
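
    As a simplified illustration of the estimation problem, the snippet below computes Beta posteriors for the transition probabilities of a binary Markov chain under independent Jeffreys Beta(1/2, 1/2) priors. The note's actual prior allows the transition probabilities to be correlated and its estimator is approximated by MCMC rather than obtained in closed form, so this is only a conjugate stand-in.

```python
import numpy as np

def transition_posteriors(chain):
    """Beta posterior parameters for P(next = 1 | current = i), i = 0, 1,
    under independent Jeffreys Beta(1/2, 1/2) priors on each row."""
    chain = np.asarray(chain)
    counts = np.zeros((2, 2))
    for a, b in zip(chain[:-1], chain[1:]):
        counts[a, b] += 1
    # posterior for row i is Beta(0.5 + n_i1, 0.5 + n_i0)
    return {i: (0.5 + counts[i, 1], 0.5 + counts[i, 0]) for i in (0, 1)}

observed = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0]
for state, (a, b) in transition_posteriors(observed).items():
    print(f"P(1 | {state}) ~ Beta({a:.1f}, {b:.1f}), posterior mean {a / (a + b):.2f}")
```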

  3. Spin chain model for correlated quantum channels

    Energy Technology Data Exchange (ETDEWEB)

    Rossini, Davide [International School for Advanced Studies SISSA/ISAS, via Beirut 2-4, I-34014 Trieste (Italy); Giovannetti, Vittorio; Montangero, Simone [NEST-CNR-INFM and Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)], E-mail: monta@sns.it

    2008-11-15

    We analyze the quality of the quantum information transmission along a correlated quantum channel by studying the average fidelity between input and output states and the average output purity, giving bounds for the entropy of the channel. Noise correlations in the channel are modeled by the coupling of each channel use with an element of a one-dimensional interacting quantum spin chain. Criticality of the environment chain is seen to emerge in the changes of the fidelity and of the purity.

  4. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayes modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling of an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in an open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model presents a hierarchical structure with two levels; all dome collapse flows and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the data set from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we only have sparse data, a ubiquitous problem in volcanology.
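
    The flavor of the partial pooling described here can be conveyed with a rough empirical-Bayes stand-in for the full hierarchical Bayesian model: per-volcano regression slopes are shrunk toward the population mean in proportion to their uncertainty. All data below are synthetic, not from FlowDat, and the Kass and Steffey machinery is reduced to a method-of-moments approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, true_tau = -0.25, 0.05          # population mean slope and spread (synthetic)
volcanoes = {name: rng.normal(true_mu, true_tau)
             for name in ["Colima", "Merapi", "SoufriereHills", "Unzen", "Semeru"]}

est, se2 = {}, {}
for name, beta in volcanoes.items():
    n = int(rng.integers(5, 40))         # sparse to rich per-volcano datasets
    x = rng.uniform(4, 8, n)             # e.g. log10 flow volume
    y = beta * x + rng.normal(0, 0.1, n) # e.g. a mobility metric such as H/L
    X = np.column_stack([np.ones(n), x])
    coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (n - 2)
    est[name] = coef[1]                               # per-volcano slope
    se2[name] = sigma2 * np.linalg.inv(X.T @ X)[1, 1] # its sampling variance

mu_hat = np.mean(list(est.values()))
tau2_hat = max(np.var(list(est.values())) - np.mean(list(se2.values())), 1e-6)
for name in est:
    w = tau2_hat / (tau2_hat + se2[name])             # shrinkage weight
    pooled = w * est[name] + (1 - w) * mu_hat
    print(f"{name:>14}: raw slope {est[name]:+.3f} -> partially pooled {pooled:+.3f}")
```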

  5. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and by using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. Then an algorithm is presented in order to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and the critical risks of defects. This paper further develops accident causation network modeling methods, which can provide guidance for specific maintenance measures.
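
    To make the mining step concrete, here is a minimal (unoptimized) Apriori sketch over sets of defect/structure codes; the paper itself uses an improved Apriori variant on real inspection records, and the codes below are hypothetical.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets whose support is at least min_support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    frequent = {}
    candidates = [frozenset([i]) for i in sorted({i for t in transactions for i in t})]
    while candidates:
        supports = {c: sum(c <= t for t in transactions) / n for c in candidates}
        current = [c for c, s in supports.items() if s >= min_support]
        frequent.update({c: supports[c] for c in current})
        # join step: combine frequent k-itemsets that differ in a single element
        candidates = list({a | b for a, b in combinations(current, 2)
                           if len(a | b) == len(a) + 1})
    return frequent

records = [
    {"lining_crack", "water_leakage"},
    {"lining_crack", "water_leakage", "spalling"},
    {"water_leakage", "spalling"},
    {"lining_crack", "water_leakage"},
]
for itemset, support in apriori(records, min_support=0.5).items():
    print(sorted(itemset), round(support, 2))
```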

  6. Production optimisation in the petrochemical industry by hierarchical multivariate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Magnus; Furusjoe, Erik; Jansson, Aasa

    2004-06-01

    This project demonstrates the advantages of applying hierarchical multivariate modelling in the petrochemical industry in order to increase knowledge of the total process. The models indicate possible ways to optimise the process regarding the use of energy and raw material, which is directly linked to the environmental impact of the process. The refinery of Nynaes Refining AB (Goeteborg, Sweden) has acted as a demonstration site in this project. The models developed for the demonstration site resulted in: Detection of an unknown process disturbance and suggestions of possible causes; Indications on how to increase the yield in combination with energy savings; The possibility to predict product quality from on-line process measurements, making the results available at a higher frequency than customary laboratory analysis; Quantification of the gradually lowered efficiency of heat transfer in the furnace and increased fuel consumption as an effect of soot build-up on the furnace coils; Increased knowledge of the relation between production rate and the efficiency of the heat exchangers. This report is one of two reports from the project. It contains a technical discussion of the result with some degree of detail. A shorter and more easily accessible report is also available, see IVL report B1586-A.

  7. Chains of mean-field models

    International Nuclear Information System (INIS)

    Hamed Hassani, S; Macris, Nicolas; Urbanke, Ruediger

    2012-01-01

    We consider a collection of Curie–Weiss (CW) spin systems, possibly with a random field, each of which is placed along the positions of a one-dimensional chain. The CW systems are coupled together by a Kac-type interaction in the longitudinal direction of the chain and by an infinite-range interaction in the direction transverse to the chain. Our motivations for studying this model come from recent findings in the theory of error-correcting codes based on spatially coupled graphs. We find that, although much simpler than the codes, the model studied here already displays similar behavior. We are interested in the van der Waals curve in a regime where the size of each Curie–Weiss model tends to infinity, and the length of the chain and range of the Kac interaction are large but finite. Below the critical temperature, and with appropriate boundary conditions, there appears a series of equilibrium states representing kink-like interfaces between the two equilibrium states of the individual system. The van der Waals curve oscillates periodically around the Maxwell plateau. These oscillations have a period inversely proportional to the chain length and an amplitude exponentially small in the range of the interaction; in other words, the spinodal points of the chain model lie exponentially close to the phase transition threshold. The amplitude of the oscillations is closely related to a Peierls–Nabarro free energy barrier for the motion of the kink along the chain. Analogies to similar phenomena and their possible algorithmic significance for graphical models of interest in coding theory and theoretical computer science are pointed out

  8. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    Science.gov (United States)

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear Ursus arctos abundance and these have not accounted for the variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.

  9. Model Checking Infinite-State Markov Chains

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Cloth, L.

    2004-01-01

    In this paper algorithms for model checking CSL (continuous stochastic logic) against infinite-state continuous-time Markov chains of so-called quasi birth-death type are developed. In doing so we extend the applicability of CSL model checking beyond the recently proposed case for finite-state

  10. A joint model for multivariate hierarchical semicontinuous data with replications.

    Science.gov (United States)

    Kassahun-Yimer, Wondwosen; Albert, Paul S; Lipsky, Leah M; Nansel, Tonja R; Liu, Aiyi

    2017-01-01

    Longitudinal data are often collected in biomedical applications in such a way that measurements on more than one response are taken from a given subject repeatedly over time. For some problems, these multiple profiles need to be modeled jointly to get insight into the joint evolution and/or association of these responses over time. In practice, such longitudinal outcomes may have many zeros that need to be accounted for in the analysis. For example, in dietary intake studies, as we focus on in this paper, some food components are eaten daily by almost all subjects, while others are consumed episodically, where individuals have time periods where they do not eat these components followed by periods where they do. These episodically consumed foods need to be adequately modeled to account for the many zeros that are encountered. In this paper, we propose a joint model to analyze multivariate hierarchical semicontinuous data characterized by many zeros and more than one replicate observation at each measurement occasion. This approach allows for different probability mechanisms for describing the zero behavior as compared with the mean intake given that the individual consumes the food. To deal with the potentially large number of multivariate profiles, we use a pairwise model fitting approach that was developed in the context of multivariate Gaussian random effects models with a large number of multivariate components. The novelty of the proposed approach is that it incorporates: (1) multivariate, possibly correlated, response variables; (2) within subject correlation resulting from repeated measurements taken from each subject; (3) many zero observations; (4) overdispersion; and (5) replicate measurements at each visit time.

  11. Adaptive hierarchical grid model of water-borne pollutant dispersion

    Science.gov (United States)

    Borthwick, A. G. L.; Marchant, R. D.; Copeland, G. J. M.

    Water pollution by industrial and agricultural waste is an increasingly major public health issue. It is therefore important for water engineers and managers to be able to predict accurately the local behaviour of water-borne pollutants. This paper describes the novel and efficient coupling of dynamically adaptive hierarchical grids with standard solvers of the advection-diffusion equation. Adaptive quadtree grids are able to focus on regions of interest such as pollutant fronts, while retaining economy in the total number of grid elements through selective grid refinement. Advection is treated using Lagrangian particle tracking. Diffusion is solved separately using two grid-based methods; one is by explicit finite differences, the other a diffusion-velocity approach. Results are given in two dimensions for pure diffusion of an initially Gaussian plume, advection-diffusion of the Gaussian plume in the rotating flow field of a forced vortex, and the transport of species in a rectangular channel with side wall boundary layers. Close agreement is achieved with analytical solutions of the advection-diffusion equation and simulations from a Lagrangian random walk model. An application to Sepetiba Bay, Brazil is included to demonstrate the method with complex flows and topography.

  12. Hierarchical statistical modeling of xylem vulnerability to cavitation.

    Science.gov (United States)

    Ogle, Kiona; Barber, Jarrett J; Willson, Cynthia; Thompson, Brenda

    2009-01-01

    Cavitation of xylem elements diminishes the water transport capacity of plants, and quantifying xylem vulnerability to cavitation is important to understanding plant function. Current approaches to analyzing hydraulic conductivity (K) data to infer vulnerability to cavitation suffer from problems such as the use of potentially unrealistic vulnerability curves, difficulty interpreting parameters in these curves, a statistical framework that ignores sampling design, and an overly simplistic view of uncertainty. This study illustrates how two common curves (exponential-sigmoid and Weibull) can be reparameterized in terms of meaningful parameters: maximum conductivity (k(sat)), water potential (-P) at which percentage loss of conductivity (PLC) = X% (P(X)), and the slope of the PLC curve at P(X) (S(X)), a 'sensitivity' index. We provide a hierarchical Bayesian method for fitting the reparameterized curves to K(H) data. We illustrate the method using data for roots and stems of two populations of Juniperus scopulorum and test for differences in k(sat), P(X), and S(X) between different groups. Two important results emerge from this study. First, the Weibull model is preferred because it produces biologically realistic estimates of PLC near P = 0 MPa. Second, stochastic embolisms contribute an important source of uncertainty that should be included in such analyses.
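
    For reference, the Weibull vulnerability curve mentioned here can be fitted by ordinary nonlinear least squares as sketched below (synthetic data; SciPy assumed available). The paper's actual contribution, the reparameterization in terms of P(X) and S(X) and the hierarchical Bayesian fit, is not reproduced; only the curve form and a derived P50 are shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_plc(P, b, c):
    """Weibull vulnerability curve: PLC (%) at water potential magnitude P (MPa)."""
    return 100.0 * (1.0 - np.exp(-(P / b) ** c))

# Synthetic PLC measurements (illustrative, not the Juniperus data).
rng = np.random.default_rng(2)
P = np.linspace(0.5, 8.0, 20)
plc_obs = weibull_plc(P, b=4.0, c=3.0) + rng.normal(0, 5, P.size)

(b_hat, c_hat), _ = curve_fit(weibull_plc, P, plc_obs, p0=[3.0, 2.0],
                              bounds=(0.01, np.inf))
p50 = b_hat * np.log(2.0) ** (1.0 / c_hat)   # water potential at which PLC = 50%
print(f"b = {b_hat:.2f}, c = {c_hat:.2f}, P50 = {p50:.2f} MPa")
```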

  13. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  14. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  15. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  16. A novel Bayesian hierarchical model for road safety hotspot prediction.

    Science.gov (United States)

    Fawcett, Lee; Thorpe, Neil; Matthews, Joseph; Kremer, Karsten

    2017-02-01

    In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation - commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period - to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our
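
    A stylized stand-in for the ranking step is sketched below using a simple Poisson-gamma model: the accident-prediction-model estimate acts as a prior mean, multi-year counts update it, and sites are scored by the posterior predictive probability of exceeding a threshold count. All numbers are hypothetical and the paper's full hierarchical specification is not reproduced.

```python
import numpy as np
from scipy.stats import nbinom

def exceedance_prob(counts, prior_mean, prior_strength, threshold):
    """P(next-year count >= threshold) under a Poisson-gamma model.

    prior_mean plays the role of the global accident prediction model estimate;
    prior_strength (in pseudo-years) controls how strongly it is weighted
    against the observed counts.
    """
    a = prior_mean * prior_strength + np.sum(counts)   # posterior gamma shape
    b = prior_strength + len(counts)                   # posterior gamma rate
    # posterior predictive is negative binomial with n = a, p = b / (b + 1)
    return nbinom.sf(threshold - 1, a, b / (b + 1.0))

# Hypothetical sites: observed counts over three years and APM-predicted mean rate.
sites = {"A": ([6, 8, 7], 4.0), "B": ([2, 1, 3], 4.0), "C": ([5, 4, 9], 5.0)}
for name, (counts, apm_rate) in sites.items():
    p = exceedance_prob(counts, prior_mean=apm_rate, prior_strength=2.0, threshold=8)
    print(f"site {name}: P(next-year count >= 8) = {p:.3f}")
```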

  17. Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    National Research Council Canada - National Science Library

    Rodriguez, June F

    2008-01-01

    .... More specifically, investigating how to accurately aggregate hierarchical lower-level (higher resolution) models into the next higher-level in order to reduce the complexity of the overall simulation model...

  18. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost ... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  19. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  20. A Bayesian hierarchical model for demand curve analysis.

    Science.gov (United States)

    Ho, Yen-Yi; Nhu Vo, Tien; Chu, Haitao; Luo, Xianghua; Le, Chap T

    2018-07-01

    Drug self-administration experiments are a frequently used approach to assessing the abuse liability and reinforcing property of a compound. It has been used to assess the abuse liabilities of various substances such as psychomotor stimulants and hallucinogens, food, nicotine, and alcohol. The demand curve generated from a self-administration study describes how demand of a drug or non-drug reinforcer varies as a function of price. With the approval of the 2009 Family Smoking Prevention and Tobacco Control Act, demand curve analysis provides crucial evidence to inform the US Food and Drug Administration's policy on tobacco regulation, because it produces several important quantitative measurements to assess the reinforcing strength of nicotine. The conventional approach popularly used to analyze the demand curve data is individual-specific non-linear least square regression. The non-linear least square approach sets out to minimize the residual sum of squares for each subject in the dataset; however, this one-subject-at-a-time approach does not allow for the estimation of between- and within-subject variability in a unified model framework. In this paper, we review the existing approaches to analyze the demand curve data, non-linear least square regression, and the mixed effects regression and propose a new Bayesian hierarchical model. We conduct simulation analyses to compare the performance of these three approaches and illustrate the proposed approaches in a case study of nicotine self-administration in rats. We present simulation results and discuss the benefits of using the proposed approaches.
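
    The individual-level nonlinear least-squares baseline that the paper compares against can be sketched as below, using the exponential demand equation as an example functional form with a fixed range constant. The data are synthetic; the mixed-effects and Bayesian hierarchical alternatives discussed in the paper are not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.0  # range constant, held fixed for this illustration

def exp_demand(C, Q0, alpha):
    """Exponential demand equation: predicted log10 consumption at unit price C."""
    return np.log10(Q0) + K * (np.exp(-alpha * Q0 * C) - 1.0)

# Synthetic consumption data for a single subject (illustrative values only).
price = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
rng = np.random.default_rng(3)
log_q = exp_demand(price, Q0=30.0, alpha=0.004) + rng.normal(0, 0.05, price.size)

(Q0_hat, alpha_hat), _ = curve_fit(exp_demand, price, log_q, p0=[20.0, 0.01],
                                   bounds=([1.0, 1e-6], [1000.0, 1.0]))
print(f"Q0 = {Q0_hat:.1f}, alpha = {alpha_hat:.4f}")
```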

  1. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments and directions of this research area, this paper provides a content analysis of 134 carefully identified papers on quantitative, formal models that address sustainability aspects in the forward SC. It was found that a preponderance of the publications and models appeared in a limited set of six journals...

  2. Logistics Chains in Freight Transport Modelling

    NARCIS (Netherlands)

    Davydenko, I.Y.

    2015-01-01

    The flow of trade is not equal to transport flows, mainly due to the fact that warehouses and distribution facilities are used as intermediary stops on the way from production locations to the points of consumption or further rework of goods. This thesis proposes a logistics chain model, which

  3. Type Ia Supernova Light Curve Inference: Hierarchical Models for Nearby SN Ia in the Optical and Near Infrared

    Science.gov (United States)

    Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.

    2010-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (sigma(MH) = 0.11 +/- 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework incorporates coherently multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, peculiar velocity dispersion and distances, for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov Chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.

  4. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess the railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the main two quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial qualities parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry, for the main Portuguese railway line Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures lead to a better model fit

  5. Modeling for Green Supply Chain Evaluation

    Directory of Open Access Journals (Sweden)

    Elham Falatoonitoosi

    2013-01-01

    Full Text Available Green supply chain management (GSCM) has become a practical approach to develop environmental performance. Under strict regulations and stakeholder pressures, enterprises need to enhance and improve GSCM practices, which are influenced by both traditional and green factors. This study developed a causal evaluation model to guide the selection of qualified suppliers by prioritizing various criteria and mapping causal relationships to find effective criteria to improve the green supply chain. The aim of the case study was to model and examine the influential and important main GSCM practices, namely, green logistics, organizational performance, green organizational activities, environmental protection, and green supplier evaluation. In the case study, the decision-making trial and evaluation laboratory (DEMATEL) technique is applied to test the developed model. The result of the case study shows that only the “green supplier evaluation” and “green organizational activities” criteria of the model are in the cause group and the other criteria are in the effect group.
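
    The DEMATEL computation behind the cause/effect grouping reported here can be sketched as follows; the direct-influence ratings are hypothetical, not the case-study data.

```python
import numpy as np

criteria = ["green logistics", "organizational performance", "green organizational activities",
            "environmental protection", "green supplier evaluation"]
X = np.array([            # expert-rated direct influence on a 0-4 scale (hypothetical)
    [0, 2, 1, 3, 1],
    [1, 0, 2, 2, 1],
    [3, 3, 0, 3, 2],
    [1, 2, 1, 0, 1],
    [2, 3, 3, 2, 0],
], dtype=float)

N = X / max(X.sum(axis=1).max(), X.sum(axis=0).max())   # normalized direct-relation matrix
T = N @ np.linalg.inv(np.eye(len(X)) - N)               # total-relation matrix
D, R = T.sum(axis=1), T.sum(axis=0)                     # influence given / received
for name, prominence, relation in zip(criteria, D + R, D - R):
    group = "cause" if relation > 0 else "effect"
    print(f"{name:>32}: D+R = {prominence:.2f}, D-R = {relation:+.2f} ({group})")
```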

  6. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is the presentation of a possible method for avoiding unnecessary environmental risk and landscape use caused by needlessly large collection supply chains in recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part of the work a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In this static model it is assumed that technical conditions are constant. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of out-of-time recycled (treated or reused) products, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost taking the constraints into consideration. A considerable body of research has discussed supply chain design [8], but most of it concentrates on linear cost functions. In the case of this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects of the inverse supply chain led to the problem of choosing a
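
    Illustrative sketch (editorial addition, not the author's model): a generic genetic algorithm of the kind referred to, minimising an invented non-linear cost over binary open/close decisions for candidate collection points. The cost coefficients, penalty and GA settings are all assumptions made for the example.

      import numpy as np

      rng = np.random.default_rng(2)

      n_sites = 12                                  # candidate collection points (illustrative)
      fixed = rng.uniform(5.0, 15.0, n_sites)       # fixed opening costs
      demand = 100.0

      def total_cost(bits):
          """Illustrative non-linear cost: fixed costs + handling cost + risk term."""
          n_open = bits.sum()
          if n_open == 0:
              return 1e6                            # infeasible: nothing open
          handling = demand ** 1.3 / n_open         # non-linear economy of scale
          risk = 2.0 * n_open ** 1.5                # environmental-risk proxy grows with chain size
          return fixed @ bits + handling + risk

      def evolve(pop_size=60, generations=200, p_mut=0.05):
          pop = rng.integers(0, 2, size=(pop_size, n_sites))
          for _ in range(generations):
              costs = np.array([total_cost(ind) for ind in pop])
              # tournament selection
              a, b = rng.integers(0, pop_size, (2, pop_size))
              parents = pop[np.where(costs[a] < costs[b], a, b)]
              # one-point crossover on consecutive parent pairs
              cut = rng.integers(1, n_sites, pop_size // 2)
              children = parents.copy()
              for i, c in enumerate(cut):
                  children[2 * i, c:], children[2 * i + 1, c:] = (
                      parents[2 * i + 1, c:].copy(), parents[2 * i, c:].copy())
              # bit-flip mutation
              flips = rng.random(children.shape) < p_mut
              pop = np.where(flips, 1 - children, children)
          costs = np.array([total_cost(ind) for ind in pop])
          return pop[costs.argmin()], costs.min()

      best, cost = evolve()
      print("open sites:", np.flatnonzero(best), "cost:", round(cost, 2))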

  7. Modelling of cyclical stratigraphy using Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Kulatilake, P.H.S.W.

    1987-07-01

    State-of-the-art on modelling of cyclical stratigraphy using first-order Markov chains is reviewed. Shortcomings of the presently available procedures are identified. A procedure which eliminates all the identified shortcomings is presented. Required statistical tests to perform this modelling are given in detail. An example (the Oficina formation in eastern Venezuela) is given to illustrate the presented procedure. 12 refs., 3 tabs. 1 fig.
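
    Illustrative sketch (editorial addition): the statistical tests of the reviewed procedure are not reproduced here, only the basic step shared by such analyses, tallying a first-order transition matrix from a facies sequence and comparing observed transition counts with those expected under independence. The facies codes and sequence are invented.

      import numpy as np

      # Illustrative facies sequence (0=sand, 1=shale, 2=coal), not real borehole data
      sequence = np.array([0, 1, 2, 1, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0, 1, 1, 2, 1, 0, 1])

      n_states = 3
      counts = np.zeros((n_states, n_states))
      for a, b in zip(sequence[:-1], sequence[1:]):
          counts[a, b] += 1

      # Row-normalise to get the first-order transition probability matrix
      P = counts / counts.sum(axis=1, keepdims=True)

      # Expected transition counts under independence of successive states
      row = counts.sum(axis=1, keepdims=True)
      col = counts.sum(axis=0, keepdims=True)
      expected = row * col / counts.sum()

      print("transition matrix:\n", np.round(P, 2))
      print("chi-square statistic:", round(((counts - expected) ** 2 / expected).sum(), 2))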

  8. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    Science.gov (United States)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for the development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that
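
    Illustrative sketch (editorial addition, not the sqHIMMELI calibration): an adaptive Metropolis sampler in the style of Haario et al., targeting a toy two-parameter posterior that stands in for the model calibration target. The target, scales and adaptation schedule are assumptions for the example.

      import numpy as np

      rng = np.random.default_rng(3)

      def log_post(theta):
          """Toy correlated Gaussian 'posterior' standing in for the calibration target."""
          cov = np.array([[1.0, 0.8], [0.8, 1.0]])
          return -0.5 * theta @ np.linalg.solve(cov, theta)

      d = 2
      n_iter = 20000
      sd = 2.4 ** 2 / d                 # standard adaptive-Metropolis scaling
      eps = 1e-6
      chain = np.zeros((n_iter, d))
      theta = np.zeros(d)
      lp = log_post(theta)
      prop_cov = 0.1 * np.eye(d)

      accepted = 0
      for t in range(1, n_iter):
          if t > 500 and t % 100 == 0:          # adapt proposal covariance from chain history
              prop_cov = sd * np.cov(chain[:t].T) + sd * eps * np.eye(d)
          cand = rng.multivariate_normal(theta, prop_cov)
          lp_cand = log_post(cand)
          if np.log(rng.random()) < lp_cand - lp:
              theta, lp = cand, lp_cand
              accepted += 1
          chain[t] = theta

      print("acceptance rate:", round(accepted / n_iter, 2))
      print("posterior mean estimate:", np.round(chain[5000:].mean(axis=0), 3))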

  9. MARKOV CHAIN PORTFOLIO LIQUIDITY OPTIMIZATION MODEL

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2014-05-01

    Full Text Available The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and genetic algorithm concepts; it considers the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model using liquidity based on a Markov chain. The non-linear model was tested using genetic algorithms with twenty-five Brazilian stocks from 2007 to 2009. The results suggest that the methodology is innovative and useful for developing an efficient and realistic financial portfolio, as it considers many attributes such as risk, return and liquidity.

  10. Maturity models in supply chain sustainability

    DEFF Research Database (Denmark)

    Correia, Elisabete; Carvalho, Helena; Azevedo, Susana G.

    2017-01-01

    A systematic literature review of supply chain maturity models with sustainability concerns is presented. The objective is to give insights into methodological issues related to maturity models, namely the research objectives; the research methods used to develop, validate and test them; the scope...... of maturity levels. The comprehensive review, analysis, and synthesis of the maturity model literature represent an important contribution to the organization of this research area, making it possible to clarify some confusion that exists about concepts, approaches and components of maturity models...

  11. Hierarchical functional model for automobile development; Jidosha kaihatsu no tame no kaisogata kino model

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, S [U-shin Ltd., Tokyo (Japan); Nagamatsu, M; Maruyama, K [Hokkaido Institute of Technology, Sapporo (Japan); Hiramatsu, S [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

    A new approach to modeling is put forward in order to compose the virtual prototype which is indispensable for fully computer-integrated concurrent development of automobile products. A basic concept of the hierarchical functional model is proposed as the concrete form of this new modeling technology. This model is used mainly for explaining and simulating functions and efficiencies of both the parts and the total automobile product. All engineers engaged in automobile design and development can collaborate with one another using this model. Some application examples are shown, and the usefulness of this model is demonstrated. 5 refs., 5 figs.

  12. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    Science.gov (United States)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show when the acceleration occurs; for loose convergence tolerances, for example, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
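
    Illustrative sketch (editorial addition): running several independent Metropolis chains from dispersed starting points on a cheap toy target and computing the Gelman-Rubin potential scale reduction factor commonly used to judge convergence. The target and settings are invented and do not involve the Community Land Model.

      import numpy as np

      rng = np.random.default_rng(4)

      def log_post(x):
          return -0.5 * x ** 2          # toy 1-D target standing in for an expensive model

      def run_chain(start, n_iter=4000, step=1.0):
          x, lp = start, log_post(start)
          out = np.empty(n_iter)
          for t in range(n_iter):
              cand = x + step * rng.standard_normal()
              lp_c = log_post(cand)
              if np.log(rng.random()) < lp_c - lp:
                  x, lp = cand, lp_c
              out[t] = x
          return out

      # Several chains started from dispersed points; first half discarded as burn-in
      chains = np.array([run_chain(s)[2000:] for s in (-10.0, -3.0, 3.0, 10.0)])

      # Gelman-Rubin potential scale reduction factor (chain splitting omitted for brevity)
      m, n = chains.shape
      W = chains.var(axis=1, ddof=1).mean()                 # within-chain variance
      B = n * chains.mean(axis=1).var(ddof=1)               # between-chain variance
      var_hat = (n - 1) / n * W + B / n
      print("R-hat:", round(np.sqrt(var_hat / W), 3))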

  13. Recognizing Chinese characters in digital ink from non-native language writers using hierarchical models

    Science.gov (United States)

    Bai, Hao; Zhang, Xi-wen

    2017-06-01

    When Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, and on to their complex relations. Chinese characters written in digital ink by non-native writers are seriously deformed, so global recognition approaches perform poorly. A progressive bottom-up approach based on hierarchical models is therefore presented. Hierarchical information includes strokes and hierarchical components. Each Chinese character is modeled as a hierarchical tree. Strokes in an ink character are classified with hidden Markov models and concatenated into a stroke symbol sequence. The structure of components in the ink character is then extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored. Finally, the candidate recognition results are listed in descending order of score. The method is validated by testing 19,815 samples of handwritten Chinese characters written by foreign students.

  14. New aerial survey and hierarchical model to estimate manatee abundance

    Science.gov (United States)

    Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.

    2011-01-01

    Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability

  15. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures.

    Science.gov (United States)

    Schargott, M

    2009-06-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface.

  16. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures

    Energy Technology Data Exchange (ETDEWEB)

    Schargott, M [Institute of Mechanics, Technische Universitaet Berlin, Strasse des 17. Juni 135, 10623 Berlin (Germany)], E-mail: martin.schargott@tu-berlin.de

    2009-06-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface.

  17. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures

    International Nuclear Information System (INIS)

    Schargott, M

    2009-01-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface

  18. Multivariate Markov chain modeling for stock markets

    Science.gov (United States)

    Maskawa, Jun-ichi

    2003-06-01

    We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to an arbitrary size of portfolio is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with the Gibbs measure assigned to each configuration of a spin glass model. Through the analysis of actual portfolios, it has been shown that the synchronization of the direction of the price changes is well described by the model.
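
    Illustrative sketch (editorial addition): the coding step described above, turning daily price changes of two simulated (not real) stocks into up/down spins and estimating how often they agree, together with the empirical joint transition frequencies.

      import numpy as np

      rng = np.random.default_rng(5)

      # Illustrative correlated "returns" for two stocks (not real market data)
      T = 1000
      common = rng.standard_normal(T)
      r1 = 0.7 * common + 0.7 * rng.standard_normal(T)
      r2 = 0.7 * common + 0.7 * rng.standard_normal(T)

      # Code price changes into up/down spins
      s1 = np.where(r1 >= 0, 1, -1)
      s2 = np.where(r2 >= 0, 1, -1)
      print("fraction of days the two spins agree:", (s1 == s2).mean())

      # Empirical transition frequencies of the joint spin state (s1, s2)
      states = {(-1, -1): 0, (-1, 1): 1, (1, -1): 2, (1, 1): 3}
      seq = np.array([states[(a, b)] for a, b in zip(s1, s2)])
      counts = np.zeros((4, 4))
      for a, b in zip(seq[:-1], seq[1:]):
          counts[a, b] += 1
      row_sums = counts.sum(axis=1, keepdims=True)
      P = counts / np.where(row_sums == 0, 1, row_sums)   # guard against unvisited states
      print("joint transition matrix:\n", np.round(P, 2))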

  19. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect for controlling and managing the software. By such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on a detailed comparison of the proposed model with the existing usability models.

  20. Bottom-up learning of hierarchical models in a class of deterministic POMDP environments

    Directory of Open Access Journals (Sweden)

    Itoh Hideaki

    2015-09-01

    Full Text Available The theory of partially observable Markov decision processes (POMDPs is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches for building such agents when the environments of the agents are unknown and large. To learn hierarchical models, bottom-up learning methods in which learning takes place in a layer-by-layer manner from the lowest to the highest layer are already extensively used in some research fields such as hidden Markov models and neural networks. However, little attention has been paid to bottom-up approaches for learning POMDP models. In this paper, we present a novel bottom-up learning algorithm for hierarchical POMDP models and prove that, by using this algorithm, a perfect model (i.e., a model that can perfectly predict future observations) can be learned at least in a class of deterministic POMDP environments.

  1. Stochastic Simulation of a Full-Chain Reptation Model with Constraint Release, Chain-Length Fluctuations and Chain Stretching

    DEFF Research Database (Denmark)

    Neergaard, Jesper; Schieber, Jay D.

    1999-01-01

    A self-consistent reptation model that includes chain stretching, chain-length fluctuations, segment connectivity and constraint release is used to predict transient and steady flows. Quantitative comparisons are made with entangled solution data. The model is able to capture quantitatively all...

  2. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    Science.gov (United States)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  3. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    Science.gov (United States)

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including:
    * occurrence or occupancy models for estimating species distribution
    * abundance models based on many sampling protocols, including distance sampling
    * capture-recapture models with individual effects
    * spatial capture-recapture models based on camera trapping and related methods
    * population and metapopulation dynamic models
    * models of biodiversity, community structure and dynamics.

  4. A Markov Chain Model for Contagion

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2014-11-01

    Full Text Available We introduce a bivariate Markov chain counting process with contagion for modelling the clustering arrival of loss claims with delayed settlement for an insurance company. It is a general continuous-time model framework that also has the potential to be applicable to modelling the clustering arrival of events, such as jumps, bankruptcies, crises and catastrophes in finance, insurance and economics with both internal contagion risk and external common risk. Key distributional properties, such as the moments and probability generating functions, for this process are derived. Some special cases with explicit results and numerical examples and the motivation for further actuarial applications are also discussed. The model can be considered a generalisation of the dynamic contagion process introduced by Dassios and Zhao (2011).

  5. Subjective value of risky foods for individual domestic chicks: a hierarchical Bayesian model.

    Science.gov (United States)

    Kawamori, Ai; Matsushima, Toshiya

    2010-05-01

    For animals to decide which prey to attack, the gain and delay of the food item must be integrated in a value function. However, the subjective value is not obtained by expected profitability when it is accompanied by risk. To estimate the subjective value, we examined choices in a cross-shaped maze with two colored feeders in domestic chicks. When tested by a reversal in food amount or delay, chicks changed choices similarly in both conditions (experiment 1). We therefore examined risk sensitivity for amount and delay (experiment 2) by supplying one feeder with food of fixed profitability and the alternative feeder with high- or low-profitability food at equal probability. Profitability varied in amount (groups 1 and 2 at high and low variance) or in delay (group 3). To find the equilibrium, the amount (groups 1 and 2) or delay (group 3) of the food in the fixed feeder was adjusted in a total of 18 blocks. The Markov chain Monte Carlo method was applied to a hierarchical Bayesian model to estimate the subjective value. Chicks undervalued the variable feeder in group 1 and were indifferent in group 2 but overvalued the variable feeder in group 3 at a population level. Re-examination without the titration procedure (experiment 3) suggested that the subjective value was not absolute for each option. When the delay was varied, the variable option was often given a paradoxically high value depending on fixed alternative. Therefore, the basic assumption of the uniquely determined value function might be questioned.

  6. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up of components known to be similar, data collected on one is also relevant for the management of others. This is typically the case for wind farms, which are made up of similar turbines. Optimal management of wind farms is an important task due to the high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, processing observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  7. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.

    Science.gov (United States)

    Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana

    2016-01-01

    The omnipresent need for optimisation requires constant improvements of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually performed by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effectual business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria that include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics by employing DEX and qualitative to quantitative (QQ) methodology. Consequently, the decision expert feeds the required information in a systematic and user friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria in the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with respect to currently available results.

  8. Robust Real-Time Music Transcription with a Compositional Hierarchical Model.

    Science.gov (United States)

    Pesek, Matevž; Leonardis, Aleš; Marolt, Matija

    2017-01-01

    The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data, transparency, which enables insights into the learned representation, as well as robustness and speed which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers based on statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance for the multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.

  9. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  10. Developing Model for Supply Chain Management - the Case of Croatia

    Directory of Open Access Journals (Sweden)

    E. Jurun

    2004-01-01

    Full Text Available This paper describes a model of supply chain management (SCM. It explains overall supply chain issues, strategic importance of SCM, supply chain strategies and an example of mathematical formulation. A supply chain is a global network of organizations that cooperate to improve the flows of material and information between suppliers and customers at the lowest cost and the highest speed. The objective of a supply chain is customer satisfaction. At the strategic level, a supply chain can be considered as being composed of five activities: buy, make, move, store and sell. Each activity is a module. The set of modules, along with its links, constitutes a model of the supply chain. Our paper presents some insights into the supply chain strategies of companies in Croatia. The major goal of this paper is to show a model for supply chain management in mathematical terms, with an example of mathematical formulation.
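
    Illustrative sketch (editorial addition, not the paper's formulation): a tiny transportation-style linear program for the 'move' activity, solved with scipy.optimize.linprog, only to give a concrete flavour of the kind of mathematical formulation mentioned. Plants, customers, capacities and costs are invented.

      import numpy as np
      from scipy.optimize import linprog

      # Two plants shipping to three customers (all numbers illustrative)
      supply = np.array([60.0, 80.0])
      demand = np.array([40.0, 50.0, 50.0])
      cost = np.array([[4.0, 6.0, 9.0],
                       [5.0, 3.0, 7.0]])      # unit transport cost plant -> customer

      c = cost.ravel()                         # decision variables x[p, c], flattened row-wise

      # Supply constraints: sum_c x[p, c] <= supply[p]
      A_ub = np.zeros((2, 6))
      A_ub[0, 0:3] = 1.0
      A_ub[1, 3:6] = 1.0
      b_ub = supply

      # Demand constraints: sum_p x[p, c] == demand[c]
      A_eq = np.zeros((3, 6))
      for j in range(3):
          A_eq[j, [j, j + 3]] = 1.0
      b_eq = demand

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
      print("minimum transport cost:", res.fun)
      print("shipment plan:\n", res.x.reshape(2, 3))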

  11. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    Science.gov (United States)

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best-fitting hierarchical linear model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory.

  12. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    Science.gov (United States)

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of students' Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on the proportion of variance explained at the teacher-level model. We employed a two-level hierarchical linear model (HLM) with student and teacher data at the level-1 and level-2 models, respectively.…
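
    Illustrative sketch (editorial addition): one common way to fit the kind of two-level HLM described is a random-intercept mixed model; the fragment below does this with statsmodels on simulated student-within-teacher data. The variable names, effect sizes and the intraclass-correlation computation are assumptions for the example, not the study's analysis.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)

      # Simulated two-level data: students (level 1) nested in teachers (level 2)
      n_teachers, n_students = 30, 25
      teacher = np.repeat(np.arange(n_teachers), n_students)
      teacher_effect = rng.normal(0.0, 0.3, n_teachers)[teacher]   # level-2 random intercepts
      attendance = rng.normal(0.0, 1.0, teacher.size)              # level-1 predictor
      gpa = 2.8 + 0.2 * attendance + teacher_effect + rng.normal(0.0, 0.5, teacher.size)

      df = pd.DataFrame({"gpa": gpa, "attendance": attendance, "teacher": teacher})

      # Random-intercept hierarchical linear model: gpa ~ attendance + (1 | teacher)
      model = smf.mixedlm("gpa ~ attendance", data=df, groups=df["teacher"])
      fit = model.fit()
      print(fit.summary())

      # Share of variance at the teacher level (intraclass correlation)
      tau2 = float(np.asarray(fit.cov_re)[0, 0])
      sigma2 = fit.scale
      print("ICC:", round(tau2 / (tau2 + sigma2), 3))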

  13. Heuristics for Hierarchical Partitioning with Application to Model Checking

    DEFF Research Database (Denmark)

    Möller, Michael Oliver; Alur, Rajeev

    2001-01-01

    Given a collection of connected components, it is often desired to cluster together parts of strong correspondence, yielding a hierarchical structure. We address the automation of this process and apply heuristics to battle the combinatorial and computational complexity. We define a cost function...... that captures the quality of a structure relative to the connections and favors shallow structures with a low degree of branching. Finding a structure with minimal cost is NP-complete. We present a greedy polynomial-time algorithm that approximates good solutions incrementally by local evaluation of a heuristic...... function. We argue for a heuristic function based on four criteria: the number of enclosed connections, the number of components, the number of touched connections and the depth of the structure. We report on an application in the context of formal verification, where our algorithm serves as a preprocessor...

  14. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1 where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: the Markov property, discrete time Markov chains (DTMC) and continuous time Markov chains (CTMC). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
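
    Illustrative sketch (editorial addition): the basic computation such introductions build on, namely the stationary distribution of a small DTMC (solve pi P = pi) and of a CTMC from its generator (solve pi Q = 0), each normalised to sum to one. The chains below are toy examples.

      import numpy as np

      # Discrete-time Markov chain: stationary distribution solves pi P = pi, sum(pi) = 1
      P = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.7, 0.1],
                    [0.0, 0.3, 0.7]])
      n = P.shape[0]
      A = np.vstack([P.T - np.eye(n), np.ones(n)])
      b = np.append(np.zeros(n), 1.0)
      pi_dtmc, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("DTMC stationary distribution:", np.round(pi_dtmc, 4))

      # Continuous-time Markov chain: stationary distribution solves pi Q = 0, sum(pi) = 1
      Q = np.array([[-2.0, 2.0, 0.0],
                    [1.0, -3.0, 2.0],
                    [0.0, 4.0, -4.0]])
      A = np.vstack([Q.T, np.ones(n)])
      b = np.append(np.zeros(n), 1.0)
      pi_ctmc, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("CTMC stationary distribution:", np.round(pi_ctmc, 4))

      # Example performance metric: long-run fraction of time the system spends in state 2
      print("utilisation of state 2:", round(pi_ctmc[2], 4))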

  15. The Advancement Value Chain: An Exploratory Model

    Science.gov (United States)

    Leonard, Edward F., III

    2005-01-01

    Since the introduction of the value chain concept in 1985, several varying, yet virtually similar, value chains have been developed for the business enterprise. Shifting to higher education, can a value chain be found that links together the various activities of advancement so that an institution's leaders can actually look at the philanthropic…

  16. The composite supply chain efficiency model: A case study of the Sishen-Saldanha supply chain

    Directory of Open Access Journals (Sweden)

    Leila L. Goedhals-Gerber

    2016-01-01

    Full Text Available As South Africa strives to be a major force in global markets, it is essential that South African supply chains achieve and maintain a competitive advantage. One approach to achieving this is to ensure that South African supply chains maximise their levels of efficiency. Consequently, the efficiency levels of South Africa’s supply chains must be evaluated. The objective of this article is to propose a model that can assist South African industries in becoming internationally competitive by providing them with a tool for evaluating their levels of efficiency both as individual firms and as a component in an overall supply chain. The Composite Supply Chain Efficiency Model (CSCEM) was developed to measure supply chain efficiency across supply chains using variables identified as problem areas experienced by South African supply chains. The CSCEM is tested in this article using the Sishen-Saldanha iron ore supply chain as a case study. The results indicate that all three links or nodes along the Sishen-Saldanha iron ore supply chain performed well. The average efficiency of the rail leg was 97.34%, while the average efficiencies of the mine and the port were 97% and 95.44%, respectively. The results also show that the CSCEM can be used by South African firms to measure their levels of supply chain efficiency. This article concludes with the benefits of the CSCEM.

  17. Clinical, laboratory, and demographic determinants of hospitalization due to dengue in 7613 patients: A retrospective study based on hierarchical models.

    Science.gov (United States)

    da Silva, Natal Santos; Undurraga, Eduardo A; da Silva Ferreira, Elis Regina; Estofolete, Cássia Fernanda; Nogueira, Maurício Lacerda

    2018-01-01

    In Brazil, the incidence of hospitalization due to dengue, as an indicator of severity, has drastically increased since 1998. The objective of our study was to identify risk factors associated with subsequent hospitalization related to dengue. We analyzed 7613 dengue cases confirmed via serology (ELISA), non-structural protein 1 detection, or polymerase chain reaction amplification. We used a hierarchical framework to generate a multivariate logistic regression based on a variety of risk variables. This was followed by multiple statistical analyses to assess hierarchical model accuracy, variance, goodness of fit, and whether or not this model reliably represented the population. The final model, which included age, sex, ethnicity, previous dengue infection, hemorrhagic manifestations, plasma leakage, and organ failure, showed that all measured parameters, with the exception of previous dengue infection, were statistically significant. The presence of organ failure was associated with the highest risk of subsequent dengue hospitalization (OR=5·75; CI=3·53-9·37). Therefore, plasma leakage and organ failure were the main indicators of hospitalization due to dengue, although other variables of minor importance should also be considered to refer dengue patients to hospital treatment, which may lead to a reduction in avoidable deaths as well as costs related to dengue. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. The Hierarchical Trend Model for property valuation and local price indices

    NARCIS (Netherlands)

    Francke, M.K.; Vos, G.A.

    2002-01-01

    This paper presents a hierarchical trend model (HTM) for selling prices of houses, addressing three main problems: the spatial and temporal dependence of selling prices and the dependency of price index changes on housing quality. In this model the general price trend, cluster-level price trends,

  19. Measuring Service Quality in Higher Education: Development of a Hierarchical Model (HESQUAL)

    Science.gov (United States)

    Teeroovengadum, Viraiyan; Kamalanabhan, T. J.; Seebaluck, Ashley Keshwar

    2016-01-01

    Purpose: This paper aims to develop and empirically test a hierarchical model for measuring service quality in higher education. Design/methodology/approach: The first phase of the study consisted of qualitative research methods and a comprehensive literature review, which allowed the development of a conceptual model comprising 53 service quality…

  20. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    Science.gov (United States)

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…

  1. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Chad Babcock; Andrew O. Finley; John B. Bradford; Randy Kolka; Richard Birdsey; Michael G. Ryan

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both...

  2. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  3. Galactic chemical evolution in hierarchical formation models - I. Early-type galaxies in the local Universe

    NARCIS (Netherlands)

    Arrigoni, Matías; Trager, Scott C.; Somerville, Rachel S.; Gibson, Brad K.

    We study the metallicities and abundance ratios of early-type galaxies in cosmological semi-analytic models (SAMs) within the hierarchical galaxy formation paradigm. To achieve this we implemented a detailed galactic chemical evolution model and can now predict abundances of individual elements for

  4. Galactic chemical evolution in hierarchical formation models : I. Early-type galaxies in the local Universe

    NARCIS (Netherlands)

    Arrigoni, Matias; Trager, Scott C.; Somerville, Rachel S.; Gibson, Brad K.

    2010-01-01

    We study the metallicities and abundance ratios of early-type galaxies in cosmological semi-analytic models (SAMs) within the hierarchical galaxy formation paradigm. To achieve this we implemented a detailed galactic chemical evolution model and can now predict abundances of individual elements for

  5. Hierarchical Bayesian modeling of the space - time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Osei, F.B.; Duker, Alfred A.; Stein, A.

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  6. A Hybrid PO - Higher-Order Hierarchical MoM Formulation using Curvilinear Geometry Modeling

    DEFF Research Database (Denmark)

    Jørgensen, E.; Meincke, Peter; Breinbjerg, Olav

    2003-01-01

    which implies a very modest memory requirement. Nevertheless, the hierarchical feature of the basis functions maintains the ability to treat small geometrical details efficiently. In addition, the scatterer is modelled with higher-order curved patches which allows accurate modelling of curved surfaces...

  7. Soft tissue deformation using a Hierarchical Finite Element Model.

    Science.gov (United States)

    Faraci, Alessandro; Bello, Fernando; Darzi, Ara

    2004-01-01

    Simulating soft tissue deformation in real-time has become increasingly important in order to provide a realistic virtual environment for training surgical skills. Several methods have been proposed with the aim of rendering in real-time the mechanical and physiological behaviour of human organs, one of the most popular being the Finite Element Method (FEM). In this paper we present a new approach to the solution of the FEM problem, introducing the concept of parent and child mesh within the development of a hierarchical FEM. The online selection of the child mesh is presented with the purpose of adapting the mesh hierarchy in real time. This permits further refinement of the child mesh, increasing the detail of the deformation without slowing down the simulation, and makes it possible to integrate force feedback. The results presented demonstrate the application of our proposed framework using a desktop virtual reality (VR) system that incorporates stereo vision with integrated haptics co-location via a desktop Phantom force feedback device.

  8. Transformation of renormalization groups in 2N-component fermion hierarchical model

    International Nuclear Information System (INIS)

    Stepanov, R.G.

    2006-01-01

    The 2N-component fermion model on the hierarchical lattice is studied. Explicit formulae are presented for the renormalization group transformation in the space of coefficients that define the Grassmann-valued density of the free measure. The inverse transformation of the renormalization group is calculated. The determination of fixed points of the renormalization group is reduced to solving a set of algebraic equations. An interesting connection between renormalization group transformations in the boson and fermion hierarchical models is found. It is shown that one transformation is obtained from the other by the substitution of N with -N.

  9. Supply chain modeling of forest fuel

    Energy Technology Data Exchange (ETDEWEB)

    Gunnarsson, Helene; Lundgren, Jan T.; Roennqvist, Mikael

    2001-04-01

    We study the problem of deciding when and where forest residues are to be converted into forest fuel, and how the residues are to be transported and stored in order to satisfy demand at heating plants. Decisions also include whether or not additional harvest areas and saw-mills are to be contracted. In addition, we consider the flow of products from saw-mills and import harbors, and address the question about which terminals to use. The planning horizon is one year and monthly time periods are considered. The supply chain problem is formulated as a large mixed integer linear programming model. In order to obtain solutions within reasonable time we have developed a heuristic solution approach. Computational results from a large Swedish supplying entrepreneur are reported.

  10. Supply chain modeling of forest fuel

    International Nuclear Information System (INIS)

    Gunnarsson, Helene; Lundgren, Jan T.; Roennqvist, Mikael

    2001-04-01

    We study the problem of deciding when and where forest residues are to be converted into forest fuel, and how the residues are to be transported and stored in order to satisfy demand at heating plants. Decisions also include whether or not additional harvest areas and saw-mills are to be contracted. In addition, we consider the flow of products from saw-mills and import harbors, and address the question about which terminals to use. The planning horizon is one year and monthly time periods are considered. The supply chain problem is formulated as a large mixed integer linear programming model. In order to obtain solutions within reasonable time we have developed a heuristic solution approach. Computational results from a large Swedish supplying entrepreneur are reported

  11. UNCERTAINTY SUPPLY CHAIN MODEL AND TRANSPORT IN ITS DEPLOYMENTS

    Directory of Open Access Journals (Sweden)

    Fabiana Lucena Oliveira

    2014-05-01

    Full Text Available This article discusses the Uncertainty Supply Chain Model and proposes a matrix matching supply chains with the transportation modes best suited to them. From a detailed analysis of the uncertainty matrix, transportation modes best suited to the management of these chains are suggested, so that transport supports the gains proposed by the original model, particularly when supply chains are distant from suppliers of raw materials and/or supplies. Here we analyze in detail agile supply chains, which are one outcome of the Uncertainty Supply Chain Model, with special attention to the Manaus Industrial Center. This research was done at the Manaus Industrial Pole, a model of industrial agglomeration based in Manaus, State of Amazonas (Brazil), which encompasses different supply chains and strategies sharing the same infrastructure for transport, handling, storage and clearance, and which uses inbound logistics for suppliers of raw material. The state of the art covers supply chain management, the uncertainty supply chain model, agile supply chains, the Manaus Industrial Center (MIC) and Brazilian legislation, as a business case, and presents the concepts and features of each. The main goal is to present and discuss how transport is able to support the Uncertainty Supply Chain Model, in order to complete the management model. The results obtained confirm the hypothesis that integrated logistics processes are able to guarantee attractiveness for industrial agglomerations, and open a discussion for cases in which suppliers are far from the manufacturing center.

  12. System Dynamics Modeling for Supply Chain Information Sharing

    Science.gov (United States)

    Feng, Yang

    In this paper, we use the method of system dynamics to model supply chain information sharing. Firstly, we determine the model boundaries, establish a system dynamics model of the supply chain before information sharing, analyze the model's simulation results under different parameter changes and suggest improvement proposals. Then, we establish a system dynamics model of the supply chain with information sharing and compare and analyze the two models' simulation results, to show the importance of information sharing in supply chain management. We hope that these simulations provide scientific support for enterprise decision-making.
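
    Illustrative sketch (editorial addition, not the paper's model): a toy two-echelon stock-and-flow simulation comparing the variability of upstream orders with and without sharing end-customer demand. All parameters and the ordering rule are invented for the example.

      import numpy as np

      rng = np.random.default_rng(7)

      def simulate(share_demand, T=500, target_cover=4, alpha=0.2):
          """Two-echelon base-stock simulation; returns wholesaler orders and customer demand."""
          demand = 20 + rng.normal(0, 2, T)              # end-customer demand
          f_r = f_w = 20.0                                # demand forecasts (exponential smoothing)
          inv_r = inv_w = 80.0
          w_orders = np.empty(T)
          for t in range(T):
              d = demand[t]
              f_r += alpha * (d - f_r)                    # retailer forecast from real demand
              r_order = max(0.0, target_cover * f_r - inv_r + d)
              inv_r += r_order - d                        # (instant replenishment for simplicity)
              # wholesaler sees either the retailer's orders or, if shared, real demand
              signal = d if share_demand else r_order
              f_w += alpha * (signal - f_w)
              w_order = max(0.0, target_cover * f_w - inv_w + r_order)
              inv_w += w_order - r_order
              w_orders[t] = w_order
          return w_orders, demand

      for shared in (False, True):
          orders, demand = simulate(shared)
          amp = orders[100:].var() / demand[100:].var()
          print("information sharing:", shared, " order/demand variance ratio:", round(amp, 2))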

  13. Experiments in Error Propagation within Hierarchal Combat Models

    Science.gov (United States)

    2015-09-01

    stochastic Lanchester campaign model that contains 18 Blue and 25 Red submarines. The outputs of the campaign models are analyzed statistically. The...sampled in a variety of ways, including just the mean, and used to calculate the attrition coefficients for a stochastic Lanchester campaign model...
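
    Illustrative sketch (editorial addition, not the thesis' campaign model): a generic stochastic Lanchester attrition process in which each side's losses occur as a Markov jump process with intensity proportional to the opposing force. The starting strengths of 18 Blue and 25 Red are taken from the abstract; the attrition rates are invented.

      import numpy as np

      rng = np.random.default_rng(8)

      def stochastic_lanchester(blue0=18, red0=25, b=0.012, r=0.010, reps=2000):
          """Markov-chain version of aimed-fire Lanchester attrition.
          b: rate at which one Blue unit kills Red; r: rate at which one Red unit kills Blue."""
          blue_wins = 0
          survivors = []
          for _ in range(reps):
              blue, red = blue0, red0
              while blue > 0 and red > 0:
                  rate_red_loss = b * blue          # total intensity of Red casualties
                  rate_blue_loss = r * red          # total intensity of Blue casualties
                  total = rate_red_loss + rate_blue_loss
                  if rng.random() < rate_red_loss / total:
                      red -= 1
                  else:
                      blue -= 1
              blue_wins += blue > 0
              survivors.append(max(blue, red))
          return blue_wins / reps, np.mean(survivors)

      p_blue, mean_surv = stochastic_lanchester()
      print("P(Blue wins):", round(p_blue, 3), " mean survivors of winning side:", round(mean_surv, 1))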

  14. INFOGRAPHIC MODELING OF THE HIERARCHICAL STRUCTURE OF THE MANAGEMENT SYSTEM EXPOSED TO AN INNOVATIVE CONFLICT

    Directory of Open Access Journals (Sweden)

    Chulkov Vitaliy Olegovich

    2012-12-01

    Full Text Available This article deals with the infographic modeling of hierarchical management systems exposed to innovative conflicts. The authors analyze the facts that serve as conflict drivers in the construction management environment. The reasons for innovative conflicts include changes in hierarchical structures of management systems, adjustment of workers to new management conditions, changes in the ideology, etc. Conflicts under consideration may involve contradictions between requests placed by customers and the legislation, any risks that may originate from the above contradiction, conflicts arising from any failure to comply with any accepted standards of conduct, etc. One of the main objectives of the theory of hierarchical structures is to develop a model capable of projecting potential innovative conflicts. Models described in the paper reflect dynamic changes in patterns of external impacts within the conflict area. The simplest model element is a monad, or an indivisible set of characteristics of participants at the pre-set level. Interaction between two monads forms a diad. Modeling of situations that involve a different number of monads, diads, resources and impacts can improve methods used to control and manage hierarchical structures in the construction industry. However, in the absence of any mathematical models employed to simulate conflict-related events, processes and situations, any research into, projection and management of interpersonal and group-to-group conflicts are to be performed in the legal environment

  15. Towards an Empirical-Relational Model of Supply Chain Flexibility

    OpenAIRE

    Santanu Mandal

    2015-01-01

    Supply chains are prone to disruptions and associated risks. To develop capabilities for risk mitigation, supply chains need to be flexible. A flexible supply chain can respond better to environmental contingencies. Based on the theoretical tenets of resource-based view, relational view and dynamic capabilities theory, the current study develops a relational model of supply chain flexibility comprising trust, commitment, communication, co-operation, adaptation and interdependence. Subsequentl...

  16. Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study.

    Science.gov (United States)

    Rijsdijk, Frühling V; Vernon, P A; Boomsma, Dorret I

    2002-05-01

    Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard Progressive test score, collected in 194 18-year-old Dutch twin pairs. We investigated whether first-order group factors possess genetic and environmental variance independent of the higher-order general factor and whether the hierarchical structure is significant for all sources of variance. A hierarchical model with the 3 Cohen group-factors (verbal comprehension, perceptual organisation and freedom-from-distractibility) and a higher-order g factor showed the best fit to the phenotypic data and to additive genetic influences (A), whereas the unique environmental source of variance (E) could be modeled by a single general factor and specifics. There was no evidence for common environmental influences. The covariation among the WAIS group factors and the covariation between the group factors and the Raven is predominantly influenced by a second-order genetic factor and strongly support the notion of a biological basis of g.

  17. A Hierarchical Bayesian Model to Predict Self-Thinning Line for Chinese Fir in Southern China.

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    Full Text Available Self-thinning is a dynamic equilibrium between forest growth and mortality at full site occupancy. Parameters of self-thinning lines are often confounded by differences across various stand and site conditions. To overcome the problem of hierarchical and repeated measures, we used a hierarchical Bayesian method to estimate the self-thinning line. The results showed that the self-thinning line for Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantations was not sensitive to the initial planting density. The uncertainty of model predictions was mostly due to within-subject variability. The simulation precision of the hierarchical Bayesian method was better than that of the stochastic frontier function (SFF). The hierarchical Bayesian method provided a reasonable explanation of the impact of other variables (site quality, soil type, aspect, etc.) on the self-thinning line, and gave the posterior distribution of the self-thinning line parameters. Research on the self-thinning relationship could benefit from the use of hierarchical Bayesian methods.

  18. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    Science.gov (United States)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally
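
    Illustrative sketch (editorial addition): ABC-SubSim itself is not reproduced here; the fragment shows the simpler rejection-ABC idea it refines, keeping prior draws whose simulated output lands within a tolerance of the observed data. The toy model, summary statistics and tolerance are assumptions for the example.

      import numpy as np

      rng = np.random.default_rng(9)

      # "Observed" data from a toy model y = theta + noise (theta_true unknown to the inference)
      theta_true = 1.5
      y_obs = theta_true + 0.3 * rng.standard_normal(50)

      def simulate(theta, n=50):
          return theta + 0.3 * rng.standard_normal(n)

      def distance(y_sim, y_ref):
          # summary-statistic distance: differences of means and standard deviations
          return np.hypot(y_sim.mean() - y_ref.mean(), y_sim.std() - y_ref.std())

      # Rejection ABC: sample from the prior, keep draws whose simulations are close enough
      n_draws, eps = 100000, 0.05
      prior = rng.uniform(-5, 5, n_draws)
      kept = np.array([th for th in prior if distance(simulate(th), y_obs) < eps])

      print("accepted fraction:", round(kept.size / n_draws, 4))
      print("ABC posterior mean and sd:", round(kept.mean(), 3), round(kept.std(), 3))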

  19. A hierarchical spatial model of avian abundance with application to Cerulean Warblers

    Science.gov (United States)

    Thogmartin, Wayne E.; Sauer, John R.; Knutson, Melinda G.

    2004-01-01

    Surveys collecting count data are the primary means by which abundance is indexed for birds. These counts are confounded, however, by nuisance effects including observer effects and spatial correlation between counts. Current methods poorly accommodate both observer and spatial effects because modeling these spatially autocorrelated counts within a hierarchical framework is not practical using standard statistical approaches. We propose a Bayesian approach to this problem and provide as an example of its implementation a spatial model of predicted abundance for the Cerulean Warbler (Dendroica cerulea) in the Prairie-Hardwood Transition of the upper midwestern United States. We used an overdispersed Poisson regression with fixed and random effects, fitted by Markov chain Monte Carlo methods. We used 21 years of North American Breeding Bird Survey counts as the response in a loglinear function of explanatory variables describing habitat, spatial relatedness, year effects, and observer effects. The model included a conditional autoregressive term representing potential correlation between adjacent route counts. Categories of explanatory habitat variables in the model included land cover composition and configuration, climate, terrain heterogeneity, and human influence. The inherent hierarchy in the model was from counts occurring, in part, as a function of observers within survey routes within years. We found that the percentage of forested wetlands, an index of wetness potential, and an interaction between mean annual precipitation and deciduous forest patch size best described Cerulean Warbler abundance. Based on a map of relative abundance derived from the posterior parameter estimates, we estimated that only 15% of the species' population occurred on federal land, necessitating active engagement of public landowners and state agencies in the conservation of the breeding habitat for this species. Models of this type can be applied to any data in which the response
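
    For concreteness, a small simulation sketch of the model structure just described (habitat covariates, year and observer effects, and a CAR-style spatially correlated route effect feeding an overdispersed Poisson count); all dimensions and parameter values are invented, and fitting such a model would in practice be done by MCMC as in the paper.

```python
# Simulating route counts from a hierarchical overdispersed Poisson model (assumed values).
import numpy as np

rng = np.random.default_rng(2)
n_routes, n_years = 60, 21

# habitat covariates per route (e.g. % forested wetland, precipitation index)
X = rng.normal(size=(n_routes, 2))
beta = np.array([0.6, -0.3])

# CAR-style spatial effect on a 1-D chain of routes, built by Gibbs-like passes
rho, tau = 0.9, 0.3
s = np.zeros(n_routes)
for _ in range(200):
    for i in range(n_routes):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n_routes]
        s[i] = rng.normal(rho * np.mean(s[nbrs]), tau / np.sqrt(len(nbrs)))

observer = rng.normal(0, 0.2, (n_routes, n_years))      # observer-by-year effect
year = rng.normal(0, 0.1, n_years)                       # year effects
overdisp = rng.normal(0, 0.3, (n_routes, n_years))       # extra-Poisson noise

log_mu = (0.5 + X @ beta)[:, None] + s[:, None] + year[None, :] + observer + overdisp
counts = rng.poisson(np.exp(log_mu))
print("mean simulated route count:", counts.mean())
```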

  20. A tactical supply chain planning model with multiple flexibility options

    DEFF Research Database (Denmark)

    Esmaeilikia, Masoud; Fahimnia, Behnam; Sarkis, Joeseph

    2016-01-01

    Supply chain flexibility is widely recognized as an approach to manage uncertainty. Uncertainty in the supply chain may arise from a number of sources such as demand and supply interruptions and lead time variability. A tactical supply chain planning model with multiple flexibility options...... incorporated in sourcing, manufacturing and logistics functions can be used for the analysis of flexibility adjustment in an existing supply chain. This paper develops such a tactical supply chain planning model incorporating a realistic range of flexibility options. A novel solution method is designed...

  1. Time to failure of hierarchical load-transfer models of fracture

    DEFF Research Database (Denmark)

    Vázquez-Prada, M; Gómez, J B; Moreno, Y

    1999-01-01

    The time to failure, T, of dynamical models of fracture for a hierarchical load-transfer geometry is studied. Using a probabilistic strategy and juxtaposing hierarchical structures of height n, we devise an exact method to compute T for structures of height n+1. Bounding T for large n, we are able to deduce that the time to failure tends to a nonzero value when n tends to infinity. This numerical conclusion is deduced for both power law and exponential breakdown rules.

  2. Modelling antibody side chain conformations using heuristic database search.

    Science.gov (United States)

    Ritchie, D W; Kemp, G J

    1997-01-01

    We have developed a knowledge-based system which models the side chain conformations of residues in the variable domains of antibody Fv fragments. The system is written in Prolog and uses an object-oriented database of aligned antibody structures in conjunction with a side chain rotamer library. The antibody database provides 3-dimensional clusters of side chain conformations which can be copied en masse into the model structure. The object-oriented database architecture facilitates a navigational style of database access, necessary to assemble side chains clusters. Around 60% of the model is built using side chain clusters and this eliminates much of the combinatorial complexity associated with many other side chain placement algorithms. Construction and placement of side chain clusters is guided by a heuristic cost function based on a simple model of side chain packing interactions. Even with a simple model, we find that a large proportion of side chain conformations are modelled accurately. We expect our approach could be used with other homologous protein families, in addition to antibodies, both to improve the quality of model structures and to give a "smart start" to the side chain placement problem.
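
    A toy illustration, in Python rather than the authors' Prolog system, of the general idea of greedy side chain placement guided by a simple steric-clash cost; the anchors, rotamer library, and clash threshold are all invented for the example.

```python
# Greedy rotamer placement with a simple clash-count cost (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
CLASH_DIST = 1.0

# hypothetical backbone anchor positions and a tiny rotamer library:
# a rotamer is a set of side chain atom offsets from its anchor
anchors = {f"res{i}": rng.uniform(0, 10, 3) for i in range(8)}
rotamer_lib = [rng.normal(0, 1.5, (3, 3)) for _ in range(6)]

def clash_cost(atoms, placed):
    """Count pairs of atoms closer than the clash threshold."""
    if not placed:
        return 0
    placed = np.vstack(placed)
    d = np.linalg.norm(atoms[:, None, :] - placed[None, :, :], axis=-1)
    return int((d < CLASH_DIST).sum())

placed_atoms, choice = [], {}
for name, anchor in anchors.items():          # greedy pass over residues
    best = min(range(len(rotamer_lib)),
               key=lambda k: clash_cost(anchor + rotamer_lib[k], placed_atoms))
    choice[name] = best
    placed_atoms.append(anchor + rotamer_lib[best])

print(choice)
```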

  3. From Playability to a Hierarchical Game Usability Model

    OpenAIRE

    Nacke, Lennart E.

    2010-01-01

    This paper presents a brief review of current game usability models. This leads to the conception of a high-level game development-centered usability model that integrates current usability approaches in game industry and game research.

  4. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  5. Predicting Examination Performance Using an Expanded Integrated Hierarchical Model of Test Emotions and Achievement Goals

    Science.gov (United States)

    Putwain, Dave; Deveney, Carolyn

    2009-01-01

    The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…

  6. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    Science.gov (United States)

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

  7. The Hierarchical Factor Model of ADHD: Invariant across Age and National Groupings?

    Science.gov (United States)

    Toplak, Maggie E.; Sorge, Geoff B.; Flora, David B.; Chen, Wai; Banaschewski, Tobias; Buitelaar, Jan; Ebstein, Richard; Eisenberg, Jacques; Franke, Barbara; Gill, Michael; Miranda, Ana; Oades, Robert D.; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph; Sonuga-Barke, Edmund; Steinhausen, Hans-Christoph; Thompson, Margaret; Tannock, Rosemary; Asherson, Philip; Faraone, Stephen V.

    2012-01-01

    Objective: To examine the factor structure of attention-deficit/hyperactivity disorder (ADHD) in a clinical sample of 1,373 children and adolescents with ADHD and their 1,772 unselected siblings recruited from different countries across a large age range. Hierarchical and correlated factor analytic models were compared separately in the ADHD and…

  8. Symptom structure of PTSD: support for a hierarchical model separating core PTSD symptoms from dysphoria

    NARCIS (Netherlands)

    Rademaker, Arthur R.; van Minnen, Agnes; Ebberink, Freek; van Zuiden, Mirjam; Hagenaars, Muriel A.; Geuze, Elbert

    2012-01-01

    As of yet, no collective agreement has been reached regarding the precise factor structure of posttraumatic stress disorder (PTSD). Several alternative factor-models have been proposed in the last decades. The current study examined the fit of a hierarchical adaptation of the Simms et al. (2002)

  9. Hierarchical models for informing general biomass equations with felled tree data

    Science.gov (United States)

    Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke

    2015-01-01

    We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...

  10. Perfect observables for the hierarchical non-linear O(N)-invariant σ-model

    International Nuclear Information System (INIS)

    Wieczerkowski, C.; Xylander, Y.

    1995-05-01

    We compute moving eigenvalues and the eigenvectors of the linear renormalization group transformation for observables along the renormalized trajectory of the hierarchical non-linear O(N)-invariant σ-model by means of perturbation theory in the running coupling constant. Moving eigenvectors are defined as solutions to a Callan-Symanzik type equation. (orig.)

  11. Intraclass Correlation Coefficients in Hierarchical Designs: Evaluation Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko

    2011-01-01

    Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…

  12. Value Chain Model for Steel Manufacturing Sector: A Case Study

    OpenAIRE

    S G Acharyulu; K Venkata Subbaiah; K Narayana Rao

    2018-01-01

    Michael E. Porter developed a value chain model for the manufacturing sector with five primary activities and four supporting activities. Porter's value chain model is extended here to the steel manufacturing sector, where plant expansion has become a continual process for growth and survival. In this paper a value chain model for the steel manufacturing sector is developed considering five primary activities and six support activities.

  13. Towards effective food chains : models and applications

    NARCIS (Netherlands)

    Trienekens, J.H.; Top, J.L.; Vorst, van der J.G.A.J.; Beulens, A.J.M.

    2010-01-01

    Food chain management research can help in the analysis and redesign of value creation and the product flow throughout the chain from primary producer down to the consumer. The aim is to meet consumer and societal requirements effectively at minimal cost. In the Wageningen UR strategic research

  14. A modeling framework for supply chain simulation

    NARCIS (Netherlands)

    van der Zee, D.J.; van der Vorst, J.G.A.J.

    2002-01-01

    In many industries logistic optimization on a company scale is no longer sufficient to meet the competition. Nowadays, competition takes place between supply chains. Intrinsic to the concept and success of a supply chain is the tuning of the activities of the companies involved. Given the complexity

  15. A maturity model for industrial supply chains

    NARCIS (Netherlands)

    Hameri, A.P.; McKay, K.N.; Wiers, V.C.S.

    2013-01-01

    This article takes an evolutionary view of supply chains to suggest a series of distinct, contextual phases for supply chain execution and what maturity might mean at each phase. For example, what is best practice in a mature industry might not be best practice in a pioneering situation. Three

  16. Hierarchical Markov Model in Life Insurance and Social Benefit Schemes

    Directory of Open Access Journals (Sweden)

    Jiwook Jang

    2018-06-01

    Full Text Available We explored the effect of the jump-diffusion process on a social benefit scheme consisting of life insurance, unemployment/disability benefits, and retirement benefits. To do so, we used a four-state Markov chain with multiple decrements. Assuming independent state-wise intensities taking the form of a jump-diffusion process and deterministic interest rates, we evaluated the prospective reserves for this scheme in which the individual is employed at inception. We then numerically demonstrated the state of the reserves for the scheme under jump-diffusion and non-jump-diffusion settings. By decomposing the reserve equation into five components, our numerical illustration indicated that an extension of the retirement age has a spillover effect that would increase government expenses for other social insurance programs. We also conducted sensitivity analyses and examined the total-reserves components by changing the relevant parameters of the transition intensities, which are the average jump-size parameter, average jump frequency, and diffusion parameters of the chosen states, with figures provided. Our computation revealed that the total reserve is most sensitive to changes in average jump frequency.
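
    As a rough numerical companion to the description above, the sketch below integrates Thiele's differential equations backwards for a four-state scheme with constant (non-jump-diffusion) intensities; the states, intensities, benefit rates, and interest rate are assumed values chosen only to show the mechanics of computing prospective reserves.

```python
# Prospective reserves via backward integration of Thiele's equations (assumed inputs).
import numpy as np

r = 0.02                      # force of interest
T, dt = 40.0, 0.01            # horizon (years) and step size

# constant intensities mu[i, j] of moving from state i to state j
# states: 0 employed, 1 unemployed/disabled, 2 retired, 3 dead
mu = np.zeros((4, 4))
mu[0, 1], mu[0, 2], mu[0, 3] = 0.05, 0.03, 0.01
mu[1, 0], mu[1, 3] = 0.20, 0.02
mu[2, 3] = 0.04

b = np.array([0.0, 0.6, 0.4, 0.0])     # continuous benefit rates per state
c = np.zeros((4, 4)); c[:, 3] = 1.0    # lump sum paid on death

V = np.zeros(4)                        # terminal condition V_i(T) = 0
for _ in range(int(T / dt)):           # backward Euler sweep from T to 0
    dV = r * V - b - np.array([
        sum(mu[i, j] * (c[i, j] + V[j] - V[i]) for j in range(4) if j != i)
        for i in range(4)])
    V = V - dt * dV                    # step from t to t - dt

print("prospective reserve by initial state:", np.round(V, 3))
```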

  17. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Science.gov (United States)

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multi-level structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a structure going as far as individuals,…

  18. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
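
    A schematic sketch of the re-weighting idea: starting from equal prior probabilities, each candidate age model is re-weighted by how well the proxy, placed on that age model, matches an independent estimate of the climate at the proxy location. Everything below (the reference series, the ensemble of jittered chronologies, the error scale) is invented for illustration.

```python
# Updating age-model weights from equal priors (conceptual sketch, assumed data).
import numpy as np

rng = np.random.default_rng(4)
n_models, n_samples = 50, 30

climate = np.sin(np.linspace(0, 6, 300))                 # reference field estimate
true_ages = np.sort(rng.choice(300, n_samples, replace=False))
proxy = climate[true_ages] + rng.normal(0, 0.2, n_samples)

# candidate age models: jittered versions of a counted chronology
age_models = np.clip(true_ages + rng.integers(-15, 16, (n_models, n_samples)), 0, 299)

sigma = 0.2
log_w = np.array([-0.5 * np.sum((proxy - climate[a]) ** 2) / sigma ** 2
                  for a in age_models])
w = np.exp(log_w - log_w.max())
w /= w.sum()                                             # posterior age-model weights
print("effective number of age models:", 1.0 / np.sum(w ** 2))
```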

  19. Calculation of single chain cellulose elasticity using fully atomistic modeling

    Science.gov (United States)

    Xiawa Wu; Robert J. Moon; Ashlie Martini

    2011-01-01

    Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...

  20. A hierarchical causal modeling for large industrial plants supervision

    International Nuclear Information System (INIS)

    Dziopa, P.; Leyval, L.

    1994-01-01

    A supervision system has to analyse the process current state and the way it will evolve after a modification of the inputs or a disturbance. It is proposed to base this analysis on a hierarchy of models, which differ by the number of involved variables and the abstraction level used to describe their temporal evolution. In a first step, special attention is paid to causal model building, starting from the most abstract one. Once the hierarchy of models has been built, the most detailed model parameters are estimated. Several models of different abstraction levels can be used for on-line prediction. These methods have been applied to a nuclear reprocessing plant. The abstraction level could be chosen on-line by the operator. Moreover, when an abnormal process behaviour is detected, a more detailed model is automatically triggered in order to focus the operator attention on the suspected subsystem. (authors). 11 refs., 11 figs

  1. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  2. Hierarchical modelling of line commutated power systems used in particle accelerators using Saber

    International Nuclear Information System (INIS)

    Reimund, J.A.

    1993-01-01

    This paper discusses the use of hierarchical simulation models using the program Saber trademark for the prediction of magnet ripple currents generated by the power supply/output filter combination. Modeling of an entire power system connected to output filters and particle accelerator ring magnets will be presented. Special emphasis is placed on the modeling of power source imbalances caused by transformer impedance imbalances and utility variances. The effect that these imbalances have on the harmonic content of the ripple current is also investigated

  3. A test of the hierarchical model of litter decomposition

    DEFF Research Database (Denmark)

    Bradford, Mark A.; Veen, G. F.; Bonis, Anne

    2017-01-01

    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls...... regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature...

  4. Simulating individual-based models of epidemics in hierarchical networks

    NARCIS (Netherlands)

    Quax, R.; Bader, D.A.; Sloot, P.M.A.

    2009-01-01

    Current mathematical modeling methods for the spreading of infectious diseases are too simplified and do not scale well. We present the Simulator of Epidemic Evolution in Complex Networks (SEECN), an efficient simulator of detailed individual-based models by parameterizing separate dynamics

  5. A three-component, hierarchical model of executive attention

    OpenAIRE

    Whittle, Sarah; Pantelis, Christos; Testa, Renee; Tiego, Jeggan; Bellgrove, Mark

    2017-01-01

    Executive attention refers to the goal-directed control of attention. Existing models of executive attention distinguish between three correlated, but empirically dissociable, factors related to selectively attending to task-relevant stimuli (Selective Attention), inhibiting task-irrelevant responses (Response Inhibition), and actively maintaining goal-relevant information (Working Memory Capacity). In these models, Selective Attention and Response Inhibition are moderately strongly correlate...

  6. An open-population hierarchical distance sampling model

    Science.gov (United States)

    Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  7. An open-population hierarchical distance sampling model.

    Science.gov (United States)

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott

    2015-02-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  8. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  9. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla. CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  10. Modeling Radionuclide Decay Chain Migration Using HYDROGEOCHEM

    Science.gov (United States)

    Lin, T. C.; Tsai, C. H.; Lai, K. H.; Chen, J. S.

    2014-12-01

    Nuclear technology has been employed for energy production for several decades. Although people receive many benefits from nuclear energy, there are inevitably environmental pollution and human health threats posed by releases of radioactive materials from nuclear waste disposed of in geological repositories or by accidental releases of radionuclides from nuclear facilities. Theoretical studies have been undertaken to understand the transport of radionuclides in subsurface environments because radionuclide transport in groundwater is one of the main pathways in exposure scenarios for the intake of radionuclides. Radionuclide transport in groundwater can be predicted using analytical solutions as well as numerical models. In this study, we simulate the transport of a radionuclide decay chain using HYDROGEOCHEM. The simulated results are verified against the analytical solution available in the literature. Excellent agreement between the numerical simulation and the analytical solution is observed over a wide range of concentrations. HYDROGEOCHEM is a useful tool for assessing the ecological and environmental impact of accidental radionuclide releases such as the Fukushima nuclear disaster, where multiple radionuclides leaked from the reactor, subsequently contaminating the local groundwater and ocean seawater in the vicinity of the nuclear plant.
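
    Closed-form benchmarks of the kind mentioned above usually come from the Bateman equations; the sketch below evaluates them for a hypothetical three-member chain, which is the sort of reference one might compare a numerical transport code against. The chain, half-lives, and initial inventory are invented, and the transport part of the problem is omitted.

```python
# Bateman equations for a linear decay chain with distinct decay constants.
import numpy as np

half_lives = np.array([30.0, 5.0, 80.0])       # years, hypothetical chain A -> B -> C
lam = np.log(2.0) / half_lives
N0 = 1.0                                       # initial amount of the parent only

def bateman(t):
    """Amounts N_1..N_3 at time t for a linear chain starting from the parent."""
    n = lam.size
    N = np.zeros(n)
    for i in range(n):
        coef = N0 * np.prod(lam[:i])            # lambda_1 * ... * lambda_i
        s = 0.0
        for j in range(i + 1):
            denom = np.prod([lam[k] - lam[j] for k in range(i + 1) if k != j])
            s += np.exp(-lam[j] * t) / denom
        N[i] = coef * s
    return N

for t in (0.0, 10.0, 50.0, 200.0):
    print(t, np.round(bateman(t), 4))
```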

  11. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    Science.gov (United States)

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

    The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model in every branching point of the thematic tree, and then integrates all the different local models to a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance which can shed light on "black-box" based machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
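
    A reduced sketch of the core idea, fitting one local classifier per branching point of a thematic hierarchy and chaining their predictions, here with scikit-learn random forests on synthetic spectral features. The two-level class tree and the data generator are invented, and the paper's full scheme additionally integrates the local models into a single global prediction.

```python
# One local random forest per internal node of a small thematic tree (toy example).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
tree = {"vegetated": ["forest", "grassland"], "non-vegetated": ["water", "urban"]}
fine_classes = [c for subs in tree.values() for c in subs]

# synthetic "pixels": 4 spectral bands with class-specific means
means = {c: rng.normal(0, 2, 4) for c in fine_classes}
X = np.vstack([rng.normal(means[c], 0.7, (200, 4)) for c in fine_classes])
y_fine = np.repeat(fine_classes, 200)
parent_of = {c: p for p, subs in tree.items() for c in subs}
y_coarse = np.array([parent_of[c] for c in y_fine])

# one random forest at the root, one per internal node
root_rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_coarse)
local_rf = {p: RandomForestClassifier(n_estimators=100, random_state=0)
               .fit(X[y_coarse == p], y_fine[y_coarse == p]) for p in tree}

def predict_hierarchical(X_new):
    coarse = root_rf.predict(X_new)
    out = np.empty(len(X_new), dtype=object)
    for p in tree:                       # descend the thematic tree per parent class
        mask = coarse == p
        if mask.any():
            out[mask] = local_rf[p].predict(X_new[mask])
    return out

acc = np.mean(predict_hierarchical(X) == y_fine)
print("training accuracy of the hierarchical classifier:", round(acc, 3))
```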

  12. The application of a hierarchical Bayesian spatiotemporal model for ...

    Indian Academy of Sciences (India)

    Process (GP) model by using the Gibbs sampling method. The result for ... good indicator of the HBST method. The statistical ... summary and discussion of future works are given ... spatiotemporal package in R language (R Core Team 2013).

  13. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    Since the publication of the first edition, many new Bayesian tools and methods have been developed for space-time data analysis, the predictive modeling of health outcomes, and other spatial biostatistical areas...

  14. Hierarchical models and iterative optimization of hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)

    2016-06-08

    A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed on the basis of localization of the global optimality conditions via contraction of the control set.

  15. Charge distribution in a two-chain dual model

    International Nuclear Information System (INIS)

    Fialkowski, K.; Kotanski, A.

    1983-01-01

    Charge distributions in multiple production processes are analysed using the dual chain model. A parametrisation of charge distributions for single dual chains based on the νp and anti-νp data is proposed. The rapidity charge distributions are then calculated for pp and anti-pp collisions and compared with the previous calculations based on the recursive cascade model of single chains. The results differ at the SPS collider energies and in the energy dependence of the net forward charge, supplying useful tests of the dual chain model. (orig.)

  16. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Drawing on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents the service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of service-oriented catering ...

  17. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    Science.gov (United States)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
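
    A bare-bones Equal Load Sharing bundle, to make the probabilistic ingredient of the hybrid approach concrete: wires get Weibull-distributed strengths, the bundle stress at a given strain is the wire stress times the surviving fraction, and the peak of that curve is the ELS bundle strength. The helical geometry, the hierarchy of levels, and the tension/torsion coupling of the full model are deliberately left out, and every parameter is an assumption.

```python
# Equal Load Sharing (Daniels-type) bundle of Weibull wires under imposed strain.
import numpy as np

rng = np.random.default_rng(6)
n_wires, weibull_m, sigma0 = 200, 5.0, 1.0

strengths = sigma0 * rng.weibull(weibull_m, n_wires)

E = 1.0                                      # wire stiffness: stress per wire = E * strain
strains = np.linspace(0, 2.0, 400)
bundle_stress = []
for eps in strains:
    wire_stress = E * eps
    survivors = np.sum(strengths > wire_stress)
    # ELS: broken wires carry nothing, so the bundle carries the surviving fraction
    bundle_stress.append(wire_stress * survivors / n_wires)

peak = max(bundle_stress)
print("peak bundle stress (per unit wire area):", round(peak, 3))
```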

  18. A hierarchical stress release model for synthetic seismicity

    Science.gov (United States)

    Bebbington, Mark

    1997-06-01

    We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to
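
    A toy, single-segment version of such a stress release model (far simpler than the hierarchical, multi-segment model described above), included only to illustrate stochastic loading, heterogeneous strength, and cascading stress transfer; all parameter values are invented.

```python
# Single-segment stress release toy model: load, trigger, cascade, dissipate.
import numpy as np

rng = np.random.default_rng(7)
n_nodes, steps = 50, 20000
strength = rng.normal(10.0, 1.0, n_nodes)     # heterogeneous node strengths
stress = np.zeros(n_nodes)
event_sizes = []

for t in range(steps):
    stress += rng.exponential(0.01, n_nodes)          # slow stochastic tectonic loading
    over = np.flatnonzero(stress > strength)
    if over.size:
        involved, front = set(), list(over)
        while front:                                  # cascade of stress transfer
            i = front.pop()
            if i in involved or stress[i] <= strength[i]:
                continue
            involved.add(i)
            excess = stress[i] - 0.1 * strength[i]    # drop node to 10% of its strength
            stress[i] -= excess
            for j in (i - 1, i + 1):                  # pass most of the excess on
                if 0 <= j < n_nodes:
                    stress[j] += 0.45 * excess        # the remainder is dissipated
                    front.append(j)
        event_sizes.append(len(involved))

print("events:", len(event_sizes), "largest event (nodes):", max(event_sizes))
```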

  19. Automated main-chain model building by template matching and iterative fragment extension

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2003-01-01

    A method for automated macromolecular main-chain model building is described. An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more C α positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition

  20. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    Science.gov (United States)

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…

  1. A hierarchical modeling of information seeking behavior of school ...

    African Journals Online (AJOL)

    The aim of this study was to investigate the information seeking behavior of school teachers in the public primary schools of rural areas of Nigeria and to draw up a model of their information-seeking behavior. A cross-sectional survey design was employed to carry out the research. Findings showed that the ...

  2. Generic Database Cost Models for Hierarchical Memory Systems

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is

  3. Generic database cost models for hierarchical memory systems

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is more

  4. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    OpenAIRE

    Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.

    2013-01-01

    The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...
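
    To make the first of the two model types concrete, here is a small thinning-based simulation of a continuous-time, nonhomogeneous linear growth (pure birth) chain, in which a pit in damage state n jumps to n+1 with intensity n*lambda(t); the intensity function and all numbers are assumptions, not values from the paper.

```python
# Nonhomogeneous linear growth (pure birth) Markov chain simulated by thinning.
import numpy as np

rng = np.random.default_rng(8)

def lam(t):
    """Time-dependent per-state transition intensity (assumed power-law decay)."""
    return 0.5 / (1.0 + t) ** 0.4

def simulate_pit(t_end=20.0, n0=1):
    t, n = 0.0, n0
    lam_max = lam(0.0)                      # lambda(t) is decreasing, so this bounds it
    while t < t_end:
        t += rng.exponential(1.0 / (n * lam_max))      # candidate jump time (thinning)
        if t < t_end and rng.random() < lam(t) / lam_max:
            n += 1                                     # accepted jump: state n -> n + 1
    return n

depths = np.array([simulate_pit() for _ in range(2000)])
print("mean damage state:", depths.mean(), " 95th percentile:", np.percentile(depths, 95))
```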

  5. Bayesian Hierarchical Distributed Lag Models for Summer Ozone Exposure and Cardio-Respiratory Mortality

    OpenAIRE

    Yi Huang; Francesca Dominici; Michelle Bell

    2004-01-01

    In this paper, we develop Bayesian hierarchical distributed lag models for estimating associations between daily variations in summer ozone levels and daily variations in cardiovascular and respiratory (CVDRESP) mortality counts for 19 large U.S. cities included in the National Morbidity Mortality Air Pollution Study (NMMAPS) for the period 1987-1994. At the first stage, we define a semi-parametric distributed lag Poisson regression model to estimate city-specific relative rates of CVDRESP ...

  6. Chain Risk Model for quantifying cost effectiveness of phytosanitary measures

    NARCIS (Netherlands)

    Benninga, J.; Hennen, W.H.G.J.; Schans, van de J.

    2010-01-01

    A Chain Risk Model (CRM) was developed for a cost effective assessment of phytosanitary measures. The CRM model can be applied to phytosanitary assessments of all agricultural product chains. In CRM, stages are connected by product volume flows with which pest infections can be spread from one stage

  7. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
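
    In the same spirit as the equilibrium equations mentioned above (though enormously simplified), a donor-controlled linear pool cascade can be solved for its steady state directly rather than by spin-up; the three-pool cascade and all rate constants below are assumptions for illustration only.

```python
# Steady state of a linear donor-controlled pool cascade: C*_i = I_i / k_i, solved in order.
import numpy as np

k = np.array([2.0, 0.5, 0.05])      # turnover rates (1/yr) of litter, fast soil, slow soil
f = np.array([0.4, 0.3])            # fraction of each pool's outflow passed downstream
litter_input = 0.6                  # kgC m-2 yr-1 supplied by the plant tiers

inputs = np.array([litter_input, 0.0, 0.0])
C_star = np.zeros(3)
for i in range(3):                  # dC_i/dt = I_i - k_i * C_i  =>  C*_i = I_i / k_i
    C_star[i] = inputs[i] / k[i]
    if i < 2:
        inputs[i + 1] += f[i] * k[i] * C_star[i]

print("steady-state pools (kgC m-2):", np.round(C_star, 3))
```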

  8. Generic Database Cost Models for Hierarchical Memory Systems

    OpenAIRE

    Manegold, Stefan; Boncz, Peter; Kersten, Martin

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is more and more becoming a significant---if not the major---cost component of database operations. If used properly, fast but small cache memories---usually organized in cascading hierarchy between CPU ...

  9. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling.

    Science.gov (United States)

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
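
    For reference, the (inverted) S-shaped weighting discussed above is often written in the one-parameter Tversky-Kahneman form w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma); the short sketch below just evaluates it for a few gamma values to show how strongly probabilities can be distorted across individuals or conditions (the gamma values are illustrative, not estimates from the study).

```python
# (Inverted) S-shaped probability weighting, Tversky-Kahneman one-parameter form.
import numpy as np

def weight(p, gamma):
    """w(p) = p^g / (p^g + (1-p)^g)^(1/g); gamma = 1 means no distortion."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

p = np.array([0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99])
for gamma in (1.0, 0.7, 0.4):          # smaller gamma = stronger distortion
    print(f"gamma={gamma}:", np.round(weight(p, gamma), 3))

# Distorted priors and likelihoods can then replace the raw values in Bayes' rule, e.g.
# posterior_odds = (weight(prior, g1) / weight(1 - prior, g1)) * (weight(lik, g2) / weight(1 - lik, g2))
```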

  10. Statistical shear lag model - unraveling the size effect in hierarchical composites.

    Science.gov (United States)

    Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D

    2015-05-01

    Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogenous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  11. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  12. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.

  13. Latent Variable Regression 4-Level Hierarchical Model Using Multisite Multiple-Cohorts Longitudinal Data. CRESST Report 801

    Science.gov (United States)

    Choi, Kilchan

    2011-01-01

    This report explores a new latent variable regression 4-level hierarchical model for monitoring school performance over time using multisite multiple-cohorts longitudinal data. This kind of data set has a 4-level hierarchical structure: time-series observation nested within students who are nested within different cohorts of students. These…

  14. System Dynamics Model for VMI&TPL Integrated Supply Chains

    Directory of Open Access Journals (Sweden)

    Guo Li

    2013-01-01

    Full Text Available This paper establishes the VMI-APIOBPCS II model by extending the VMI-APIOBPCS model from a serial supply chain to a distribution supply chain. TPL is then introduced to this VMI distribution supply chain, and the operational framework and process of the VMI&TPL integrated supply chain are analyzed in depth. On this basis the VMI-APIOBPCS II model is then extended to the VMI&TPL-APIOBPCS model and the VMI&TPL integrated operation mode is simulated. Finally, compared with the VMI-APIOBPCS model, TPL's important role in goods consolidation and risk sharing in the VMI&TPL integrated supply chain is analyzed in detail in terms of the bullwhip effect, inventory level, service level, and so on.
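
    A stripped-down, single-echelon APIOBPCS-style simulation is sketched below purely to illustrate the order-based production control logic underlying these models (orders = demand forecast plus fractional corrections of the inventory and pipeline gaps) and the bullwhip metric it produces; the VMI and TPL layers of the paper are not modelled, and every parameter is an assumption.

```python
# Single-echelon APIOBPCS-flavoured discrete-time simulation (many simplifications).
import numpy as np

rng = np.random.default_rng(9)
T = 200
lead, Ta, Ti, Tw = 4, 8.0, 4.0, 4.0        # lead time, forecast/inventory/WIP smoothing

demand = 20 + rng.normal(0, 2, T)
forecast, inv, target_inv = demand[0], 50.0, 50.0
orders = [demand[0]] * lead                 # pipeline of orders placed but not yet received
order_hist, demand_hist = [], []

for t in range(T):
    receipts = orders.pop(0)                # order placed `lead` periods ago arrives
    inv += receipts - demand[t]
    forecast += (demand[t] - forecast) / Ta             # exponential smoothing
    wip, target_wip = sum(orders), forecast * lead
    order = max(0.0, forecast + (target_inv - inv) / Ti + (target_wip - wip) / Tw)
    orders.append(order)
    order_hist.append(order)
    demand_hist.append(demand[t])

bullwhip = np.var(order_hist[50:]) / np.var(demand_hist[50:])
print("bullwhip ratio (order variance / demand variance):", round(bullwhip, 2))
```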

  15. Innovative supply chain optimization models with multiple uncertainty factors

    DEFF Research Database (Denmark)

    Choi, Tsan Ming; Govindan, Kannan; Li, Xiang

    2017-01-01

    Uncertainty is an inherent factor that affects all dimensions of supply chain activities. In today’s business environment, initiatives to deal with one specific type of uncertainty might not be effective since other types of uncertainty factors and disruptions may be present. These factors relate...... to supply chain competition and coordination. Thus, to achieve a more efficient and effective supply chain requires the deployment of innovative optimization models and novel methods. This preface provides a concise review of critical research issues regarding innovative supply chain optimization models...

  16. Principal-subordinate hierarchical multi-objective programming model of initial water rights allocation

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2009-06-01

    Full Text Available The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.

  17. Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.

    Directory of Open Access Journals (Sweden)

    Gregor Moenke

    Full Text Available Important biological processes like cell signalling and gene expression have noisy components and are very complex at the same time. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011) which is formulated in observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics in dependence on cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.

  18. Probabilistic inference: Task dependency and individual differences of probability weighting revealed by hierarchical Bayesian modelling

    Directory of Open Access Journals (Sweden)

    Moritz Boos

    2016-05-01

    Full Text Available Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model’s success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
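
    As an illustration of the kind of probability weighting discussed in this record, the sketch below applies a one-parameter (inverted) S-shaped weighting function to distorted Bayesian belief revision in an urn-ball setting. The Tversky-Kahneman functional form, the parameter values, and the function names are assumptions chosen for illustration, not the parameterization used in the study.

```python
import numpy as np

def weight(p, gamma):
    """One-parameter (inverted) S-shaped probability weighting function
    (Tversky-Kahneman form); gamma < 1 overweights small probabilities
    and underweights large ones."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def distorted_posterior(prior, likelihood, gamma_prior, gamma_lik):
    """Bayesian belief revision applied to subjectively distorted priors and
    likelihoods (illustrative only, not the study's exact model)."""
    unnorm = weight(prior, gamma_prior) * weight(likelihood, gamma_lik)
    return unnorm / unnorm.sum()

# Urn-ball example: two urns with prior 0.7/0.3 and likelihoods 0.6/0.4.
print(distorted_posterior([0.7, 0.3], [0.6, 0.4], gamma_prior=0.6, gamma_lik=0.8))
```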

  19. Modeling when people quit: Bayesian censored geometric models with hierarchical and latent-mixture extensions.

    Science.gov (United States)

    Okada, Kensuke; Vandekerckhove, Joachim; Lee, Michael D

    2018-02-01

    People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
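
    A minimal sketch of the censored geometric idea described above: each additional item is collected with a fixed continuation probability, and anyone who reaches the finite maximum is treated as censored there. The parameterization, data, and n_max value are hypothetical, and the hierarchical and latent-mixture extensions are not reproduced.

```python
import numpy as np

def censored_geometric_loglik(theta, counts, n_max):
    """Log-likelihood under a censored geometric model: after each item a
    person quits with probability theta; reaching n_max items means the
    environment ran out, so the observation is right-censored there."""
    loglik = 0.0
    for k in counts:
        if k < n_max:                      # quit voluntarily after k items
            loglik += (k - 1) * np.log(1 - theta) + np.log(theta)
        else:                              # forced stop: items exhausted
            loglik += (n_max - 1) * np.log(1 - theta)
    return loglik

# Toy data: most people quit early, two exhaust all 10 available items.
data = [2, 3, 1, 10, 4, 10, 2]
thetas = np.linspace(0.05, 0.95, 19)
best = max(thetas, key=lambda t: censored_geometric_loglik(t, data, n_max=10))
print("crude grid MLE of the quitting probability:", round(best, 2))
```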

  20. The Case for A Hierarchal System Model for Linux Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M; Gorda, B

    2009-06-05

    The computer industry today is no longer driven, as it was in the 40s, 50s and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (order 10s of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which are approximately doubling with every generation, are funded from investments in multiple markets: enterprise, desktops, games, embedded and specialized devices. Over and above these base technology investments are investments for critical technology elements such as microprocessors, chipsets and memory ASIC components. Investments for these components are spread across the same markets as the base semiconductor process investments. These second tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier 3 technology elements include networking (SAN, WAN and LAN), interconnects and large scalable SMP designs. Above these, in tier 4, are relatively small investments necessary to build very large, scalable, high-end or Leadership class systems. Primary among these are the specialized network designs of vertically integrated systems, etc.

  1. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    As correctly selecting partners in the supply chains of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of the agricultural supply chain. Secondly, a heuristic met...

  2. Emotional intelligence is a second-stratum factor of intelligence: evidence from hierarchical and bifactor models.

    Science.gov (United States)

    MacCann, Carolyn; Joseph, Dana L; Newman, Daniel A; Roberts, Richard D

    2014-04-01

    This article examines the status of emotional intelligence (EI) within the structure of human cognitive abilities. To evaluate whether EI is a 2nd-stratum factor of intelligence, data were fit to a series of structural models involving 3 indicators each for fluid intelligence, crystallized intelligence, quantitative reasoning, visual processing, and broad retrieval ability, as well as 2 indicators each for emotion perception, emotion understanding, and emotion management. Unidimensional, multidimensional, hierarchical, and bifactor solutions were estimated in a sample of 688 college and community college students. Results suggest adequate fit for 2 models: (a) an oblique 8-factor model (with 5 traditional cognitive ability factors and 3 EI factors) and (b) a hierarchical solution (with cognitive g at the highest level and EI representing a 2nd-stratum factor that loads onto g at λ = .80). The acceptable relative fit of the hierarchical model confirms the notion that EI is a group factor of cognitive ability, marking the expression of intelligence in the emotion domain. The discussion proposes a possible expansion of Cattell-Horn-Carroll theory to include EI as a 2nd-stratum factor of similar standing to factors such as fluid intelligence and visual processing.

  3. Action detection by double hierarchical multi-structure space-time statistical matching model

    Science.gov (United States)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complex information in videos and low detection efficiency, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) in temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model by both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  4. Oscillatory Critical Amplitudes in Hierarchical Models and the Harris Function of Branching Processes

    Science.gov (United States)

    Costin, Ovidiu; Giacomin, Giambattista

    2013-02-01

    Oscillatory critical amplitudes have been repeatedly observed in hierarchical models and, in the cases that have been taken into consideration, these oscillations are so small as to be hardly detectable. Hierarchical models are tightly related to iteration of maps and, in fact, very similar phenomena have been repeatedly reported in many fields of mathematics, like combinatorial evaluations and discrete branching processes. It is precisely in the context of branching processes with bounded offspring that T. Harris, in 1948, first set forth the possibility that the logarithm of the moment generating function of the rescaled population size, in the super-critical regime, does not grow near infinity as a power, but has an oscillatory prefactor (the Harris function). These oscillations have been observed numerically only much later and, while the origin is clearly tied to the discrete character of the iteration, the amplitude size is not so well understood. The purpose of this note is to reconsider the issue for hierarchical models and in what is arguably the most elementary setting—the pinning model—which actually just boils down to iteration of polynomial maps (and, notably, quadratic maps). In this note we show that the oscillatory critical amplitude for pinning models and the Harris function coincide. Moreover we make explicit the link between these oscillatory functions and the geometry of the Julia set of the map, thus making rigorous and quantitative some ideas set forth in Derrida et al. (Commun. Math. Phys. 94:115-132, 1984).

  5. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the idea of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows the use of these models for detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  6. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain, and then presents a service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model in order to guarantee food safety, logistics efficiency, price stability and so on. Practical implications: In order to evolve an efficient and effective service supply chain, the model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing different models of development. Originality/value: This paper gives a new definition of the service-oriented catering supply chain and offers a model to evaluate the performance of this catering supply chain.

  7. A hierarchical lattice spring model to simulate the mechanics of 2-D materials-based composites

    Directory of Open Access Journals (Sweden)

    Lucas Brely

    2015-07-01

    Full Text Available In the field of engineering materials, strength and toughness are typically two mutually exclusive properties. Structural biological materials such as bone, tendon or dentin have resolved this conflict and show unprecedented damage tolerance, toughness and strength levels. The common feature of these materials is their hierarchical heterogeneous structure, which contributes to increased energy dissipation before failure occurring at different scale levels. These structural properties are the key to exceptional bioinspired material mechanical properties, in particular for nanocomposites. Here, we develop a numerical model in order to simulate the mechanisms involved in damage progression and energy dissipation at different size scales in nano- and macro-composites, which depend both on the heterogeneity of the material and on the type of hierarchical structure. Both these aspects have been incorporated into a 2-dimensional model based on a Lattice Spring Model, accounting for geometrical nonlinearities and including statistically-based fracture phenomena. The model has been validated by comparing numerical results to continuum and fracture mechanics results as well as finite elements simulations, and then employed to study how structural aspects impact on hierarchical composite material properties. Results obtained with the numerical code highlight the dependence of stress distributions on matrix properties and reinforcement dispersion, geometry and properties, and how failure of sacrificial elements is directly involved in the damage tolerance of the material. Thanks to the rapidly developing field of nanocomposite manufacture, it is already possible to artificially create materials with multi-scale hierarchical reinforcements. The developed code could be a valuable support in the design and optimization of these advanced materials, drawing inspiration and going beyond biological materials with exceptional mechanical properties.

  8. Loss Performance Modeling for Hierarchical Heterogeneous Wireless Networks With Speed-Sensitive Call Admission Control

    DEFF Research Database (Denmark)

    Huang, Qian; Huang, Yue-Cai; Ko, King-Tim

    2011-01-01

    . This approach avoids unnecessary and frequent handoff between cells and reduces signaling overheads. An approximation model with guaranteed accuracy and low computational complexity is presented for the loss performance of multiservice traffic. The accuracy of numerical results is validated by comparing......A hierarchical overlay structure is an alternative solution that integrates existing and future heterogeneous wireless networks to provide subscribers with better mobile broadband services. Traffic loss performance in such integrated heterogeneous networks is necessary for an operator's network...

  9. A Review of the Wood Pellet Value Chain, Modern Value/Supply Chain Management Approaches, and Value/Supply Chain Models

    Directory of Open Access Journals (Sweden)

    Natalie M. Hughes

    2014-01-01

    Full Text Available We reviewed 153 peer-reviewed sources to provide identification of modern supply chain management techniques and exploration of supply chain modeling, to offer decision support to managers. Ultimately, the review is intended to assist member-companies of supply chains, mainly producers, improve their current management approaches, by directing them to studies that may be suitable for direct application to their supply chains and value chains for improved efficiency and profitability. We found that information on supply chain management and modeling techniques in general is available. However, few Canadian-based published studies exist regarding a demand-driven modeling approach to value/supply chain management for wood pellet production. Only three papers were found specifically on wood pellet value chain analysis. We propose that more studies should be carried out on the value chain of wood pellet manufacturing, as well as demand-driven management and modeling approaches with improved demand forecasting methods.

  10. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., that an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients
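
    The contrast between the gamma, lognormal, and inverse-gamma mixing distributions can be made concrete with a small simulation. The parameter values below are illustrative only (matched roughly to a common mean), not calibrated to the Texas, Michigan or Indiana datasets; the point is the heaviness of the right tail of the implied crash counts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, mean_rate = 100_000, 2.0

# Site-specific Poisson means drawn from three mixing distributions,
# each set up to have (roughly) the same overall mean of 2 crashes/site.
gamma_mix  = rng.gamma(shape=1.5, scale=mean_rate / 1.5, size=n_sites)
sigma      = 0.8
logn_mix   = rng.lognormal(mean=np.log(mean_rate) - sigma**2 / 2, sigma=sigma, size=n_sites)
invgam_mix = 1.0 / rng.gamma(shape=2.5, scale=1.0 / (mean_rate * 1.5), size=n_sites)

for name, mix in [("Poisson-gamma", gamma_mix),
                  ("Poisson-lognormal", logn_mix),
                  ("Poisson-inverse-gamma", invgam_mix)]:
    counts = rng.poisson(mix)
    print(f"{name:23s} mean={counts.mean():.2f}  99.9th percentile={np.percentile(counts, 99.9):.0f}")
```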

  11. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    improvements. The biological model of the replacement model is described in a previous paper and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow......Recent methodological improvements in replacement models comprising multi-level hierarchical Markov processes and Bayesian updating have hardly been implemented in any replacement model and the aim of this study is to present a sow replacement model that really uses these methodological...... herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds it is concluded that decision support concerning sow replacement only makes sense with parameters...

  12. Topics in Computational Bayesian Statistics With Applications to Hierarchical Models in Astronomy and Sociology

    Science.gov (United States)

    Sahai, Swupnil

    This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting big hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernovae data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.

  13. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing...... evolving fields like additive manufacturing....

  14. A Review of the Wood Pellet Value Chain, Modern Value/Supply Chain Management Approaches, and Value/Supply Chain Models

    OpenAIRE

    Hughes, Natalie M.; Shahi, Chander; Pulkki, Reino

    2014-01-01

    We reviewed 153 peer-reviewed sources to provide identification of modern supply chain management techniques and exploration of supply chain modeling, to offer decision support to managers. Ultimately, the review is intended to assist member-companies of supply chains, mainly producers, improve their current management approaches, by directing them to studies that may be suitable for direct application to their supply chains and value chains for improved efficiency and profitability. We found...

  15. Audiovisual preservation strategies, data models and value-chains

    OpenAIRE

    Addis, Matthew; Wright, Richard

    2010-01-01

    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business-models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models, and requirements for extension to support audiovisual files.

  16. Simple model of inhibition of chain-branching combustion processes

    Science.gov (United States)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical for hydrocarbon systems. The model is based on the generalised model of the combustion process with chain-branching reaction combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and leads to a change of reaction mode from the chain-branching reaction to a thermal mode of flame propagation. With increasing inhibitor, a transition from the chain-branching mode of reaction to a reaction with straight chains (non-branching chain reaction) is observed. The inhibition part of the model includes a block of three reactions to describe the influence of the inhibitor. The heat losses are incorporated into the model via Newton cooling. Flame extinction is the result of the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to a thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with the addition of the inhibitor observed using detailed kinetic models.
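
    The qualitative effect described above (inhibitor addition pulling the radical pool down and quenching the branching overshoot) can be illustrated with a schematic well-stirred radical balance. The rate expression and all rate constants below are placeholders, not the three-reaction inhibition block or the detailed potassium bicarbonate chemistry discussed in the record.

```python
import numpy as np
from scipy.integrate import solve_ivp

def radical_balance(t, y, k_branch, k_term, k_inhib, inhibitor):
    """Schematic radical pool: linear chain branching produces radicals,
    quadratic termination and the inhibitor reaction consume them."""
    r = y[0]
    return [k_branch * r - k_term * r**2 - k_inhib * inhibitor * r]

for inhibitor in (0.0, 0.5, 1.2):
    sol = solve_ivp(radical_balance, (0.0, 5.0), [1e-6],
                    args=(10.0, 50.0, 8.0, inhibitor))
    print(f"inhibitor level {inhibitor:.1f}: radical level at t=5 is {sol.y[0, -1]:.2e}")
```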

  17. Exploring Neural Network Models with Hierarchical Memories and Their Use in Modeling Biological Systems

    Science.gov (United States)

    Pusuluri, Sai Teja

    Energy landscapes are often used as metaphors for phenomena in biology, social sciences and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past, neural network models were used to mimic the storage and retrieval process of memories (patterns) in the brain. With advances in the field, these models are now being used in machine learning, deep learning and modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation focuses on neural network models both from a modeling perspective and from an energy landscape perspective. I first show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase of cellular interconversion is a deterministic process. Secondly, I explore the static features of landscapes generated using neural network models, such as sizes of basins of attraction and densities of metastable states. The simulation results show that the static landscape features are strongly dependent on the correlation strength and correlation structure between patterns. Using different hierarchical structures of the correlation between patterns affects the landscape features.

  18. Prognostics for Steam Generator Tube Rupture using Markov Chain model

    International Nuclear Information System (INIS)

    Kim, Gibeom; Heo, Gyunyoung; Kim, Hyeonmin

    2016-01-01

    This paper describes a prognostics method for evaluating and forecasting the ageing effect and demonstrates the procedure of prognostics for the Steam Generator Tube Rupture (SGTR) accident. The authors propose a data-driven method, MCMC (Markov Chain Monte Carlo), which is preferred to physical-model methods in terms of flexibility and availability. Degradation data are represented as the growth of burst probability over time. The Markov chain model is based on transition probabilities between states, and the states must be discrete variables. Therefore, the burst probability, which is a continuous variable, has to be discretized to apply the Markov chain model to the degradation data. The Markov chain model, which is one of the prognostics methods, is described, and a pilot demonstration for an SGTR accident is performed as a case study. The Markov chain model is strong since it can be applied without physical models as long as enough data are available. However, in the case of the discrete Markov chain used in this study, there must be loss of information when the given data are discretized and assigned to a finite number of states. In this process, the original information might not be reflected sufficiently in the prediction. This should be noted as a limitation of discrete models. Future work will study other prognostics methods, such as the GPM (General Path Model), which is also a data-driven method, and the particle filter, which belongs to the physical-model methods, and will conduct a comparative analysis.
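
    A minimal sketch of the discrete-state idea described in this record: the continuous burst probability is binned into a few degradation states and a transition matrix is propagated over inspection intervals. The states and transition probabilities below are illustrative, not values estimated from SGTR data.

```python
import numpy as np

# Degradation states from discretizing burst probability (state 0 = lowest,
# state 3 = highest); the transition matrix is illustrative only.
P = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.00, 0.85, 0.12, 0.03],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])   # most-degraded state is absorbing

state = np.array([1.0, 0.0, 0.0, 0.0])      # start in the least-degraded state
for year in range(1, 11):
    state = state @ P                        # propagate one inspection interval
    print(f"year {year:2d}: P(most degraded state) = {state[-1]:.3f}")
```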

  19. Market Competitiveness Evaluation of Mechanical Equipment with a Pairwise Comparisons Hierarchical Model.

    Science.gov (United States)

    Hou, Fujun

    2016-01-01

    This paper provides a description of how market competitiveness evaluations concerning mechanical equipment can be made in the context of multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with the required qualifications, and that the alternatives will be pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. Illustration and comparison show that the PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper in applications of PCbHDM.

  20. Hierarchical relaxation dynamics in a tilted two-band Bose-Hubbard model

    Science.gov (United States)

    Cosme, Jayson G.

    2018-04-01

    We numerically examine slow and hierarchical relaxation dynamics of interacting bosons described by a tilted two-band Bose-Hubbard model. The system is found to exhibit signatures of quantum chaos within the spectrum and the validity of the eigenstate thermalization hypothesis for relevant physical observables is demonstrated for certain parameter regimes. Using the truncated Wigner representation in the semiclassical limit of the system, dynamics of relevant observables reveal hierarchical relaxation and the appearance of prethermalized states is studied from the perspective of statistics of the underlying mean-field trajectories. The observed prethermalization scenario can be attributed to different stages of glassy dynamics in the mode-time configuration space due to dynamical phase transition between ergodic and nonergodic trajectories.

  1. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  2. Improving Hierarchical Models Using Historical Data with Applications in High-Throughput Genomics Data Analysis.

    Science.gov (United States)

    Li, Ben; Li, Yunxiao; Qin, Zhaohui S

    2017-06-01

    Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, hence the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available which should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrated superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.

  3. Hierarchical modelling of temperature and habitat size effects on population dynamics of North Atlantic cod

    DEFF Research Database (Denmark)

    Mantzouni, Irene; Sørensen, Helle; O'Hara, Robert B.

    2010-01-01

    and Beverton and Holt stock–recruitment (SR) models were extended by applying hierarchical methods, mixed-effects models, and Bayesian inference to incorporate the influence of these ecosystem factors on model parameters representing cod maximum reproductive rate and carrying capacity. We identified......Understanding how temperature affects cod (Gadus morhua) ecology is important for forecasting how populations will develop as climate changes in future. The effects of spawning-season temperature and habitat size on cod recruitment dynamics have been investigated across the North Atlantic. Ricker...

  4. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.X.; Wang, X.; Gao, Y.W., E-mail: ywgao@lzu.edu.cn; Zhou, Y.H.

    2013-11-15

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • The numerical model is set up through ABAQUS to verify and enhance the theoretical model. • We calculate two concerned mechanical responses: global displacement–load curve and local axial strain distribution. • Elastic–plasticity is the main character in the loading curve, and the friction between adjacent strands plays a significant role in the distribution map. -- Abstract: An unexpected degradation frequently occurs in superconducting cable (CICC) due to the mechanical response (deformation) when suffering from electromagnetic load and thermal load during operation. Because of the cable's hierarchical twisted configuration, it is difficult to quantitatively model the mechanical response. In addition, the local mechanical characteristics such as strain distribution could be hardly monitored via experimental method. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows the algorithm advancing successively from n + 1 stage (e.g. 3 × 3 × 5 subcable) to n stage (e.g. 3 × 3 subcable). There are no complicated numerical procedures required in this model. Meanwhile, the numerical model is set up through ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two concerned mechanical responses: global displacement–load curve and local axial strain distribution. We find that in the global displacement–load curve, the elastic–plasticity is the main character, and the higher-level cable shows enhanced nonlinear characteristics. As for the local distribution, the friction among adjacent strands plays a significant role in this map. The magnitude of friction strongly influences the regularity of the distribution at different twisted stages. More detailed results are presented in this paper.

  5. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    International Nuclear Information System (INIS)

    Li, Y.X.; Wang, X.; Gao, Y.W.; Zhou, Y.H.

    2013-01-01

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • The numerical model is set up through ABAQUS to verify and enhance the theoretical model. • We calculate two concerned mechanical responses: global displacement–load curve and local axial strain distribution. • Elastic–plasticity is the main character in the loading curve, and the friction between adjacent strands plays a significant role in the distribution map. -- Abstract: An unexpected degradation frequently occurs in superconducting cable (CICC) due to the mechanical response (deformation) when suffering from electromagnetic load and thermal load during operation. Because of the cable's hierarchical twisted configuration, it is difficult to quantitatively model the mechanical response. In addition, the local mechanical characteristics such as strain distribution could be hardly monitored via experimental method. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows the algorithm advancing successively from n + 1 stage (e.g. 3 × 3 × 5 subcable) to n stage (e.g. 3 × 3 subcable). There are no complicated numerical procedures required in this model. Meanwhile, the numerical model is set up through ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two concerned mechanical responses: global displacement–load curve and local axial strain distribution. We find that in the global displacement–load curve, the elastic–plasticity is the main character, and the higher-level cable shows enhanced nonlinear characteristics. As for the local distribution, the friction among adjacent strands plays a significant role in this map. The magnitude of friction strongly influences the regularity of the distribution at different twisted stages. More detailed results are presented in this paper.

  6. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd specific parameters is emphasized. The optimization model is described in a subsequent paper......Several replacement models have been presented in literature. In other applicational areas like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological...... improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimations at herd level and standard software that has been hardly implemented at all in any replacement model. The aim of this study is to present a sow replacement model...

  7. A hierarchical modeling methodology for the definition and selection of requirements

    Science.gov (United States)

    Dufresne, Stephane

    This dissertation describes the development of a requirements analysis methodology that takes into account the concept of operations and the hierarchical decomposition of aerospace systems. At the core of the methodology, the Analytic Network Process (ANP) is used to ensure the traceability between the qualitative and quantitative information present in the hierarchical model. The proposed methodology is implemented to the requirements definition of a hurricane tracker Unmanned Aerial Vehicle. Three research objectives are identified in this work; (1) improve the requirements mapping process by matching the stakeholder expectations with the concept of operations, systems and available resources; (2) reduce the epistemic uncertainty surrounding the requirements and requirements mapping; and (3) improve the requirements down-selection process by taking into account the level of importance of the criteria and the available resources. Several challenges are associated with the identification and definition of requirements. The complexity of the system implies that a large number of requirements are needed to define the systems. These requirements are defined early in the conceptual design, where the level of knowledge is relatively low and the level of uncertainty is large. The proposed methodology intends to increase the level of knowledge and reduce the level of uncertainty by guiding the design team through a structured process. To address these challenges, a new methodology is created to flow-down the requirements from the stakeholder expectations to the systems alternatives. A taxonomy of requirements is created to classify the information gathered during the problem definition. Subsequently, the operational and systems functions and measures of effectiveness are integrated to a hierarchical model to allow the traceability of the information. Monte Carlo methods are used to evaluate the variations of the hierarchical model elements and consequently reduce the

  8. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce full geometry of architecture based on a novel combination of facade structures detection, detailed windows propagation, and hierarchical model consolidation. Our method highlights the generation of geometric models automatically fitting the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region extracted by projecting the points into its corresponding two-dimensional plane is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling the scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond to the information of architectural design accurately, but also satisfy the requirements for visualization and analysis.

  9. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

    Full Text Available The large-scale adoption of electric vehicles (EVs), hybrid renewable energy systems (HRESs), and the increasing loads shall bring significant challenges to the microgrid. The methodology to model a microgrid with high EV and HRES penetration is the key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive EVs and HRESs, any single previous modelling approach is insufficient. Therefore, in this paper, the methodology named Hierarchical Agent-based Integrated Modelling Approach (HAIMA) is proposed. With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes to a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. Then the HAIMA further links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes to a comprehensive microgrid operation system, through which the assessment of the proposed model and the impact of EV adoption are achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment and shall be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  10. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as that of three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
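
    The variable-fidelity surrogate step can be sketched as follows: a cheap low-fidelity surrogate is corrected by a scaling factor plus a discrepancy model trained on a handful of high-fidelity samples. This is a simplified stand-in for improved hierarchical kriging (ordinary Gaussian-process regression, a single least-squares scaling factor, and Forrester-style toy functions), not the ASM-IHK algorithm itself, and the adaptive sampling loop is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def hf(x):  # expensive "high-fidelity" toy function
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def lf(x):  # cheap, biased "low-fidelity" approximation of it
    return 0.5 * hf(x) + 10 * (x - 0.5) - 5

x_lf = np.linspace(0, 1, 21).reshape(-1, 1)      # many cheap samples
x_hf = np.array([[0.0], [0.4], [0.6], [1.0]])    # few expensive samples

# Step 1: surrogate of the low-fidelity response.
gp_lf = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True)
gp_lf.fit(x_lf, lf(x_lf).ravel())

# Step 2: scale the LF prediction and model the remaining discrepancy
# at the high-fidelity points with a second Gaussian process.
lf_at_hf = gp_lf.predict(x_hf)
rho = float(np.linalg.lstsq(lf_at_hf.reshape(-1, 1), hf(x_hf).ravel(), rcond=None)[0][0])
gp_disc = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True)
gp_disc.fit(x_hf, hf(x_hf).ravel() - rho * lf_at_hf)

x_test = np.linspace(0, 1, 5).reshape(-1, 1)
vf_pred = rho * gp_lf.predict(x_test) + gp_disc.predict(x_test)
print(np.c_[x_test.ravel(), vf_pred, hf(x_test).ravel()])   # prediction vs truth
```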

  11. Performance analysis of Supply Chain Management with Supply Chain Operation reference model

    Science.gov (United States)

    Hasibuan, Abdurrozzaq; Arfah, Mahrani; Parinduri, Luthfi; Hernawati, Tri; Suliawati; Harahap, Bonar; Rahmah Sibuea, Siti; Krianto Sulaiman, Oris; purwadi, Adi

    2018-04-01

    This research was conducted at PT. Shamrock Manufacturing Corpora, a company that is required to think creatively and implement a competitive strategy by producing goods/services that are of higher quality and cheaper. It is therefore necessary to measure the performance of Supply Chain Management in order to improve competitiveness, and the company is required to optimize its production output to meet the export quality standard. This research begins with the creation of initial dimensions based on the Supply Chain Management processes, i.e. Plan, Source, Make, Delivery, and Return, with a hierarchy based on the Supply Chain Operations Reference attributes of Reliability, Responsiveness, Agility, Cost, and Asset. Key Performance Indicator identification becomes the benchmark in performance measurement, whereas Snorm De Boer normalization serves to equalize Key Performance Indicator values. The Analytical Hierarchy Process is used to assist in determining priority criteria. Measurement of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora shows that SC Responsiveness (0.649) has a higher weight (priority) than the other alternatives. The result of the performance analysis using the Supply Chain Operations Reference model of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora looks good, because its monitoring score falls between 50 and 100, which is considered good.
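
    The normalization step mentioned above can be illustrated with a short sketch: Snorm De Boer rescales every Key Performance Indicator to a 0-100 scale so that indicators with different units can be combined, and AHP-derived weights aggregate them. The KPIs, their min/max targets, and the weights below are hypothetical, not the values measured at PT. Shamrock Manufacturing Corpora.

```python
# Snorm De Boer normalization: map each KPI onto a 0-100 scale, then
# aggregate with AHP-style weights (all numbers below are illustrative).
def snorm_larger_is_better(actual, s_min, s_max):
    return (actual - s_min) / (s_max - s_min) * 100.0

def snorm_smaller_is_better(actual, s_min, s_max):
    return (s_max - actual) / (s_max - s_min) * 100.0

kpis = [
    # (name, actual value, min, max, larger-is-better, AHP weight)
    ("delivery reliability, %", 92.0, 80.0, 100.0, True,  0.35),
    ("order lead time, days",    6.0,  3.0,  10.0, False, 0.40),
    ("inventory days",          30.0, 20.0,  60.0, False, 0.25),
]

total = 0.0
for name, actual, lo, hi, larger, w in kpis:
    norm = snorm_larger_is_better if larger else snorm_smaller_is_better
    score = norm(actual, lo, hi)
    total += w * score
    print(f"{name:25s} score = {score:5.1f}   weight = {w:.2f}")
print(f"weighted SCM performance = {total:.1f} (a score between 50 and 100 is read as good)")
```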

  12. A simplified parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
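
    The prediction step of a higher-order multivariate Markov chain model of this family can be sketched as below: the next state distribution of each sequence is a convex combination of transition blocks applied to lagged state distributions of all sequences. The full combination is shown for clarity; the simplified and tridiagonal parsimonious variants restrict which blocks and weights are retained. All matrices, weights, and distributions here are randomly generated placeholders.

```python
import numpy as np

m_states, n_seq, order = 3, 2, 2
rng = np.random.default_rng(1)

def random_stochastic(n):
    """Random row-stochastic matrix used as a placeholder transition block."""
    p = rng.random((n, n))
    return p / p.sum(axis=1, keepdims=True)

# P[j][k][h]: transition block from sequence k at lag h+1 to sequence j.
P = [[[random_stochastic(m_states) for _ in range(order)]
      for _ in range(n_seq)] for _ in range(n_seq)]
lam = np.full((n_seq, n_seq, order), 1.0 / (n_seq * order))  # weights sum to 1 per j

# Lagged state distributions x[h][k] (lag h+1, sequence k).
x = [[np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.5, 0.3])] for _ in range(order)]

for j in range(n_seq):
    pred = sum(lam[j, k, h] * (x[h][k] @ P[j][k][h])
               for k in range(n_seq) for h in range(order))
    print(f"sequence {j}: predicted next-state distribution = {np.round(pred, 3)}")
```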

  13. A tridiagonal parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.

  14. Symmetry chains for the atomic shell model. I. Classification of symmetry chains for atomic configurations

    International Nuclear Information System (INIS)

    Gruber, B.; Thomas, M.S.

    1980-01-01

    In this article the symmetry chains for the atomic shell model are classified in such a way that they lead from the group SU(4l+2) to its subgroup SO_J(3). The atomic configurations (nl)^N transform like irreducible representations of the group SU(4l+2), while SO_J(3) corresponds to total angular momentum in SU(4l+2). The defining matrices for the various embeddings are given for each symmetry chain that is obtained. These matrices also define the projection onto the weight subspaces for the corresponding subsymmetries and thus relate the various quantum numbers and determine the branching of representations. It is shown in this article that three (interrelated) symmetry chains are obtained which correspond to L-S coupling, j-j coupling, and a seniority dependent coupling. Moreover, for l ≤ 6 these chains are complete, i.e., there are no other chains but these. In articles to follow, the symmetry chains that lead from the group SO(8l+5) to SO_J(3) will be discussed, with the entire atomic shell transforming like an irreducible representation of SO(8l+5). The transformation properties of the states of the atomic shell will be determined according to the various symmetry chains obtained. The symmetry lattice discussed in this article forms a sublattice of the larger symmetry lattice with SO(8l+5) as supergroup. Thus the transformation properties of the states of the atomic configurations, according to the various symmetry chains discussed in this article, will be obtained too. (author)

  15. Modeling sustainability in renewable energy supply chain systems

    Science.gov (United States)

    Xie, Fei

    This dissertation aims at modeling the sustainability of renewable fuel supply chain systems against emerging challenges. In particular, the dissertation focuses on biofuel supply chain system design, and develops an advanced modeling framework and corresponding solution methods for tackling challenges in sustaining biofuel supply chain systems. These challenges include: (1) integrating "environmental thinking" into long-term biofuel supply chain planning; (2) adopting multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) providing strategies for hedging against uncertainty in conversion technology; and (4) developing methodologies for long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed integer programs, which also involve multi-objective programming methods and two-stage/multistage stochastic programming methods. In particular, for the long-term sequential planning under uncertainties, to reduce the computational challenges due to the exponential expansion of the scenario tree, I also developed an efficient ND-Max method which is more efficient than CPLEX and the Nested Decomposition method. Through the result analysis of four independent studies, it is found that the proposed modeling frameworks can effectively improve economic performance, enhance environmental benefits and reduce risks due to system uncertainties for biofuel supply chain systems.

  16. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Directory of Open Access Journals (Sweden)

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of the performance of FHR recordings in a real-time setting.

  17. The SU(2 vertical stroke 3) spin chain sigma model

    International Nuclear Information System (INIS)

    Hernandez, R.; Lopez, E.

    2005-01-01

    The one-loop planar dilatation operator of N = 4 supersymmetric Yang-Mills is isomorphic to the Hamiltonian of an integrable PSU(2,2 vertical stroke 4) spin chain. We construct the non-linear sigma model describing the continuum limit of the SU(2 vertical stroke 3) subsector of the N = 4 chain. We explicitly identify the spin chain sigma model with the one for a superstring moving in AdS_5 x S^5 with large angular momentum along the five-sphere. (Abstract Copyright [2005], Wiley Periodicals, Inc.)

  18. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
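
    Following the usage pattern in the HDDM documentation, a typical model fit looks like the sketch below. The CSV file name and the 'stim' condition column are placeholders, and the sampler settings would need to be chosen appropriately for a real analysis.

```python
import hddm

# Trial-level data in HDDM's long format: columns 'rt', 'response', 'subj_idx'
# (plus any condition columns such as the hypothetical 'stim' used here).
data = hddm.load_csv('my_experiment.csv')

# Hierarchical drift-diffusion model, letting drift rate v vary by condition.
model = hddm.HDDM(data, depends_on={'v': 'stim'})
model.find_starting_values()      # optional: speeds up burn-in
model.sample(2000, burn=200)      # MCMC sampling of the joint posterior
model.print_stats()               # posterior summaries for each parameter
```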

  19. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift-diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs

  20. Effect of including decay chains on predictions of equilibrium-type terrestrial food chain models

    International Nuclear Information System (INIS)

    Kirchner, G.

    1990-01-01

    Equilibrium-type food chain models are commonly used for assessing the radiological impact to man from environmental releases of radionuclides. Usually these do not take into account the build-up of radioactive decay products during environmental transport. This may be a potential source of underprediction. To estimate the consequences of this simplification, the equations of an internationally recognised terrestrial food chain model have been extended to include decay chains of variable length. Example calculations show that for releases from light water reactors, as expected both during routine operation and in the case of severe accidents, the build-up of decay products during environmental transport is generally of minor importance. However, a considerable number of radionuclides of potential radiological significance have been identified which show marked contributions of decay products to the calculated contamination of human food and the resulting radiation dose rates. (author)
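
    The build-up effect discussed here is governed by the Bateman equations for a decay chain; a generic numerical illustration (not the extended food chain model of the paper, and with arbitrary decay constants) is sketched below.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Decay chain N1 -> N2 -> N3 (stable); the decay constants (1/day) are
      # illustrative, not values from the paper.
      lam = np.array([0.1, 0.02, 0.0])

      def bateman_rhs(t, N):
          dN = np.empty_like(N)
          dN[0] = -lam[0] * N[0]
          for i in range(1, len(N)):
              dN[i] = lam[i - 1] * N[i - 1] - lam[i] * N[i]
          return dN

      N0 = np.array([1.0, 0.0, 0.0])              # all activity starts in the parent
      sol = solve_ivp(bateman_rhs, (0.0, 200.0), N0, dense_output=True)

      for ti in (10.0, 50.0, 200.0):
          n1, n2, n3 = sol.sol(ti)
          print(f"t={ti:6.1f} d  parent={n1:.3f}  daughter={n2:.3f}  stable={n3:.3f}")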

  1. Automated main-chain model building by template matching and iterative fragment extension.

    Science.gov (United States)

    Terwilliger, Thomas C

    2003-01-01

    An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.

  2. A distribution planning model for natural gas supply chain: A case study

    International Nuclear Information System (INIS)

    Hamedi, Maryam; Zanjirani Farahani, Reza; Husseini, Mohammad Moattar; Esmaeilian, Gholam Reza

    2009-01-01

    In this paper, a real-world case study of a natural gas supply chain is investigated. Using concepts related to the natural gas industry and the relations among the components of the transmission and distribution network, a six-level supply chain has been introduced and presented schematically. The defined supply chain is a single-objective, multi-period, and single-product problem that is formulated as a mixed integer non-linear programming model, which can easily be linearized. The objective of this model is to minimize direct or indirect distribution costs. There are six groups of constraints, including capacity, input and output balancing, demand satisfaction, network flow continuity, and constraints relating to the required binary variables. The solution algorithm of the problem is hierarchical; in each step, one section of the problem is solved using an exact method and the outputs of this section are passed to the next section as inputs. Finally, it is shown that the problem is solved in a reasonable time and desirable results are attained. The proposed model and its solution approach have been applied to two gas trunk lines to demonstrate the potential cost savings.
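
    Although the paper formulates a mixed integer non-linear program solved hierarchically by exact methods, the flavour of the distribution-cost subproblem can be conveyed by a toy linear transportation problem; the sketch below uses illustrative numbers and scipy's linear programming routine rather than the authors' algorithm.

      import numpy as np
      from scipy.optimize import linprog

      # Toy transportation problem: 2 supply nodes, 3 demand nodes.
      # cost[i, j] = unit cost of shipping from supply i to demand j (illustrative).
      cost = np.array([[4.0, 6.0, 9.0],
                       [5.0, 3.0, 7.0]])
      supply = np.array([60.0, 80.0])        # capacity constraints
      demand = np.array([40.0, 50.0, 30.0])  # demand satisfaction

      n_s, n_d = cost.shape
      c = cost.ravel()                       # decision variables x[i, j], flattened row-major

      # Capacity: sum_j x[i, j] <= supply[i]
      A_ub = np.zeros((n_s, n_s * n_d))
      for i in range(n_s):
          A_ub[i, i * n_d:(i + 1) * n_d] = 1.0

      # Demand satisfaction: sum_i x[i, j] == demand[j]
      A_eq = np.zeros((n_d, n_s * n_d))
      for j in range(n_d):
          A_eq[j, j::n_d] = 1.0

      res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                    bounds=[(0, None)] * (n_s * n_d), method="highs")
      print("minimum distribution cost:", res.fun)
      print("flows:\n", res.x.reshape(n_s, n_d))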

  3. An extended chain Ising model and its Glauber dynamics

    International Nuclear Information System (INIS)

    Zhao Xing-Yu; Fan Xiao-Hui; Huang Yi-Neng; Huang Xin-Ru

    2012-01-01

    An extended chain Ising (ECI) model is first proposed; it combines the Ising chain model, single-spin double-well potentials, and a pure phonon heat bath with a specific energy exchange with the spins. The extension method is easy to apply to higher-dimensional cases. The single spin-flip probability (rate) of the ECI model is then deduced from the Boltzmann principle and general statistical principles of independent events, and the model is simplified to an extended chain Glauber-Ising (ECGI) model. Moreover, the relaxation dynamics of the ECGI model were simulated by the Monte Carlo method and compared with the predictions of the special chain Glauber-Ising (SCGI) model. The results of the two models were found to be consistent with each other when the Ising chain is long enough and the temperature is relatively low, which is the most valuable case for applications of the model. These findings show that the ECI model provides a firm physical basis for the widely used single spin-flip rate proposed by Glauber, and a possible route to obtaining single spin-flip rates of other forms and even multi-spin-flip rates. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
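
    For reference, plain Glauber single spin-flip dynamics for the ordinary Ising chain (without the double-well potentials and phonon bath of the ECI/ECGI extension) can be simulated in a few lines; the coupling, temperature and lattice size below are illustrative.

      import numpy as np

      def glauber_sweep(spins, J, T, rng):
          """One Monte Carlo sweep of Glauber single spin-flip dynamics on a ring."""
          n = spins.size
          beta = 1.0 / T
          for _ in range(n):
              i = rng.integers(n)
              # Energy change of flipping spin i, periodic boundary conditions.
              dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
              # Glauber flip probability.
              if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):
                  spins[i] = -spins[i]
          return spins

      rng = np.random.default_rng(1)
      spins = rng.choice([-1, 1], size=256)
      J, T = 1.0, 1.5                      # illustrative coupling and temperature
      for sweep in range(2000):
          glauber_sweep(spins, J, T, rng)
      print("mean magnetization per spin:", spins.mean())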

  4. Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.

    Directory of Open Access Journals (Sweden)

    Andrew Cron

    Full Text Available Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a
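
    A truncated Dirichlet-process Gaussian mixture of the kind underlying this approach can be fitted with off-the-shelf tools. The sketch below uses scikit-learn's BayesianGaussianMixture on synthetic two-dimensional events containing a rare subpopulation; it is a single-sample illustration, not the authors' hierarchical, GPU-accelerated HDPGMM software, and all data and settings are invented for the example.

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic 2-D "events": a large background population plus a rare
      # subset (~0.5% of events) at a distinct location.
      background = rng.normal([0.0, 0.0], 1.0, size=(10000, 2))
      rare = rng.normal([5.0, 5.0], 0.3, size=(50, 2))
      X = np.vstack([background, rare])

      # Truncated Dirichlet-process Gaussian mixture: unneeded components are
      # pruned automatically, so n_components only needs to be an upper bound.
      dpgmm = BayesianGaussianMixture(
          n_components=10,
          weight_concentration_prior_type="dirichlet_process",
          covariance_type="full",
          max_iter=500,
          random_state=0,
      )
      dpgmm.fit(X)

      labels = dpgmm.predict(X)
      print("component weights:", np.round(dpgmm.weights_, 4))
      print("smallest occupied cluster fraction:",
            min(np.mean(labels == k) for k in np.unique(labels)))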

  5. Hierarchical Models for Type Ia Supernova Light Curves in the Optical and Near Infrared

    Science.gov (United States)

    Mandel, Kaisey; Narayan, G.; Kirshner, R. P.

    2011-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova optical and near infrared light curves. Since the near infrared light curves are excellent standard candles and are less sensitive to dust extinction and reddening, the combination of near infrared and optical data better constrains the host galaxy extinction and improves the precision of distance predictions to SN Ia. A hierarchical probabilistic model coherently accounts for multiple random and uncertain effects, including photometric error, intrinsic supernova light curve variations and correlations across phase and wavelength, dust extinction and reddening, peculiar velocity dispersion and distances. An improved BayeSN MCMC code is implemented for computing probabilistic inferences for individual supernovae and the SN Ia and host galaxy dust populations. I use this hierarchical model to analyze nearby Type Ia supernovae with optical and near infrared data from the PAIRITEL, CfA3, and CSP samples and the literature. Using cross-validation to test the robustness of the model predictions, I find that the rms Hubble diagram scatter of predicted distance moduli is 0.11 mag for SN with optical and near infrared data versus 0.15 mag for SN with only optical data. Accounting for the dispersion expected from random peculiar velocities, the rms intrinsic prediction error is 0.08-0.10 mag for SN with both optical and near infrared light curves. I discuss results for the inferred intrinsic correlation structures of the optical-NIR SN Ia light curves and the host galaxy dust distribution captured by the hierarchical model. The continued observation and analysis of Type Ia SN in the optical and near infrared is important for improving their utility as precise and accurate cosmological distance indicators.

  6. A conceptual modeling framework for discrete event simulation using hierarchical control structures.

    Science.gov (United States)

    Furian, N; O'Sullivan, M; Walker, C; Vössner, S; Neubacher, D

    2015-08-01

    Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model-components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework that pays more attention to the identification of a model's system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.

  7. Ising model on tangled chain - 1: Free energy and entropy

    International Nuclear Information System (INIS)

    Mejdani, R.

    1993-04-01

    In this paper we have considered an Ising model defined on a tangled chain, in which additional bonds have been added to those of the pure Ising chain. To understand their competition, particularly between ferromagnetic and antiferromagnetic bonds, we have studied, using the transfer matrix method, simple analytical calculations and an iterative algorithm, the behaviour of the free energy and entropy, particularly in the zero-field and zero-temperature limit, for different configurations of the ferromagnetic tangled chain and different types of added interaction (ferromagnetic or antiferromagnetic). We found that the condition J=J' between the ferromagnetic interaction J along the chain and the antiferromagnetic interaction J' across the chain acts somewhat as a ''transition-region'' condition for this behaviour. Our results also indicate the existence of non-zero entropy at zero temperature. (author). 17 refs, 8 figs
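
    The transfer matrix calculation referred to here is standard for the pure Ising chain; a minimal numerical version for the plain chain (without the extra tangled bonds studied in the paper, and with arbitrary J, h and T) is sketched below.

      import numpy as np

      def ising_chain_free_energy(J, h, T, n_sites=None):
          """Free energy per site of the 1D Ising chain via the transfer matrix.

          Transfer matrix: V[s, s'] = exp(beta*(J*s*s' + h*(s + s')/2)),
          with spins s, s' in {+1, -1}. Parameters are illustrative.
          """
          beta = 1.0 / T
          spins = np.array([1.0, -1.0])
          V = np.exp(beta * (J * np.outer(spins, spins)
                             + 0.5 * h * (spins[:, None] + spins[None, :])))
          lam = np.sort(np.linalg.eigvalsh(V))[::-1]   # eigenvalues, largest first
          if n_sites is None:
              return -T * np.log(lam[0])               # thermodynamic limit
          # Periodic chain of n_sites spins: Z = lam[0]**n + lam[1]**n
          log_z = n_sites * np.log(lam[0]) + np.log1p((lam[1] / lam[0]) ** n_sites)
          return -T * log_z / n_sites

      for T in (0.25, 1.0, 2.0):
          f = ising_chain_free_energy(J=1.0, h=0.0, T=T, n_sites=1000)
          print(f"T={T}: free energy per site = {f:.4f}")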

  8. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ 2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  9. Thermal conductivity of the Lennard-Jones chain fluid model.

    Science.gov (United States)

    Galliero, Guillaume; Boned, Christian

    2009-12-01

    Nonequilibrium molecular dynamics simulations have been performed to estimate, analyze, and correlate the thermal conductivity of a fluid composed of short Lennard-Jones chains (up to 16 segments) over a large range of thermodynamic conditions. It is shown that the dilute gas contribution to the thermal conductivity decreases when the chain length increases for a given temperature. In dense states, simulation results indicate that the residual thermal conductivity of the monomer increases strongly with density, but is weakly dependent on the temperature. Compared to the monomer value, it has been noted that the residual thermal conductivity of the chain was slightly decreasing with its length. Using these results, an empirical relation, including a contribution due to the critical enhancement, is proposed to provide an accurate estimation of the thermal conductivity of the Lennard-Jones chain fluid model (up to 16 segments) over the domain 0.8values of the Lennard-Jones chain fluid model merge on the same "universal" curve when plotted as a function of the excess entropy. Furthermore, it is shown that the reduced configurational thermal conductivity of the Lennard-Jones chain fluid model is approximately proportional to the reduced excess entropy for all fluid states and all chain lengths.

  10. [Healthcare value chain: a model for the Brazilian healthcare system].

    Science.gov (United States)

    Pedroso, Marcelo Caldeira; Malik, Ana Maria

    2012-10-01

    This article presents a model of the healthcare value chain which consists of a schematic representation of the Brazilian healthcare system. The proposed model is adapted for the Brazilian reality and has the scope and flexibility for use in academic activities and analysis of the healthcare sector in Brazil. It places emphasis on three components: the main activities of the value chain, grouped in vertical and horizontal links; the mission of each link and the main value chain flows. The proposed model consists of six vertical and three horizontal links, amounting to nine. These are: knowledge development; supply of products and technologies; healthcare services; financial intermediation; healthcare financing; healthcare consumption; regulation; distribution of healthcare products; and complementary and support services. Four flows can be used to analyze the value chain: knowledge and innovation; products and services; financial; and information.

  11. Hierarchical Self Assembly of Patterns from the Robinson Tilings: DNA Tile Design in an Enhanced Tile Assembly Model.

    Science.gov (United States)

    Padilla, Jennifer E; Liu, Wenyan; Seeman, Nadrian C

    2012-06-01

    We introduce a hierarchical self assembly algorithm that produces the quasiperiodic patterns found in the Robinson tilings and suggest a practical implementation of this algorithm using DNA origami tiles. We modify the abstract Tile Assembly Model (aTAM) to include active signaling and glue activation in response to signals to coordinate the hierarchical assembly of Robinson patterns of arbitrary size from a small set of tiles according to the tile substitution algorithm that generates them. Enabling coordinated hierarchical assembly in the aTAM makes possible the efficient encoding of the recursive process of tile substitution.

  12. Switching Markov chains for a holistic modeling of SIS unavailability

    International Nuclear Information System (INIS)

    Mechri, Walid; Simon, Christophe; BenOthman, Kamel

    2015-01-01

    This paper proposes a holistic approach to model Safety Instrumented Systems (SIS). The model is based on Switching Markov Chains and integrates several parameters such as common cause failure, imperfect proof testing, partial proof testing, etc. The basic concepts of Switching Markov Chains applied to reliability analysis are introduced and a model to compute the unavailability for a case study is presented. The proposed Switching Markov Chain allows us to assess the effect of each parameter on the SIS performance. The proposed method ensures the relevance of the results. - Highlights: • A holistic approach to model the unavailability of safety systems using Switching Markov chains. • The model integrates several parameters such as the probability of failure due to the test and the probability of not detecting a failure in a test. • The basic concepts of Switching Markov Chains are introduced and applied to compute the unavailability of safety systems. • The proposed Switching Markov Chain allows assessing the effect of each parameter on the SIS performance

  13. Markov Chain Models for the Stochastic Modeling of Pitting Corrosion

    Directory of Open Access Journals (Sweden)

    A. Valor

    2013-01-01

    Full Text Available The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure birth) Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled after the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
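
    The discrete pit-depth Markov chain idea can be illustrated with a short state-distribution propagation; the sketch below assumes a simple nonhomogeneous pure-birth chain whose advance probability decays with time, an invented stand-in for the calibrated intensity functions of the paper.

      import numpy as np

      def pit_depth_distribution(n_states, times, rho=0.8, t0=1.0):
          """Propagate a discrete pit-depth distribution through a nonhomogeneous
          pure-birth Markov chain.

          The probability of advancing one depth state during [t, t+dt] is taken
          as p(t) = rho * dt / (t + t0), i.e. growth slows with time; rho and t0
          are illustrative, not fitted values.
          """
          probs = np.zeros(n_states)
          probs[0] = 1.0                       # all pits start in the shallowest state
          for t, step in zip(times[:-1], np.diff(times)):
              p = min(1.0, rho * step / (t + t0))
              new = probs * (1.0 - p)          # stay in the same state
              new[1:] += probs[:-1] * p        # advance one depth state
              new[-1] += probs[-1] * p         # deepest state is absorbing
              probs = new
          return probs

      times = np.linspace(0.0, 30.0, 3001)     # years
      dist = pit_depth_distribution(n_states=50, times=times)
      depth_index = np.arange(1, 51)
      print("mean pit depth state after 30 years:", (depth_index * dist).sum())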

  14. Process-based modelling of tree and stand growth: towards a hierarchical treatment of multiscale processes

    International Nuclear Information System (INIS)

    Makela, A.

    2003-01-01

    A generally accepted method has not emerged for managing the different temporal and spatial scales in a forest ecosystem. This paper reviews a hierarchical-modular modelling tradition, with the main focus on individual tree growth throughout the rotation. At this scale, model performance requires (i) realistic long-term dynamic properties, (ii) realistic responses of growth and mortality of competing individuals, and (iii) realistic responses to ecophysiological inputs. Model development and validation are illustrated through allocation patterns, height growth, and size-related feedbacks. Empirical work to test the approach is reviewed. In this approach, finer scale effects are embedded in parameters calculated using more detailed, interacting modules. This is exemplified by (i) the within-year effect of weather on annual photosynthesis, (ii) the effects of fast soil processes on carbon allocation and photosynthesis, and (iii) the utilization of detailed stem structure to predict wood quality. Prevailing management paradigms are reflected in growth modelling. A shift of emphasis has occurred from productivity in homogeneous canopies towards, e.g., wood quality versus total yield, spatially more explicit models, and growth decline in old-growth forests. The new problems emphasize the hierarchy of the system and interscale interactions, suggesting that the hierarchical-modular approach could prove constructive. (author)

  15. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  16. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

    Full Text Available Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on the knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases as well as global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long dependency feature extraction model based on the bidirectional long–short-term memory (BiLSTM) neural network model is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish the training, validation and test datasets, containing 72,532, 18,133, and 2679 mechanical patent documents, respectively, and then check the performance of the HFEM. Finally, we compared the results of the proposed HFEM and three other single neural network models, namely CNN, long–short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.

  17. A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.

    Directory of Open Access Journals (Sweden)

    Guillaume Bal

    Full Text Available Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding the ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts, as it does not disentangle seasonality and long-term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time-varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) an emotive simulated example and (ii) application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach fits the data better and, contrary to the linear regression, does not exhibit forecasting bias in long-term trends. This new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around forecasts. The warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.

  18. A conceptual modeling framework for discrete event simulation using hierarchical control structures

    Science.gov (United States)

    Furian, N.; O’Sullivan, M.; Walker, C.; Vössner, S.; Neubacher, D.

    2015-01-01

    Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model-components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM’s applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework that pays more attention to the identification of a model’s system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example. PMID:26778940

  19. Modeling Uncertainty of Directed Movement via Markov Chains

    Directory of Open Access Journals (Sweden)

    YIN Zhangcai

    2015-10-01

    Full Text Available Probabilistic time geography (PTG) is suggested as an extension of (classical) time geography, in order to express by a probability the uncertainty of an agent being located at an accessible position. This may provide a quantitative basis for finding the most likely location of an agent. In recent years, PTG based on the normal distribution or the Brownian bridge has been proposed; its variance, however, is either unrelated to the agent's speed or diverges as the speed increases, so such models struggle to balance applicability and stability. In this paper, a new method is proposed to model PTG based on Markov chains. First, a bidirectional conditional Markov chain is modeled, whose limit, when the moving speed is large enough, can be regarded as the Brownian bridge and thus has the property of numerical stability. Then, the directed movement is mapped to Markov chains. The essential part is to build the step length, the state space and the transition matrix of the Markov chain according to the space-time position of the directed movement and the movement speed information, so that the Markov chain is related to the movement speed. Finally, by continuously calculating the probability distribution of the directed movement at any time with the Markov chains, the probability of the agent being located at each accessible position can be obtained. Experimental results show that the variance based on Markov chains is not only related to speed, but also tends towards stability as the agent's maximum speed increases.
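
    The core construction, propagating a random walk through a transition matrix and conditioning on reaching the destination on time, can be sketched in one dimension as follows; the grid size, step probability and endpoints are illustrative and this is not the authors' calibrated space-time model.

      import numpy as np

      def step_matrix(n_cells, p_move=0.3):
          """Transition matrix of a lazy 1-D random walk on n_cells grid cells."""
          P = np.zeros((n_cells, n_cells))
          for i in range(n_cells):
              P[i, i] = 1.0 - 2.0 * p_move if 0 < i < n_cells - 1 else 1.0 - p_move
              if i > 0:
                  P[i, i - 1] = p_move
              if i < n_cells - 1:
                  P[i, i + 1] = p_move
          return P

      def visit_probability(n_cells, origin, destination, n_steps, t, p_move=0.3):
          """P(agent is in each cell at time t | it starts at origin at time 0
          and reaches destination at time n_steps), via forward/backward passes."""
          P = step_matrix(n_cells, p_move)
          forward = np.zeros(n_cells); forward[origin] = 1.0
          for _ in range(t):
              forward = forward @ P            # distribution at time t
          backward = np.zeros(n_cells); backward[destination] = 1.0
          for _ in range(n_steps - t):
              backward = backward @ P.T        # prob. of reaching destination in time
          joint = forward * backward
          return joint / joint.sum()

      dist = visit_probability(n_cells=21, origin=2, destination=18, n_steps=40, t=20)
      print("most probable cell at mid-time:", int(dist.argmax()))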

  20. A non-parametric hierarchical model to discover behavior dynamics from tracks

    NARCIS (Netherlands)

    Kooij, J.F.P.; Englebienne, G.; Gavrila, D.M.

    2012-01-01

    We present a novel non-parametric Bayesian model to jointly discover the dynamics of low-level actions and high-level behaviors of tracked people in open environments. Our model represents behaviors as Markov chains of actions which capture high-level temporal dynamics. Actions may be shared by

  1. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
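
    The kind of calculation described reduces to solving pi Q = 0 for the stationary distribution of a continuous-time Markov chain; a three-state example with purely illustrative failure and repair rates (not plant data) is sketched below.

      import numpy as np

      # States: 0 = full power, 1 = derated, 2 = forced outage.
      # Off-diagonal entries of the generator matrix Q are transition rates (1/h);
      # the numbers are purely illustrative.
      Q = np.array([
          [-0.0012,  0.0010,  0.0002],   # from full power
          [ 0.0200, -0.0250,  0.0050],   # from derated
          [ 0.0100,  0.0000, -0.0100],   # from forced outage
      ])

      # Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
      A = np.vstack([Q.T, np.ones(Q.shape[0])])
      b = np.zeros(Q.shape[0] + 1)
      b[-1] = 1.0
      pi, *_ = np.linalg.lstsq(A, b, rcond=None)

      print("stationary state probabilities:", np.round(pi, 4))
      print("probability of being on line (full power or derated):", round(pi[0] + pi[1], 4))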

  2. Inozemtsev's hyperbolic spin model and its related spin chain

    International Nuclear Information System (INIS)

    Barba, J.C.; Finkel, F.; Gonzalez-Lopez, A.; Rodriguez, M.A.

    2010-01-01

    In this paper we study Inozemtsev's su(m) quantum spin model with hyperbolic interactions and the associated spin chain of Haldane-Shastry type introduced by Frahm and Inozemtsev. We compute the spectrum of Inozemtsev's model, and use this result and the freezing trick to derive a simple analytic expression for the partition function of the Frahm-Inozemtsev chain. We show that the energy levels of the latter chain can be written in terms of the usual motifs for the Haldane-Shastry chain, although with a different dispersion relation. The formula for the partition function is used to analyze the behavior of the level density and the distribution of spacings between consecutive unfolded levels. We discuss the relevance of our results in connection with two well-known conjectures in quantum chaos.

  3. Hierarchical modeling of plasma and transport phenomena in a dielectric barrier discharge reactor

    Science.gov (United States)

    Bali, N.; Aggelopoulos, C. A.; Skouras, E. D.; Tsakiroglou, C. D.; Burganos, V. N.

    2017-12-01

    A novel dual-time hierarchical approach is developed to link the plasma process to macroscopic transport phenomena in the interior of a dielectric barrier discharge (DBD) reactor that has been used for soil remediation (Aggelopoulos et al 2016 Chem. Eng. J. 301 353-61). The generation of active species by plasma reactions is simulated at the microseconds (µs) timescale, whereas convection and thermal conduction are simulated at the macroscopic (minutes) timescale. This hierarchical model is implemented in order to investigate the influence of the plasma DBD process on the transport and reaction mechanisms during remediation of polluted soil. In the microscopic model, the variables of interest include the plasma-induced reactive species concentrations, while the macroscopic approach considers the temperature distribution and the velocity field, both inside the discharge gap and within the polluted soil material. For the latter model, the Navier-Stokes and Darcy-Brinkman equations for the transport phenomena in the porous domain are solved numerically using FEM software. The effective medium theory is employed to provide estimates of the effective time-evolving and three-phase transport properties in the soil sample. Model predictions considering the temporal evolution of the plasma remediation process are presented and compared with corresponding experimental data.

  4. A model of shape memory materials with hierarchical twinning: statics and dynamics

    International Nuclear Information System (INIS)

    Saxena, A.; Bishop, A.R.; Wu, Y.; Lookman, T.

    1995-01-01

    We consider a model of shape memory materials in which hierarchical twinning near the habit plane (austenite-martensite interface) is a new and crucial ingredient. The model includes (1) a triple-well potential (φ^6 model) in local shear strain, (2) strain gradient terms up to second order in strain and fourth order in gradient, and (3) all symmetry allowed compositional fluctuation-induced strain gradient terms. The last term favors hierarchy which enables communication between macroscopic (cm) and microscopic (Å) regions essential for shape memory. Hierarchy also stabilizes tweed formation (criss-cross patterns of twins). External stress or pressure modulates (''patterns'') the spacing of domain walls. Therefore the ''pattern'' is encoded in the modulated hierarchical variation of the depth and width of the twins. This hierarchy of length scales provides a related hierarchy of time scales and thus the possibility of non-exponential decay. The four processes of the complete shape memory cycle - write, record, erase and recall - are explained within this model. Preliminary results based on 2D molecular dynamics are shown for tweed and hierarchy formation. (orig.)

  5. Relating Memory To Functional Performance In Normal Aging to Dementia Using Hierarchical Bayesian Cognitive Processing Models

    Science.gov (United States)

    Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.

    2012-01-01

    Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
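
    The two latent quantities used in the SDT component, discriminability and response bias, have simple closed forms in the equal-variance case; the sketch below computes them from raw hit and false-alarm counts with a common log-linear correction, as a point of reference rather than the paper's hierarchical Bayesian estimation.

      from scipy.stats import norm

      def sdt_parameters(hits, misses, false_alarms, correct_rejections):
          """Equal-variance SDT: discriminability d' and response criterion c.

          A log-linear correction (add 0.5 to each cell) keeps the rates away
          from 0 and 1; this is a common convention, not the paper's model.
          """
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
          d_prime = z_hit - z_fa                  # memory / discriminability
          criterion = -0.5 * (z_hit + z_fa)       # response bias
          return d_prime, criterion

      d, c = sdt_parameters(hits=18, misses=2, false_alarms=4, correct_rejections=16)
      print(f"d' = {d:.2f}, criterion c = {c:.2f}")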

  6. Hierarchical Bayesian Markov switching models with application to predicting spawning success of shovelnose sturgeon

    Science.gov (United States)

    Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.

    2009-01-01

    The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.

  7. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.

  8. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    Science.gov (United States)

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

    Science.gov (United States)

    Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.

    2010-01-01

    Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.

  10. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    Science.gov (United States)

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  11. Hierarchical competition models with the Allee effect II: the case of immigration.

    Science.gov (United States)

    Assas, Laila; Dennis, Brian; Elaydi, Saber; Kwessi, Eddy; Livadiotis, George

    2015-01-01

    This is part II of an earlier paper that dealt with hierarchical models with the Allee effect but with no immigration. In this paper, we greatly simplify the proofs in part I and provide a proof of the global dynamics of the non-hyperbolic cases that were previously conjectured. Then, we show how immigration to one of the species or to both would, drastically, change the dynamics of the system. It is shown that if the level of immigration to one or to both species is above a specified level, then there will be no extinction region where both species go to extinction.

  12. High-accuracy critical exponents for O(N) hierarchical 3D sigma models

    International Nuclear Information System (INIS)

    Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.

    2006-01-01

    The critical exponent γ and its subleading exponent Δ in the 3D O(N) Dyson hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ⃗·φ⃗ − 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, to at least 4 significant digits

  13. A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs

    DEFF Research Database (Denmark)

    Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard

    2016-01-01

    Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs...... for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on...

  14. MEASURING THE DATA MODEL QUALITY IN THE ESUPPLY CHAINS

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2012-03-01

    Full Text Available The implementation of Internet technology in business has enabled the development of e-business supply chains with large-scale information integration among all partners. The development of information systems (IS) is based on the established business objectives, whose achievement, among other things, directly depends on the quality of IS development and design. In the process of analysing the key elements of company operations in the supply chain, a process model and a corresponding data model are designed, which should enable the selection of an appropriate information system architecture. The quality of the implemented information system, which supports the e-supply chain, directly depends on the level of data model quality. One of the serious limitations of the data model is its complexity. With a large number of entities, a data model is difficult to analyse, monitor and maintain. The problem gets bigger when looking at an integrated data model at the level of the participating partners in the supply chain, where the data model usually consists of hundreds or even thousands of entities. The paper will analyse the key elements affecting the quality of data models and show their interactions and factors of significance. In addition, the paper presents various measures for assessing the quality of the data model, which make it possible to easily locate problems and focus efforts on specific parts of a complex data model where it is not economically feasible to review every detail of the model.

  15. Kinks, chains, and loop groups in the CPn sigma models

    International Nuclear Information System (INIS)

    Harland, Derek

    2009-01-01

    We consider topological solitons in the CP^n sigma models in two space dimensions. In particular, we study 'kinks', which are independent of one coordinate up to a rotation of the target space, and 'chains', which are periodic in one coordinate up to a rotation of the target space. Kinks and chains both exhibit constituents, similar to monopoles and calorons in SU(n) Yang-Mills-Higgs and Yang-Mills theories. We examine the constituent structure using Lie algebras.

  16. A Model for Sustainable Value Creation in Supply Chain

    OpenAIRE

    KORDİTABAR, Seyed Behzad

    2015-01-01

    Abstract. In order to survive, every company needs to achieve sustainable profitability, which is impossible unless there is sustainable value creation. Regarding the fact that sustainability is closely related with concepts of supply chain management, the present paper intends to propose through a conceptual theorization approach a new comprehensive model drawing on concepts of value creation and sustainability from the perspective of supply chain, specifying the dimensions contributing to s...

  17. Demographic Modeling Via 3-dimensional Markov Chains

    OpenAIRE

    Viquez, Juan Jose; Campos, Alexander; Loria, Jorge; Mendoza, Luis Alfredo; Viquez, Jorge Aurelio

    2017-01-01

    This article presents a new model for demographic simulation which can be used to forecast and estimate the number of people in pension funds (contributors and retirees) as well as workers in a public institution. Furthermore, the model introduces opportunities to quantify the financial flows coming from future populations such as salaries, contributions, salary supplements, employer contribution to savings/pensions, among others. The implementation of this probabilistic model will be of great ...

  18. Modelling of 137Cs concentration change in organisms of the Japanese coastal food chains

    International Nuclear Information System (INIS)

    Tateda, Y.; Nakahara, M.; Nakamura, R.

    1999-01-01

    In order to predict 137Cs concentrations in marine organisms of Japanese coastal food chains, a basic compartment model composed of nuclide transfer both from seawater and through the food chain was investigated. The food chain structure of typical Japanese coastal waters is established to include the detritus food chain, the benthic food chain and the planktonic food chain
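
    A minimal version of such a compartment model, with uptake from seawater and from food balanced against biological elimination and radioactive decay, is a single first-order ODE; the rate constants below are illustrative placeholders rather than the paper's calibrated parameters.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative rate constants (1/day); NOT values from the paper.
      k_water = 0.05    # uptake from seawater
      k_food = 0.02     # assimilation from food
      k_loss = 0.01     # biological elimination
      lam = np.log(2) / (30.17 * 365.25)   # physical decay constant of 137Cs (1/day)

      def organism_activity(t, C, C_water, C_food):
          """dC/dt for the 137Cs concentration in the organism (Bq/kg)."""
          return k_water * C_water(t) + k_food * C_food(t) - (k_loss + lam) * C

      # Constant seawater and prey concentrations for the illustration.
      C_water = lambda t: 0.1    # Bq/L
      C_food = lambda t: 2.0     # Bq/kg

      sol = solve_ivp(organism_activity, (0, 1000), [0.0],
                      args=(C_water, C_food), dense_output=True)
      for t in (30, 180, 1000):
          print(f"day {t}: {sol.sol(t)[0]:.3f} Bq/kg")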

  19. Fracture Mechanical Markov Chain Crack Growth Model

    DEFF Research Database (Denmark)

    Gansted, L.; Brincker, Rune; Hansen, Lars Pilegaard

    1991-01-01

    propagation process can be described by a discrete space Markov theory. The model is applicable to deterministic as well as to random loading. Once the model parameters for a given material have been determined, the results can be used for any structure as soon as the geometrical function is known....

  20. Large-scale model of flow in heterogeneous and hierarchical porous media

    Science.gov (United States)

    Chabanon, Morgan; Valdés-Parada, Francisco J.; Ochoa-Tapia, J. Alberto; Goyeau, Benoît

    2017-11-01

    Heterogeneous porous structures are very often encountered in natural environments and in bioremediation processes, among many others. Reliable models for momentum transport are crucial whenever mass transport or convective heat transfer occurs in these systems. In this work, we derive a large-scale average model for incompressible single-phase flow in heterogeneous and hierarchical soil porous media composed of two distinct porous regions embedding a solid impermeable structure. The model, based on the local mechanical equilibrium assumption between the porous regions, results in a unique momentum transport equation where the global effective permeability naturally depends on the permeabilities at the intermediate mesoscopic scales and therefore includes the complex hierarchical structure of the soil. The associated closure problem is numerically solved for various configurations and properties of the heterogeneous medium. The results clearly show that the effective permeability increases with the volume fraction of the most permeable porous region. It is also shown that the effective permeability is sensitive to the dimensionality and spatial arrangement of the porous regions and, in particular, depends on the contact between the impermeable solid and the two porous regions.

  1. Evolutionary-Hierarchical Bases of the Formation of Cluster Model of Innovation Economic Development

    Directory of Open Access Journals (Sweden)

    Yuliya Vladimirovna Dubrovskaya

    2016-10-01

    Full Text Available The functioning of a modern economic system is based on the interaction of objects at different hierarchical levels. The study of innovation processes that takes into account the mutual influence of these economic actors' activities therefore becomes important. The paper examines the evolutionary basis for the formation of models of innovation development, drawing on micro- and macroeconomic analysis. Most concepts recognize that, despite the large number of diverse models, the coordination of relations between economic agents is of crucial importance for successful innovation development. Based on the results of the evolutionary-hierarchical analysis, the authors reveal the key phases in the development of forms of cooperation between business, science and government in the domestic economy. This has become the starting point for characterizing the interactions in cluster models of innovation-driven economic development. Considerable expectations for improvement of the national innovation system are connected with the development of cluster and network structures. The main objective of government authorities is the formation of mechanisms and institutions that will foster cooperation between members of the clusters. The article explains that clusters cannot become factors in the growth of the national economy without being an effective tool for interaction between the actors of the regional innovation systems.

  2. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Full Text Available Humans can categorize objects in complex natural scenes within 100-150 ms. This remarkable ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high-dimensional feature space (e.g., ∼6,000 dimensions in a leading model), and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and of the spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models, which is to categorize objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures, together with statistical operations on these PDs, may underlie the human ability to rapidly categorize scenes.

  3. A chain-retrieval model for voluntary task switching.

    Science.gov (United States)

    Vandierendonck, André; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick

    2012-09-01

    To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved information consists of acquired sequences (or chains) of tasks, that selection may be biased towards chains containing more task repetitions and that bottom-up triggered repetitions may overrule the intended task. To test this model, four experiments are reported. In Studies 1 and 2, sequences of task choices and the corresponding transition sequences (task repetitions or switches) were analyzed with the help of dependency statistics. The free parameters of the chain-retrieval model were estimated on the observed task sequences and these estimates were used to predict autocorrelations of tasks and transitions. In Studies 3 and 4, sequences of hand choices and their transitions were analyzed similarly. In all studies, the chain-retrieval model yielded better fits and predictions than statistical models of event choice. In applications to voluntary task switching (Studies 1 and 2), all three parameters of the model were needed to account for the data. When no task switching was required (Studies 3 and 4), the chain-retrieval model could account for the data with one or two parameters clamped to a neutral value. Implications for our understanding of voluntary task selection and broader theoretical implications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
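
    The abstract does not spell out the dependency statistics used in Studies 1-4. As a rough illustration of this kind of sequence analysis (an assumption, not the authors' code), the sketch below computes the repetition rate and the lag-1 autocorrelation of the transition sequence (repeat vs. switch) for a simulated run of binary task choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sequence of voluntary task choices (0 = task A, 1 = task B).
    tasks = rng.integers(0, 2, size=500)

    # Transition sequence: 1 = repetition, 0 = switch, between consecutive trials.
    transitions = (tasks[1:] == tasks[:-1]).astype(float)

    repetition_rate = transitions.mean()

    # Lag-1 autocorrelation of the transition sequence: do repeats tend to follow repeats?
    lag1_autocorr = np.corrcoef(transitions[:-1], transitions[1:])[0, 1]

    print(f"repetition rate      : {repetition_rate:.3f}")
    print(f"lag-1 autocorrelation: {lag1_autocorr:.3f}")
    ```

    For a purely random choice process both quantities hover around 0.5 and 0, respectively; systematic departures are the kind of signal the chain-retrieval model is fitted to.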

  4. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    Science.gov (United States)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performance of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters or with extreme signal-to-noise ratios.

  5. A Multiobjective Optimization Model in Automotive Supply Chain Networks

    Directory of Open Access Journals (Sweden)

    Abdolhossein Sadrnia

    2013-01-01

    Full Text Available In the new decade, green investment decisions are attracting more interest in supply chain design due to hidden economic benefits and environmental legislative barriers. In this paper, a supply chain network design problem with both economic and environmental concerns is presented. A multiobjective optimization model that captures the trade-off between total logistics cost and CO2 emissions is therefore proposed. Given the complexity of logistics networks, a multiobjective swarm intelligence algorithm known as the multiobjective gravitational search algorithm (MOGSA) is implemented to solve the proposed mathematical model. To evaluate the effectiveness of the model, a comprehensive set of numerical experiments is presented. The results show that the proposed model can be applied as an effective tool in strategic planning for optimizing cost and CO2 emissions in an environmentally friendly automotive supply chain.
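
    The MOGSA itself is not reproduced here. A minimal sketch of the cost/CO2 trade-off such a model captures is given below: it filters a handful of hypothetical candidate network designs down to the non-dominated (Pareto) set; all names and numbers are made up.

    ```python
    # Minimal Pareto filter over hypothetical (cost, CO2) evaluations of candidate
    # supply chain network designs; both objectives are minimized.
    candidates = {
        "design_A": (120.0, 85.0),   # (total logistics cost, CO2 emissions), made-up numbers
        "design_B": (100.0, 95.0),
        "design_C": (130.0, 80.0),
        "design_D": (110.0, 110.0),
    }

    def dominates(p, q):
        """p dominates q if p is no worse in both objectives and strictly better in one."""
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    pareto_front = {
        name: obj
        for name, obj in candidates.items()
        if not any(dominates(other, obj) for other in candidates.values() if other != obj)
    }
    print(pareto_front)  # designs A, B and C are non-dominated; design D is dominated by B
    ```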

  6. Merging information from multi-model flood projections in a hierarchical Bayesian framework

    Science.gov (United States)

    Le Vine, Nataliya

    2016-04-01

    Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions against which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
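
    As a loose analogue of combining model estimates while accounting for shared discrepancy (an assumption, not the paper's hierarchical Bayesian scheme), the sketch below pools hypothetical model-specific flood estimates with a method-of-moments random-effects estimator, where the between-model variance plays the role of multi-model discrepancy.

    ```python
    import numpy as np

    # Hypothetical 100-year flood estimates (m^3/s) and their variances from four models.
    estimates = np.array([850.0, 920.0, 780.0, 990.0])
    variances = np.array([40.0**2, 55.0**2, 35.0**2, 60.0**2])

    # Method-of-moments (DerSimonian-Laird-style) estimate of between-model variance tau^2.
    w = 1.0 / variances
    fixed_mean = np.sum(w * estimates) / np.sum(w)
    q = np.sum(w * (estimates - fixed_mean) ** 2)
    k = len(estimates)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # Random-effects weights account for both within-model and between-model uncertainty.
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * estimates) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))

    print(f"between-model variance tau^2: {tau2:.1f}")
    print(f"pooled flood estimate: {pooled:.0f} +/- {pooled_se:.0f} m^3/s")
    ```

    Shrinking tau^2 (the shared discrepancy) directly shrinks the pooled standard error, which mirrors the abstract's conclusion that reducing shared model discrepancy is the key to reducing uncertainty.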

  7. Economic modelling of pork production-marketing chains

    NARCIS (Netherlands)

    Ouden, den M.

    1996-01-01

    The research described in this thesis was focused on the development of economic simulation and optimization computer models to support decision making with respect to pork production- marketing chains. The models include three production stages: pig farrowing, pig fattening and pig slaughtering

  8. Consequences of population models on the dynamics of food chains.

    NARCIS (Netherlands)

    Kooi, B.W.; Boer, M.P.; Kooijman, S.A.L.M.

    1998-01-01

    A class of bioenergetic ecological models is studied for the dynamics of food chains with a nutrient at the base. A constant influx rate of the nutrient and a constant efflux rate for all trophic levels is assumed. Starting point is a simple model where prey is converted into predator with a fixed

  9. Modeling cadmium in the feed chain and cattle organs

    NARCIS (Netherlands)

    Fels-Klerx, van der H.J.; Romkens, P.F.A.M.; Franz, E.; Raamsdonk, van L.W.D.

    2011-01-01

    The objectives of this study were to estimate cadmium contamination levels in different scenarios related to soil characteristics and assumptions regarding cadmium accumulation in the animal tissues, using quantitative supply chain modeling. The model takes into account soil cadmium levels, soil pH,
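
    The study's transfer relations and parameter values are not given in the abstract. The sketch below is only an illustrative linear transfer-factor chain with made-up coefficients, showing the soil-to-feed-to-organ bookkeeping such a supply chain model performs.

    ```python
    # Illustrative cadmium transfer chain; all coefficients are hypothetical and do not
    # reproduce the study's (pH-dependent) transfer relations or parameter values.
    soil_cd = 0.5            # mg Cd per kg soil (hypothetical scenario)
    soil_to_plant_tf = 0.1   # soil-to-plant transfer factor (assumed)
    feed_intake = 12.0       # kg feed dry matter per animal per day (assumed)
    kidney_transfer = 0.01   # fraction of lifetime intake accumulating in kidneys (assumed)
    kidney_mass = 1.0        # kg
    days = 6 * 365           # cattle kept up to six years, as in the study

    feed_cd = soil_cd * soil_to_plant_tf            # mg Cd per kg feed
    lifetime_intake = feed_cd * feed_intake * days  # mg Cd ingested over six years
    kidney_cd = kidney_transfer * lifetime_intake / kidney_mass  # mg Cd per kg kidney

    print(f"feed concentration  : {feed_cd:.3f} mg/kg")
    print(f"kidney concentration: {kidney_cd:.2f} mg/kg")
    ```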

  10. Modelling of the radionuclide transport through terrestrial food chains

    International Nuclear Information System (INIS)

    Hanusik, V.

    1991-01-01

    The paper presents a terrestrial food chain model for computing potential human intake of radionuclides released into the atmosphere during normal NPP operation. Attention is paid to the choice of model parameter values. Results obtained by our approach are compared to those of the currently applied methodology. (orig.) [de

  11. Hierarchical neural network model of the visual system determining figure/ground relation

    Science.gov (United States)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from the background region extending behind the object. The author previously proposed a neural network model of figure/ground separation built on the idea that local geometric features such as curvatures and outer angles at corners are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the drawback that signal propagation requires many iterations, despite the fact that the actual visual system determines figure/ground relations within a short period (Zhou et al., 2000). In order to speed up the determination of figure/ground, this study incorporates a hierarchical architecture into the previous model. The effect of the hierarchization on computation time was confirmed by simulation. As the number of layers increased, the required computation time decreased. However, this speed-up effect saturated once the number of layers grew beyond a certain point. This study attempts to explain the saturation effect using the notion of average distance between vertices from the field of complex networks, and succeeded in reproducing the saturation effect in computer simulation.

  12. Toward combining thematic information with hierarchical multiscale segmentations using tree Markov random field model

    Science.gov (United States)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi

    2017-09-01

    It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial-resolution remote sensing (HR) images. However, it remains a great challenge to automatically select the proper segmentation scale(s) based solely on the image information. In this study, we propose a novel way of information fusion at the object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined to be as consistent as possible with the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; (2) exploiting the hierarchical context naturally embedded in the multiscale segmentations. HR images of both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework in these two respects.

  13. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Hierarchical modeling of genome-wide Short Tandem Repeat (STR) markers infers native American prehistory.

    Science.gov (United States)

    Lewis, Cecil M

    2010-02-01

    This study examines a genome-wide dataset of 678 Short Tandem Repeat loci characterized in 444 individuals representing 29 Native American populations as well as the Tundra Netsi and Yakut populations from Siberia. Using these data, the study tests four current hypotheses regarding the hierarchical distribution of neutral genetic variation in native South American populations: (1) the western region of South America harbors more variation than the eastern region of South America, (2) Central American and western South American populations cluster exclusively, (3) populations speaking the Chibchan-Paezan and Equatorial-Tucanoan language stock emerge as a group within an otherwise South American clade, (4) Chibchan-Paezan populations in Central America emerge together at the tips of the Chibchan-Paezan cluster. This study finds that hierarchical models with the best fit place Central American populations, and populations speaking the Chibchan-Paezan language stock, at a basal position or separated from the South American group, which is more consistent with a serial founder effect into South America than that previously described. Western (Andean) South America is found to harbor similar levels of variation as eastern (Equatorial-Tucanoan and Ge-Pano-Carib) South America, which is inconsistent with an initial west coast migration into South America. Moreover, in all relevant models, the estimates of genetic diversity within geographic regions suggest a major bottleneck or founder effect occurring within the North American subcontinent, before the peopling of Central and South America. 2009 Wiley-Liss, Inc.

  15. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation to determine the underlying states of nature of the materials or parts being tested. Despite, and sometimes because of, the richness of the data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distributions of the proposed measurements, which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We illustrate the implementation of the HBGM approach with ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
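
    As a stripped-down stand-in for the classification step described above (not the full HBGM), the sketch below uses Gaussian class-conditional distributions, standing in for forward-model-generated measurement distributions, and returns posterior class probabilities for a new measurement via Bayes' rule; the classes, priors and parameters are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical two-class example standing in for "states of nature" (e.g., good vs.
    # poor cement bond), each with a Gaussian distribution over two measurement features
    # that would, in the described workflow, come from offline forward-model simulations.
    classes = {
        "good_bond": multivariate_normal(mean=[1.0, 0.2], cov=[[0.10, 0.02], [0.02, 0.05]]),
        "poor_bond": multivariate_normal(mean=[0.4, 0.6], cov=[[0.15, 0.00], [0.00, 0.08]]),
    }
    priors = {"good_bond": 0.7, "poor_bond": 0.3}  # assumed prior knowledge

    def posterior(x):
        """Posterior probability of each class given a new measurement vector x."""
        joint = {c: priors[c] * dist.pdf(x) for c, dist in classes.items()}
        z = sum(joint.values())
        return {c: v / z for c, v in joint.items()}

    print(posterior(np.array([0.8, 0.35])))
    ```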

  16. Bayesian hierarchical models for smoothing in two-phase studies, with application to small area estimation.

    Science.gov (United States)

    Ross, Michelle; Wakefield, Jon

    2015-10-01

    Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations, which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the Research Triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.

  17. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  18. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana; Ait Abderrahmane, Hamid; Upadhyay, Ranjit Kumar; Kumari, Nitu

    2013-01-01

    We investigate a realistic three-species food-chain model, with generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates on its fractal dimension as well as numerical simulations to visualise the spatiotemporal chaos.

  19. Finite Time Blowup in a Realistic Food-Chain Model

    KAUST Repository

    Parshad, Rana

    2013-05-19

    We investigate a realistic three-species food-chain model, with generalist top predator. The model, based on a modified version of the Leslie-Gower scheme, incorporates mutual interference in all three populations and generalizes several other known models in the ecological literature. We show that the model exhibits finite time blowup in a certain parameter range and for large enough initial data. This result implies that finite time blowup is possible in a large class of such three-species food-chain models. We propose a modification to the model and prove that the modified model has globally existing classical solutions, as well as a global attractor. We reconstruct the attractor using nonlinear time series analysis and show that it possesses rich dynamics, including chaos in a certain parameter regime, whilst avoiding blowup in any parameter regime. We also provide estimates on its fractal dimension as well as numerical simulations to visualise the spatiotemporal chaos.

  20. Delay model and performance testing for FPGA carry chain TDC

    International Nuclear Information System (INIS)

    Kang Xiaowen; Liu Yaqiang; Cui Junjian Yang Zhangcan; Jin Yongjie

    2011-01-01

    Time-of-flight (TOF) information can improve the performance of PET (positron emission tomography), and TDC design is a key technique. This paper proposes a delay model for a carry-chain TDC. By changing the significant delay parameters of the model, the paper compares the resulting differences in TDC performance, and finally implements a time-to-digital converter (TDC) based on the carry-chain method on an FPGA (EP2C20Q240C8N) with a 69 ps LSB and a maximum error below 2 LSB. This result meets the TOF requirement. The paper also proposes a coaxial cable measuring method for TDC testing that does not require high-precision test equipment. (authors)

  1. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of greater diversity of customer needs as well as the persistence of a long-term recession. The outcomes of this competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty lies in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Building on the extant research in supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from the Six Sigma methodology to evaluate and improve the level of supply chain visibility.
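
    The paper's visibility metric is not reproduced here. A minimal sketch of the Six Sigma-style Z score (process capability) calculation such a model builds on is shown below, assuming a hypothetical visibility-related measure such as event-reporting latency with an upper specification limit.

    ```python
    from statistics import NormalDist

    # Hypothetical process data: latency (hours) between a supply chain event and its
    # visibility to partners; lower is better, with an upper specification limit (USL).
    mean_latency = 6.0      # observed process mean
    std_latency = 1.5       # observed process standard deviation
    usl = 12.0              # upper specification limit agreed with partners

    # Six Sigma style Z score (process capability) for a one-sided upper limit.
    z_score = (usl - mean_latency) / std_latency
    defect_rate = 1.0 - NormalDist().cdf(z_score)   # expected fraction of events exceeding USL
    cpk = z_score / 3.0                             # equivalent one-sided capability index

    print(f"Z = {z_score:.2f}, Cpk = {cpk:.2f}, expected exceedance rate = {defect_rate:.4%}")
    ```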

  2. How does aging affect recognition-based inference? A hierarchical Bayesian modeling approach.

    Science.gov (United States)

    Horn, Sebastian S; Pachur, Thorsten; Mata, Rui

    2015-01-01

    The recognition heuristic (RH) is a simple strategy for probabilistic inference according to which recognized objects are judged to score higher on a criterion than unrecognized objects. In this article, a hierarchical Bayesian extension of the multinomial r-model is applied to measure use of the RH on the individual participant level and to re-evaluate differences between younger and older adults' strategy reliance across environments. Further, it is explored how individual r-model parameters relate to alternative measures of the use of recognition and other knowledge, such as adherence rates and indices from signal-detection theory (SDT). Both younger and older adults used the RH substantially more often in an environment with high than low recognition validity, reflecting adaptivity in strategy use across environments. In extension of previous analyses (based on adherence rates), hierarchical modeling revealed that in an environment with low recognition validity, (a) older adults had a stronger tendency than younger adults to rely on the RH and (b) variability in RH use between individuals was larger than in an environment with high recognition validity; variability did not differ between age groups. Further, the r-model parameters correlated moderately with an SDT measure expressing how well people can discriminate cases where the RH leads to a correct vs. incorrect inference; this suggests that the r-model and the SDT measures may offer complementary insights into the use of recognition in decision making. In conclusion, younger and older adults are largely adaptive in their application of the RH, but cognitive aging may be associated with an increased tendency to rely on this strategy. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Hierarchical Colored Petri Nets for Modeling and Analysis of Transit Signal Priority Control Systems

    Directory of Open Access Journals (Sweden)

    Yisheng An

    2018-01-01

    Full Text Available In this paper, we consider the problem of developing a model for traffic signal control with transit priority using Hierarchical Colored Petri nets (HCPN). Petri nets (PN) are useful for state analysis of discrete event systems due to their powerful modeling capability and mathematical formalism. This paper focuses on their use to formalize the transit signal priority (TSP) control model. In a four-phase traffic signal control model, transit detection and two kinds of transit priority strategies are integrated to obtain the HCPN-based TSP control models. One advantage of using these models is the clear presentation of traffic light behaviors in terms of the conditions and events that lead to the detection of a priority request by a transit vehicle. Another advantage of the resulting models is that the correctness and reliability of the proposed strategies are easily analyzed. After their full reachable states are generated, the boundedness, liveness, and fairness of the proposed models are verified. Experimental results show that the proposed control model provides transit vehicles with better effectiveness at intersections. This work helps advance the state of the art in the design of signal control models for roadway intersections.

  4. Prion Amplification and Hierarchical Bayesian Modeling Refine Detection of Prion Infection

    Science.gov (United States)

    Wyckoff, A. Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J.; Pulford, Bruce; Wild, Margaret; Antolin, Michael; Vercauteren, Kurt; Zabel, Mark

    2015-02-01

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.

  5. Prion amplification and hierarchical Bayesian modeling refine detection of prion infection.

    Science.gov (United States)

    Wyckoff, A Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J; Pulford, Bruce; Wild, Margaret; Antolin, Michael; VerCauteren, Kurt; Zabel, Mark

    2015-02-10

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.

  6. Multilevel Hierarchical Modeling of Benthic Macroinvertebrate Responses to Urbanization in Nine Metropolitan Regions across the Conterminous United States

    Science.gov (United States)

    Kashuba, Roxolana; Cha, YoonKyung; Alameddine, Ibrahim; Lee, Boknam; Cuffney, Thomas F.

    2010-01-01

    Multilevel hierarchical modeling methodology has been developed for use in ecological data analysis. The effect of urbanization on stream macroinvertebrate communities was measured across a gradient of basins in each of nine metropolitan regions across the conterminous United States. The hierarchical nature of this dataset was harnessed in a multi-tiered model structure, predicting both invertebrate response at the basin scale and differences in invertebrate response at the region scale. Ordination site scores, total taxa richness, Ephemeroptera, Plecoptera, Trichoptera (EPT) taxa richness, and richness-weighted mean tolerance of organisms at a site were used to describe invertebrate responses. Percentage of urban land cover was used as a basin-level predictor variable. Regional mean precipitation, air temperature, and antecedent agriculture were used as region-level predictor variables. Multilevel hierarchical models were fit to both levels of data simultaneously, borrowing statistical strength from the complete dataset to reduce uncertainty in regional coefficient estimates. Additionally, whereas non-hierarchical regressions were only able to show differing relations between invertebrate responses and urban intensity separately for each region, the multilevel hierarchical regressions were able to explain and quantify those differences within a single model. In this way, this modeling approach directly establishes the importance of antecedent agricultural conditions in masking the response of invertebrates to urbanization in metropolitan regions such as Milwaukee-Green Bay, Wisconsin; Denver, Colorado; and Dallas-Fort Worth, Texas. Also, these models show that regions with high precipitation, such as Atlanta, Georgia; Birmingham, Alabama; and Portland, Oregon, start out with better regional background conditions of invertebrates prior to urbanization but experience faster negative rates of change with urbanization. Ultimately, this urbanization

  7. Modified allocation capacitated planning model in blood supply chain management

    Science.gov (United States)

    Mansur, A.; Vanany, I.; Arvitrida, N. I.

    2018-04-01

    Blood supply chain management (BSCM) is a complex management process that involves many cooperating stakeholders. BSCM comprises four echelon processes: blood collection or procurement, production, inventory, and distribution. This research develops an optimization model for blood distribution planning. The efficiency of decentralization and centralization policies in a blood distribution chain is compared by optimizing the amount of blood delivered from a blood center to a blood bank. The model is developed from the allocation problem of a capacitated planning model. In the first stage, transportation capacity and cost are considered to create an initial capacitated planning model. Then, inventory holding and shortage costs are added to the model. These additional inventory cost parameters make the model more realistic and accurate.
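
    The full model with inventory holding and shortage costs is not reproduced here. A minimal capacitated allocation sketch in the same spirit, with hypothetical supplies, demands and unit transport costs and scipy's linear programming solver, is:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Two blood centers supplying three blood banks (all numbers hypothetical).
    supply = np.array([120, 80])            # units available at each blood center
    demand = np.array([60, 70, 50])         # units required at each blood bank
    cost = np.array([[4.0, 6.0, 9.0],       # unit transport cost, center i -> bank j
                     [5.0, 3.0, 7.0]])

    m, n = cost.shape
    c = cost.ravel()                        # decision variables x_ij, row-major order

    # Capacity constraints: sum_j x_ij <= supply_i
    A_ub = np.zeros((m, m * n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0

    # Demand constraints: sum_i x_ij = demand_j
    A_eq = np.zeros((n, m * n))
    for j in range(n):
        A_eq[j, j::n] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=(0, None), method="highs")
    print(res.x.reshape(m, n))              # optimal shipment plan
    print("total transport cost:", res.fun)
    ```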

  8. Epigenetic change detection and pattern recognition via Bayesian hierarchical hidden Markov models.

    Science.gov (United States)

    Wang, Xinlei; Zang, Miao; Xiao, Guanghua

    2013-06-15

    Epigenetics is the study of changes to the genome that can switch genes on or off and determine which proteins are transcribed without altering the DNA sequence. Recently, epigenetic changes have been linked to the development and progression of diseases such as psychiatric disorders. High-throughput epigenetic experiments have enabled researchers to measure genome-wide epigenetic profiles and yield data consisting of intensity ratios of immunoprecipitation versus reference samples. The intensity ratios can provide a view of genomic regions where protein binding occurs under one experimental condition and further allow us to detect epigenetic alterations through comparison between two different conditions. However, such experiments can be expensive, with only a few replicates available. Moreover, epigenetic data are often spatially correlated with high noise levels. In this paper, we develop a Bayesian hierarchical model, combined with hidden Markov processes with four states for modeling spatial dependence, to detect genomic sites with epigenetic changes from two-sample experiments with a paired internal control. One attractive feature of the proposed method is that the four states of the hidden Markov process have well-defined biological meanings and allow us to directly call the change patterns based on the corresponding posterior probabilities. In contrast, none of the existing methods offers this advantage. In addition, the proposed method offers great power in statistical inference by spatial smoothing (via hidden Markov modeling) and information pooling (via hierarchical modeling). Both simulation studies and a real data analysis in a cocaine addiction study illustrate the reliability and success of this method. Copyright © 2012 John Wiley & Sons, Ltd.
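
    The Bayesian hierarchical layer is not reproduced here. As a bare-bones stand-in, the sketch below runs a scaled forward-backward pass through a four-state hidden Markov model with Gaussian emissions and returns the per-site posterior state probabilities on which pattern calling would be based; all transition probabilities, emission parameters and state labels are assumed.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Four hidden states, e.g. no change, shared binding, gain, loss (labels assumed).
    A = np.array([[0.90, 0.04, 0.03, 0.03],   # assumed row-stochastic transition matrix
                  [0.04, 0.90, 0.03, 0.03],
                  [0.05, 0.05, 0.85, 0.05],
                  [0.05, 0.05, 0.05, 0.85]])
    pi = np.full(4, 0.25)                      # initial state distribution
    means = np.array([0.0, 1.5, 1.0, -1.0])    # assumed emission means of the ratio difference
    sds = np.array([0.5, 0.5, 0.5, 0.5])

    def posterior_states(y):
        """Scaled forward-backward: posterior P(state_t = k | y) for each site t."""
        T, K = len(y), len(pi)
        emis = norm.pdf(y[:, None], loc=means, scale=sds)     # (T, K) emission likelihoods
        alpha = np.zeros((T, K)); beta = np.zeros((T, K)); c = np.zeros(T)
        alpha[0] = pi * emis[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * emis[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (emis[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)

    y = np.array([0.1, -0.2, 1.4, 1.6, 1.2, -0.9, -1.1, 0.0])  # toy sequence of genomic sites
    print(posterior_states(y).round(2))
    ```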

  9. CARBON-CHAIN SPECIES IN WARM-UP MODELS

    International Nuclear Information System (INIS)

    Hassel, George E.; Harada, Nanase; Herbst, Eric

    2011-01-01

    In previous warm-up chemical models of the low-mass star-forming region L1527, we investigated the evolution of carbon-chain unsaturated hydrocarbon species when the envelope temperature is slightly elevated to T ≈ 30 K. These models demonstrated that enhanced abundances of such species can be explained by gas-phase ion-molecule chemistry following the partial sublimation of methane from grain surfaces. We also concluded that the abundances of hydrocarbon radicals such as the CnH family should be further enhanced as the temperatures increase to higher values, but this conclusion stood in contrast with the lack of unambiguous detection of these species toward hot core and corino sources. Meanwhile, observational surveys have identified C2H, C4H, CH3CCH, and CH3OH toward hot corinos (especially IRAS 16293–2422) as well as toward L1527, with lower abundances for the carbon-chain radicals and higher abundances for the other two species toward the hot corinos. In addition, the Herschel Space Telescope has detected the bare linear chain C3 in 50 K material surrounding young high-mass stellar objects. To understand these new results, we revisit previous warm-up models with an augmented gas-grain network that incorporates reactions from a gas-phase network constructed for use at temperatures up to 800 K. Some of the newly adopted reactions between carbon-chain species and abundant H2 possess chemical activation energy barriers. The revised model results now better reproduce the observed abundances of unsaturated carbon chains under hot corino (100 K) conditions and make predictions for the abundances of bare carbon chains in the 50 K regions observed by the Herschel HIFI detector.

  10. Investigations of model polymers: Dynamics of melts and statics of a long chain in a dilute melt of shorter chains

    International Nuclear Information System (INIS)

    Bishop, M.; Ceperley, D.; Frisch, H.L.; Kalos, M.H.

    1982-01-01

    We report additional results on a simple model of polymers, namely the diffusion in concentrated polymer systems and the static properties of one long chain in a dilute melt of shorter chains. It is found, for the polymer sizes and time scales amenable to our computer calculations, that there is as yet no evidence for a "reptation" regime in a melt. There is some indication of reptation in the case of a single chain moving through fixed obstacles. No statistically significant effect of the change, from excluded-volume behavior of the long chain to ideal behavior as the shorter chains grow, is observed.

  11. Markov Chain Models for Stochastic Behavior in Resonance Overlap Regions

    Science.gov (United States)

    McCarthy, Morgan; Quillen, Alice

    2018-01-01

    We aim to predict the lifetimes of particles in chaotic zones where resonances overlap. A continuous-time Markov chain model is constructed using mean motion resonance libration timescales to estimate transition times between resonances. The model is applied to diffusion in the co-rotation region of a planet. For particles begun at low eccentricity, the model is effective for early diffusion, but not at later times when particles experience close encounters with the planet.
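
    A minimal continuous-time Markov chain sketch in the same spirit (not the authors' model) is shown below: made-up transition rates stand in for rates derived from libration timescales, an absorbing state represents a close encounter, and lifetimes are estimated by simulating exponential holding times.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # States 0 and 1 are overlapping resonances, state 2 is an absorbing "close encounter".
    # Rates (per orbital period) are made up; in practice they would come from libration timescales.
    rates = np.array([[0.0, 0.020, 0.002],
                      [0.015, 0.0, 0.006],
                      [0.0, 0.0, 0.0]])        # absorbing: no outgoing transitions

    def lifetime(start=0):
        """Simulate one trajectory and return the time until absorption."""
        state, t = start, 0.0
        while rates[state].sum() > 0.0:
            total = rates[state].sum()
            t += rng.exponential(1.0 / total)                 # exponential holding time
            state = rng.choice(3, p=rates[state] / total)     # jump to a neighbouring state
        return t

    samples = [lifetime() for _ in range(5000)]
    print(f"mean lifetime ~ {np.mean(samples):.0f} orbital periods")
    ```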

  12. Modeling cadmium in the feed chain and cattle organs

    OpenAIRE

    Fels-Klerx, van der, H.J.; Romkens, P.F.A.M.; Franz, E.; Raamsdonk, van, L.W.D.

    2011-01-01

    The objectives of this study were to estimate cadmium contamination levels in different scenarios related to soil characteristics and assumptions regarding cadmium accumulation in the animal tissues, using quantitative supply chain modeling. The model takes into account soil cadmium levels, soil pH, soil-to-plant transfer, animal consumption patterns, and transfer into animal organs (liver and kidneys). The model was applied to cattle up to the age of six years which were fed roughage (maize ...

  13. Ising model on tangled chain - 2: Magnetization and susceptibility

    International Nuclear Information System (INIS)

    Mejdani, R.

    1993-05-01

    In the preceding paper we considered an Ising model defined on a tangled chain to study the behaviour of the free energy and entropy, particularly in the zero-field and zero-temperature limit. In this paper, following the same line and building on some results of the previous work, we study, in the "language" of state configurations, the behaviour of the magnetization and the susceptibility under different conditions of the model, in order to better understand the competition between the ferromagnetic bonds along the chain and the antiferromagnetic additional bonds across the chain. Particularly interesting is the behaviour of the susceptibility in the zero-field and zero-temperature limit. Exact solutions for the magnetization and susceptibility, generated by analytical calculations and iterative algorithms, are described. The additional bonds, introduced as a form of perfect disorder, have a particular effect on the spin correlations. We found that the condition J=-J' between the ferromagnetic interaction J along the chain and the antiferromagnetic interaction J' across the chain acts as a kind of "transition-region" condition for this behaviour. (author). 16 refs, 14 figs
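
    The tangled-chain geometry with antiferromagnetic cross bonds is not reproduced here. For reference, the sketch below solves the plain one-dimensional Ising chain by the transfer matrix method and obtains the magnetization and susceptibility from numerical field derivatives of the free energy (units with k_B = 1).

    ```python
    import numpy as np

    def log_lambda(J, h, T):
        """Log of the largest transfer matrix eigenvalue for the 1D Ising chain."""
        beta = 1.0 / T
        Tm = np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
                       [np.exp(-beta * J), np.exp(beta * (J - h))]])
        return np.log(np.max(np.linalg.eigvalsh(Tm)))

    def magnetization(J, h, T, dh=1e-5):
        # m = T * d(log lambda)/dh per spin, in the thermodynamic limit
        return T * (log_lambda(J, h + dh, T) - log_lambda(J, h - dh, T)) / (2 * dh)

    def susceptibility(J, h, T, dh=1e-4):
        return (magnetization(J, h + dh, T) - magnetization(J, h - dh, T)) / (2 * dh)

    J, T = 1.0, 0.5
    print("m(h=0.1) =", magnetization(J, 0.1, T))
    print("chi(h=0) =", susceptibility(J, 0.0, T))   # grows as exp(2J/T) toward T -> 0
    ```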

  14. Hierarchical modeling of bycatch rates of sea turtles in the western North Atlantic

    Science.gov (United States)

    Gardner, B.; Sullivan, P.J.; Epperly, S.; Morreale, S.J.

    2008-01-01

    Previous studies indicate that the locations of the endangered loggerhead Caretta caretta and critically endangered leatherback Dermochelys coriacea sea turtles are influenced by water temperatures, and that incidental catch rates in the pelagic longline fishery vary by region. We present a Bayesian hierarchical model to examine the effects of environmental variables, including water temperature, on the number of sea turtles captured in the US pelagic longline fishery in the western North Atlantic. The modeling structure is highly flexible, utilizes a Bayesian model selection technique, and is fully implemented in the software program WinBUGS. The number of sea turtles captured is modeled as a zero-inflated Poisson distribution and the model incorporates fixed effects to examine region-specific differences in the parameter estimates. Results indicate that water temperature, region, bottom depth, and target species are all significant predictors of the number of loggerhead sea turtles captured. For leatherback sea turtles, the model with only target species had the most posterior model weight, though a re-parameterization of the model indicates that temperature influences the zero-inflation parameter. The relationship between the number of sea turtles captured and the variables of interest all varied by region. This suggests that management decisions aimed at reducing sea turtle bycatch may be more effective if they are spatially explicit. © Inter-Research 2008.

  15. A hierarchical updating method for finite element model of airbag buffer system under landing impact

    Directory of Open Access Journals (Sweden)

    He Huan

    2015-12-01

    Full Text Available In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model is formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results, resolving the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present impact response confidence factors (IRCFs) to evaluate how consistent the computational and experimental results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR) to serve as a modified objective function. A radial basis function (RBF) is introduced to construct updating variables for a surrogate model of the objective function, thereby converting the FE model updating problem into a solvable optimization problem. Finally, the developed method is validated using an experimental and computational study of the impact dynamics of a classic airbag landing buffer system.

  16. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    Science.gov (United States)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
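
    A schematic numerical illustration of the three formulations (not the plankton food web model itself) is given below: a rate parameter is estimated from several hypothetical datasets with known observation variance under no pooling (separate analysis), complete pooling (global analysis), and partial pooling with a crude moment estimate of the between-dataset variance (hierarchical analysis).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical per-dataset observations of, say, a grazing rate parameter.
    true_rates = [0.8, 1.0, 1.3, 0.9]           # dataset-specific "truths"
    data = [rng.normal(mu, 0.2, size=10) for mu in true_rates]
    sigma2 = 0.2**2                              # known observation variance

    sep = np.array([d.mean() for d in data])     # 1) separate analysis: independent estimates
    glob = np.concatenate(data).mean()           # 2) global analysis: one shared value

    # 3) hierarchical analysis (empirical-Bayes flavour): shrink the separate estimates toward
    # their overall mean, weighted by between-dataset variance tau^2 vs. sampling variance.
    n = np.array([len(d) for d in data])
    se2 = sigma2 / n
    tau2 = max(0.0, sep.var(ddof=1) - se2.mean())    # crude moment estimate of tau^2
    shrink = tau2 / (tau2 + se2)
    hier = shrink * sep + (1 - shrink) * sep.mean()

    print("separate    :", sep.round(3))
    print("global      :", round(glob, 3))
    print("hierarchical:", hier.round(3))
    ```

    The hierarchical estimates sit between the other two extremes, which is the information-combining behaviour the abstract highlights.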

  17. A lattice gas model on a tangled chain

    International Nuclear Information System (INIS)

    Mejdani, R.

    1993-04-01

    We have used a lattice gas model defined on a tangled chain to study enzyme kinetics by a modified transfer matrix method. Using a simple iterative algorithm, we obtained different kinds of saturation curves for different configurations of the tangled chain and different types of additional interactions. In some special cases of configurations and interactions, we found the same equations for the saturation curves that we had obtained before when studying the lattice gas model with nearest-neighbor interactions or the lattice gas model with alternating nearest-neighbor interactions, using different techniques such as the theory of correlated walks, the partition point technique, or the transfer matrix model. This more general model and the new results could be useful for experimental investigations. (author). 20 refs, 6 figs

  18. Markov chain model for demersal fish catch analysis in Indonesia

    Science.gov (United States)

    Firdaniza; Gusriani, N.

    2018-03-01

    As an archipelagic country, Indonesia has considerable potential fishery resources. One of the fish resources with high economic value is demersal fish, which live on the muddy seabed and are scattered throughout the Indonesian seas. Demersal fish production in each of Indonesia's Fisheries Management Areas (FMAs) varies each year. In this paper we discuss a Markov chain model for analyzing demersal fish catches across all of Indonesia's Fisheries Management Areas. Data on demersal fish catches in every FMA in 2005-2014 were obtained from the Directorate of Capture Fisheries. From these data, a transition probability matrix is determined by counting the transitions between catches lying below and above the median. The Markov chain model of the demersal fish catch data is an ergodic Markov chain, so its limiting probabilities can be determined. The predicted demersal fishing yield was obtained by combining the limiting probabilities with the average catch below the median and above the median. The results show that, for 2018 and in the long term, demersal fishing yields in most FMAs are below the median value.
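
    The procedure described above can be illustrated compactly with a made-up catch series: classify each year as below or above the median, estimate the two-state transition matrix, compute its limiting distribution, and form the prediction as the probability-weighted mean of the below- and above-median averages.

    ```python
    import numpy as np

    # Hypothetical annual demersal catch (thousand tonnes) for one FMA, 2005-2014.
    catch = np.array([410, 395, 430, 455, 440, 420, 470, 465, 450, 480], dtype=float)

    median = np.median(catch)
    state = (catch > median).astype(int)        # 0 = below/at median, 1 = above median

    # Estimate the 2x2 transition probability matrix from observed year-to-year transitions.
    P = np.zeros((2, 2))
    for a, b in zip(state[:-1], state[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)

    # Limiting (stationary) distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()

    below_mean = catch[state == 0].mean()
    above_mean = catch[state == 1].mean()
    prediction = pi[0] * below_mean + pi[1] * above_mean   # long-run expected catch

    print("transition matrix:\n", P.round(2))
    print("limiting probabilities:", pi.round(2))
    print(f"long-run catch prediction: {prediction:.0f} thousand tonnes")
    ```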

  19. A simulation model of a coordinated decentralized linear supply chain

    NARCIS (Netherlands)

    Ashayeri, Jalal; Cannella, S.; Lopez Campos, M.; Miranda, P.A.

    2015-01-01

    This paper presents a simulation-based study of a coordinated, decentralized linear supply chain (SC) system. In the proposed model, any supply tier considers its successors as part of its inventory system and generates replenishment orders on the basis of its partners’ operational information. We

  20. Irreversible Markov chains in spin models: Topological excitations

    Science.gov (United States)

    Lei, Ze; Krauth, Werner

    2018-01-01

    We analyze the convergence of the irreversible event-chain Monte Carlo algorithm for continuous spin models in the presence of topological excitations. In the two-dimensional XY model, we show that the local nature of the Markov-chain dynamics leads to slow decay of vortex-antivortex correlations while spin waves decorrelate very quickly. Using a Fréchet description of the maximum vortex-antivortex distance, we quantify the contributions of topological excitations to the equilibrium correlations, and show that they vary from a dynamical critical exponent z ∼ 2 at the critical temperature to z ∼ 0 in the limit of zero temperature. We confirm the event-chain algorithm's fast relaxation (corresponding to z = 0) of spin waves in the harmonic approximation to the XY model. Mixing times (describing the approach towards equilibrium from the least favorable initial state) however remain much larger than equilibrium correlation times at low temperatures. We also describe the respective influence of topological monopole-antimonopole excitations and of spin waves on the event-chain dynamics in the three-dimensional Heisenberg model.

  1. Model checking conditional CSL for continuous-time Markov chains

    DEFF Research Database (Denmark)

    Gao, Yang; Xu, Ming; Zhan, Naijun

    2013-01-01

    In this paper, we consider the model-checking problem of continuous-time Markov chains (CTMCs) with respect to conditional logic. To the end, we extend Continuous Stochastic Logic introduced in Aziz et al. (2000) [1] to Conditional Continuous Stochastic Logic (CCSL) by introducing a conditional...

  2. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear...

  3. Tactical supply chain planning models with inherent flexibility

    DEFF Research Database (Denmark)

    Esmaeilikia, Masoud; Fahimnia, Behnam; Sarkis, Joeseph

    2016-01-01

    Supply chains (SCs) can be managed at many levels. The use of tactical SC planning models with multiple flexibility options can help manage the usual operations efficiently and effectively, whilst improve the SC resiliency in response to inherent environmental uncertainties. This paper defines ta...

  4. Trade and compliance cost model in the international supply chain

    NARCIS (Netherlands)

    Arsyida, Tuty; van Delft, Selma; Rukanova, B.D.; Tan, Y.

    2017-01-01

    Trade costs for international supply chain are huge, even in the absence of formal barriers. It is necessary for all the stakeholders, both private and public organizations, to support an effective and efficient border compliance process. Very little trade cost model research has been done at the

  5. The dynamic complexity of a three species food chain model

    International Nuclear Information System (INIS)

    Lv Songjuan; Zhao Min

    2008-01-01

    In this paper, a three-species food chain model is investigated analytically, drawing on ecological theory, and by numerical simulation. Bifurcation diagrams are obtained for biologically feasible parameters. The results show that the system exhibits rich complexity features such as stable, periodic and chaotic dynamics.
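
    The paper's specific model is not reproduced here. The sketch below integrates a standard Hastings-Powell-type tri-trophic food chain (a common representative of this model class, with textbook parameter values) to show the kind of trajectories whose bifurcation structure is studied.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hastings-Powell style tri-trophic food chain (dimensionless form); a standard example
    # of this class of models, not the specific system analysed in the paper.
    a1, b1, a2, b2, d1, d2 = 5.0, 3.0, 0.1, 2.0, 0.4, 0.01

    def food_chain(t, u):
        x, y, z = u                                   # prey, intermediate predator, top predator
        f1 = a1 * x / (1.0 + b1 * x)                  # Holling type II functional responses
        f2 = a2 * y / (1.0 + b2 * y)
        return [x * (1.0 - x) - f1 * y,
                f1 * y - f2 * z - d1 * y,
                f2 * z - d2 * z]

    sol = solve_ivp(food_chain, (0.0, 2000.0), [0.8, 0.2, 8.0],
                    t_eval=np.linspace(0, 2000, 10000), rtol=1e-8)
    # Crude summary of the late-time attractor: min/max of each population.
    print("late-time ranges:",
          [(s.min().round(2), s.max().round(2)) for s in sol.y[:, -5000:]])
    ```

    Sweeping a parameter such as b1 and recording these late-time extrema is one simple way to build the kind of bifurcation diagram the abstract refers to.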

  6. A theoretical Markov chain model for evaluating correctional ...

    African Journals Online (AJOL)

    In this paper a stochastic method is applied to study the long-term effect of confinement in a correctional institution on the behaviour of a person with criminal tendencies. The approach used is the Markov chain, which uses past history to predict the future state of a system. A model is developed for comparing the ...

  7. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle cost (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
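
    As a toy version of such a model (not the study's HSTV model), the sketch below treats vehicle turnaround as an absorbing Markov chain with made-up transition probabilities and per-visit costs; the fundamental matrix gives the expected number of visits to each state and hence the expected OS cost before retirement.

    ```python
    import numpy as np

    # States: 0 = flight operations, 1 = scheduled maintenance, 2 = unscheduled repair,
    # 3 = retirement (absorbing). Transition probabilities and per-visit costs are made up.
    P = np.array([[0.00, 0.80, 0.18, 0.02],
                  [0.95, 0.00, 0.04, 0.01],
                  [0.90, 0.05, 0.00, 0.05],
                  [0.00, 0.00, 0.00, 1.00]])
    cost_per_visit = np.array([2.0, 1.0, 4.0])     # $M per visit to each transient state

    Q = P[:3, :3]                                  # transitions among transient states
    N = np.linalg.inv(np.eye(3) - Q)               # fundamental matrix: expected visit counts

    expected_visits = N[0]                         # starting from flight operations
    expected_os_cost = expected_visits @ cost_per_visit

    print("expected visits before retirement:", expected_visits.round(1))
    print(f"expected OS cost over the life cycle: ${expected_os_cost:.1f}M")
    ```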

  8. Using hierarchical linear growth models to evaluate protective mechanisms that mediate science achievement

    Science.gov (United States)

    von Secker, Clare Elaine

    The study of students at risk is a major topic of science education policy and discussion. Much research has focused on describing conditions and problems associated with the statistical risk of low science achievement among individuals who are members of groups characterized by problems such as poverty and social disadvantage. But outcomes attributed to these factors do not explain the nature and extent of mechanisms that account for differences in performance among individuals at risk. There is ample theoretical and empirical evidence that demographic differences should be conceptualized as social contexts, or collections of variables, that alter the psychological significance and social demands of life events, and affect subsequent relationships between risk and resilience. The hierarchical linear growth models used in this dissertation provide greater specification of the role of social context and the protective effects of attitude, expectations, parenting practices, peer influences, and learning opportunities on science achievement. While the individual influences of these protective factors on science achievement were small, their cumulative effect was substantial. Meta-analysis conducted on the effects associated with psychological and environmental processes that mediate risk mechanisms in sixteen social contexts revealed twenty-two significant differences between groups of students. Positive attitudes, high expectations, and more intense science course-taking had positive effects on achievement of all students, although these factors were not equally protective in all social contexts. In general, effects associated with authoritative parenting and peer influences were negative, regardless of social context. An evaluation comparing the performance and stability of hierarchical linear growth models with traditional repeated measures models is included as well.

  9. An Integrated Risk Index Model Based on Hierarchical Fuzzy Logic for Underground Risk Assessment

    Directory of Open Access Journals (Sweden)

    Muhammad Fayaz

    2017-10-01

    Full Text Available Available space in congested cities is getting scarce due to growing urbanization in the recent past. The utilization of underground space is considered a solution to the limited space in smart cities. The number of underground facilities is growing day by day in the developing world. Typical underground facilities include transit subways, parking lots, electric lines, and water supply and sewer lines. The occurrence of accidents due to underground facilities is a random phenomenon. To avoid accidental losses, a risk assessment method is required to conduct continuous risk assessment and report any abnormality before it happens. In this paper, we propose a hierarchical fuzzy inference-based model for underground risk assessment. The proposed hierarchical fuzzy inference architecture reduces the total number of rules in the rule base. Rule reduction is important because the curse of dimensionality damages transparency and interpretability, as it is very hard to understand and justify hundreds or thousands of fuzzy rules; computation time also increases as the number of rules grows. The proposed model takes 175 rules with eight input parameters to compute the risk index, whereas conventional fuzzy logic requires 390,625 rules with the same number of input parameters. Hence, the proposed model significantly reduces the curse of dimensionality. Rule design for fuzzy logic is also a tedious task. In this paper, we also introduce new rule schemes, namely maximum rule-based and average rule-based; both schemes can be used interchangeably according to the logic needed for rule design. The experimental results show that the proposed method is a good choice for risk index calculation when the number of variables is large.
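
    The rule-reduction arithmetic can be illustrated with a short sketch. The binary-tree grouping of two-input inference blocks below is an assumption that happens to reproduce the 175-rule figure; the paper's exact architecture may differ.

      # Flat (conventional) fuzzy system: one rule per combination of membership functions.
      n_inputs, n_mf = 8, 5
      flat_rules = n_mf ** n_inputs                        # 5**8 = 390,625 rules

      # Hypothetical hierarchy of two-input inference blocks arranged as a binary tree:
      # 4 blocks on the raw inputs, 2 blocks on their outputs, 1 final block.
      two_input_blocks = 4 + 2 + 1
      hierarchical_rules = two_input_blocks * n_mf ** 2    # 7 * 25 = 175 rules

      print(flat_rules, hierarchical_rules)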

  10. Spatial patterns of breeding success of grizzly bears derived from hierarchical multistate models.

    Science.gov (United States)

    Fisher, Jason T; Wheatley, Matthew; Mackenzie, Darryl

    2014-10-01

    Conservation programs often manage populations indirectly through the landscapes in which they live. Empirically, linking reproductive success with landscape structure and anthropogenic change is a first step in understanding and managing the spatial mechanisms that affect reproduction, but this link is not sufficiently informed by data. Hierarchical multistate occupancy models can forge these links by estimating spatial patterns of reproductive success across landscapes. To illustrate, we surveyed the occurrence of grizzly bears (Ursus arctos) in the Canadian Rocky Mountains, Alberta, Canada. We deployed camera traps for 6 weeks at 54 survey sites in different types of land cover. We used hierarchical multistate occupancy models to estimate probability of detection, grizzly bear occupancy, and probability of reproductive success at each site. Grizzly bear occupancy varied among cover types and was greater in herbaceous alpine ecotones than in low-elevation wetlands or mid-elevation conifer forests. The conditional probability of reproductive success given grizzly bear occupancy was 30% (SE = 0.14). Grizzly bears with cubs had a higher probability of detection than grizzly bears without cubs, but sites were correctly classified as being occupied by breeding females only 49% of the time based on raw data and thus would have been underestimated by half. Repeated surveys and multistate modeling reduced the probability of misclassifying sites occupied by breeders as unoccupied to <2%. The probability of breeding grizzly bear occupancy varied across the landscape. The patches with the highest probabilities of breeding occupancy, herbaceous alpine ecotones, were small and highly dispersed and are projected to shrink as treelines advance due to climate warming. Understanding spatial correlates in breeding distribution is a key requirement for species conservation in the face of climate change and can help identify priorities for landscape management and protection. © 2014 Society

  11. Reduced Rank Mixed Effects Models for Spatially Correlated Hierarchical Functional Data

    KAUST Repository

    Zhou, Lan

    2010-03-01

    Hierarchical functional data are widely seen in complex studies where sub-units are nested within units, which in turn are nested within treatment groups. We propose a general framework of functional mixed effects model for such data: within unit and within sub-unit variations are modeled through two separate sets of principal components; the sub-unit level functions are allowed to be correlated. Penalized splines are used to model both the mean functions and the principal components functions, where roughness penalties are used to regularize the spline fit. An EM algorithm is developed to fit the model, while the specific covariance structure of the model is utilized for computational efficiency to avoid storage and inversion of large matrices. Our dimension reduction with principal components provides an effective solution to the difficult tasks of modeling the covariance kernel of a random function and modeling the correlation between functions. The proposed methodology is illustrated using simulations and an empirical data set from a colon carcinogenesis study. Supplemental materials are available online.

  12. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011

  13. Class hierarchical test case generation algorithm based on expanded EMDPN model

    Institute of Scientific and Technical Information of China (English)

    LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang

    2006-01-01

    A new model of the event- and message-driven Petri network (EMDPN), based on the characteristics of class interaction for message passing between two objects, was extended. Using the EMDPN interaction graph, a class-hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems arising from the class inheritance mechanism in object-oriented software testing, such as oracles, message transfer errors, and unreachable statements. Finally, testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases generated by the newly proposed automatic copath-generation algorithm satisfy the synchronization message sequence testing criteria; therefore, the proposed copath-generation algorithm achieves a good coverage rate.

  14. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data.

    Science.gov (United States)

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels, averaged across trials. More recently, tools have been developed to investigate single-trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) for analysing evoked responses over all space and time dimensions, while accounting for single-trial variability using simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore offering a new and complementary tool in the analysis of neural evoked responses.
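
    A toy numerical sketch of the two-level idea behind such hierarchical linear modelling (not the LIMO EEG code itself, and using entirely synthetic data): first-level GLMs are fitted to single trials within each subject, and the resulting condition betas are then tested across subjects at the second level.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_subjects, n_trials = 12, 80

      betas = []
      for _ in range(n_subjects):
          X = np.column_stack([np.ones(n_trials),
                               rng.integers(0, 2, n_trials)])        # intercept + condition
          y = X @ np.array([1.0, 0.5]) + rng.normal(0, 1, n_trials)  # single-trial amplitudes
          b, *_ = np.linalg.lstsq(X, y, rcond=None)                  # first level: per-subject GLM
          betas.append(b[1])                                         # condition effect estimate

      # Second level: test the condition effect across subjects.
      t, p = stats.ttest_1samp(betas, 0.0)
      print(t, p)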

  15. Land transportation model for supply chain manufacturing industries

    Science.gov (United States)

    Kurniawan, Fajar

    2017-12-01

    Supply chain is a system that integrates production, inventory, distribution and information processes in order to increase productivity and minimize costs. Transportation is an important part of the supply chain system, especially for supporting the distribution of materials, work-in-process products and final products. Jakarta serves as the distribution center of manufacturing industries for the surrounding industrial area, and its transportation system has a large influence on the efficiency of the supply chain process. The main problem faced in Jakarta is traffic congestion, which affects distribution time. Based on a system dynamics model, there are several scenarios that can provide solutions to minimize distribution time, and thereby cost, such as the construction of ports close to industrial areas other than Tanjung Priok, widening of road facilities, development of the railway system, and development of distribution centers.

  16. Profit Analysis and Supply Chain Planning Model for Closed-Loop Supply Chain in Fashion Industry

    Directory of Open Access Journals (Sweden)

    Jisoo Oh

    2014-12-01

    Full Text Available In recent decades, due to market growth and the use of synthetic fiber, the fashion industry has faced a rapid increase in CO2 emissions throughout the production cycle, raising environmental issues in recovery processing. This study proposes a closed-loop supply chain (CLSC) structure for the fashion industry and develops its planning model as a multi-objective mixed integer linear program to find an optimal trade-off between CLSC profit and CO2 emission. The planning model is associated with a profit analysis of each member in the CLSC to find the optimal price of products on the CLSC network. The model determines optimal production, transportation, and inventory quantities on the CLSC network. The proposed models are validated using numerical experiments and sensitivity analyses, and from the results some managerial insights are addressed.
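
    A highly simplified sketch of the weighted-sum trade-off idea, using a continuous linear program instead of the paper's mixed integer formulation and entirely made-up margins, emissions and capacities:

      import numpy as np
      from scipy.optimize import linprog

      # Two hypothetical product lines: new garments and remanufactured garments.
      margin   = np.array([30.0, 18.0])   # profit per unit
      emission = np.array([12.0,  2.0])   # kg CO2 per unit
      demand   = np.array([500.0, 300.0])
      capacity = 600.0                    # shared production capacity

      def plan(w_profit, w_co2):
          # linprog minimizes, so negate the profit term of the weighted sum.
          c = -w_profit * margin + w_co2 * emission
          res = linprog(c,
                        A_ub=np.ones((1, 2)), b_ub=[capacity],
                        bounds=list(zip([0, 0], demand)))
          return res.x

      for w in (0.0, 1.0, 2.0):           # sweep the emission weight
          print(w, plan(1.0, w))          # production shifts toward the cleaner product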

  17. Supply chain management models, applications, and research directions

    CERN Document Server

    Pardalos, Panos; Romeijn, H

    2005-01-01

    This work brings together some of the most up-to-date research in the application of operations research and mathematical modeling techniques to problems arising in supply chain management and e-Commerce. While research in the broad area of supply chain management encompasses a wide range of topics and methodologies, we believe this book provides a good snapshot of current quantitative modeling approaches, issues, and trends within the field. Each chapter is a self-contained study of a timely and relevant research problem in supply chain management. The individual works place a heavy emphasis on the application of modeling techniques to real world management problems. In many instances, the actual results from applying these techniques in practice are highlighted. In addition, each chapter provides important managerial insights that apply to general supply chain management practice. The book is divided into three parts. The first part contains chapters that address the new and rapidly growing role of the inte...

  18. Probabilistic daily ILI syndromic surveillance with a spatio-temporal Bayesian hierarchical model.

    Directory of Open Access Journals (Sweden)

    Ta-Chien Chan

    Full Text Available BACKGROUND: For daily syndromic surveillance to be effective, an efficient and sensible algorithm would be expected to detect aberrations in influenza illness, and alert public health workers prior to any impending epidemic. This detection or alert surely contains uncertainty, and thus should be evaluated with a proper probabilistic measure. However, traditional monitoring mechanisms simply provide a binary alert, failing to adequately address this uncertainty. METHODS AND FINDINGS: Based on the Bayesian posterior probability of influenza-like illness (ILI visits, the intensity of outbreak can be directly assessed. The numbers of daily emergency room ILI visits at five community hospitals in Taipei City during 2006-2007 were collected and fitted with a Bayesian hierarchical model containing meteorological factors such as temperature and vapor pressure, spatial interaction with conditional autoregressive structure, weekend and holiday effects, seasonality factors, and previous ILI visits. The proposed algorithm recommends an alert for action if the posterior probability is larger than 70%. External data from January to February of 2008 were retained for validation. The decision rule detects successfully the peak in the validation period. When comparing the posterior probability evaluation with the modified Cusum method, results show that the proposed method is able to detect the signals 1-2 days prior to the rise of ILI visits. CONCLUSIONS: This Bayesian hierarchical model not only constitutes a dynamic surveillance system but also constructs a stochastic evaluation of the need to call for alert. The monitoring mechanism provides earlier detection as well as a complementary tool for current surveillance programs.
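
    The alert rule itself is easy to express. The sketch below assumes posterior draws of the expected ILI count for a given day are already available (e.g., from an MCMC fit of such a hierarchical model) and flags the day when the posterior probability of exceeding a baseline passes 70%; the draws here are synthetic.

      import numpy as np

      def ili_alert(posterior_draws, baseline, prob_threshold=0.70):
          """posterior_draws: MCMC samples of expected ILI visits for one day."""
          exceed_prob = np.mean(np.asarray(posterior_draws) > baseline)
          return exceed_prob, exceed_prob > prob_threshold

      rng = np.random.default_rng(1)
      draws = rng.gamma(shape=50, scale=1.2, size=4000)   # synthetic posterior draws
      print(ili_alert(draws, baseline=55))                # (probability, alert flag)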

  19. Interneuronal Mechanism for Tinbergen’s Hierarchical Model of Behavioral Choice

    Science.gov (United States)

    Pirger, Zsolt; Crossley, Michael; László, Zita; Naskar, Souvik; Kemenes, György; O’Shea, Michael; Benjamin, Paul R.; Kemenes, Ildikó

    2014-01-01

    Summary Recent studies of behavioral choice support the notion that the decision to carry out one behavior rather than another depends on the reconfiguration of shared interneuronal networks [1]. We investigated another decision-making strategy, derived from the classical ethological literature [2, 3], which proposes that behavioral choice depends on competition between autonomous networks. According to this model, behavioral choice depends on inhibitory interactions between incompatible hierarchically organized behaviors. We provide evidence for this by investigating the interneuronal mechanisms mediating behavioral choice between two autonomous circuits that underlie whole-body withdrawal [4, 5] and feeding [6] in the pond snail Lymnaea. Whole-body withdrawal is a defensive reflex that is initiated by tactile contact with predators. As predicted by the hierarchical model, tactile stimuli that evoke whole-body withdrawal responses also inhibit ongoing feeding in the presence of feeding stimuli. By recording neurons from the feeding and withdrawal networks, we found no direct synaptic connections between the interneuronal and motoneuronal elements that generate the two behaviors. Instead, we discovered that behavioral choice depends on the interaction between two unique types of interneurons with asymmetrical synaptic connectivity that allows withdrawal to override feeding. One type of interneuron, the Pleuro-Buccal (PlB), is an extrinsic modulatory neuron of the feeding network that completely inhibits feeding when excited by touch-induced monosynaptic input from the second type of interneuron, Pedal-Dorsal12 (PeD12). PeD12 plays a critical role in behavioral choice by providing a synaptic pathway joining the two behavioral networks that underlies the competitive dominance of whole-body withdrawal over feeding. PMID:25155505

  20. Hierarchical Bayesian Spatio Temporal Model Comparison on the Earth Trapped Particle Forecast

    International Nuclear Information System (INIS)

    Suparta, Wayan; Gusrizal

    2014-01-01

    We compared two hierarchical Bayesian spatio-temporal (HBST) models, a Gaussian process (GP) model and an autoregressive (AR) model, for forecasting the Earth's trapped particles. Both models were applied to the South Atlantic Anomaly (SAA) region. Electrons of >30 keV (mep0e1) from National Oceanic and Atmospheric Administration (NOAA) 15-18 satellite data were chosen as the modeled particles. We used two weeks of data to perform the model fitting on a 5°x5° grid of longitude and latitude, and 31 August 2007 was set as the forecast date. Three statistical validations were performed on the data: the root mean square error (RMSE), mean absolute percentage error (MAPE) and bias (BIAS). The statistical analysis showed that the GP model performed better than the AR model, with averages of RMSE = 0.38 and 0.63, MAPE = 11.98 and 17.30, and BIAS = 0.32 and 0.24 for GP and AR, respectively. Visual validation of both models against the NOAA maps also confirmed the superiority of the GP model over the AR model. The variance of the minimum log flux was 0.09 and 1.09, and of the maximum log flux 1.15 and 1.35, for the GP and AR models, respectively.
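
    The three validation statistics are standard; a small sketch of how they might be computed is given below (the definitions and the toy values are assumptions, since the abstract does not state the exact formulas):

      import numpy as np

      def validation_stats(obs, pred):
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          rmse = np.sqrt(np.mean((pred - obs) ** 2))          # root mean square error
          mape = 100.0 * np.mean(np.abs((pred - obs) / obs))  # mean absolute percentage error
          bias = np.mean(pred - obs)                          # mean signed error
          return rmse, mape, bias

      obs  = [3.2, 2.9, 3.5, 4.1]    # e.g., observed log10 electron flux (toy values)
      pred = [3.0, 3.1, 3.4, 4.4]    # model forecasts (toy values)
      print(validation_stats(obs, pred))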

  1. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14·3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.

  2. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  3. Labour Quality Model for Organic Farming Food Chains

    OpenAIRE

    Gassner, B.; Freyer, B.; Leitner, H.

    2008-01-01

    The debate on labour quality in science is controversial, as it is in the organic agriculture community. Therefore, we reviewed literature on different labour quality models and definitions, and held key informant interviews on labour quality issues with stakeholders in a regionally oriented organic agriculture bread food chain. We developed a labour quality model with nine quality categories and discussed linkages to labour satisfaction, ethical values and IFOAM principles.

  4. Model-Based Engineering for Supply Chain Risk Management

    Science.gov (United States)

    2015-09-30

    The Architecture Analysis & Design Language (AADL), which has tools for modeling and compliance verification, provides an effective capability to model and describe all component...the outsourced units in a supply chain can be an impossible task where the product might be composed of 10,000 individual components at the 4th or...can be used to guide the process of monitoring the award and assurance of the outsourced work. Safety-critical verification of cyber-physical

  5. Scaling local species-habitat relations to the larger landscape with a hierarchical spatial count model

    Science.gov (United States)

    Thogmartin, W.E.; Knutson, M.G.

    2007-01-01

    Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.

  6. Hierarchical Kinematic Modelling and Optimal Design of a Novel Hexapod Robot with Integrated Limb Mechanism

    Directory of Open Access Journals (Sweden)

    Guiyang Xin

    2015-09-01

    Full Text Available This paper presents a novel hexapod robot, hereafter named PH-Robot, with three-degree-of-freedom (3-DOF) parallel leg mechanisms based on the concept of an integrated limb mechanism (ILM) for the integration of legged locomotion and arm manipulation. The kinematic model plays an important role in the parametric optimal design and motion planning of robots. However, models of parallel mechanisms are often difficult to obtain because of the implicit relationship between the motions of actuated joints and the motion of the moving platform. In order to derive the kinematic equations of the proposed hexapod robot, an extended hierarchical kinematic modelling method is proposed. According to the kinematic model, the geometrical parameters of the leg are optimized using a comprehensive objective function that considers both dexterity and payload. A comparison of performance indices shows that PH-Robot has distinct advantages in accuracy and load capacity over a robot with serial leg mechanisms. The reachable workspace of the leg verifies its ability to walk and manipulate. The results of the trajectory tracking experiment demonstrate the correctness of the kinematic model of the hexapod robot.

  7. Deformations of N=4 SYM and integrable spin chain models

    International Nuclear Information System (INIS)

    Berenstein, David; Cherkis, Sergey A.

    2004-01-01

    Beginning with the planar limit of N=4 SYM theory, we study planar diagrams for field theory deformations of N=4 which are marginal at the free field theory level. We show that the requirement of integrability of the full one-loop dilatation operator in the scalar sector places very strong constraints on the field theory, so that the only soluble models correspond essentially to orbifolds of N=4 SYM. For these, the associated spin chain model gets twisted boundary conditions that depend on the length of the chain, but which are still integrable. We also show that theories with integrable subsectors appear quite generically, and it is possible to engineer integrable subsectors to have some specific symmetry, however these do not generally lead to full integrability. We also try to construct a theory whose spin chain has quantum group symmetry SOq(6) as a deformation of the SO(6) R-symmetry structure of N=4 SYM. We show that it is not possible to obtain a spin chain with that symmetry from deformations of the scalar potential of N=4 SYM. We also show that the natural context for these questions can be better phrased in terms of multi-matrix quantum mechanics rather than in four-dimensional field theories.

  8. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  9. Dose estimation with the help of food chain compartment models

    International Nuclear Information System (INIS)

    Murzin, N.V.

    1987-01-01

    Food chain chamber (compartment) models for the calculation of human irradiation doses are considered. Chamber models are divided into steady-state chamber models (SSCM) and dynamic chamber models (DCM) according to the type of interaction between chambers. SSCMs are built on the postulate that a steady-state equilibrium exists within the organism-environment system. DCMs are based on two main assumptions: 1) the food chain may be divided into several interacting chambers between which radionuclide exchange occurs, with the radionuclide specific activity identical in all parts of a chamber at any instant of time; 2) radionuclide losses from a chamber are proportional to the radionuclide specific activity in that chamber. The construction principles for an economical chamber model are considered.
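
    A minimal dynamic chamber sketch built directly on the two stated assumptions, with hypothetical first-order transfer and decay coefficients (a three-chamber soil-plant-animal chain is assumed purely for illustration):

      import numpy as np

      # Hypothetical three-chamber food chain: soil -> plant -> animal product.
      k_soil_plant, k_plant_animal = 0.05, 0.10   # 1/day transfer rates (assumed)
      k_decay = 0.002                             # 1/day radioactive decay (assumed)

      def simulate(a0=(1.0, 0.0, 0.0), days=365, dt=0.1):
          a = np.array(a0, float)                 # specific activities per chamber
          for _ in range(int(days / dt)):
              da = np.array([
                  -(k_soil_plant + k_decay) * a[0],
                  k_soil_plant * a[0] - (k_plant_animal + k_decay) * a[1],
                  k_plant_animal * a[1] - k_decay * a[2],
              ])
              a += dt * da                        # forward-Euler step of the linear ODEs
          return a

      print(simulate())                           # activities after one year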

  10. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  11. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...
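
    A toy sketch of the micro-to-macro construction (not taken from the book): two binary-state agents that each copy the other's state define a four-state micro chain, and lumping configurations by the number of agents in state 1 yields a three-state macro chain.

      import numpy as np
      from itertools import product

      # Micro states: all configurations of 2 binary agents (a toy voter-style ABM).
      configs = list(product([0, 1], repeat=2))

      def step_prob(x, y):
          """P(x -> y) when one agent, chosen uniformly, copies the other's state."""
          p = 0.0
          for i in (0, 1):
              z = list(x)
              z[i] = x[1 - i]
              if tuple(z) == y:
                  p += 0.5
          return p

      P_micro = np.array([[step_prob(x, y) for y in configs] for x in configs])

      # Macro projection: lump configurations by the number of agents in state 1.
      macro = [sum(x) for x in configs]          # 0, 1, 1, 2
      Pi = np.zeros((4, 3))
      for i, m in enumerate(macro):
          Pi[i, m] = 1.0

      # Aggregated transition matrix; the lumping is exact for this toy model
      # because equivalent micro states have identical macro transition rows.
      P_macro = np.linalg.pinv(Pi) @ P_micro @ Pi
      print(P_macro)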

  12. Modeling nutrient flows in the food chain of China.

    Science.gov (United States)

    Ma, L; Ma, W Q; Velthof, G L; Wang, F H; Qin, W; Zhang, F S; Oenema, O

    2010-01-01

    Increasing nitrogen (N) and phosphorus (P) inputs have greatly contributed to the increasing food production in China during the last decades, but have also increased N and P losses to the environment. The pathways and magnitude of these losses are not well quantified. Here, we report on N and P use efficiencies and losses at a national scale in 2005, using the model NUFER (NUtrient flows in Food chains, Environment and Resources use). The total amount of "new" N imported to the food chain was 48.8 Tg in 2005. Only 4.4 Tg reached households as food. Average N use efficiencies in crop production, animal production, and the whole food chain were 26, 11, and 9%, respectively. Most of the imported N was lost to the environment, that is, 23 Tg N to the atmosphere, as ammonia (57%), nitrous oxide (2%), dinitrogen (33%), and nitrogen oxides (8%), and 20 Tg to waters. The total P input into the food chain was 7.8 Tg. The average P use efficiencies in crop production, animal production, and the whole food chain were 36, 5, and 7%, respectively. This is the first comprehensive overview of N and P balances, losses, and use efficiencies of the food chain in China. It shows that the N and P costs of food are high (for N 11 kg kg(-1), for P 13 kg kg(-1)). Key measures for lowering the N and P costs of food production are (i) increasing crop and animal production, (ii) balanced fertilization, and (iii) improved manure management.

  13. Classification of customer lifetime value models using Markov chain

    Science.gov (United States)

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm’s potential reward from a customer in future time can be determined by the customer lifetime value (CLV). There are several mathematical methods for calculating it. One method uses a Markov chain stochastic model. Here, a customer is assumed to move through a set of states, and transitions between the states satisfy the Markov property. If we are given the states for a customer and the relationships between states, then we can build Markov models to describe the behaviour of the customer. In these Markov models, CLV is defined as a vector containing the CLV for a customer starting in each state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with more states; each development is motivated by weaknesses of the previous model. The final models can be expected to describe the real characteristics of customers in a firm.
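
    A small numerical sketch of the two-state version described above (all figures are illustrative): with a transition matrix over customer states, a per-period reward vector and a discount factor d, the infinite-horizon CLV vector has the standard closed form CLV = (I - dP)^(-1) r.

      import numpy as np

      # Hypothetical two-state model: "active" and "inactive" customer.
      P = np.array([[0.8, 0.2],     # active   -> active / inactive
                    [0.3, 0.7]])    # inactive -> active / inactive
      r = np.array([100.0, 0.0])    # expected profit per period in each state
      d = 0.9                       # one-period discount factor

      # Infinite-horizon CLV vector: CLV = (I - d P)^(-1) r
      clv = np.linalg.solve(np.eye(2) - d * P, r)
      print(clv)                    # CLV for a customer starting in each state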

  14. Hierarchical Bayesian models to assess between- and within-batch variability of pathogen contamination in food.

    Science.gov (United States)

    Commeau, Natalie; Cornu, Marie; Albert, Isabelle; Denis, Jean-Baptiste; Parent, Eric

    2012-03-01

    Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure, comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. Models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon and contamination by the same microorganism on cold smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level, that is, between-batch variability is relatively strong, whereas in the second case a structure also exists but is less marked. © 2012 Society for Risk Analysis.

  15. Bayesian Uncertainty Quantification for Subsurface Inversion Using a Multiscale Hierarchical Model

    KAUST Repository

    Mondal, Anirban

    2014-07-03

    We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a random field (spatial or temporal). The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources and provide a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loeve expansion is used for dimension reduction of the random field. Furthermore, we use a hierarchical Bayes model to inject multiscale data in the modeling framework. In this Bayesian framework, we show that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in total variation norm. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g., in the context of MCMC) and are compounded by high dimensionality of the posterior. We develop two-stage reversible jump MCMC that has the ability to screen the bad proposals in the first inexpensive stage. Numerical results are presented by analyzing simulated as well as real data from hydrocarbon reservoir. This article has supplementary material available online. © 2014 American Statistical Association and the American Society for Quality.

  16. Comparison of Extreme Precipitation Return Levels using Spatial Bayesian Hierarchical Modeling versus Regional Frequency Analysis

    Science.gov (United States)

    Love, C. A.; Skahill, B. E.; AghaKouchak, A.; Karlovits, G. S.; England, J. F.; Duren, A. M.

    2017-12-01

    We compare gridded extreme precipitation return levels obtained using spatial Bayesian hierarchical modeling (BHM) with their respective counterparts from a traditional regional frequency analysis (RFA) using the same set of extreme precipitation data. Our study area is the 11,478 square mile Willamette River basin (WRB) located in northwestern Oregon, a major tributary of the Columbia River whose 187-mile-long main stem, the Willamette River, flows northward between the Coastal and Cascade Ranges. The WRB contains approximately two-thirds of Oregon's population and 20 of the 25 most populous cities in the state. The U.S. Army Corps of Engineers (USACE) Portland District operates thirteen dams, and extreme precipitation estimates are required to support risk-informed hydrologic analyses as part of the USACE Dam Safety Program. Our intent is to profile for the USACE an alternate methodology to an RFA that was developed in 2008 due to the lack of an official NOAA Atlas 14 update for the state of Oregon. We analyze 24-hour annual precipitation maxima data for the WRB utilizing the spatial BHM R package "spatial.gev.bma", which has been shown to be efficient in developing coherent maps of extreme precipitation by return level. Our BHM modeling analysis involved application of leave-one-out cross validation (LOO-CV), which not only supported model selection but also a comprehensive assessment of location-specific model performance. The LOO-CV results will provide a basis for the BHM RFA comparison.
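
    For orientation, a single-site (non-spatial, non-Bayesian) GEV fit shows how a return level is read off a fitted distribution; the annual maxima below are synthetic, and the fit ignores the spatial pooling that both the BHM and the RFA provide.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      annual_max = rng.gumbel(loc=60.0, scale=15.0, size=40)   # synthetic 24-h maxima (mm)

      # Fit a GEV by maximum likelihood and compute the 100-year return level,
      # i.e. the quantile with annual exceedance probability 1/100.
      shape, loc, scale = stats.genextreme.fit(annual_max)
      rl_100 = stats.genextreme.ppf(1 - 1.0 / 100, shape, loc=loc, scale=scale)
      print(rl_100)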

  17. TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model

    Science.gov (United States)

    Meurice, Y.

    2007-06-01

    We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large symmetry group drastically simplifies the blockspinning procedure. Several equivalent forms of the recursion formula are presented with unified notations. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points. This question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).

  18. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yiming Yan

    2017-01-01

    Full Text Available In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), for use in a novel 3D reconstruction framework, is proposed. Most 3D reconstructions of buildings are model-based, but these methods rely heavily on the completeness of offline-constructed building models, which is not easily guaranteed since buildings in modern cities come in a wide variety of types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings was introduced. There are two key problems with this framework. The first is how to accurately extract the buildings from the DSM. Most segmentation methods are limited either by terrain factors or by the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and then a recently proposed ‘occlusions of random textures model’ is used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Working with the corresponding texture images, we propose a roof-contour-guided interpolation of building facades. The 3D reconstruction results achieved with airborne-like images and satellite images are compared. Experiments show that the segmentation method performs well, 3D reconstruction is easily carried out within our framework, and better visualization results are obtained with airborne-like images, which can in turn be replaced by UAV images.

  19. A hierarchical model for structure learning based on the physiological characteristics of neurons

    Institute of Scientific and Technical Information of China (English)

    WEI Hui

    2007-01-01

    Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. The characteristics of typical ANN models are fixed connections, with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach. This makes those models deficient in robustness, learning efficiency, capacity, anti-jamming between training sets, correlativity of samples, etc. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations of neuron fan-in and fan-out, dynamic behavior of neurons, and samples represented through different sub-circuits of neurons specialized into different response patterns. At the end of this paper, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. This paper demonstrates the feasibility and advantages of structure learning and representation. This model can serve as a fundamental element of cognitive systems such as perception and associative memory. Keywords: structure learning, representation, associative memory, computational neuroscience.

  20. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.

    Science.gov (United States)

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-06-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.

  2. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    Science.gov (United States)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
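
    The source-spectrum fit at the heart of this comparison can be sketched with a simple least-squares version (the study's Bayesian hierarchical treatment is more involved); the Brune-type spectral shape with a free falloff rate and the synthetic data below are assumptions for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def source_spectrum(f, omega0, fc, n):
          """Brune-type displacement spectrum with a free falloff rate n."""
          return omega0 / (1.0 + (f / fc) ** n)

      rng = np.random.default_rng(3)
      f = np.logspace(-1, 1, 60)                                        # frequencies (Hz)
      obs = source_spectrum(f, 1.0, 0.8, 1.6) * rng.lognormal(0, 0.1, f.size)

      popt, _ = curve_fit(source_spectrum, f, obs, p0=[1.0, 1.0, 2.0])
      omega0, fc, n = popt
      print(fc, n)   # fixing n = 2 instead of fitting it would bias fc upward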

  3. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    Science.gov (United States)

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Exploring the Effects of Congruence and Holland's Personality Codes on Job Satisfaction: An Application of Hierarchical Linear Modeling Techniques

    Science.gov (United States)

    Ishitani, Terry T.

    2010-01-01

    This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included nationally represented 1462 female and 1280 male college graduates who…

  5. Factors associated with leisure time physical inactivity in black individuals: hierarchical model

    Directory of Open Access Journals (Sweden)

    Francisco José Gondim Pitanga

    2014-09-01

    Full Text Available Background. A number of studies have shown that the black population exhibits higher levels of leisure-time physical inactivity (LTPI), but few have investigated the factors associated with this behavior. Objective. The aim of this study was to analyze associated factors and the explanatory model proposed for LTPI in black adults. Methods. The design was cross-sectional, with a sample of 2,305 adults from 20–96 years of age, 902 (39.1%) of them men, living in the city of Salvador, Brazil. LTPI was analyzed using the International Physical Activity Questionnaire (IPAQ). A hierarchical model was built with the possible factors associated with LTPI, distributed in distal (age and sex), intermediate 1 (socioeconomic status, educational level and marital status), intermediate 2 (perception of safety/violence in the neighborhood, racial discrimination in private settings and physical activity at work), and proximal (smoking and participation in Carnival block rehearsals) blocks. We estimated crude and adjusted odds ratios (OR) using logistic regression. Results. The variables inversely associated with LTPI were male gender, socioeconomic status and secondary/university education, although the proposed model explains only 4.2% of LTPI. Conclusions. We conclude that male gender, higher education and socioeconomic status can reduce LTPI in black adults.

  6. An Integrated Model Based on a Hierarchical Indices System for Monitoring and Evaluating Urban Sustainability

    Directory of Open Access Journals (Sweden)

    Xulin Guo

    2013-02-01

    Full Text Available Over 50% of the world’s population presently resides in cities, and this number is expected to rise to ~70% by 2050. Increasing urbanization problems, including population growth, urban sprawl, land use change, unemployment, and environmental degradation, have markedly impacted urban residents’ Quality of Life (QOL). Therefore, urban sustainability and its measurement have gained increasing attention from administrators, urban planners, and scientific communities throughout the world with respect to improving urban development and human well-being. The widely accepted definition of urban sustainability emphasizes the balanced development of three primary domains (urban economy, society, and environment). This article attempts to improve the aforementioned definition of urban sustainability by incorporating a human well-being dimension. Major problems identified in existing urban sustainability indicator (USI) models include a weak integration of potential indicators, poor measurement and quantification, and insufficient spatial-temporal analysis. To tackle these challenges, an integrated USI model based on a hierarchical indices system was established for monitoring and evaluating urban sustainability. This model can be implemented by quantifying indicators using both traditional statistical approaches and advanced geomatic techniques based on satellite imagery and census data, which aims to provide a theoretical basis for a comprehensive assessment of urban sustainability from a spatial-temporal perspective.

  7. A bayesian hierarchical model for classification with selection of functional predictors.

    Science.gov (United States)

    Zhu, Hongxiao; Vannucci, Marina; Cox, Dennis D

    2010-06-01

    In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts, or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when predictors consist of multiple functions, some of which may be redundant. The above issues arise in a real data application where we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that takes into account random batch effects and selects effective functions among multiple functional predictors. Fixed effects or predictors in nonfunctional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling, we use a hybrid Metropolis-Hastings/Gibbs sampler, which suffers from slow mixing. An evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.

  8. Teacher characteristics and student performance: An analysis using hierarchical linear modelling

    Directory of Open Access Journals (Sweden)

    Paula Armstrong

    2015-12-01

    Full Text Available This research makes use of hierarchical linear modelling to investigate which teacher characteristics are significantly associated with student performance. Using data from the SACMEQ III study of 2007, an interesting and potentially important finding is that younger teachers are better able to improve the mean mathematics performance of their students. Furthermore, younger teachers themselves perform better on subject tests than do their older counterparts. Identical models are run for sub-Saharan countries bordering South Africa, as well as for Kenya, and the strong relationship between teacher age and student performance is not observed. Similarly, the model is run for South Africa using data from SACMEQ II (conducted in 2002), and the relationship between teacher age and student performance is also not observed. It must be noted that South African teachers were not tested in SACMEQ II, so it was not possible to observe differences in subject knowledge amongst teachers in different cohorts, nor to control for teachers' level of subject knowledge when observing the relationship between teacher age and student performance. Changes in teacher education in the late 1990s and early 2000s may explain the differences in the performance of younger teachers relative to their older counterparts observed in the later dataset.
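
    For readers unfamiliar with hierarchical linear modelling, the sketch below simulates students nested in classes and fits a two-level random-intercept model with statsmodels; the variables (teacher_age, score) are hypothetical stand-ins, not SACMEQ data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate students nested in classes (hypothetical variables)
rng = np.random.default_rng(42)
n_classes, n_students = 40, 25
teacher_age = rng.integers(25, 60, n_classes)
rows = []
for c in range(n_classes):
    class_effect = rng.normal(0, 5)                    # random intercept per class
    for _ in range(n_students):
        score = 60 - 0.3 * teacher_age[c] + class_effect + rng.normal(0, 10)
        rows.append({"class_id": c, "teacher_age": teacher_age[c], "score": score})
df = pd.DataFrame(rows)

# Two-level model: students (level 1) nested within classes (level 2)
model = smf.mixedlm("score ~ teacher_age", df, groups=df["class_id"])
result = model.fit()
print(result.params)   # fixed effect of teacher age plus variance components
```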

  9. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    Science.gov (United States)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United
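
    The abstract refers to Nash-Sutcliffe efficiency as one of the measures of rainfall-runoff model parameter likelihood. For reference, a minimal implementation of that measure (assuming the standard definition) is shown below with hypothetical flow values.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - np.mean(observed)) ** 2
    )

# Hypothetical daily flows (m^3/s)
obs = np.array([12.0, 15.0, 9.0, 22.0, 30.0, 18.0])
sim = np.array([11.0, 16.0, 10.0, 20.0, 27.0, 19.0])
print("NSE =", round(nash_sutcliffe(obs, sim), 3))
```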

  10. Fear of Failure, 2x2 Achievement Goal and Self-Handicapping: An Examination of the Hierarchical Model of Achievement Motivation in Physical Education

    Science.gov (United States)

    Chen, Lung Hung; Wu, Chia-Huei; Kee, Ying Hwa; Lin, Meng-Shyan; Shui, Shang-Hsueh

    2009-01-01

    In this study, the hierarchical model of achievement motivation [Elliot, A. J. (1997). Integrating the "classic" and "contemporary" approaches to achievement motivation: A hierarchical model of approach and avoidance achievement motivation. In P. Pintrich & M. Maehr (Eds.), "Advances in motivation and achievement"…

  11. Fission-product burnup chain model for research reactor application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Do; Gil, Choong Sup; Lee, Jong Tai [Korea Atomic Energy Research Inst., Daeduk (Republic of Korea)

    1990-12-01

    A new fission-product burnup chain model was developed for use in research reactor analysis, capable of predicting the burnup-dependent reactivity with high precision over a wide range of burnup. The new model consists of 63 nuclides treated explicitly and one fissile-independent pseudo-element. The effective absorption cross sections for the pseudo-element and the pseudo-element yield of actinide nuclides were evaluated in this report. The model is capable of predicting the high burnup behavior of low-enriched uranium-fueled research reactors. (Author).
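
    To illustrate what a lumped burnup chain looks like in practice, the sketch below integrates a deliberately tiny chain (one fissile nuclide feeding one explicit fission product and one pseudo-element) with explicit Euler steps; all cross sections, yields and the flux are placeholder numbers, not the evaluated data of the report.

```python
# Minimal burnup-chain sketch: one fissile nuclide feeding two lumped fission
# products (an explicit nuclide and a "pseudo-element"). Illustrative values only.
phi = 3e13            # neutron flux (n/cm^2/s)
sig_f = 500e-24       # fission cross section of the fuel (cm^2)
sig_a = {"Xe135-like": 2.0e-18, "pseudo": 50e-24}   # absorption cross sections
yields = {"Xe135-like": 0.06, "pseudo": 1.5}        # yields per fission

N_fuel = 1.0e21       # fuel atoms/cm^3
N = {k: 0.0 for k in yields}
dt, t_end = 3600.0, 30 * 24 * 3600.0                # 1 h steps, 30 days

t = 0.0
while t < t_end:
    fission_rate = sig_f * phi * N_fuel
    N_fuel -= fission_rate * dt
    for k in yields:
        dN = yields[k] * fission_rate - sig_a[k] * phi * N[k]
        N[k] += dN * dt
    t += dt

print({k: f"{v:.3e}" for k, v in N.items()}, "fuel left:", f"{N_fuel:.3e}")
```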

  12. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MIXSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling

  13. A coordination theoretic model for three level supply chains using ...

    Indian Academy of Sciences (India)

    city in fashion industry (Lee & Rhee 2007); two period contract model in case of decentralized assembly system (Zou et al 2008); .... p: Price of product qr : Optimal quantity of retailer Q. ∗ sc: Optimal order quantity of supply chain. S(q): Expected sales at the end of period which can be defined as: S(q) = q(1 − F(q)) −. ∫ q. 0.

  14. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random region with holes are common in spatial database systems, spatial query languages, and the Geographic Information System (GIS. A multi-holed-region (region with an arbitrary number of holes is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of the two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions are usually different between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to measure comprehensively the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole. The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurements is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  15. Modelling and analysis of workflow for lean supply chains

    Science.gov (United States)

    Ma, Jinping; Wang, Kanliang; Xu, Lida

    2011-11-01

    Cross-organisational workflow systems are a component of enterprise information systems which support collaborative business process among organisations in supply chain. Currently, the majority of workflow systems is developed in perspectives of information modelling without considering actual requirements of supply chain management. In this article, we focus on the modelling and analysis of the cross-organisational workflow systems in the context of lean supply chain (LSC) using Petri nets. First, the article describes the assumed conditions of cross-organisation workflow net according to the idea of LSC and then discusses the standardisation of collaborating business process between organisations in the context of LSC. Second, the concept of labelled time Petri nets (LTPNs) is defined through combining labelled Petri nets with time Petri nets, and the concept of labelled time workflow nets (LTWNs) is also defined based on LTPNs. Cross-organisational labelled time workflow nets (CLTWNs) is then defined based on LTWNs. Third, the article proposes the notion of OR-silent CLTWNS and a verifying approach to the soundness of LTWNs and CLTWNs. Finally, this article illustrates how to use the proposed method by a simple example. The purpose of this research is to establish a formal method of modelling and analysis of workflow systems for LSC. This study initiates a new perspective of research on cross-organisational workflow management and promotes operation management of LSC in real world settings.
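
    As a minimal illustration of the Petri-net machinery underlying workflow nets (not the paper's labelled time Petri nets), a toy untimed net with places, transitions and a firing rule can be coded as follows; the workflow steps are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    marking: dict                                     # place -> token count
    transitions: dict = field(default_factory=dict)   # name -> (inputs, outputs)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= n for p, n in ins.items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} not enabled")
        ins, outs = self.transitions[t]
        for p, n in ins.items():
            self.marking[p] -= n
        for p, n in outs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Order handed from a retailer to a supplier and back (hypothetical workflow)
net = PetriNet(
    marking={"order_placed": 1},
    transitions={
        "send_to_supplier": ({"order_placed": 1}, {"at_supplier": 1}),
        "ship_goods": ({"at_supplier": 1}, {"delivered": 1}),
    },
)
net.fire("send_to_supplier")
net.fire("ship_goods")
print(net.marking)   # {'order_placed': 0, 'at_supplier': 0, 'delivered': 1}
```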

  16. Effective food supply chains : generating, modelling and evaluating supply chain scenarios

    NARCIS (Netherlands)

    Vorst, van der J.G.A.J.

    2000-01-01

    Logistical co-ordination in FMCG supply chains

    The overall objectives of the research described in this thesis were to obtain insight into the applicability of the concept Supply Chain Management (SCM) in food supply chains (SCs) from a logistical point of view, and to

  17. Cultivating a disease management partnership: a value-chain model.

    Science.gov (United States)

    Murray, Carolyn F; Monroe, Wendy; Stalder, Sharon A

    2003-01-01

    Disease management (DM) is one of the health care industry's more innovative value-chain models, whereby multiple relationships are created to bring complex and time-sensitive services to market. The very nature of comprehensive, seamless DM provided through an outsourced arrangement necessitates a level of cooperation, trust, and synergy that may be lacking from more traditional vendor-customer relationships. This discussion highlights the experience of one health plan and its vendor partner and their approach to the development and delivery of an outsourced heart failure (HF) DM program. The program design and rollout are discussed within principles adapted from the theoretical framework of a value-chain model. Within the value-chain model, added value is created by the convergence and synergistic integration of the partners' discrete strengths. Although each partner brings unique attributes to the relationship, those attributes are significantly enhanced by the value-chain model, thus allowing each party to bring the added value of the relationship to their respective customers. This partnership increases innovation, leverages critical capabilities, and improves market responsiveness. Implementing a comprehensive, outsourced DM program is no small task. DM programs incorporate a broad array of services affecting nearly every department in a health plan's organization. When true seamless integration between multiple organizations with multiple stakeholders is the objective, implementation and ongoing operations can become even more complex. To effectively address the complexities presented by an HF DM program, the parties in this case moved beyond a typical purchaser-vendor relationship to one that is more closely akin to a strategic partnership. This discussion highlights the development of this partnership from the perspective of both organizations, as revealed through contracting and implementation activities. It is intended to provide insight into the program

  18. Developing a Hierarchical Decision Model to Evaluate Nuclear Power Plant Alternative Siting Technologies

    Science.gov (United States)

    Lingga, Marwan Mossa

    A strong trend of returning to nuclear power is evident in different places in the world. Forty-five countries are planning to add nuclear power to their grids and more than 66 nuclear power plants are under construction. Nuclear power plants that generate electricity and steam need to improve safety to become more acceptable to governments and the public. One novel practical solution to increase nuclear power plants' safety factor is to build them away from urban areas, such as offshore or underground. To date, Land-Based siting is the dominant option for siting all commercial operational nuclear power plants. However, the literature reveals several options for building nuclear power plants in safer sitings than Land-Based sitings. The alternatives are several and each has advantages and disadvantages, and it is difficult to distinguish among them and choose the best for a specific project. In this research, we recall the old idea of using the alternatives of offshore and underground sitings for new nuclear power plants and propose a tool to help in choosing the best siting technology. This research involved the development of a decision model for evaluating several potential nuclear power plant siting technologies, both those that are currently available and future ones. The decision model was developed based on the Hierarchical Decision Modeling (HDM) methodology. The model considers five major dimensions, social, technical, economic, environmental, and political (STEEP), and their related criteria and sub-criteria. The model was designed and developed by the author, and its elements' validation and evaluation were done by a large number of experts in the field of nuclear energy. The decision model was applied in evaluating five potential siting technologies and ranked the Natural Island as the best in comparison to Land-Based, Floating Plant, Artificial Island, and Semi-Embedded plant.

  19. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them a 3D FE model of hybrid composites with nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers, and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels, and of using the nanoreinforcement effects. The main future directions...

  20. Enriching the hierarchical model of achievement motivation: autonomous and controlling reasons underlying achievement goals.

    Science.gov (United States)

    Michou, Aikaterini; Vansteenkiste, Maarten; Mouratidis, Athanasios; Lens, Willy

    2014-12-01

    The hierarchical model of achievement motivation presumes that achievement goals channel the achievement motives of need for achievement and fear of failure towards motivational outcomes. Yet, less is known about whether autonomous and controlling reasons underlying the pursuit of achievement goals can serve as additional pathways between achievement motives and outcomes. We tested whether mastery approach, performance approach, and performance avoidance goals and their underlying autonomous and controlling reasons would jointly explain the relation between achievement motives (i.e., fear of failure and need for achievement) and learning strategies (Study 1). Additionally, we examined whether the autonomous and controlling reasons underlying learners' dominant achievement goal would account for the link between achievement motives and the educational outcomes of learning strategies and cheating (Study 2). Six hundred and six Greek adolescent students (Mage = 15.05, SD = 1.43) and 435 university students (Mage = 20.51, SD = 2.80) participated in studies 1 and 2, respectively. In both studies, a correlational design was used and the hypotheses were tested via path modelling. Autonomous and controlling reasons underlying the pursuit of achievement goals mediated, respectively, the relation of need for achievement and fear of failure to aspects of learning outcomes. Autonomous and controlling reasons underlying achievement goals could further explain learners' functioning in achievement settings. © 2014 The British Psychological Society.

  1. The SIS Model of Epidemic Spreading in a Hierarchical Social Network

    International Nuclear Information System (INIS)

    Grabowski, A.; Kosinski, R.A.

    2005-01-01

    The phenomenon of epidemic spreading in a population with a hierarchical structure of interpersonal interactions is described and investigated numerically. The SIS model with temporal immunity to a disease and a time of incubation is used. In our model, the spatial localization of individuals belonging to different social groups, the effectiveness of different interpersonal interactions and the mobility of a contemporary community are taken into account. The structure of interpersonal connections is based on a scale-free network. The influence of the structure of the social network on typical relations characterizing the spreading process, such as the range of the epidemic and epidemic curves, is discussed. The probability that an endemic state occurs is also calculated. Surprisingly, it turns out that less contagious diseases have a greater chance of surviving. The influence of preventive vaccinations on the spreading process is investigated, and the critical range of vaccinations that is sufficient for the suppression of an epidemic is calculated. Our results of numerical calculations are compared with the solutions of the master equation for the spreading process, and good agreement is found. (author)
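
    A stripped-down SIS simulation on a scale-free contact network, in the spirit of the model described above but without its hierarchical social structure, temporal immunity or incubation time, might look like this (parameters are arbitrary):

```python
import random
import networkx as nx

random.seed(3)
G = nx.barabasi_albert_graph(n=2000, m=3, seed=3)     # scale-free contact network
beta, recovery = 0.05, 0.1          # per-step infection and recovery probabilities
infected = set(random.sample(list(G.nodes()), 10))

for step in range(200):
    new_infected = set(infected)
    for node in infected:
        for nb in G.neighbors(node):
            if nb not in infected and random.random() < beta:
                new_infected.add(nb)
        if random.random() < recovery:
            new_infected.discard(node)   # back to susceptible (no immunity here)
    infected = new_infected

print("endemic prevalence ~", len(infected) / G.number_of_nodes())
```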

  2. Motivation, Classroom Environment, and Learning in Introductory Geology: A Hierarchical Linear Model

    Science.gov (United States)

    Gilbert, L. A.; Hilpert, J. C.; Van Der Hoeven Kraft, K.; Budd, D.; Jones, M. H.; Matheney, R.; Mcconnell, D. A.; Perkins, D.; Stempien, J. A.; Wirth, K. R.

    2013-12-01

    Prior research has indicated that highly motivated students perform better and that learning increases in innovative, reformed classrooms, but untangling the student effects from the instructor effects is essential to understanding how to best support student learning. Using a hierarchical linear model, we examine these effects separately and jointly. We use data from nearly 2,000 undergraduate students surveyed by the NSF-funded GARNET (Geoscience Affective Research NETwork) project in 65 different introductory geology classes at research universities, public masters-granting universities, liberal arts colleges and community colleges across the US. Student level effects were measured as increases in expectancy and self-regulation using the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al., 1991). Instructor level effects were measured using the Reformed Teaching Observation Protocol, (RTOP; Sawada et al., 2000), with higher RTOP scores indicating a more reformed, student-centered classroom environment. Learning was measured by learning gains on a Geology Concept Inventory (GCI; Libarkin and Anderson, 2005) and normalized final course grade. The hierarchical linear model yielded significant results at several levels. At the student level, increases in expectancy and self-regulation are significantly and positively related to higher grades regardless of instructor; the higher the increase, the higher the grade. At the instructor level, RTOP scores are positively related to normalized average GCI learning gains. The higher the RTOP score, the higher the average class GCI learning gains. Across both levels, average class GCI learning gains are significantly and positively related to student grades; the higher the GCI learning gain, the higher the grade. Further, the RTOP scores are significantly and negatively related to the relationship between expectancy and course grade. The lower the RTOP score, the higher the correlation between change in

  3. Use of hierarchical models to analyze European trends in congenital anomaly prevalence

    DEFF Research Database (Denmark)

    Cavadino, Alana; Prieto-Merino, David; Addor, Marie-Claude

    2016-01-01

    BACKGROUND: Surveillance of congenital anomalies is important to identify potential teratogens. Despite known associations between different anomalies, current surveillance methods examine trends within each subgroup separately. We aimed to evaluate whether hierarchical statistical methods that c...

  4. CHAIN-WISE GENERALIZATION OF ROAD NETWORKS USING MODEL SELECTION

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2017-05-01

    Full Text Available Streets are essential entities of urban terrain and their automatized extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process are the so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker, and finally, model selection is used to decide whether two neighboring segments should be fused to a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex data-set with many traffic roundabouts indicate the benefits of the proposed procedure.

  5. Simulating reservoir lithologies by an actively conditioned Markov chain model

    Science.gov (United States)

    Feng, Runhai; Luthi, Stefan M.; Gisolf, Dries

    2018-06-01

    The coupled Markov chain model can be used to simulate reservoir lithologies between wells, by conditioning them on the observed data in the cored wells. However, with this method, only the state at the same depth as the current cell is going to be used for conditioning, which may be a problem if the geological layers are dipping. This will cause the simulated lithological layers to be broken or to become discontinuous across the reservoir. In order to address this problem, an actively conditioned process is proposed here, in which a tolerance angle is predefined. The states contained in the region constrained by the tolerance angle will be employed for conditioning in the horizontal chain first, after which a coupling concept with the vertical chain is implemented. In order to use the same horizontal transition matrix for different future states, the tolerance angle has to be small. This allows the method to work in reservoirs without complex structures caused by depositional processes or tectonic deformations. Directional artefacts in the modeling process are avoided through a careful choice of the simulation path. The tolerance angle and dipping direction of the strata can be obtained from a correlation between wells, or from seismic data, which are available in most hydrocarbon reservoirs, either by interpretation or by inversion that can also assist the construction of a horizontal probability matrix.
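
    The core ingredient of such models is sampling successive lithology states from a transition matrix. A one-dimensional (vertical) sketch, without the paper's horizontal coupling, well conditioning or tolerance angle, is shown below; the states and probabilities are invented.

```python
import numpy as np

states = ["shale", "sand", "carbonate"]
P = np.array([
    [0.7, 0.2, 0.1],    # transitions from shale
    [0.3, 0.6, 0.1],    # transitions from sand
    [0.2, 0.2, 0.6],    # transitions from carbonate
])

rng = np.random.default_rng(7)
column = [0]                                   # start the column in shale
for _ in range(49):
    column.append(rng.choice(3, p=P[column[-1]]))
print(" ".join(states[s][:2] for s in column))   # e.g. "sh sh sa sa ca ..."
```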

  6. Closed-Loop Supply Chain Planning Model of Rare Metals

    Directory of Open Access Journals (Sweden)

    Dongmin Son

    2018-04-01

    Full Text Available Rare metals (RMs are becoming increasingly important in high-tech industries associated with the Fourth Industrial Revolution, such as the electric vehicle (EV and 3D printer industries. As the growth of these industries accelerates in the near future, manufacturers will also face greater RM supply risks. For this reason, many countries are putting considerable effort into securing the RM supply. For example, countries including Japan, Korea, and the USA have adopted two major policies: the stockpile system and Extended Producer Responsibility (EPR. Therefore, it is necessary for the manufacturers with RMs to establish a suitable supply chain plan that reflects this situation. In this study, the RM classification matrix is created based on the stockpile and recycling level in Korea. Accordingly, three different types of supply chain are designed in order to develop the closed-loop supply chain (CLSC planning model of RM, and the CLSC planning models of RM are validated through experimental analysis. The results show that the stockpiling and the EPR recycling obligation increase the amount of recycled flow and reduce the total cost of the part manufacturing, which means that these two factors are significant for obtaining sustainability of the RMs’ CLSC. In addition, the government needs to set an appropriate sharing cost for promoting the manufacturer’s recycling. Also, from the manufacturer’s perspective, it is better to increase the return rate by making a contract with the collectors to guarantee the collection of used products.

  7. Chemical event chain model of coupled genetic oscillators.

    Science.gov (United States)

    Jörg, David J; Morelli, Luis G; Jülicher, Frank

    2018-03-01

    We introduce a stochastic model of coupled genetic oscillators in which chains of chemical events involved in gene regulation and expression are represented as sequences of Poisson processes. We characterize steady states by their frequency, their quality factor, and their synchrony by the oscillator cross correlation. The steady state is determined by coupling and exhibits stochastic transitions between different modes. The interplay of stochasticity and nonlinearity leads to isolated regions in parameter space in which the coupled system works best as a biological pacemaker. Key features of the stochastic oscillations can be captured by an effective model for phase oscillators that are coupled by signals with distributed delays.
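
    A minimal numerical illustration of the "chain of chemical events" idea, under the assumption that a regulatory step is a sequence of k Poisson sub-steps, is given below; it only shows how chaining exponential waits produces a gamma-distributed (effectively delayed and less noisy) completion time, not the paper's coupled-oscillator model.

```python
import numpy as np

rng = np.random.default_rng(5)

def event_chain_time(k_steps=5, rate=1.0, size=10000):
    # Total time through the chain = sum of k exponential waits (Gamma distributed)
    waits = rng.exponential(scale=1.0 / rate, size=(size, k_steps))
    return waits.sum(axis=1)

times = event_chain_time()
print("mean delay:", round(float(times.mean()), 2),
      "CV:", round(float(times.std() / times.mean()), 2))   # CV ~ 1/sqrt(k)
```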

  8. Chemical event chain model of coupled genetic oscillators

    Science.gov (United States)

    Jörg, David J.; Morelli, Luis G.; Jülicher, Frank

    2018-03-01

    We introduce a stochastic model of coupled genetic oscillators in which chains of chemical events involved in gene regulation and expression are represented as sequences of Poisson processes. We characterize steady states by their frequency, their quality factor, and their synchrony by the oscillator cross correlation. The steady state is determined by coupling and exhibits stochastic transitions between different modes. The interplay of stochasticity and nonlinearity leads to isolated regions in parameter space in which the coupled system works best as a biological pacemaker. Key features of the stochastic oscillations can be captured by an effective model for phase oscillators that are coupled by signals with distributed delays.

  9. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by PETROBRAS Gas and Power Department and Det Norske Veritas. It was carried out with the objective of evaluating security of supply of 2010 gas network design that was conceived to connect Brazilian Northeast and Southeast regions. To provide best in class analysis, state of the art software was used to quantify the availability and the efficiency of the overall network and its individual components.

  10. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    Science.gov (United States)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, has yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real-time (on the fly) providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features

  11. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach has a better performance over the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. Whereas, the diagnostics and aggregated performance measures prove that French Broad has a homogeneous catchment response, making the single model adequate to capture the response.
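
    A toy gating sketch may help convey the mixture-of-experts idea (it is not the HME/HBV configuration used in the study): two placeholder runoff "experts" are blended by a logistic gate driven by an indicator variable.

```python
import numpy as np

def expert_dry(precip):
    return 0.1 * precip            # weak runoff response when the catchment is dry

def expert_wet(precip):
    return 0.6 * precip            # strong response when the catchment is wet

def gate(wetness, midpoint=0.5, steepness=10.0):
    # Logistic weight given to the "wet" expert
    return 1.0 / (1.0 + np.exp(-steepness * (wetness - midpoint)))

precip = np.array([5.0, 20.0, 12.0])
wetness = np.array([0.2, 0.8, 0.5])
w = gate(wetness)
runoff = (1 - w) * expert_dry(precip) + w * expert_wet(precip)
print(runoff.round(2))
```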

  12. Modelling a flows in supply chain with analytical models: Case of a chemical industry

    Science.gov (United States)

    Benhida, Khalid; Azougagh, Yassine; Elfezazi, Said

    2016-02-01

    This study addresses the modelling of logistics flows in a supply chain composed of production sites and a logistics platform. The contribution of this research is to develop an analytical model (an integrated linear programming model), based on a case study of a real company operating in the phosphate field, considering various constraints in this supply chain to resolve planning problems for better decision-making. The objective of this model is to determine the optimal quantities of different products to route to and from the various entities in the supply chain studied.
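
    As an illustrative sketch only (the paper's integrated model is much larger), a linear program of this general kind can be posed with scipy.optimize.linprog; all costs, capacities and demands below are hypothetical.

```python
from scipy.optimize import linprog

# Tiny shipping-plan LP: decide quantities x1, x2 (tonnes) of two products sent
# from a production site to a logistics platform, minimising transport cost
# subject to a shared capacity and minimum demands.
cost = [4.0, 6.0]                        # cost per tonne of product 1 and 2
A_ub = [[1.0, 1.0]]                      # shared platform handling capacity
b_ub = [100.0]                           # at most 100 tonnes handled in total
bounds = [(30.0, None), (20.0, None)]    # minimum demands to satisfy

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)                    # optimal quantities and total cost
```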

  13. Using spiral chain models for study of nanoscroll structures

    Science.gov (United States)

    Savin, Alexander V.; Sakovich, Ruslan A.; Mazo, Mikhail A.

    2018-04-01

    Molecular nanoribbons with different chemical structures can form scrolled packings possessing outstanding properties and application perspectives due to their morphology. Here, we propose a simplified two-dimensional model of the molecular chain that allows us to describe the molecular nanoribbon's scrolled packings of various structures as a spiral packaging chain. The model allows us to obtain the possible stationary states of single-layer nanoribbon scrolls of graphene, graphane, fluorographene, fluorographane (graphene hydrogenated on one side and fluorinated on the other side), graphone C4H (graphene partially hydrogenated on one side), and fluorographone C4F . The obtained states and the states of the scrolls found through all-atomic models coincide with good accuracy. We show the stability of scrolled packings and calculate the dependence of energy, the number of coils, and the inner and outer radius of the scrolled packing on the nanoribbon length. It is shown that a scrolled packing is the most energetically favorable conformation for nanoribbons of graphene, graphane, fluorographene, and fluorographane at large lengths. A double-scrolled packing when the nanoribbon is symmetrically rolled into a scroll from opposite ends is more advantageous for longer length nanoribbons of graphone and fluorographone. We show the possibility of the existence of scrolled packings for nanoribbons of fluorographene and the existence of two different types of scrolls for nanoribbons of fluorographane, which correspond to the left and right Archimedean spirals of the chain model. The simplicity of the proposed model allows us to consider the dynamics of molecular nanoribbon scrolls of sufficiently large lengths and at sufficiently large time intervals.

  14. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-01-01

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.

  15. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    Science.gov (United States)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.

  16. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    Science.gov (United States)

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.
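
    A rough sketch of the Poisson-Gamma idea, fitted here by simple moment matching rather than the paper's hierarchical Bayesian estimation, is shown below; the simulated coverage data and flagging thresholds are purely illustrative.

```python
import numpy as np
from scipy import stats

# Coverage-based CNV flagging under a Poisson-Gamma (negative binomial) model
rng = np.random.default_rng(11)
coverage = rng.negative_binomial(n=20, p=0.4, size=1000)   # per-window read counts
coverage[500:510] *= 3                                     # plant an "amplification"

m, v = coverage.mean(), coverage.var()
p_hat = m / v                       # method-of-moments NB parameters (needs v > m)
n_hat = m * p_hat / (1 - p_hat)

lo = stats.nbinom.ppf(0.001, n_hat, p_hat)
hi = stats.nbinom.ppf(0.999, n_hat, p_hat)
flagged = np.where((coverage < lo) | (coverage > hi))[0]
print("flagged windows:", flagged)
```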

  17. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno

    2013-02-26

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.

  18. Chemical and morphological gradient scaffolds to mimic hierarchically complex tissues: From theoretical modeling to their fabrication.

    Science.gov (United States)

    Marrella, Alessandra; Aiello, Maurizio; Quarto, Rodolfo; Scaglione, Silvia

    2016-10-01

    Porous multiphase scaffolds have been proposed in different tissue engineering applications because of their potential to artificially recreate the heterogeneous structure of hierarchically complex tissues. Recently, graded scaffolds have also been realized, offering a continuum at the interface among different phases for an enhanced structural stability of the scaffold. However, their internal architecture is often obtained empirically and the architectural parameters are rarely predetermined. The aim of this work is to offer a theoretical model as a tool for the design and fabrication of functionally and structurally complex graded scaffolds with predicted morphological and chemical features, to overcome the time-consuming trial-and-error experimental method. This mathematical model uses laws of motion, Stokes equations, and viscosity laws to describe the dependence between centrifugation speed and fiber/particle sedimentation velocity over time, which finally affects the fiber packing, and thus the total porosity of the 3D scaffolds. The efficacy of the theoretical model was tested by realizing engineered graded grafts for osteochondral tissue engineering applications. The procedure, based on a combined centrifugation and freeze-drying technique, was applied to both polycaprolactone (PCL) and collagen-type-I (COL) to test the versatility of the entire process. A functional gradient was combined with the morphological one by adding hydroxyapatite (HA) powders, to mimic the bone mineral phase. Results show that 3D bioactive morphologically and chemically graded grafts can be properly designed and realized in agreement with the theoretical model. Biotechnol. Bioeng. 2016;113: 2286-2297. © 2016 Wiley Periodicals, Inc.
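
    The sedimentation side of such a model ultimately rests on Stokes-type settling relations. A hedged sketch of the basic formula is given below (with gravity g standing in for the centrifugal acceleration used during centrifugation); the particle and fluid properties are illustrative values, not the study's parameters.

```python
def stokes_velocity(radius_m, rho_particle, rho_fluid, viscosity_pa_s, g=9.81):
    """Terminal settling velocity of a small sphere in a viscous fluid (m/s)."""
    return 2.0 * (rho_particle - rho_fluid) * g * radius_m ** 2 / (9.0 * viscosity_pa_s)

# Hydroxyapatite-like particle (~2 um radius) in a watery suspension
v = stokes_velocity(radius_m=2e-6, rho_particle=3100.0, rho_fluid=1000.0,
                    viscosity_pa_s=1e-3)
print(f"settling velocity: {v:.2e} m/s")
```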

  19. A dynamic game on Green Supply Chain Management

    OpenAIRE

    Mehrnoosh Khademi; Massimiliano Ferrara; Bruno Pansera; Mehdi Salimi

    2015-01-01

    In this paper, we establish a dynamic game to allocate CSR (Corporate Social Responsibility) to the members of a supply chain. We propose a model of a three-tier supply chain in a decentralized state, including a supplier, a manufacturer and a retailer. For analyzing supply chain performance in the decentralized state and the relationships between the members of the supply chain, we use a Stackelberg game and consider a hierarchical equilibrium solution for a two-level game. Specifically, we...
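
    A minimal numerical sketch of a Stackelberg (leader-follower) interaction, assuming linear demand and ignoring the paper's CSR allocation, is shown below; all parameters are hypothetical.

```python
import numpy as np

# Manufacturer (leader) picks a wholesale price w, anticipating that the
# retailer (follower) best-responds with a retail price p. Demand d(p) = a - b*p.
a, b, cost = 100.0, 2.0, 10.0

def retailer_best_price(w):
    # Follower maximises (p - w) * (a - b*p)  ->  p* = (a + b*w) / (2*b)
    return (a + b * w) / (2 * b)

wholesale_grid = np.linspace(cost, a / b, 200)
profits = []
for w in wholesale_grid:
    p = retailer_best_price(w)
    demand = max(a - b * p, 0.0)
    profits.append((w - cost) * demand)

w_star = wholesale_grid[int(np.argmax(profits))]
print("leader wholesale price:", round(float(w_star), 2),
      "retail price:", round(retailer_best_price(w_star), 2))
```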

  20. Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model

    Science.gov (United States)

    Mukhopadhyay, S.; Arumugam, S.

    2017-12-01

    Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as these enable water managers to decide short-term releases (15-30 days), while holding water for seasonal needs (e.g., irrigation and municipal supply) and to meet end-of-the-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at S2S time scale. The hidden Markov model also captures the both spatial and temporal hierarchy in predictors that operate at S2S time scale with model parameters being estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely single site model and multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single site model. For multisite model we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at S2S time scale. We considered precipitation forecasts obtained from NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of reservoirs are also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. Skill of the proposed approach is tested using split sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors. From the posterior distribution of the inflow forecasts, we also highlight different system behavior
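
    For readers unfamiliar with hidden Markov machinery, the forward algorithm below evaluates a small two-state HMM on a toy observation sequence; the "dry/wet regime" states and all probabilities are invented, and the Bayesian hierarchical estimation of the paper is not reproduced.

```python
import numpy as np

pi = np.array([0.6, 0.4])                 # initial state distribution
A = np.array([[0.8, 0.2],                 # state transition matrix
              [0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],            # P(observation | state): low/med/high flow
              [0.1, 0.3, 0.6]])

obs = [0, 1, 2, 2, 1]                     # observed flow categories

# Forward recursion: alpha_t(j) = sum_i alpha_{t-1}(i) A[i,j] * B[j, o_t]
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
likelihood = alpha.sum()
print("sequence likelihood:", round(float(likelihood), 6))
print("P(wet regime at final step):", round(float(alpha[1] / likelihood), 3))
```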

  1. Predictors of Drinking Water Boiling and Bottled Water Consumption in Rural China: A Hierarchical Modeling Approach.

    Science.gov (United States)

    Cohen, Alasdair; Zhang, Qi; Luo, Qing; Tao, Yong; Colford, John M; Ray, Isha

    2017-06-20

    Approximately two billion people drink unsafe water. Boiling is the most commonly used household water treatment (HWT) method globally and in China. HWT can make water safer, but sustained adoption is rare and bottled water consumption is growing. To successfully promote HWT, an understanding of associated socioeconomic factors is critical. We collected survey data and water samples from 450 rural households in Guangxi Province, China. Covariates were grouped into blocks to hierarchically construct modified Poisson models and estimate risk ratios (RR) associated with boiling methods, bottled water, and untreated water. Female-headed households were most likely to boil (RR = 1.36, p water, or use electric kettles if they boiled. Our findings show that boiling is not an undifferentiated practice, but one with different methods of varying effectiveness, environmental impact, and adoption across socioeconomic strata. Our results can inform programs to promote safer and more efficient boiling using electric kettles, and suggest that if rural China's economy continues to grow then bottled water use will increase.

  2. Modelling the dynamics of an experimental host-pathogen microcosm within a hierarchical Bayesian framework.

    Directory of Open Access Journals (Sweden)

    David Lunn

    Full Text Available The advantages of Bayesian statistical approaches, such as flexibility and the ability to acknowledge uncertainty in all parameters, have made them the prevailing method for analysing the spread of infectious diseases in human or animal populations. We introduce a Bayesian approach to experimental host-pathogen systems that shares these attractive features. Since uncertainty in all parameters is acknowledged, existing information can be accounted for through prior distributions, rather than through fixing some parameter values. The non-linear dynamics, multi-factorial design, multiple measurements of responses over time and sampling error that are typical features of experimental host-pathogen systems can also be naturally incorporated. We analyse the dynamics of the free-living protozoan Paramecium caudatum and its specialist bacterial parasite Holospora undulata. Our analysis provides strong evidence for a saturable infection function, and we were able to reproduce the two waves of infection apparent in the data by separating the initial inoculum from the parasites released after the first cycle of infection. In addition, the parameter estimates from the hierarchical model can be combined to infer variations in the parasite's basic reproductive ratio across experimental groups, enabling us to make predictions about the effect of resources and host genotype on the ability of the parasite to spread. Even though the high level of variability between replicates limited the resolution of the results, this Bayesian framework has strong potential to be used more widely in experimental ecology.

  3. Determination of a Differential Item Functioning Procedure Using the Hierarchical Generalized Linear Model

    Directory of Open Access Journals (Sweden)

    Tülin Acar

    2012-01-01

    Full Text Available The aim of this research is to compare the results of differential item functioning (DIF) detection using the hierarchical generalized linear model (HGLM) technique with the results of DIF detection using the logistic regression (LR) and item response theory–likelihood ratio (IRT-LR) techniques on test items. For this reason, it is first determined whether students encounter DIF, according to socioeconomic status (SES), in the Turkish, Social Sciences, and Science subtest items of the Secondary School Institutions Examination, using the HGLM, LR, and IRT-LR techniques. When inspecting the agreement among the techniques in terms of identifying items exhibiting DIF, it was found that there was a significant correlation between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR techniques found to be significant. DIF analyses can also be performed on test items with other DIF analysis techniques that were not within the scope of this research, and the results obtained by using the DIF techniques in different sample sizes can be compared.

  4. Hierarchical modeling of indoor radon concentration: how much do geology and building factors matter?

    Science.gov (United States)

    Borgoni, Riccardo; De Francesco, Davide; De Bartolo, Daniela; Tzavidis, Nikos

    2014-12-01

    Radon is a natural gas known to be the main contributor to natural background radiation exposure and only second to smoking as major leading cause of lung cancer. The main concern is in indoor environments where the gas tends to accumulate and can reach high concentrations. The primary contributor of this gas into the building is from the soil although architectonic characteristics, such as building materials, can largely affect concentration values. Understanding the factors affecting the concentration in dwellings and workplaces is important both in prevention, when the construction of a new building is being planned, and in mitigation when the amount of Radon detected inside a building is too high. In this paper we investigate how several factors, such as geologic typologies of the soil and a range of building characteristics, impact on indoor concentration focusing, in particular, on how concentration changes as a function of the floor level. Adopting a mixed effects model to account for the hierarchical nature of the data, we also quantify the extent to which such measurable factors manage to explain the variability of indoor radon concentration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Factors influencing the occupational injuries of physical therapists in Taiwan: A hierarchical linear model approach.

    Science.gov (United States)

    Tao, Yu-Hui; Wu, Yu-Lung; Huang, Wan-Yun

    2017-01-01

    The evidence in the literature suggests that physical therapy practitioners face a high probability of acquiring work-related injuries, but only a few studies have specifically investigated Taiwanese physical therapy practitioners. This study was conducted to determine the relationships among the individual-level and hospital (group)-level factors that contribute to the medical expenses for the occupational injuries of physical therapy practitioners in Taiwan. Physical therapy practitioners with occupational injuries were selected from the 2013 National Health Insurance Research Databases (NHIRD). The age, gender, job title, hospital attributes, and outpatient data of practitioners who sustained an occupational injury in 2013 were extracted with SAS 9.3; SPSS 20.0 and HLM 7.01 were then used to conduct descriptive and hierarchical linear model analyses, respectively. The job title of physical therapy practitioners at the individual level and the hospital type at the group level exert positive effects on per-person medical expenses. Hospital hierarchy moderates the individual-level relationships of age and job title with per-person medical expenses. Considering that age, job title, and hospital hierarchy affect medical expenses for occupational injuries, we suggest strengthening related safety education and training and raising practitioners' awareness of the risk of occupational injuries in order to reduce and prevent such injuries.

  6. Hierarchical modeling of indoor radon concentration: how much do geology and building factors matter?

    International Nuclear Information System (INIS)

    Borgoni, Riccardo; De Francesco, Davide; De Bartolo, Daniela; Tzavidis, Nikos

    2014-01-01

    Radon is a natural gas known to be the main contributor to natural background radiation exposure and second only to smoking as a leading cause of lung cancer. The main concern is in indoor environments, where the gas tends to accumulate and can reach high concentrations. The primary source of this gas in a building is the soil, although architectural characteristics, such as building materials, can strongly affect concentration values. Understanding the factors affecting the concentration in dwellings and workplaces is important both in prevention, when the construction of a new building is being planned, and in mitigation, when the amount of radon detected inside a building is too high. In this paper we investigate how several factors, such as the geologic typologies of the soil and a range of building characteristics, impact on indoor concentration, focusing in particular on how concentration changes as a function of the floor level. Adopting a mixed effects model to account for the hierarchical nature of the data, we also quantify the extent to which such measurable factors manage to explain the variability of indoor radon concentration. - Highlights: • It is assessed how the variability of indoor radon concentration depends on building and lithological factors. • The lithological component has been found to be less relevant than the building one. • Radon-prone lithologies have been identified. • The effect of the floor on which the room is located has been estimated. • Indoor radon concentrations have been predicted for different dwelling typologies

  7. Visualization and Hierarchical Analysis of Flow in Discrete Fracture Network Models

    Science.gov (United States)

    Aldrich, G. A.; Gable, C. W.; Painter, S. L.; Makedonska, N.; Hamann, B.; Woodring, J.

    2013-12-01

    Flow and transport in low-permeability fractured rock occur primarily through interconnected fracture networks. Prediction and characterization of flow and transport in fractured rock have important implications for underground repositories for hazardous materials (e.g., nuclear and chemical waste), contaminant migration and remediation, groundwater resource management, and hydrocarbon extraction. We have developed methods to explicitly model flow in discrete fracture networks and track flow paths using passive particle tracking algorithms. Visualization and analysis of particle trajectories through the fracture network are important to understanding fracture connectivity, flow patterns, potential contaminant pathways and fast paths through the network. However, occlusion due to the large number of highly tessellated and intersecting fracture polygons precludes the effective use of traditional visualization methods. Quantitative methods are also needed to characterize the trajectories of large numbers of particles. We have addressed these problems by defining a hierarchical flow network representing the topology of particle flow through the fracture network. This approach allows us to analyze the flow and the dynamics of the system as a whole. We are able to easily query the flow network and to use a paint-and-link style framework to filter the fracture geometry and particle traces based on the flow analytics. This allows us to greatly reduce occlusion while emphasizing salient features such as the principal transport pathways. Examples are shown that demonstrate the methodology and highlight how this new method allows quantitative analysis and characterization of flow and transport in a number of representative fracture networks.

  8. Assessing exposure to violence using multiple informants: application of hierarchical linear model.

    Science.gov (United States)

    Kuo, M; Mohler, B; Raudenbush, S L; Earls, F J

    2000-11-01

    The present study assesses the effects of demographic risk factors on children's exposure to violence (ETV) and how these effects vary by informants. Data on exposure to violence of 9-, 12-, and 15-year-olds were collected from both child participants (N = 1880) and parents (N = 1776), as part of the assessment of the Project on Human Development in Chicago Neighborhoods (PHDCN). A two-level hierarchical linear model (HLM) with multivariate outcomes was employed to analyze information obtained from these two different groups of informants. The findings indicate that parents generally report less ETV than do their children and that associations of age, gender, and parent education with ETV are stronger in the self-reports than in the parent reports. The findings support a multivariate approach when information obtained from different sources is being integrated. The application of HLM allows an assessment of interactions between risk factors and informants and uses all available data, including data from one informant when data from the other informant is missing.
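
    One hedged way to reproduce the flavour of the informant-by-risk-factor analysis above is to stack the two reports in long format and fit a mixed model with informant interactions. The Python sketch below is only an approximation of the multivariate two-level HLM used in the study, and all data, effect sizes and variable names are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in: two ETV reports per child (self and parent), with the
        # parent report systematically lower and a weaker age slope, mimicking the
        # pattern described in the abstract (all values are made up).
        rng = np.random.default_rng(1)
        n_child = 400
        age = np.repeat(rng.choice([9, 12, 15], n_child), 2)
        child = np.repeat(np.arange(n_child), 2)
        informant = np.tile(["self", "parent"], n_child)          # level-1 indicator
        child_effect = rng.normal(0.0, 1.0, n_child)[child]
        slope = np.where(informant == "self", 0.30, 0.10)         # age-by-informant interaction
        etv = (slope * age - np.where(informant == "parent", 1.0, 0.0)
               + child_effect + rng.normal(0.0, 1.0, 2 * n_child))
        long = pd.DataFrame(dict(child_id=child, etv=etv, informant=informant, age=age))

        # Random intercept per child links the two reports; the informant:age term tests
        # whether the age association differs between self- and parent reports.
        m = smf.mixedlm("etv ~ informant * age", data=long, groups=long["child_id"]).fit()
        print(m.summary())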

  9. The Development and Empirical Validation of an E-based Supply Chain Strategy Optimization Model

    DEFF Research Database (Denmark)

    Kotzab, Herbert; Skjoldager, Niels; Vinum, Thorkil

    2003-01-01

    Examines the formulation of supply chain strategies in complex environments. Argues that current state-of-the-art e-business and supply chain management, combined into the concept of e-SCM, as well as the use of transaction cost theory, network theory and resource-based theory, altogether can be used to form a model for analyzing supply chains with the purpose of reducing the uncertainty of formulating supply chain strategies. Presents the e-supply chain strategy optimization model (e-SOM) as a way to analyze supply chains in a structured manner as regards strategic preferences for supply chain design, relations and resources in the chains, with the ultimate purpose of enabling the formulation of optimal, executable strategies for specific supply chains. Uses research results for a specific supply chain to validate the usefulness of the model.

  10. Teaching Supply Chain Management Complexities: A SCOR Model Based Classroom Simulation

    Science.gov (United States)

    Webb, G. Scott; Thomas, Stephanie P.; Liao-Troth, Sara

    2014-01-01

    The SCOR (Supply Chain Operations Reference) Model Supply Chain Classroom Simulation is an in-class experiential learning activity that helps students develop a holistic understanding of the processes and challenges of supply chain management. The simulation has broader learning objectives than other supply chain related activities such as the…

  11. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    Science.gov (United States)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach will allow glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it will also allow for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model built around the exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful posterior distributions and predictions for the physical parameters. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction, via a statistical model, of the numerical errors induced by the numerical solution. This error-correcting process captures numerical errors that accumulate forward in time as well as the spatial variation of numerical errors between the dome, interior, and margin of a glacier.

  12. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    Science.gov (United States)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information through the prior distribution and to combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists in two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists in selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists in assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.
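
    A hedged Python sketch of the two-step idea (site selection by clustering, then pooling the selected sites into an informative prior): the feature matrix, the log-conductivity values and the choice of k-means with five clusters are illustrative stand-ins, not the authors' R implementation.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical database: one row per previously studied site, with observable
        # features (stand-ins for site descriptors) and an estimated log10 conductivity.
        rng = np.random.default_rng(1)
        features = rng.normal(size=(50, 3))
        log_k = rng.normal(-4.0, 1.0, size=50)

        # Step 1 (data selection): cluster sites on their observable features and find
        # which cluster the target site falls into.
        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
        target_features = np.array([[0.1, -0.3, 0.8]])
        cluster = km.predict(target_features)[0]

        # Step 2 (data assimilation, crude stand-in for the hierarchical model): pool the
        # similar sites into a normal "informative prior" for the target site's log10(K).
        similar = log_k[km.labels_ == cluster]
        prior_mean, prior_sd = similar.mean(), similar.std(ddof=1)
        print(f"informative prior: N({prior_mean:.2f}, {prior_sd:.2f}^2) from {similar.size} sites")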

  13. Connections of the Liouville model and XXZ spin chain

    Science.gov (United States)

    Faddeev, Ludvig D.; Tirkkonen, Olav

    1995-02-01

    The quantum theory of the Liouville model with imaginary field is considered using the Quantum Inverse Scattering Method. An integrable structure with non-trivial spectral-parameter dependence is developed for lattice Liouville theory by scaling the L-matrix of lattice sine-Gordon theory. This L-matrix yields Bethe ansatz equations for Liouville theory, by the methods of the algebraic Bethe ansatz. Using the string picture of excited Bethe states, the lattice Liouville Bethe equations are mapped to the corresponding spin-1/2 XXZ chain equations. The well developed theory of finite-size corrections in spin chains is used to deduce the conformal properties of the lattice Liouville Bethe states. The unitary series of conformal field theories emerge for Liouville couplings of the form γ = πν/(ν + 1), corresponding to root of unity XXZ anisotropies. The Bethe states give the full spectrum of the corresponding unitary conformal field theory, with the primary states in the Kač table parameterized by a string length K, and the remnant of the chain length mod (ν + 1).

  14. Connections of the Liouville model and XXZ spin chain

    International Nuclear Information System (INIS)

    Faddeev, L.D.; Tirkkonen, O.

    1995-01-01

    The quantum theory of the Liouville model with imaginary field is considered using the Quantum Inverse Scattering Method. An integrable structure with non-trivial spectral-parameter dependence is developed for lattice Liouville theory by scaling the L-matrix of lattice sine-Gordon theory. This L-matrix yields Bethe ansatz equations for Liouville theory, by the methods of the algebraic Bethe ansatz. Using the string picture of excited Bethe states, the lattice Liouville Bethe equations are mapped to the corresponding spin-1/2 XXZ chain equations. The well developed theory of finite-size corrections in spin chains is used to deduce the conformal properties of the lattice Liouville Bethe states. The unitary series of conformal field theories emerge for Liouville couplings of the form γ= πν/(ν+1), corresponding to root of unity XXZ anisotropies. The Bethe states give the full spectrum of the corresponding unitary conformal field theory, with the primary states in the Kac table parameterized by a string length K, and the remnant of the chain length mod (ν+1). (orig.)

  15. Modeling Value Chain Analysis of Distance Education using UML

    Science.gov (United States)

    Acharya, Anal; Mukherjee, Soumen

    2010-10-01

    Distance education continues to grow as a methodology for the delivery of course content in higher education in India as well as abroad. To manage this growing demand and to provide flexibility, strategic planning about the use of ICT tools is required. Value chain analysis is a framework for breaking down the sequence of business functions into a set of activities through which utility is added to a service. It can thus help to determine the competitive advantage enjoyed by an institute. To implement these business functions, some form of visual representation is required. UML allows for this representation by using a set of structural and behavioral diagrams. In this paper, the first section defines a framework for value chain analysis and highlights its advantages. The second section gives a brief overview of related work in this field. The third section gives a brief discussion on distance education. The fourth section very briefly introduces UML. The fifth section models the value chain of distance education using UML. Finally, we discuss the limitations and the problems posed in this domain.

  16. 3D hierarchical computational model of wood as a cellular material with fibril reinforced, heterogeneous multiple layers

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A 3D hierarchical computational model of deformation and stiffness of wood, which takes into account the structures of wood at several scale levels (cellularity, multilayered nature of cell walls, composite-like structures of the wall layers), is developed. At the mesoscale, the softwood cell is presented as a 3D hexagon-shape tube with multilayered walls. The layers in the softwood cell are considered as composites reinforced by microfibrils (celluloses). The elastic properties of the layers are determined with Halpin–Tsai equations and introduced into the mesoscale finite element cellular model. With the use of the developed hierarchical model, the influence of the microstructure, including microfibril angles (MFAs, which characterize the orientation of the cellulose fibrils with respect to the cell axis), the thickness of the cell wall, the shape of the cell cross...

  17. Dynamic model for tritium transfer in an aquatic food chain.

    Science.gov (United States)

    Melintescu, A; Galeriu, D

    2011-08-01

    Tritium ((3)H) is released from some nuclear facilities in relatively large quantities. It is a ubiquitous isotope because it enters straight into organisms, behaving essentially identically to its stable analogue (hydrogen). Tritium is a key radionuclide in the aquatic environment, in some cases contributing significantly to the doses received by aquatic, non-human biota and by humans. The updated model presented here is based on more standardized, comprehensive assessments than previously used for the aquatic food chain, including the benthic flora and fauna, with an explicit application to the Danube ecosystem, as well as an extension to the special case of dissolved organic tritium (DOT). The model predicts the organically bound tritium (OBT) in the primary producers (the autotrophs, such as phytoplankton and algae) and in the consumers (the heterotrophs) using their bioenergetics, which involves the investigation of energy expenditure, losses, gains and efficiencies of transformations in the body. The model described in the present study is intended to be more specific than a screening-level model, by including a metabolic approach and a description of the direct uptake of DOT in marine phytoplankton and invertebrates. For better control of tritium transfer into the environment, not only must tritiated water be monitored, but also the other chemical forms and, most importantly, OBT in the food chain.

  18. 15 years of food-chain modeling in Romania

    International Nuclear Information System (INIS)

    Gheorghe, R.; Galeriu, D.; Apostoaie, I.; Gheorghe, D.

    2002-01-01

    At the very beginning of the Chernobyl accident, the high contamination of food, its large variability and its unexpected behaviour posed many problems for assessing the radiological consequences. A simple dynamic food-chain model was built in early May and used for our first assessment of future food contamination and of the overall impact of Chernobyl in Romania. This quite primitive model showed remarkable performance when compared, in late 1986 and 1987, with measurements, and our dose projection was within a factor of two of the post-assessment performed 10 years later. After the radiological stress slowed down, we developed a more advanced, process-level food-chain model, using not only all available literature but also all local measurements with quality assurance. This model, named LINDOZ, was first applied internationally in the frame of the A4 scenario in BIOMOVS 1 and presented at the 1990 Stockholm conference. It was the first time that fallout solubility and foliar absorption were introduced in such a model, explaining very well the dynamics of grass and milk contamination. Upgrades of the model concerning deposition and retention were made, and LINDOZ91 was successfully applied in the international comparisons VAMP and BIOMOVS 2, including blind tests. Using local expertise and certified data, correlations between the probability distributions of deposition and food contamination were established and successfully applied to predict the distribution of Cs body content in the VAMP scenario. An extension to lake fish was made and tested with excellent results in BIOMOVS 2. In 1994, the model was applied in the first attempt to assess food contamination in the Iput region, and these older results have recently been compared with those obtained last year by other modelers using updated scenario information. The key points of LINDOZ and its performance in international comparison exercises are presented. In the 1990s the German model ECOSYS spread across Europe and a variant (FDMT) was developed as a food-chain model for

  19. Influence of chain topology and bond potential on the glass transition of polymer chains simulated with the bond fluctuation model

    International Nuclear Information System (INIS)

    Freire, J J

    2008-01-01

    The bond fluctuation model with a bond potential has been applied to investigation of the glass transition of linear chains and chains with a regular disposition of small branches. Cooling and subsequent heating curves are obtained for the chain energies and also for the mean acceptance probability of a bead jump. In order to mimic different trends to vitrification, a factor B gauging the strength of the bond potential with respect to the long-range potential (i.e. the intramolecular or intermolecular potential between indirectly bonded beads) has been introduced. (A higher value of B leads to a preference for the highest bond lengths and a higher total energy, implying a greater tendency to vitrify.) Different cases have been considered for linear chains: no long-range potential, no bond potential and several choices for B. Furthermore, two distinct values of B have been considered for alternate bonds in linear chains. In the case of the branched chains, mixed models with different values of B for bonds in the main chain and in the branches have also been investigated. The possible presence of ordering or crystallization has been characterized by calculating the collective light scattering function of the different samples after annealing at a convenient temperature below the onset of the abrupt change in the curves associated with a thermodynamic transition. It is concluded that ordering is inherited more efficiently in the systems with branched chains and also for higher values of B. The branched molecules with the highest B values in the main chain bonds exhibit two distinct transitions in the heating curves, which may be associated with two glass transitions. This behavior has been detected experimentally for chains with relatively long flexible branches

  20. Influence of chain topology and bond potential on the glass transition of polymer chains simulated with the bond fluctuation model

    Energy Technology Data Exchange (ETDEWEB)

    Freire, J J [Departamento de Ciencias y Tecnicas FisicoquImicas, Facultad de Ciencias, Universidad Nacional de Educacion a Distancia (UNED), Senda del Rey 9, 28040 Madrid (Spain)], E-mail: jfreire@invi.uned.es

    2008-07-16

    The bond fluctuation model with a bond potential has been applied to investigation of the glass transition of linear chains and chains with a regular disposition of small branches. Cooling and subsequent heating curves are obtained for the chain energies and also for the mean acceptance probability of a bead jump. In order to mimic different trends to vitrification, a factor B gauging the strength of the bond potential with respect to the long-range potential (i.e. the intramolecular or intermolecular potential between indirectly bonded beads) has been introduced. (A higher value of B leads to a preference for the highest bond lengths and a higher total energy, implying a greater tendency to vitrify.) Different cases have been considered for linear chains: no long-range potential, no bond potential and several choices for B. Furthermore, two distinct values of B have been considered for alternate bonds in linear chains. In the case of the branched chains, mixed models with different values of B for bonds in the main chain and in the branches have also been investigated. The possible presence of ordering or crystallization has been characterized by calculating the collective light scattering function of the different samples after annealing at a convenient temperature below the onset of the abrupt change in the curves associated with a thermodynamic transition. It is concluded that ordering is inherited more efficiently in the systems with branched chains and also for higher values of B. The branched molecules with the highest B values in the main chain bonds exhibit two distinct transitions in the heating curves, which may be associated with two glass transitions. This behavior has been detected experimentally for chains with relatively long flexible branches.

  1. Stochastic model of milk homogenization process using Markov's chain

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2016-01-01

    Full Text Available The development of a mathematical model of the homogenization of dairy products is considered in this work. The theory of Markov chains was used in the development of the model: a Markov chain with discrete states and a continuous parameter, for which the homogenization pressure is taken, is the basis of the model structure. The model is implemented in the MathWorks Simulink™ structural modeling environment. Identification of the model parameters was carried out by minimizing the standard deviation calculated from the experimental data for each fraction of the fat phase of the dairy product. The experimental data set consisted of the results of processing micrographic images of the fat-globule distributions of whole milk samples that had been homogenized at different pressures. The Pattern Search method, with the Latin Hypercube search algorithm from the Global Optimization Toolbox library, was used for optimization. The calculation error, averaged over all fractions, was 0.88% (relative share of units); the maximum relative error was 3.7% at a homogenization pressure of 30 MPa, which may be due to the very abrupt change in the particle size distribution of the original milk at the beginning of the homogenization process and to the lack of experimental data at homogenization pressures below this value. The proposed mathematical model allows the volume and mass distribution profiles of the fat phase (fat globules) in the product to be calculated as a function of the homogenization pressure, and can be used in laboratory research on dairy product composition, as well as in the calculation, design and modeling of process equipment for dairy industry enterprises.
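
    A minimal Python sketch of the modelling idea (discrete fat-globule size states, homogenization pressure playing the role of the continuous "time" parameter): the state definitions and the generator matrix below are invented for illustration, and scipy's matrix exponential stands in for the Simulink implementation.

        import numpy as np
        from scipy.linalg import expm

        # Discrete states: fat-globule size fractions, coarse -> fine.
        states = ["> 5 um", "2-5 um", "1-2 um", "< 1 um"]
        # Illustrative generator: breakup rates per MPa of homogenization pressure.
        Q = np.array([[-0.20,  0.20,  0.00, 0.00],
                      [ 0.00, -0.15,  0.15, 0.00],
                      [ 0.00,  0.00, -0.10, 0.10],
                      [ 0.00,  0.00,  0.00, 0.00]])   # finest fraction is absorbing

        pi0 = np.array([0.7, 0.2, 0.1, 0.0])           # size distribution of raw milk

        for pressure in (10, 20, 30):                  # MPa
            pi = pi0 @ expm(Q * pressure)              # P(pressure) = exp(Q * pressure)
            print(pressure, "MPa:", dict(zip(states, np.round(pi, 3))))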

  2. A Structural Model of Intellectual Capital in Supply Chains

    DEFF Research Database (Denmark)

    Parisi, Cristiana

    Intellectual capital (IC) is probably one of the most critical resources of the knowledge society. However, the discipline of IC still needs empirically grounded research, especially with regard to the interrelations between the different components of IC and how these enable or impinge upon the internal organisational value creation process. The present paper helps to address the need for empirical investigation of the interconnections between the components of IC and their value creation, by assessing the structural effects of intellectual capital on firms’ financial performance from a supply chain management perspective. A model composed of five constructs describing intellectual capital and three constructs describing firms’ internal performance is proposed. The theoretical model is then tested through a structural equation modeling technique. The components of intellectual capital ... performance. This result both contributes to the IC and SCM literature, as it offers a better understanding of the multiple interrelations between the IC components from a SCM perspective and provides evidence of their impact on the value creation process of supply chains.

  3. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    Science.gov (United States)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ˜ 21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were
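
    The combination of a simulation-based prior with sparse proxy information can be illustrated with a toy Gaussian conditioning step in Python. This is a hedged sketch only: the grid, covariance, proxy locations and error variance below are invented, and a real BHM would additionally sample transfer-function and age parameters rather than condition in closed form.

        import numpy as np

        # Gaussian prior on the temperature field at 40 grid cells; mean and covariance
        # stand in for statistics derived from climate-simulation output.
        rng = np.random.default_rng(2)
        n = 40
        x = np.linspace(0, 1, n)
        prior_mean = -10.0 + 4.0 * x
        prior_cov = 4.0 * np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)

        obs_idx = np.array([5, 18, 33])          # proxy locations on the grid
        obs = np.array([-9.0, -6.5, -4.0])       # proxy-derived temperature estimates
        obs_var = 1.0                            # transfer-function error variance

        # Standard Gaussian conditioning (Kalman-type update) on the proxy data.
        H = np.zeros((obs_idx.size, n)); H[np.arange(obs_idx.size), obs_idx] = 1.0
        S = H @ prior_cov @ H.T + obs_var * np.eye(obs_idx.size)
        K = prior_cov @ H.T @ np.linalg.inv(S)
        post_mean = prior_mean + K @ (obs - H @ prior_mean)
        post_cov = prior_cov - K @ H @ prior_cov
        print(np.round(post_mean[:10], 2))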

  4. Construction of Hierarchical Models for Fluid Dynamics in Earth and Planetary Sciences : DCMODEL project

    Science.gov (United States)

    Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.

    2012-12-01

    Toward the understanding of the fluid motions of planetary atmospheres and planetary interiors by performing multiple numerical experiments with multiple models, we are carrying out the ``dcmodel project'', in which a series of hierarchical numerical models of various complexity is developed and maintained. In the dcmodel project, the numerical models are developed with attention to the following points: 1) a common ``style'' of program code assuring the readability of the software, 2) open-source release of the model codes to the public, 3) scalability of the models assuring execution on various scales of computational resources, and 4) an emphasis on documentation, including a method for writing reference manuals. The lineup of the models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, the features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library which provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive data format netCDF is adopted as the IO format of Gtool5. The gtool5 interfaces reduce the number of operations needed for data IO in the model codes compared with the raw netCDF library. Further, by using the gtool5 library, procedures for data IO and for adding metadata for post-processing can be easily implemented in the program codes in a consolidated form, independent of the size and complexity of the models. ``ISPACK'' is the spectral transformation library and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic naming rules, which enables us to write code in a form that is easily deduced from the mathematical expressions of the governing equations. ``SPMODEL'' (Takehiro et al., 2006

  5. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probabilities for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
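
    The thermodynamic (path-sampling) idea can be sketched in a few lines of Python for a toy Gaussian-mean model; the prior, proposal scale, number of iterations and trapezoidal integration below are illustrative choices, not those of the study.

        import numpy as np

        rng = np.random.default_rng(3)
        data = rng.normal(1.0, 1.0, size=20)            # toy data with known unit variance

        def log_like(mu):                               # log N(data | mu, 1)
            return -0.5 * np.sum((data - mu) ** 2) - 0.5 * data.size * np.log(2 * np.pi)

        def log_prior(mu):                              # log N(mu | 0, 3^2)
            return -0.5 * (mu / 3.0) ** 2 - np.log(3.0 * np.sqrt(2 * np.pi))

        def power_posterior_samples(beta, n_iter=20000):
            """Metropolis sampler targeting prior * likelihood**beta ("heated" chain)."""
            mu, samples = 0.0, []
            for _ in range(n_iter):
                prop = mu + rng.normal(scale=1.0)
                log_ratio = (log_prior(prop) + beta * log_like(prop)
                             - log_prior(mu) - beta * log_like(mu))
                if np.log(rng.uniform()) < log_ratio:
                    mu = prop
                samples.append(mu)
            return np.array(samples[n_iter // 2:])      # discard burn-in

        # Thermodynamic integration: log p(D) = integral over beta of E_beta[log L]
        betas = np.linspace(0.0, 1.0, 11)
        means = [np.mean([log_like(m) for m in power_posterior_samples(b)]) for b in betas]
        print("estimated log marginal likelihood:", np.trapz(means, betas))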

  6. Monitoring Farmland Loss Caused by Urbanization in Beijing from Modis Time Series Using Hierarchical Hidden Markov Model

    Science.gov (United States)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we propose a method to map urban encroachment onto farmland using satellite image time series (SITS), based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. A three-level HHMM is then constructed to model the multi-level semantic structure of the farmland change process. Once the HHMM is established, a change from farmland to built-up land can be detected by inferring the underlying state sequence that is most likely to have generated the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
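
    To make the state-inference step concrete, here is a hedged, flat-HMM stand-in in Python (not the three-level HHMM of the paper): hidden land-cover regimes are decoded from a coarse two-symbol NDVI series with the Viterbi algorithm, and every probability below is invented for illustration.

        import numpy as np

        states = ["farmland", "built-up"]
        obs_symbols = {"low": 0, "high": 1}
        start = np.log([0.9, 0.1])
        trans = np.log([[0.95, 0.05],     # farmland persists but may convert
                        [0.001, 0.999]])  # conversion to built-up is near-irreversible
        emit = np.log([[0.3, 0.7],        # farmland: mostly high NDVI in the growing season
                       [0.9, 0.1]])       # built-up: mostly low NDVI

        def viterbi(seq):
            """Most likely hidden state sequence for an observation sequence."""
            T = len(seq)
            delta = np.full((T, 2), -np.inf)
            back = np.zeros((T, 2), dtype=int)
            delta[0] = start + emit[:, seq[0]]
            for t in range(1, T):
                for j in range(2):
                    scores = delta[t - 1] + trans[:, j]
                    back[t, j] = np.argmax(scores)
                    delta[t, j] = scores[back[t, j]] + emit[j, seq[t]]
            path = [int(np.argmax(delta[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return [states[s] for s in reversed(path)]

        series = [obs_symbols[o] for o in
                  ["high", "high", "high", "low", "low", "low", "low", "low"]]
        print(viterbi(series))   # the farmland-to-built-up change should appear mid-series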

  7. Markov chain modelling of pitting corrosion in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieri' a Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, Mexico D. F. 07738 (Mexico)

    2009-09-15

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.
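
    A hedged Monte-Carlo sketch in Python of a non-homogeneous linear-growth (pure birth) process for pit depth is given below; the rate function, time step and depth increments are invented for illustration and are not the soil-calibrated model of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        def simulate_pit_depth(t_max=30.0, dt=0.01, n_pits=2000):
            """Simulate pits whose depth (in discrete increments) grows as a pure
            birth process with state-dependent, time-decaying rate n * lam(t)."""
            lam = lambda t: 0.15 / (1.0 + t)        # illustrative intensity per year
            depth = np.ones(n_pits, dtype=int)      # every pit starts in state 1
            t = 0.0
            while t < t_max:
                p_birth = depth * lam(t) * dt       # P(jump n -> n+1 in [t, t+dt))
                depth += (rng.random(n_pits) < p_birth).astype(int)
                t += dt
            return depth

        depths = simulate_pit_depth()
        print("mean depth (increments):", depths.mean())
        print("95th percentile:", np.percentile(depths, 95))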

  8. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equation modelling to assess our measurement models and test our research hypotheses. Using a cross-industry sample of 153 innovative firms, we find that corporate foresight can be validly and reliably measured by our measurement instrument. The results of the structural model support the hypothesized positive effects of corporate foresight on all

  9. Markov chain modelling of pitting corrosion in underground pipelines

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    A continuous-time, non-homogeneous linear growth (pure birth) Markov process has been used to model external pitting corrosion in underground pipelines. The closed form solution of Kolmogorov's forward equations for this type of Markov process is used to describe the transition probability function in a discrete pit depth space. The identification of the transition probability function can be achieved by correlating the stochastic pit depth mean with the deterministic mean obtained experimentally. Monte-Carlo simulations previously reported have been used to predict the time evolution of the mean value of the pit depth distribution for different soil textural classes. The simulated distributions have been used to create an empirical Markov chain-based stochastic model for predicting the evolution of pitting corrosion depth and rate distributions from the observed properties of the soil. The proposed model has also been applied to pitting corrosion data from pipeline repeated in-line inspections and laboratory immersion experiments.

  10. Discrete persistent-chain model for protein binding on DNA.

    Science.gov (United States)

    Lam, Pui-Man; Zhen, Yi

    2011-04-01

    We describe and solve a discrete persistent-chain model of protein binding on DNA, involving an extra variable σ(i) at each site i of the DNA. This variable takes the value 1 or 0, depending on whether or not the site is occupied by a protein. In addition, if the site is occupied by a protein, there is an extra energy cost ɛ. For a small force, we obtain analytic expressions for the force-extension curve and the fraction of bound protein on the DNA. For higher forces, the model can be solved numerically to obtain force-extension curves and the average fraction of bound proteins as a function of applied force. Our model can be used to analyze experimental force-extension curves of protein binding on DNA, and hence deduce the number of bound proteins in the case of nonspecific binding. ©2011 American Physical Society

  11. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and ele...

  12. Supply Chain Simulation using Business Process Modeling in Service Oriented Architecture

    OpenAIRE

    Taejong Yoo

    2015-01-01

    For supply chain optimization, as a key determinant of strategic resources mobility along the value-added chain, simulation is widely used to test the impact on supply chain performance for the strategic level decisions, such as the number of plants, the modes of transport, or the relocation of warehouses. Traditionally, a single centralized model that encompasses multiple participants in the supply chain is built when optimization of the supply chain through simulation is required. However, ...

  13. Modeling and Optimization of Inventory-Distribution Routing Problem for Agriculture Products Supply Chain

    OpenAIRE

    Liao, Li; Li, Jianfeng; Wu, Yaohua

    2013-01-01

    Mathematical models of the inventory-distribution routing problem for a two-echelon agricultural products distribution network are established. The models are based on two management modes, franchise chain and regular chain, and account for one-to-many distribution, interval periodic ordering, inventory-dependent demand, deterioration treatment costs of agricultural products, vehicle start-up costs and so forth. Then, a heuristic adaptive genetic algorithm is presented for the franchise chain model. For the regular chain model,...

  14. Evaluating the value chain model for service organisational strategy: International hotels.

    OpenAIRE

    Choi, Keetag.

    2000-01-01

    Strategic models like Porter's (1985) value chain have not been fully evaluated in the strategy literature and applied to all industries. To theoretically redefine the value chain technique, this research evaluates the value chain's use with various strategic issues by applying it to a specific aspect in the service field, namely the hotel industry. The study defines five key questions by which to evaluate a strategic model and the value chain model is examined using them. This research is a ...

  15. Primitive-path statistics of entangled polymers: mapping multi-chain simulations onto single-chain mean-field models

    International Nuclear Information System (INIS)

    Steenbakkers, Rudi J A; Schieber, Jay D; Tzoumanekas, Christos; Li, Ying; Liu, Wing Kam; Kröger, Martin

    2014-01-01

    We present a method to map the full equilibrium distribution of the primitive-path (PP) length, obtained from multi-chain simulations of polymer melts, onto a single-chain mean-field ‘target’ model. Most previous works used the Doi–Edwards tube model as a target. However, the average number of monomers per PP segment, obtained from multi-chain PP networks, has consistently shown a discrepancy of a factor of two with respect to tube-model estimates. Part of the problem is that the tube model neglects fluctuations in the lengths of PP segments, the number of entanglements per chain and the distribution of monomers among PP segments, while all these fluctuations are observed in multi-chain simulations. Here we use a recently proposed slip-link model, which includes fluctuations in all these variables as well as in the spatial positions of the entanglements. This turns out to be essential to obtain qualitative and quantitative agreement with the equilibrium PP-length distribution obtained from multi-chain simulations. By fitting this distribution, we are able to determine two of the three parameters of the model, which govern its equilibrium properties. This mapping is executed for four different linear polymers and for different molecular weights. The two parameters are found to depend on chemistry, but not on molecular weight. The model predicts a constant plateau modulus minus a correction inversely proportional to molecular weight. The value for well-entangled chains, with the parameters determined ab initio, lies in the range of experimental data for the materials investigated. (paper)

  16. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
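
    The partitioning of the DIC into per-observation contributions can be sketched numerically in Python; the toy normal model, the fabricated posterior draws and the planted outlier below are purely illustrative, and a real application would use MCMC draws from the model actually being assessed.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy setup: y_i ~ N(mu, 1) with fabricated posterior draws for mu.
        y = rng.normal(0.0, 1.0, size=30)
        y[-1] += 4.0                                        # one outlying observation
        mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(y.size), size=5000)

        def dev_i(y_i, mu):                                 # -2 log N(y_i | mu, 1)
            return (y_i - mu) ** 2 + np.log(2 * np.pi)

        dev_samples = dev_i(y[:, None], mu_draws[None, :])  # shape (n_obs, n_draws)
        dbar_i = dev_samples.mean(axis=1)                   # posterior mean deviance per obs
        dhat_i = dev_i(y, mu_draws.mean())                  # deviance at the posterior mean
        pd_i = dbar_i - dhat_i                              # local effective parameters
        local_dic = dbar_i + pd_i                           # = 2 * dbar_i - dhat_i
        print("total DIC:", local_dic.sum())
        print("largest local DIC (likely the outlier):", local_dic.argmax(), local_dic.max())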

  17. A two-stage value chain model for vegetable marketing chain efficiency evaluation: A transaction cost approach

    OpenAIRE

    Lu Hualiang

    2006-01-01

    We applied a two-stage value chain model to investigate the effects of input application and occasional transaction costs on vegetable marketing chain efficiencies with a farm household-level data set. In the first stage, the production efficiencies with the combination of resource endowments, capital and managerial inputs, and production techniques were evaluated; then at the second stage, the marketing technical efficiencies were determined under the marketing value of the vegetables for th...

  18. Comparison of models of radionuclide migration in food chains

    International Nuclear Information System (INIS)

    Hanusik, V.; Mitro, A.; Chorvat, D.

    1985-01-01

    Two models used for describing the transfer of radioactive substances to man through food chains are compared: the model used in US NRC Regulatory Guide 1.109 and that used in Interatomehnergo NTD No. 38.220.56-81. The models are compared with regard to the approach to model construction, the mathematical expressions, and the recommended values of the parameters. The comparative calculations show that, with the recommended values, the contribution of direct contamination is prevalent in both models. The concentrations of radioactive substances in selected products calculated for indirect contamination using the NRC method are more conservative. For direct and total contamination, the NRC method provides higher concentrations in leaf and non-leaf vegetables (cabbage, potatoes, cucumbers) than the NTD method. Concentrations in non-leaf vegetables are higher than in wheat for only 4 nuclides, and in meat and milk for 13 radionuclides, of the considered set of 22 radionuclides. Substituting the recommended values of the parameters of the NRC model with the recommended values of the corresponding parameters of the NTD model will reduce the total concentrations in products relative to the initial results of the two studied models. (author)

  19. Comparison of a species distribution model and a process model from a hierarchical perspective to quantify effects of projected climate change on tree species

    Science.gov (United States)

    Jeffrey E. Schneiderman; Hong S. He; Frank R. Thompson; William D. Dijak; Jacob S. Fraser

    2015-01-01

    Tree species distribution and abundance are affected by forces operating across a hierarchy of ecological scales. Process and species distribution models have been developed emphasizing forces at different scales. Understanding model agreement across hierarchical scales provides perspective on prediction uncertainty and ultimately enables policy makers and managers to...

  20. The role of uncertainty in supply chains under dynamic modeling

    Directory of Open Access Journals (Sweden)

    M. Fera

    2017-01-01

    Full Text Available Over the coming decades, uncertainty in the supply chains (SCs) of manufacturing and service firms will become increasingly important for companies that have to compete in a newly globalized economy. Risky situations for manufacturing are considered in trying to identify the optimal positioning of the order penetration point (OPP), which defines the point to which information on the client's order penetrates back through the supply chain (SC) phases, i.e. engineering, procurement, production and distribution. This work aims at defining a system dynamics model to assess the competitiveness resulting from positioning the order at different SC locations. A Taguchi analysis has been implemented to create a decision map for identifying possible strategic decisions under different scenarios and with alternatives for order location at the SC levels. Centralized and decentralized strategies for SC integration are discussed. In the model proposed, the location of the OPP is influenced by demand variation, production time, stock-outs and stock amount. The results of this research are as follows: (i) customer-oriented strategies are preferable under high volatility of demand, (ii) production-focused strategies are suggested when the probability of stock-outs is high, (iii) no specific location is preferable if a centralized control architecture is implemented, (iv) centralization requires cooperation among partners to achieve the SC optimum point, (v) the producer must not prefer the OPP location at the retailer level when the general strategy is focused on a decentralized approach.
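
    A very small system-dynamics-style Python sketch of the trade-offs discussed above (not the authors' model): a single inventory stock replenished with a production delay faces volatile demand, and moving the order penetration point is mimicked only by changing the replenishment delay; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        def simulate(delay_weeks, demand_sd, weeks=200, target=100.0):
            """Return the stock-out frequency for a given replenishment delay and
            demand volatility, under a simple order-up-to policy."""
            inventory, pipeline = target, [20.0] * delay_weeks
            stockouts = 0
            for _ in range(weeks):
                demand = max(0.0, rng.normal(20.0, demand_sd))
                shipped = min(inventory, demand)
                stockouts += demand > inventory
                inventory += pipeline.pop(0) - shipped      # arrival minus shipments
                # order what is missing from the target inventory position
                order = max(0.0, target + demand * delay_weeks - inventory - sum(pipeline))
                pipeline.append(order)
            return stockouts / weeks

        for delay in (1, 3, 6):
            for sd in (2.0, 10.0):
                print(f"delay={delay}w, demand sd={sd}: stock-out freq = "
                      f"{simulate(delay, sd):.2f}")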