WorldWideScience

Sample records for hierarchical model parallel

  1. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)]

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation and for their scalability. Significant enhancements are also discovered which both strengthen their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism and for an architectural mapping; several architectural mappings are discussed, and a few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or to provide an entirely new forum for the application of hierarchical methods.

  2. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamic model of the complex environment is constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment in which the crowds reside, we employ different parallel methods at the topological, path, and perceptual layers: we propose a parallel motion-path-matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. These methods make large-scale real-time crowd simulation possible. Numerical experiments are carried out to demonstrate the methods and results.

  3. Metastable states in the hierarchical Dyson model drive parallel processing in the hierarchical Hopfield network

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Guerra, Francesco; Galluzzi, Andrea; Tantari, Daniele; Tavani, Flavia

    2015-01-01

    In this paper, we introduce and investigate the statistical mechanics of hierarchical neural networks. First, we approach these systems à la Mattis, by thinking of the Dyson model as a single-pattern hierarchical neural network. We also discuss the stability of different retrievable states as predicted by the related self-consistencies obtained both from a mean-field bound and from a bound that bypasses the mean-field limitation. The latter is worked out by properly reabsorbing the magnetization fluctuations related to higher levels of the hierarchy into effective fields for the lower levels. Remarkably, mixing Amit's ansatz technique for selecting candidate-retrievable states with the interpolation procedure for solving for the free energy of these states, we prove that, due to gauge symmetry, the Dyson model accomplishes both serial and parallel processing. We extend this scenario to multiple stored patterns by implementing the Hebb prescription for learning within the couplings. This results in Hopfield-like networks constrained on a hierarchical topology, for which, by restricting to the low-storage regime where the number of patterns grows at most logarithmically with the number of neurons, we prove the existence of the thermodynamic limit for the free energy, and we give an explicit expression of its mean-field bound and of its related improved bound. The resulting self-consistencies for the Mattis magnetizations, which act as order parameters, are studied, and the stability of their solutions is analyzed to get a picture of the overall retrieval capabilities of the system according to both mean-field and non-mean-field scenarios. Our main finding is that embedding the Hebbian rule on a hierarchical topology allows the network to accomplish both serial and parallel processing. By tuning the level of fast noise affecting it, or by triggering the decay of the interactions with the distance among neurons, the system may switch from sequential retrieval to parallel retrieval.
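    For reference, the Hebb prescription for the couplings and the Mattis magnetizations that serve as order parameters take their standard forms (generic notation, not the paper's; the paper further modulates the couplings with a hierarchical, distance-dependent weight):

        $J_{ij} = \frac{1}{N}\sum_{\mu=1}^{P} \xi_i^{\mu} \xi_j^{\mu}, \qquad m_{\mu} = \frac{1}{N}\sum_{i=1}^{N} \xi_i^{\mu} \sigma_i,$

    where $\sigma_i = \pm 1$ are the neurons and $\xi^{\mu} \in \{-1,+1\}^N$ the stored patterns.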

  4. Application of Parallel Hierarchical Matrices in Spatial Statistics and Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2018-04-20

    Parallel H-matrices in spatial statistics: (1) motivation: improve the statistical model; (2) tools: hierarchical matrices [Hackbusch 1999]; (3) Matérn covariance function and joint Gaussian likelihood; (4) identification of unknown parameters via maximization of the Gaussian log-likelihood; (5) implementation with HLIBPro.

  5. Models of parallel computation: a survey and classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

    In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models developed during the past decades and, according to the features of their target architectures, especially memory organization, classify these parallel computational models into three generations. The models and their characteristics are then discussed on the basis of this three-generation classification. We believe that, with the ever-increasing speed gap between CPUs and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms is becoming more and more complicated, so describing this parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers would reduce the complexity of model analysis, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features to consider in future model design and research.

  6. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
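    The saving comes from replacing one flat broadcast of each pivot block with a two-level broadcast: once to each group leader, then locally within each group. The sketch below illustrates only this communication pattern with mpi4py; it is not the paper's implementation, and the group count G, the block size, and the assumption that G divides the process count are all choices of this sketch.

        # run with, e.g.: mpiexec -n 16 python hierarchical_bcast_sketch.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        G = 4                                            # process groups; assumes size % G == 0
        group = comm.Split(color=rank // (size // G), key=rank)
        is_leader = group.Get_rank() == 0                # lowest global rank in each group
        leaders = comm.Split(color=0 if is_leader else 1, key=rank)

        block = np.empty(1024, dtype=np.float64)         # stand-in for a pivot block of A or B
        if rank == 0:
            block[:] = np.random.rand(1024)

        if is_leader:
            leaders.Bcast(block, root=0)                 # level 1: root -> group leaders
        group.Bcast(block, root=0)                       # level 2: each leader -> its own group

    Splitting one large broadcast into an inter-group stage and an intra-group stage is what lowers the communication cost at large process counts, at the price of an extra leader hop.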

  7. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2014-01-01

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems.

  8. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    Science.gov (United States)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper deals with the insufficient performance of existing computing hardware for large-scale image processing, which does not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation made it possible to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation on a GPU-oriented architecture using GPGPU technologies. The measured performance of the suggested computerized tools for processing and classifying laser beam profile images shows that they allow real-time processing of dynamic images of various sizes.

  9. Application of Parallel Hierarchical Matrices and Low-Rank Tensors in Spatial Statistics and Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2018-03-12

    Part 1: Parallel H-matrices in spatial statistics: (1) motivation: improve the statistical model; (2) tools: hierarchical matrices; (3) Matérn covariance function and joint Gaussian likelihood; (4) identification of unknown parameters via maximization of the Gaussian log-likelihood; (5) implementation with HLIBPro. Part 2: Low-rank Tucker tensor methods in spatial statistics.

  10. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    Science.gov (United States)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region growing segmentation in which the optimal image segmentation is found at N(sub R) regions, given a segmentation at N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.

  11. Parallel iterative solvers and preconditioners using approximate hierarchical methods

    Energy Technology Data Exchange (ETDEWEB)

    Grama, A.; Kumar, V.; Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    In this paper, we report results on the performance, convergence, and accuracy of a parallel GMRES solver for boundary element methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on convergence and show that, with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders of magnitude in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and of a preconditioner based on a truncated Green's function. Experimental results on a 256-processor Cray T3D are presented.
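    The structure described here, a Krylov solver driven by an approximate fast matrix-vector product, is easy to prototype. The sketch below is a stand-in, not the authors' code: it feeds SciPy's GMRES a LinearOperator whose matvec would, in a real solver, be the Barnes-Hut/FMM evaluation rather than the explicit dense product used here for brevity, and the kernel matrix is an arbitrary well-conditioned choice.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 2000
        pts = np.random.rand(n, 2)
        dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        K = np.eye(n) + 0.1 / n / (dist + 0.05)   # stand-in for a dense BEM kernel matrix

        def approx_matvec(x):
            # In the paper this is a Barnes-Hut / FMM evaluation with tunable
            # accuracy, costing roughly O(n log n) instead of the O(n^2) here.
            return K @ x

        A = LinearOperator((n, n), matvec=approx_matvec)
        b = np.ones(n)
        x, info = gmres(A, b)
        print("info:", info, "residual:", np.linalg.norm(K @ x - b))

    Because GMRES only ever calls the operator, swapping the dense product for a hierarchical approximation changes nothing else in the solver; the accuracy parameters of the approximation then trade matvec cost against iteration count, which is the trade-off the paper studies.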

  12. Hierarchical Control of Parallel AC-DC Converter Interfaces for Hybrid Microgrids

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Guerrero, Josep M.; Sun, Kai

    2014-01-01

    In this paper, a hierarchical control system for parallel power electronics interfaces between ac bus and dc bus in a hybrid microgrid is presented. Both standalone and grid-connected operation modes in the dc side of the microgrid are analyzed. Concretely, a three-level hierarchical control system...... equal or proportional dc load current sharing. The common secondary control level is designed to eliminate the dc bus voltage deviation produced by the droop control, with dc bus voltage in the hybrid microgrid boosted to an acceptable range. After guaranteeing the performance of the dc side standalone...

  13. Dynamic modeling and hierarchical compound control of a novel 2-DOF flexible parallel manipulator with multiple actuation modes

    Science.gov (United States)

    Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying

    2018-03-01

    This paper addresses the problem of rigid-flexible coupling dynamic modeling and active control of a novel flexible parallel manipulator (PM) with multiple actuation modes. First, based on flexible multi-body dynamics theory, the rigid-flexible coupling dynamic model (RFDM) of the system is developed via the augmented Lagrangian multiplier approach. For completeness, mathematical models of the permanent magnet synchronous motor (PMSM) and the piezoelectric transducer (PZT) are further established and integrated with the RFDM of the mechanical system to formulate the electromechanical coupling dynamic model (ECDM). To achieve trajectory tracking and vibration suppression, a hierarchical compound control strategy is presented. Within this control strategy, a proportional-derivative (PD) feedback controller is employed to realize trajectory tracking of the end-effector, while a strain and strain-rate feedback (SSRF) controller is developed to restrain the vibration of the flexible links using the PZT. Furthermore, the stability of the control algorithm is demonstrated using Lyapunov stability theory. Finally, two simulation case studies are performed to illustrate the effectiveness of the proposed approach. The results indicate that, under the redundant actuation mode, the hierarchical compound control strategy guarantees that the flexible PM achieves singularity-free motion and vibration attenuation within the task workspace simultaneously. The systematic methodology proposed in this study can be conveniently extended to the dynamic modeling and efficient controller design of other flexible PMs, especially the emerging ones with multiple actuation modes.

  14. Algorithm of parallel-hierarchical transformation and its implementation on FPGA

    Science.gov (United States)

    Timchenko, Leonid I.; Petrovskiy, Mykola S.; Kokryatskay, Natalia I.; Barylo, Alexander S.; Dembitska, Sofia V.; Stepanikuk, Dmytro S.; Suleimenov, Batyrbek; Zyska, Tomasz; Uvaysova, Svetlana; Shedreyeva, Indira

    2017-08-01

    This paper considers an algorithm for classifying laser beam spot images in atmospheric optical transmission systems. It discusses the need to filter images with adaptive methods, using, for example, parallel-hierarchical networks, and highlights the need to create high-speed memory devices for such networks. Implementation and simulation results of the developed method based on a PLD are demonstrated, showing that the presented method gives 15-20% better prediction results than similar methods.

  15. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Science.gov (United States)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic region-growing approaches.
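    The divide-and-conquer recursion at the heart of RHSEG is easy to sketch. The code below is a toy under stated assumptions: segment() is a stub standing in for HSWO region growing with constrained spectral clustering, and the seam-merging step that removes subdivision artifacts is only indicated by a comment.

        import numpy as np

        def segment(tile):
            # Stub for HSEG on a small tile: label pixels by quantized intensity.
            return (tile // 64).astype(np.int32)

        def rhseg(img, min_size=64):
            h, w = img.shape
            if h <= min_size or w <= min_size:
                return segment(img)
            labels = np.empty(img.shape, dtype=np.int32)
            offset = 0
            # Recursively subdivide into quadrants; the four calls are independent
            # and map naturally onto separate processors.
            for ys in (slice(0, h // 2), slice(h // 2, h)):
                for xs in (slice(0, w // 2), slice(w // 2, w)):
                    sub = rhseg(img[ys, xs], min_size)
                    labels[ys, xs] = sub + offset        # keep quadrant labels distinct
                    offset += int(sub.max()) + 1
            # Real RHSEG now continues HSEG on the recombined result and repairs
            # artifacts along the quadrant seams; omitted here.
            return labels

        img = (np.random.rand(256, 256) * 255).astype(np.uint8)
        print(np.unique(rhseg(img)).size, "regions before seam merging")

    Each recursion level maps onto a level of the processor hierarchy, which is what makes the parallel implementation straightforward.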

  16. High-speed parallel solution of the neutron diffusion equation with the hierarchical domain decomposition boundary element method incorporating parallel communications

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Chiba, Gou

    2000-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to achieve high parallel efficiency on distributed-memory message-passing parallel computers. Data exchanges between node processors, repeated during the iteration processes of HDD-BEM, are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, parallel processing can be executed with only cooperative operations among node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and its parallelization efficiency decreased steeply with the increase in the number of processors. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to the decomposed subregions, and the communication overhead can be drastically reduced. This feature is particularly advantageous in the analysis of three-dimensional problems, where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the problem of deteriorating parallel efficiency and opens a new path to parallel computation of NDEs on distributed-memory message-passing parallel computers. (author)

  17. Method of Parallel-Hierarchical Network Self-Training and its Application for Pattern Classification and Recognition

    Directory of Open Access Journals (Sweden)

    TIMCHENKO, L.

    2012-11-01

    Propositions necessary for the development of parallel-hierarchical (PH) network training methods are discussed in this article. Unlike known artificial neural network structures, where non-normalized (absolute) similarity criteria are used for comparison, the suggested structure uses a normalized criterion. Based on an analysis of training rules, it is concluded that two supervised training methods are optimal for PH network training: error-correction-based training and memory-based training. Mathematical models of training and a combined method of PH network training for recognition of static and dynamic patterns are developed.

  18. A Parallel Computational Model for Multichannel Phase Unwrapping Problem

    Science.gov (United States)

    Imperatore, Pasquale; Pepe, Antonio; Lanari, Riccardo

    2015-05-01

    In this paper, a parallel model for the solution of the computationally intensive multichannel phase unwrapping (MCh-PhU) problem is proposed. Firstly, the Extended Minimum Cost Flow (EMCF) algorithm for solving the MCh-PhU problem is revised within the rigorous mathematical framework of discrete calculus, thus permitting us to capture its topological structure in terms of meaningful discrete differential operators. Secondly, emphasis is placed on those methodological and practical aspects which lead to a parallel reformulation of the EMCF algorithm. Thus, a novel dual-level parallel computational model, in which the parallelism is hierarchically implemented at two different (i.e., process and thread) levels, is presented. The validity of our approach has been demonstrated through a series of experiments that have revealed a significant speedup. The attained high-performance prototype is therefore suitable for the solution of large-scale phase unwrapping problems in reasonable time frames, with a significant impact on the systematic exploitation of the existing, and rapidly growing, large archives of SAR data.

  19. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    KAUST Repository

    Quintin, Jean-Noel

    2013-10-01

    Matrix multiplication is a very important computation kernel, both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm, which dates back to 1969, was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However, this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm, as it can be used on a nonsquare number of processors as well. Since then, the number of processors in HPC platforms has increased by two orders of magnitude, making the contribution of communication to the overall execution time more significant. Therefore, state-of-the-art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate a reduction of communication cost of up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.

  20. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    KAUST Repository

    Quintin, Jean-Noel; Hasanov, Khalid; Lastovetsky, Alexey

    2013-01-01

    Matrix multiplication is a very important computation kernel, both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm, which dates back to 1969, was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However, this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm, as it can be used on a nonsquare number of processors as well. Since then, the number of processors in HPC platforms has increased by two orders of magnitude, making the contribution of communication to the overall execution time more significant. Therefore, state-of-the-art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate a reduction of communication cost of up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.

  1. Requirements for implementing real-time control functional modules on a hierarchical parallel pipelined system

    Science.gov (United States)

    Wheatley, Thomas E.; Michaloski, John L.; Lumia, Ronald

    1989-01-01

    Analysis of a robot control system leads to a broad range of processing requirements. One fundamental requirement of a robot control system is a microcomputer system that provides sufficient processing capability. The use of multiple processors in a parallel architecture is beneficial for a number of reasons, including better cost-performance, modular growth, increased reliability through replication, and flexibility for testing alternate control strategies via different partitioning. A survey of the progression from low-level control synchronizing primitives to higher-level communication tools is presented. The system communication and control mechanisms of existing robot control systems are compared to the hierarchical control model, and the impact of this design methodology on current robot control systems is explored.

  2. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E.

    2017-01-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters, such as the covariance length, the variance, and the smoothness parameter of a Matérn covariance function, by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra involving large, dense covariance matrices. Therefore, the covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format, with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate the method on a synthetic example where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
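    To make the estimation loop concrete, here is a small dense-matrix sketch of the Matérn covariance and the joint Gaussian log-likelihood. It is a toy under stated assumptions: HLIBCov replaces the cubic-cost dense Cholesky below with the $\mathcal{H}$-matrix approximation quoted above, and the parameter names are this sketch's, not the library's.

        import numpy as np
        from scipy.special import gamma, kv
        from scipy.linalg import cho_factor, cho_solve
        from scipy.optimize import minimize

        def matern_cov(D, ell, nu, sigma2):
            # Matérn covariance on a distance matrix D; zero-distance entries
            # take the limiting value sigma2.
            C = np.full_like(D, sigma2, dtype=float)
            m = D > 0
            s = np.sqrt(2.0 * nu) * D[m] / ell
            C[m] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * s ** nu * kv(nu, s)
            return C

        def neg_log_lik(theta, D, y):
            ell, nu, sigma2 = np.exp(theta)              # optimize on the log scale
            K = matern_cov(D, ell, nu, sigma2) + 1e-8 * np.eye(len(y))
            F = cho_factor(K, lower=True)
            logdet = 2.0 * np.log(np.diag(F[0])).sum()
            return 0.5 * (len(y) * np.log(2 * np.pi) + logdet + y @ cho_solve(F, y))

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(400, 2))                   # observation locations
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        y = rng.multivariate_normal(np.zeros(len(X)), matern_cov(D, 0.2, 1.5, 1.0))
        fit = minimize(neg_log_lik, x0=np.log([0.1, 1.0, 0.5]), args=(D, y))
        print("estimated (ell, nu, sigma2):", np.exp(fit.x))

    Every likelihood evaluation is dominated by the Cholesky factorization, which is exactly the step the $\mathcal{H}$-matrix format reduces to near-linear complexity.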

  3. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-11-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters, such as the covariance length, the variance, and the smoothness parameter of a Matérn covariance function, by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra involving large, dense covariance matrices. Therefore, the covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format, with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate the method on a synthetic example where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  4. SORM applied to hierarchical parallel system

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2006-01-01

    of a particular first-order reliability method (FORM) was first described in a celebrated paper by Rackwitz and Fiessler more than a quarter of a century ago. The method has become known as the Rackwitz-Fiessler algorithm. The original RF-algorithm as applied to a hierarchical random variable model...... is recapitulated so that a simple but quite effective accuracy-improving calculation can be explained. A limit-state curvature correction factor on the probability approximation is obtained from the final step results of the RF-algorithm. This correction factor is based on Breitung's asymptotic formula for second...
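    For context, the correction referred to here is Breitung's classical second-order (SORM) asymptotic result: if $\beta$ is the FORM reliability index and $\kappa_i$ are the $n-1$ principal curvatures of the limit-state surface at the design point (signed so that $\kappa_i > 0$ when the surface curves away from the origin), then

        $P_f \approx \Phi(-\beta) \prod_{i=1}^{n-1} (1 + \beta \kappa_i)^{-1/2},$

    so the product is precisely a curvature correction factor multiplying the FORM estimate $\Phi(-\beta)$.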

  5. Classification using Hierarchical Naive Bayes models

    DEFF Research Database (Denmark)

    Langseth, Helge; Dyhre Nielsen, Thomas

    2006-01-01

    Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing set of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe......, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...

  6. Learning with hierarchical-deep models.

    Science.gov (United States)

    Salakhutdinov, Ruslan; Tenenbaum, Joshua B; Torralba, Antonio

    2013-08-01

    We introduce HD (or “Hierarchical-Deep”) models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian (HB) models. Specifically, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a deep Boltzmann machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.

  7. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)]

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  8. A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Sergio Briguglio

    2003-01-01

    A performance-prediction model is presented which describes different hierarchical workload decomposition strategies for particle-in-cell (PIC) codes on clusters of symmetric multiprocessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model with respect to memory occupancy, parallelization efficiency, and the required programming effort. These strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node stage). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.

  9. Topics in Computational Bayesian Statistics With Applications to Hierarchical Models in Astronomy and Sociology

    Science.gov (United States)

    Sahai, Swupnil

    This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting big hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernovae data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.

  10. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    OpenAIRE

    Yunqing Rao; Dezhong Qi; Jinling Li

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed within the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, and an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for better ...

  11. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

    The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative search of similar prostate image patches. Given a representative image patch (sub-image), the algorithm returns a ranked ensemble of image patches throughout the entire whole-slide histology section that exhibit the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in a second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcast to all worker processes. Each worker process is dynamically assigned an image by the master process, searches it, and returns a ranked list of similar patches in the image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens. We have achieved excellent image retrieval performance: the recall rate within the first 40 retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
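    The first-stage descriptor is easy to picture in code. The sketch below is a hypothetical re-implementation for a grayscale patch, using the Chebyshev distance to form the square annuli; the paper's refinement stage, which splits each annular bin into eight segments and uses color histograms, is omitted.

        import numpy as np

        def annular_histogram(patch, n_rings=4, n_bins=16):
            # Concentric square annuli around the patch centre, one intensity
            # histogram per annulus, concatenated coarse-to-fine.
            h, w = patch.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            yy, xx = np.mgrid[0:h, 0:w]
            # Chebyshev (max-coordinate) distance yields square, not circular, rings.
            d = np.maximum(np.abs(yy - cy) / (h / 2.0), np.abs(xx - cx) / (w / 2.0))
            ring = np.minimum((d * n_rings).astype(int), n_rings - 1)
            feats = []
            for r in range(n_rings):
                hist, _ = np.histogram(patch[ring == r], bins=n_bins,
                                       range=(0, 256), density=True)
                feats.append(hist)
            return np.concatenate(feats)

        patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
        print(annular_histogram(patch).shape)   # (n_rings * n_bins,) = (64,)

    A ranked candidate list can then be refined stage by stage, comparing finer descriptors only for surviving candidates, in the spirit of the two-stage search described above.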

  12. K-means clustering for optimal partitioning and dynamic load balancing of parallel hierarchical N-body simulations

    International Nuclear Information System (INIS)

    Marzouk, Youssef M.; Ghoniem, Ahmed F.

    2005-01-01

    A number of complex physical problems can be approached through N-body simulation, from fluid flow at high Reynolds number to gravitational astrophysics and molecular dynamics. In all these applications, direct summation is prohibitively expensive for large N and thus hierarchical methods are employed for fast summation. This work introduces new algorithms, based on k-means clustering, for partitioning parallel hierarchical N-body interactions. We demonstrate that the number of particle-cluster interactions and the order at which they are performed are directly affected by partition geometry. Weighted k-means partitions minimize the sum of clusters' second moments and create well-localized domains, and thus reduce the computational cost of N-body approximations by enabling the use of lower-order approximations and fewer cells. We also introduce compatible techniques for dynamic load balancing, including adaptive scaling of cluster volumes and adaptive redistribution of cluster centroids. We demonstrate the performance of these algorithms by constructing a parallel treecode for vortex particle simulations, based on the serial variable-order Cartesian code developed by Lindsay and Krasny [Journal of Computational Physics 172 (2) (2001) 879-907]. The method is applied to vortex simulations of a transverse jet. Results show outstanding parallel efficiencies even at high concurrencies, with velocity evaluation errors maintained at or below their serial values; on a realistic distribution of 1.2 million vortex particles, we observe a parallel efficiency of 98% on 1024 processors. Excellent load balance is achieved even in the face of several obstacles, such as an irregular, time-evolving particle distribution containing a range of length scales and the continual introduction of new vortex particles throughout the domain. Moreover, results suggest that k-means yields a more efficient partition of the domain than a global oct-tree.
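    A minimal version of the partitioning idea, ignoring the paper's weighting and adaptive rebalancing, can be written with SciPy's k-means. The particle distribution and processor count below are arbitrary choices of this sketch.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        rng = np.random.default_rng(0)
        particles = rng.normal(size=(100_000, 3))     # particle positions
        p = 64                                        # number of processors

        # k-means with k = p assigns each particle to the nearest of p centroids,
        # producing compact, well-localized domains (the paper additionally weights
        # particles and adapts cluster volumes for load balance).
        centroids, owner = kmeans2(particles, p, minit='++')

        counts = np.bincount(owner, minlength=p)
        print("load imbalance (max/mean):", counts.max() / counts.mean())

    Well-localized domains keep each processor's far field cheap to approximate, which is why the partition geometry directly affects the cost of the tree-based summation.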

  13. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.

  14. Hierarchical modeling and analysis for spatial data

    CERN Document Server

    Banerjee, Sudipto; Gelfand, Alan E

    2003-01-01

    Among the many uses of hierarchical modeling, its application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and data analysis.

  15. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster data into churner and nonchurner groups and to filter out unrepresentative data or outliers. The clustered data are then used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.

  16. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

  17. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete-event networks for robotic systems. Following the hierarchical approach, the Petri net is analysed from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled, and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use and yields representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete-event model as a real-time operating system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple to apply on general-purpose computers, the analysis, modelling, and control of complex manufacturing systems can be achieved using Petri nets; discrete-event formalisms are a pragmatic tool for modelling industrial systems. To capture timing, a timed Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulating the robotic system with timed Petri nets offers the opportunity to observe its timing behaviour: transport and transmission times measured on the spot yield graphs showing the average time per transport activity for each parameter set of finished products.

  18. Hierarchical modeling and its numerical implementation for layered thin elastic structures

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)]

    2017-05-15

    Thin elastic structures such as beam- and plate-like structures and laminates are characterized by their small thickness, which leads to classical plate and laminate theories in which the displacement fields through the thickness are assumed to be linear or higher-order polynomials. These classical theories are either insufficient to represent the complex stress variation through the thickness or encounter the accuracy-versus-computational-cost dilemma. To overcome the inherent problems of classical theories, the concept of hierarchical modeling has emerged. In hierarchical modeling, hierarchical models with different model levels are selected and combined within a structure domain in order to distribute the modeling error as uniformly as possible throughout the problem domain. The purpose of the current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composites. To this end, hierarchical models are constructed and hierarchical modeling is implemented by selectively adjusting the level of the hierarchical models. The major characteristics of the hierarchical models are also investigated through numerical experiments.

  19. A hierarchical approach to reducing communication in parallel graph algorithms

    KAUST Repository

    Harshvardhan,

    2015-01-01

    Large-scale graph computing has become critical due to the ever-increasing size of data. However, distributed graph computations are limited in their scalability and performance due to the heavy communication inherent in such computations. This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each of the neighbors of a vertex. Our proposed approach recognizes this, and reduces the communication performed by the algorithm, without change to user code, through a hierarchical machine model imposed upon the input graph. The hierarchical model takes advantage of locale information about neighboring vertices to reduce communication, both in message volume and in total number of bytes sent. It is also able to better exploit the machine hierarchy to further reduce communication costs by aggregating traffic between different levels of the machine hierarchy. Results of an implementation in the STAPL GL show improved scalability and performance over the traditional level-synchronous approach, with 2.5x to 8x improvements for a variety of graph algorithms at 12,000+ cores.

  20. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    Science.gov (United States)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We tested the Basel 2006 fluid-induced seismic case study to prove that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.

  1. Petascale Hierarchical Modeling VIA Parallel Execution

    Energy Technology Data Exchange (ETDEWEB)

    Gelman, Andrew [Principal Investigator]

    2014-04-14

    The research allows more effective model building. By allowing researchers to fit complex models to large datasets in a scalable manner, our algorithms and software enable more effective scientific research. In the new area of “big data,” it is often necessary to fit “big models” to adjust for systematic differences between sample and population. For this task, scalable and efficient model-fitting tools are needed, and these have been achieved with our new Hamiltonian Monte Carlo algorithm, the no-U-turn sampler, and our new C++ program, Stan. In layman’s terms, our research enables researchers to create improved mathematical models for large and complex systems.

  2. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayesian modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in an open-source database, FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, and then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we have only sparse data, a ubiquitous problem in volcanology.
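    In generic notation (this sketch's, not the authors'), the two-level hierarchical linear model referred to has the form

        $y_{ij} = \alpha_j + \beta_j x_{ij} + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma^2), \qquad (\alpha_j, \beta_j) \sim N\big((\mu_\alpha, \mu_\beta), \Sigma\big),$

    where $i$ indexes individual dome-collapse flows, $j$ indexes volcanoes, $x$ is a mobility covariate (e.g., log flow volume) and $y$ the response (e.g., log H/L). Because the volcano-level intercepts and slopes share a common distribution, estimates for sparse-data volcanoes are shrunk toward the global relationship, which is the uncertainty reduction described above.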

  3. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

    Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts at three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, the appearance context features and the interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at the different levels, so that contexts at all levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level improves event recognition performance, and jointly integrating the three levels of context through our hierarchical model achieves the best performance.

  4. Hierarchical Bayesian Markov switching models with application to predicting spawning success of shovelnose sturgeon

    Science.gov (United States)

    Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.

    2009-01-01

    The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high-frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.
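    A minimal sketch of the eigenvalue predictor, under this sketch's assumption that it is the second eigenvalue of the fitted 2x2 transition matrix (which measures regime persistence in a two-state chain):

        import numpy as np

        # Hypothetical transition matrix from a fitted 2-state Markov switching
        # model; the numbers are made up for illustration.
        P = np.array([[0.9, 0.1],
                      [0.3, 0.7]])
        lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
        # lam[0] == 1 for any stochastic matrix; for a 2x2 matrix the second
        # eigenvalue equals trace(P) - 1 = p11 + p22 - 1.
        print(lam[1])   # 0.6 here; this scalar enters the regression as a covariate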

  5. Hierarchical Semantic Model of Geovideo

    Directory of Open Access Journals (Sweden)

    XIE Xiao

    2015-05-01

    Public security incidents are becoming increasingly challenging with regard to their new features, including multi-scale mobility, multistage dynamic evolution, and spatiotemporal concurrency and uncertainty in the complex urban environment. However, existing video models, which were designed for independent archiving or local analysis of surveillance video, seriously inhibit emergency response to these urgent requirements. Aiming at an explicit representation of the change mechanism in video, this paper proposes a novel hierarchical geovideo semantic model using UML. The model is characterized by a hierarchical representation of both data structure and semantics based on the change-oriented three domains (feature domain, process domain and event domain) instead of an overall semantic description of the video stream, and it combines geographical semantics with video content semantics in support of global semantic association between multiple geovideo datasets. Public security incidents captured by video surveillance are examined as an example to illustrate the validity of the model.

  6. What are hierarchical models and how do we analyze them?

    Science.gov (United States)

    Royle, Andy

    2016-01-01

    In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
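
    A minimal sketch of the site-occupancy likelihood may help fix ideas: each site is occupied with probability psi, and an occupied site yields detections with probability p on each of J visits, so an all-zero detection history can arise in two ways. The simulation settings below are illustrative assumptions, not taken from the book.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(1)
        S, J = 200, 5                       # sites, repeat visits
        psi_true, p_true = 0.6, 0.4         # occupancy and detection probabilities
        z = rng.random(S) < psi_true        # latent occupancy states
        y = rng.binomial(J, p_true * z)     # detections per site

        def negloglik(theta):
            psi, p = expit(theta)           # keep probabilities in (0, 1)
            # Marginal likelihood per site (binomial coefficient omitted: it is
            # constant in the parameters): occupied term + unoccupied term.
            occ = psi * p**y * (1 - p)**(J - y)
            unocc = (1 - psi) * (y == 0)
            return -np.sum(np.log(occ + unocc))

        fit = minimize(negloglik, x0=np.zeros(2))
        print(expit(fit.x))                 # roughly [0.6, 0.4]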

  7. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream…

  8. cudaBayesreg: Parallel Implementation of a Bayesian Multilevel Model for fMRI Data Analysis

    Directory of Open Access Journals (Sweden)

    Adelino R. Ferreira da Silva

    2011-10-01

    Graphics processing units (GPUs) are rapidly gaining maturity as powerful general parallel computing devices. A key feature in the development of modern GPUs has been the advancement of the programming model and programming tools. Compute Unified Device Architecture (CUDA) is a software platform for massively parallel high-performance computing on Nvidia many-core GPUs. In functional magnetic resonance imaging (fMRI), the volume of the data to be processed, and the type of statistical analysis to perform, call for high-performance computing strategies. In this work, we present the main features of the R-CUDA package cudaBayesreg, which implements in CUDA the core of a Bayesian multilevel model for the analysis of brain fMRI data. The statistical model implements a Gibbs sampler for multilevel/hierarchical linear models with a normal prior. The main contribution to the increased performance comes from the use of separate threads for fitting the linear regression model at each voxel in parallel. The R-CUDA implementation of the Bayesian model proposed here has been able to significantly reduce the run time of the Markov chain Monte Carlo (MCMC) simulations used in Bayesian fMRI data analyses. Presently, cudaBayesreg is only configured for Linux systems with Nvidia CUDA support.
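
    The performance-critical idea, one independent regression per voxel, can be sketched on the CPU; the actual package maps each voxel to a CUDA thread and runs a Gibbs sampler rather than the plain least-squares fit shown here. Sizes and data below are synthetic assumptions.

        import numpy as np
        from multiprocessing import Pool

        T, V = 120, 10_000                   # time points, voxels
        rng = np.random.default_rng(2)
        X = np.column_stack([np.ones(T), rng.normal(size=T)])  # intercept + regressor
        Y = rng.normal(size=(T, V))                            # synthetic BOLD signals

        def fit_chunk(cols):
            # Each voxel's regression is independent of every other voxel's,
            # which is exactly what makes a thread-per-voxel GPU mapping natural.
            return np.linalg.lstsq(X, Y[:, cols], rcond=None)[0]

        if __name__ == "__main__":
            chunks = np.array_split(np.arange(V), 8)
            with Pool(8) as pool:
                betas = np.hstack(pool.map(fit_chunk, chunks))
            print(betas.shape)               # (2, V): one coefficient pair per voxel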

  9. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    Science.gov (United States)

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  10. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A. S., and F. T.-C. Tsai (2014), Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under the Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.
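
    The variance segregation at the heart of HBMA is the law of total variance applied over the candidate models; a one-level sketch with made-up numbers is below. In the full method this decomposition is applied recursively at every node of the BMA tree, once per uncertain model component.

        import numpy as np

        # Posterior model probabilities, predictive means, and within-model
        # variances for three candidate models (illustrative numbers).
        p    = np.array([0.5, 0.3, 0.2])
        mean = np.array([10.0, 12.0, 9.0])
        var  = np.array([1.0, 2.0, 1.5])

        bma_mean    = np.sum(p * mean)
        within_var  = np.sum(p * var)                     # expected within-model variance
        between_var = np.sum(p * (mean - bma_mean) ** 2)  # spread of the model means
        total_var   = within_var + between_var            # law of total variance

        print(bma_mean, within_var, between_var, total_var)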

  11. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    Directory of Open Access Journals (Sweden)

    Yunqing Rao

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem involving n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is minimizing the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for a better solution, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence rates and resolve local convergence issues, a kind of adaptive crossover probability and mutation probability is used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem.

  13. An improved hierarchical genetic algorithm for sheet cutting scheduling with process constraints.

    Science.gov (United States)

    Rao, Yunqing; Qi, Dezhong; Li, Jinling

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem involving n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is minimizing the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for a better solution, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence rates and resolve local convergence issues, a kind of adaptive crossover probability and mutation probability is used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem.
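
    The abstract does not give the exact adaptive rule, so the sketch below shows the classic fitness-adaptive scheme of Srinivas and Patnaik (1994), which adjusts crossover and mutation probabilities per individual in the same spirit: strong individuals are disturbed less as the population converges, weak ones are explored aggressively.

        def adaptive_rates(f, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
            """Fitness-adaptive crossover/mutation probabilities (maximization).

            f is the fitness of the better parent (crossover) or of the
            individual (mutation); k1..k4 are tuning constants in (0, 1].
            """
            if f >= f_avg:   # above-average: protect good building blocks
                scale = (f_max - f) / max(f_max - f_avg, 1e-12)
                return k1 * scale, k2 * scale
            return k3, k4    # below-average: maximal disruption

        print(adaptive_rates(f=0.9, f_max=1.0, f_avg=0.6))   # (0.25, 0.125)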

  14. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  15. Hierarchical composites: Analysis of damage evolution based on fiber bundle model

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon

    2011-01-01

    A computational model of multiscale composites is developed on the basis of the fiber bundle model with the hierarchical load sharing rule, and employed to study the effect of the microstructures of hierarchical composites on their damage resistance. Two types of hierarchical materials were considered…
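
    A minimal fiber bundle simulation conveys the damage-evolution mechanics; for brevity the sketch below uses equal load sharing within a single bundle, whereas the paper's model redistributes load hierarchically through a tree of sub-bundles. The Weibull strength distribution and its shape parameter are illustrative assumptions.

        import numpy as np

        def bundle_strength(n_fibers, rng):
            """Quasi-static strength of an equal-load-sharing fiber bundle."""
            thresholds = np.sort(rng.weibull(2.0, n_fibers))   # fiber strengths
            # After the k weakest fibers fail, the survivors share the load, so
            # the bundle can sustain thresholds[k] * (n - k) at breaking step k.
            survivors = n_fibers - np.arange(n_fibers)
            return np.max(thresholds * survivors) / n_fibers   # strength per fiber

        rng = np.random.default_rng(3)
        print(np.mean([bundle_strength(10_000, rng) for _ in range(10)]))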

  16. Robust Real-Time Music Transcription with a Compositional Hierarchical Model.

    Science.gov (United States)

    Pesek, Matevž; Leonardis, Aleš; Marolt, Matija

    2017-01-01

    The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data, transparency, which enables insights into the learned representation, as well as robustness and speed which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers based on statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance for the multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.

  17. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the two main quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial quality parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry for the main Portuguese railway line Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures leads to a better model fit

  18. Hierarchical Kinematic Modelling and Optimal Design of a Novel Hexapod Robot with Integrated Limb Mechanism

    Directory of Open Access Journals (Sweden)

    Guiyang Xin

    2015-09-01

    This paper presents a novel hexapod robot, hereafter named PH-Robot, with three-degree-of-freedom (3-DOF) parallel leg mechanisms based on the concept of an integrated limb mechanism (ILM) for the integration of legged locomotion and arm manipulation. The kinematic model plays an important role in the parametric optimal design and motion planning of robots. However, models of parallel mechanisms are often difficult to obtain because of the implicit relationship between the motions of actuated joints and the motion of the moving platform. In order to derive the kinematic equations of the proposed hexapod robot, an extended hierarchical kinematic modelling method is proposed. According to the kinematic model, the geometrical parameters of the leg are optimized using a comprehensive objective function that considers both dexterity and payload. A comparison of performance indices shows that PH-Robot has distinct advantages in accuracy and load capacity over a robot with serial leg mechanisms. The reachable workspace of the leg verifies its ability to walk and manipulate. The results of the trajectory tracking experiment demonstrate the correctness of the kinematic model of the hexapod robot.

  19. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task-parallelism and data-parallelism. Many problems need both kinds of parallelism. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers using a programming paradigm which extends array programming to task management. Our language introduces arrays of independent tasks, executed separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data-parallelism within task-parallelism. To manipulate tasks and data in a symmetrical way, we propose meta-operations which behave identically on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE so as to profit from the locally shared memory, the hardware virtualization, and the multiplicity of communication networks. We also analyse a typical application of such an architecture. Finite element schemes for fluid mechanics need powerful parallel computers and require substantial floating-point capability. Lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable and modular, and need no floating-point computation, but they introduce numerical noise. Boltzmann lattice gases offer high numerical precision, but need floating-point arithmetic and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr]

  20. Conceptual hierarchical modeling to describe wetland plant community organization

    Science.gov (United States)

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed this procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to the spatiotemporal scale of fluctuation, and 4) assemble the hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined that 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper-level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization and useful management models. © Society of Wetland Scientists 2009.
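
    Steps 1 and 2 of the procedure map directly onto standard tools; a sketch on synthetic abundance data is below. The Bray-Curtis dissimilarity, the linkage method, and the group count are illustrative choices, not necessarily the authors'.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist, squareform
        from sklearn.manifold import MDS

        rng = np.random.default_rng(4)
        abundance = rng.poisson(3.0, size=(40, 25)).astype(float)  # wetlands x species

        # Step 1: delineate wetland groups with (hierarchical) cluster analysis.
        d = pdist(abundance, metric="braycurtis")
        groups = fcluster(linkage(d, method="average"), t=4, criterion="maxclust")

        # Step 2: non-metric multidimensional scaling of the same dissimilarities
        # yields the ordination axes interpreted as environmental gradients.
        nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                   random_state=0)
        scores = nmds.fit_transform(squareform(d))

        print(groups[:10], scores.shape)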

  1. Hierarchical modeling of active materials

    International Nuclear Information System (INIS)

    Taya, Minoru

    2003-01-01

    Intelligent (or smart) materials are increasingly becoming key materials for use in actuators and sensors. If an intelligent material is used as a sensor, it can be embedded in a variety of structures, functioning as a health monitoring system that extends their service life with high reliability. If an intelligent material is used as an active material in an actuator, it plays the key role of producing the dynamic movement of the actuator under a set of stimuli. This talk covers two different active materials for actuators: (1) a piezoelectric laminate with an FGM microstructure, and (2) a ferromagnetic shape memory alloy (FSMA). The advantage of using the FGM piezo laminate is enhanced fatigue life while maintaining large bending displacement, while that of the FSMA is fast actuation combined with large force and stroke capability. Hierarchical modeling of these active materials is a key design step in optimizing their microstructures to enhance their performance. I will briefly discuss the hierarchical modeling of the two active materials. For the FGM piezo laminate, we use both a micromechanical model and laminate theory, while for the FSMA, modeling that interfaces nanostructure, microstructure and macro-behavior is discussed. (author)

  2. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-09-26

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic on large, dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example in which the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
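
    For reference, the objective being accelerated is the ordinary dense Gaussian log-likelihood under a Matérn covariance; a plain $\mathcal{O}(n^3)$ Python version is sketched below (the $\mathcal{H}$-matrix compression itself lives in HLIBpro and is not reproduced here). The jitter term and the test parameter values are illustrative assumptions.

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.special import gamma, kv

        def matern(D, ell, sigma2, nu):
            """Matérn covariance for a matrix of pairwise distances D."""
            Ds = np.where(D == 0.0, 1e-12, D)        # avoid 0 * inf on the diagonal
            s = np.sqrt(2.0 * nu) * Ds / ell
            C = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * s ** nu * kv(nu, s)
            return np.where(D == 0.0, sigma2, C)

        def neg_loglik(params, X, z):
            ell, sigma2, nu = params
            C = matern(cdist(X, X), ell, sigma2, nu) + 1e-8 * np.eye(len(z))
            L = np.linalg.cholesky(C)                # dense; H-matrices make this cheap
            alpha = np.linalg.solve(L, z)
            return (np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha
                    + 0.5 * len(z) * np.log(2.0 * np.pi))

        rng = np.random.default_rng(5)
        X = rng.random((400, 2))                     # locations (small dense demo)
        C0 = matern(cdist(X, X), 0.3, 1.0, 0.5) + 1e-8 * np.eye(400)
        z = np.linalg.cholesky(C0) @ rng.normal(size=400)
        print(neg_loglik((0.3, 1.0, 0.5), X, z))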

  3. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-09-24

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic on large, dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example in which the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  4. The Revised Hierarchical Model: A critical review and assessment

    OpenAIRE

    Kroll, Judith F.; van Hell, Janet G.; Tokowicz, Natasha; Green, David W.

    2010-01-01

    Brysbaert and Duyck (2009) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on nonselective access in bilingual word recognition. In this brief response, we first review the history of the Revised Hierarchical Model (RHM), consider the set of issues that it was proposed to address, and then evaluate the evidence that supports…

  5. Scalable Hierarchical Algorithms for stochastic PDEs and UQ

    KAUST Repository

    Litvinenko, Alexander; Chá vez, Gustavo; Keyes,David; Ltaief, Hatem; Yokota, Rio

    2015-01-01

    …number of degrees of freedom in the discretization. Storage is likewise reduced to log-linear. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered…

  6. Hierarchical graphs for rule-based modeling of biochemical systems

    Directory of Open Access Journals (Sweden)

    Hu Bin

    2011-02-01

    Abstract Background: In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results: For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions: Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for…

  7. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC), which keeps the latent variables in the focus of the analysis, and the marginalized DIC (mDIC), which integrates them out. Despite the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
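
    The replicate-sample idea can be sketched on a toy normal-normal model: for each draw of the hyperparameters, the latent group effects are integrated out by Monte Carlo over fresh replicate draws. Everything below (the model, the stand-in "posterior" draws, the sample sizes) is an illustrative assumption, not the paper's implementation.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        y = rng.normal(0.0, np.sqrt(2.0), size=(20, 5))   # groups x observations

        def marginal_loglik(mu, tau, y, R=200):
            """Monte Carlo estimate of log p(y | mu, tau) with the latent group
            effects b_i ~ N(mu, tau^2) integrated out via R replicate draws."""
            b = rng.normal(mu, tau, size=(R, y.shape[0]))
            ll = norm.logpdf(y[None, :, :], loc=b[:, :, None]).sum(axis=2)  # R x groups
            m = ll.max(axis=0)                            # log-mean-exp per group
            return np.sum(m + np.log(np.exp(ll - m).mean(axis=0)))

        # Stand-ins for MCMC posterior draws of the hyperparameters (mu, tau).
        draws = [(rng.normal(0, 0.1), abs(rng.normal(1, 0.1))) for _ in range(50)]
        D = np.array([-2.0 * marginal_loglik(mu, tau, y) for mu, tau in draws])
        mu_bar = np.mean([d[0] for d in draws])
        tau_bar = np.mean([d[1] for d in draws])
        pD = D.mean() - (-2.0 * marginal_loglik(mu_bar, tau_bar, y))
        print("mDIC ~", D.mean() + pD)                    # Dbar + pD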

  8. A hierarchical model for ordinal matrix factorization

    DEFF Research Database (Denmark)

    Paquet, Ulrich; Thomson, Blaise; Winther, Ole

    2012-01-01

    This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based...

  9. A hierarchical spatiotemporal analog forecasting model for count data.

    Science.gov (United States)

    McDermott, Patrick L; Wikle, Christopher K; Millspaugh, Joshua

    2018-01-01

    Analog forecasting is a mechanism-free nonlinear method that forecasts a system forward in time by examining how past states deemed similar to the current state moved forward. Previous applications of analog forecasting have been successful at producing robust forecasts for a variety of ecological and physical processes, but the method has typically been presented as an empirical or heuristic procedure rather than as a formal statistical model. The methodology presented here extends the model-based analog method of McDermott and Wikle (Environmetrics, 27, 2016, 70) by placing analog forecasting within a fully hierarchical statistical framework that can accommodate count observations. Using a Bayesian approach, the hierarchical analog model is able to rigorously quantify the uncertainty associated with forecasts. Forecasting waterfowl settling patterns in the northwestern United States and Canada is conducted by applying the hierarchical analog model to a breeding population survey dataset. Sea surface temperature (SST) in the Pacific Ocean is used to help identify potential analogs for the waterfowl settling patterns.

  10. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  11. Hierarchical modeling of molecular energies using a deep neural network

    Science.gov (United States)

    Lubbers, Nicholas; Smith, Justin S.; Barros, Kipton

    2018-06-01

    We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves state-of-the-art performance on a dataset of 131k ground-state organic molecules and predicts energies with a 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.
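
    The decomposition itself is easy to caricature: each layer refines the atom representation and contributes its own readout term, and the predicted energy is the sum of those terms. The toy below uses random, untrained weights purely to show the bookkeeping; it is not the HIP-NN architecture.

        import numpy as np

        rng = np.random.default_rng(7)

        def hierarchical_energy(atom_features, widths=(32, 16)):
            """Toy hierarchical readout: E = sum over layers of per-atom terms."""
            h, total, terms = atom_features, 0.0, []
            for w in widths:
                W = rng.normal(0.0, h.shape[1] ** -0.5, size=(h.shape[1], w))
                h = np.tanh(h @ W)                  # refine the representation
                r = rng.normal(0.0, w ** -0.5, size=w)
                e = float(np.sum(h @ r))            # this layer's energy term
                terms.append(e)
                total += e
            return total, terms

        E, terms = hierarchical_energy(rng.normal(size=(12, 8)))  # 12 atoms, 8 features
        print(E, terms)   # total energy and its per-layer hierarchical terms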

  12. Hierarchical and coupling model of factors influencing vessel traffic flow.

    Science.gov (United States)

    Liu, Zhao; Liu, Jingxian; Li, Huanhuan; Li, Zongzhi; Tan, Zhirong; Liu, Ryan Wen; Liu, Yi

    2017-01-01

    Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores the interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, data obtained from the Port of Tianjin are used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors, and that the coupling model is efficient in analyzing the factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in a waterway transportation system.

  13. Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Chavez, Gustavo; Keyes, David E.; Ltaief, Hatem; Yokota, Rio

    2015-01-01

    …number of degrees of freedom in the discretization. Storage is likewise reduced to log-linear. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by R…

  14. A distributed-memory hierarchical solver for general sparse linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering

    2017-12-20

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
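
    The structural fact the solver exploits is that fill-in blocks coupling well-separated subdomains are numerically low-rank. A toy demonstration with a smooth kernel block and a truncated SVD is below; real hierarchical solvers use cheaper compression than a full SVD, so this is only a sketch of the principle.

        import numpy as np

        rng = np.random.default_rng(8)

        # Interaction block between two well-separated point clusters
        # (illustrative stand-in for a fill-in block).
        x = rng.random(300)
        y = rng.random(300) + 5.0
        A = 1.0 / np.abs(x[:, None] - y[None, :])

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = int(np.sum(s > 1e-8 * s[0]))          # numerical rank at 1e-8 accuracy
        Ak = (U[:, :k] * s[:k]) @ Vt[:k]

        # Storage drops from 300*300 entries to k*(300+300); accuracy is preserved.
        print(k, np.linalg.norm(A - Ak) / np.linalg.norm(A))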

  15. Hierarchical Bayesian Models of Subtask Learning

    Science.gov (United States)

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  16. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    Science.gov (United States)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a pre-built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relations and mean location relations between different organs are also captured in the model. Following the hierarchical structure and location relations, the fuzzy shape models of the different organs are registered to the given target image to achieve object recognition. A fuzzy connectedness delineation method is then used to obtain the final segmentation of the organs, with seed points provided by recognition. The hierarchical structure and location relations integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. Results on real images are presented and shown to be better than a recently reported fuzzy-model-based anatomy recognition strategy.

  17. Hierarchical and coupling model of factors influencing vessel traffic flow.

    Directory of Open Access Journals (Sweden)

    Zhao Liu

    Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores the interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, data obtained from the Port of Tianjin are used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors, and that the coupling model is efficient in analyzing the factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in a waterway transportation system.

  18. Hierarchical Modelling of Flood Risk for Engineering Decision Analysis

    DEFF Research Database (Denmark)

    Custer, Rocco

    …protection structures in the hierarchical flood protection system is identified. To optimise the design of protection structures, fragility and vulnerability models must allow for consideration of decision alternatives. While such vulnerability models are available for large protection structures (e.g. … systems, as well as the implementation of the flood risk analysis methodology and the vulnerability modelling approach, are illustrated with an example application. In summary, the present thesis provides a characterisation of hierarchical flood protection systems as well as several methodologies to model … and robust. Traditional risk management solutions, e.g. dike construction, are not particularly flexible, as they are difficult to adapt to changing risk. Conversely, the recent concept of integrated flood risk management, entailing a combination of several structural and non-structural risk management…

  19. Hierarchical Swarm Model: A New Approach to Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2010-01-01

    This paper presents a novel optimization model called hierarchical swarm optimization (HSO), which simulates a natural hierarchical complex system from which more complex intelligence can emerge for complex problem solving. The proposed model is intended to suggest ways in which the performance of HSO-based algorithms on complex optimization problems can be significantly improved. This performance improvement is obtained by constructing HSO hierarchies, meaning that an agent in a higher-level swarm can be composed of swarms of agents from a lower level, and different swarms at different levels evolve on different spatiotemporal scales. A novel optimization algorithm (named PS2O), based on the HSO model, is instantiated and tested to illustrate the ideas of the HSO model clearly. Experiments were conducted on a set of 17 benchmark optimization problems including both continuous and discrete cases. The results demonstrate remarkable performance of the PS2O algorithm on all chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms.

  20. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    Exploring these new developments, Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology, Second Edition provides an up-to-date, cohesive account of the full range of Bayesian disease mapping methods and applications...

  1. Parallel Boltzmann machines : a mathematical model

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.

    1991-01-01

    A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a

  2. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  3. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  4. Hierarchic modeling of heat exchanger thermal hydraulics

    International Nuclear Information System (INIS)

    Horvat, A.; Koncar, B.

    2002-01-01

    The Volume Averaging Technique (VAT) is employed in order to model heat exchanger cross-flow as a porous media flow. As averaging of the transport equations leads to a closure problem, separate relations are introduced to model interphase momentum and heat transfer between the fluid flow and the solid structure. Hierarchic modeling is used to calculate the local drag coefficient C_d as a function of the Reynolds number Re_h. For that purpose a separate model of the REV is built and a DNS of flow through the REV is performed. The local values of the heat transfer coefficient h are obtained from the available literature. The geometry of the simulation domain and the boundary conditions follow the geometry of the experimental test section used at U.C.L.A. The calculated temperature fields reveal that the geometry with the denser pin-fin arrangement (HX1) heats the fluid flow faster. The temperature field in HX2 exhibits the formation of a thermal boundary layer between pin-fins, which plays a significant role in the overall thermal performance of the heat exchanger. Although the discrepancies in the whole-section drag coefficient C_d presented here are large, we believe that hierarchic modeling is an appropriate strategy for calculating complex transport phenomena in heat exchanger geometries. (author)

  5. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Science.gov (United States)

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  6. Towards a streaming model for nested data parallelism

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner; Filinski, Andrzej

    2013-01-01

    The language-integrated cost semantics for nested data parallelism pioneered by NESL provides an intuitive, high-level model for predicting performance and scalability of parallel algorithms with reasonable accuracy. However, this predictability, obtained through a uniform, parallelism-flattening… …-processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work in progress…

  7. AN INTEGER PROGRAMMING MODEL FOR HIERARCHICAL WORKFORCE

    Directory of Open Access Journals (Sweden)

    BANU SUNGUR

    2013-06-01

    The model presented in this paper is based on the model developed by Billionnet for the hierarchical workforce problem. In Billionnet's model, the weekly working hours of workers are not taken into consideration when determining the workers' weekly costs. In our model, the weekly costs per worker are reduced in proportion to the working hours per week. Our model is illustrated on Billionnet's example. The models in question are compared and evaluated on the basis of the results obtained from the example problem. A reduction in the total cost is achieved by the proposed model.

  8. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  9. The Realized Hierarchical Archimedean Copula in Risk Modelling

    Directory of Open Access Journals (Sweden)

    Ostap Okhrin

    2017-06-01

    This paper introduces the concept of the realized hierarchical Archimedean copula (rHAC). The proposed approach inherits the ability of the copula to capture the dependencies among financial time series, and combines it with additional information contained in high-frequency data. The considered model does not suffer from the curse of dimensionality, and is able to accurately predict high-dimensional distributions. This flexibility is obtained by using a hierarchical structure in the copula. The time variability of the model is provided by daily forecasts of the realized correlation matrix, which is used to estimate the structure and the parameters of the rHAC. Extensive simulation studies show the validity of the estimator based on this realized correlation matrix, and its performance in comparison to benchmark models. The application of the estimator to one-day-ahead Value at Risk (VaR) prediction using high-frequency data exhibits good forecasting properties for a multivariate portfolio.

  10. Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study

    NARCIS (Netherlands)

    Rijsdijk, F.V.; Vernon, P.A.; Boomsma, D.I.

    2002-01-01

    Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how well a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard Progressive Matrices.

  11. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates code that is portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  12. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
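
    A standard single-level diagnostic that carries over to each level of an HLM is the variance inflation factor; a self-contained sketch is below. The data are synthetic, and this is a generic VIF computation rather than the authors' specific procedure.

        import numpy as np

        def vif(X):
            """Variance inflation factor of each column: 1 / (1 - R_j^2), where
            R_j^2 comes from regressing column j on the remaining columns."""
            n, p = X.shape
            out = []
            for j in range(p):
                Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
                beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
                resid = X[:, j] - Z @ beta
                r2 = 1.0 - resid.var() / X[:, j].var()
                out.append(1.0 / (1.0 - r2))
            return np.array(out)

        rng = np.random.default_rng(9)
        x1 = rng.normal(size=500)
        x2 = x1 + 0.1 * rng.normal(size=500)   # nearly collinear with x1
        x3 = rng.normal(size=500)
        print(vif(np.column_stack([x1, x2, x3])))   # x1, x2 inflate; x3 stays near 1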

  13. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  14. The Medial Temporal Lobe – Conduit of Parallel Connectivity: A model for Attention, Memory, and Perception.

    Directory of Open Access Journals (Sweden)

    Brian B. Mozaffari

    2014-11-01

    Based on the notion that the brain is equipped with a hierarchical organization which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL) – located deep in the hierarchy – serves as a bridge connecting supra- to infra-MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of rapid, complex visual phenomena. Bypassing intermediate hierarchy levels, information conveyed through the MTL 'bridge' allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these 'bridge' predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided along with a new theory, the Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered a secondary process, occurring after an MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid the processing of transient but structured perceptual events by allowing communication between upper and lower levels without calling on intermediate levels of representation.

  15. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.

  16. Hierarchical ordering with partial pairwise hierarchical relationships on the macaque brain data sets.

    Directory of Open Access Journals (Sweden)

    Woosang Lim

    Full Text Available Hierarchical organization of information processing in brain networks is known to exist and has been widely studied. To find proper hierarchical structures in the macaque brain, traditional methods need the entire set of pairwise hierarchical relationships between cortical areas. In this paper, we present a new method that discovers hierarchical structures of macaque brain networks by using partial information about pairwise hierarchical relationships. Our method uses graph-based manifold learning to exploit inherent relationships, and computes pseudo distances of hierarchical levels for every pair of cortical areas. Then, we compute hierarchy levels of all cortical areas by minimizing the sum of squared hierarchical distance errors with the hierarchical information of a few cortical areas. We evaluate our method on the macaque brain data sets whose true hierarchical levels are given by the FV91 model. The experimental results show that the hierarchy levels computed by our method are similar to the FV91 model, and its errors are much smaller than those of hierarchical clustering approaches.
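
    The core computation the abstract describes, recovering hierarchy levels from partial pairwise information by minimizing squared errors, reduces to a linear least-squares problem. The sketch below is a toy reconstruction under assumed inputs (the `pairs` pseudo-distances and the `anchors` with known levels are invented for illustration), not the authors' manifold-learning pipeline.

        import numpy as np

        # Assumed signed pseudo-distances d ~ h[i] - h[j] for observed pairs,
        # plus known hierarchy levels for a few anchor areas.
        pairs = [(0, 1, 1.0), (1, 2, 1.2), (0, 2, 2.1), (3, 2, -0.9)]
        anchors = {0: 0.0, 2: 2.0}
        n = 4

        rows, rhs = [], []
        for i, j, d in pairs:                 # one equation h_i - h_j = d per pair
            r = np.zeros(n); r[i], r[j] = 1.0, -1.0
            rows.append(r); rhs.append(d)
        for i, h in anchors.items():          # heavily weighted anchor constraints
            r = np.zeros(n); r[i] = 100.0
            rows.append(r); rhs.append(100.0 * h)

        levels, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        print(levels.round(2))                # estimated hierarchy level per area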

  17. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect for controlling and managing the software. By such evaluation, improvements in software process can be made. The software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named as fuzzy hierarchical usability model that can be easily integrated into the current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on the detailed comparison of the proposed model with the existing usability models.

  18. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    Science.gov (United States)

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

    The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model in every branching point of the thematic tree, and then integrates all the different local models to a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance which can shed light on "black-box" based machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
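
    The paper's central construction, one local classifier per branching point of the thematic tree integrated into a single global prediction, can be sketched as follows with scikit-learn. The two-level `TREE` and the class names are hypothetical, and the sketch assumes every coarse class has at least two fine classes in the training data; the authors' actual implementation is more general.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        TREE = {"vegetated": ["forest", "grassland"], "bare": ["rock", "sand"]}
        FINE_TO_COARSE = {f: c for c, fines in TREE.items() for f in fines}

        def fit_hierarchical(X, y_fine):
            """Fit one local RF per branching point of the thematic tree."""
            y_fine = np.asarray(y_fine)
            y_coarse = np.array([FINE_TO_COARSE[f] for f in y_fine])
            root = RandomForestClassifier(n_estimators=200).fit(X, y_coarse)
            local = {c: RandomForestClassifier(n_estimators=200)
                        .fit(X[y_coarse == c], y_fine[y_coarse == c])
                     for c in TREE}
            return root, local

        def predict_hierarchical(root, local, X):
            """Integrate the local models into a single global prediction."""
            coarse = root.predict(X)
            y_hat = np.empty(len(X), dtype=object)
            for c in np.unique(coarse):
                m = coarse == c
                y_hat[m] = local[c].predict(X[m])
            return y_hat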

  19. Iteration schemes for parallelizing models of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T{sub c} superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  20. A hierarchical model exhibiting the Kosterlitz-Thouless fixed point

    International Nuclear Information System (INIS)

    Marchetti, D.H.U.; Perez, J.F.

    1985-01-01

    A hierarchical model for 2-d Coulomb gases displaying a stable line of fixed points describing the Kosterlitz-Thouless phase transition is constructed. For Coulomb gases corresponding to Z sub(N) models these fixed points are stable for an intermediate temperature interval. (Author) [pt

  1. Fuzzy hierarchical model for risk assessment principles, concepts, and practical applications

    CERN Document Server

    Chan, Hing Kai

    2013-01-01

    Risk management is often complicated by situational uncertainties and the subjective preferences of decision makers. Fuzzy Hierarchical Model for Risk Assessment introduces a fuzzy-based hierarchical approach to solve risk management problems considering both qualitative and quantitative criteria to tackle imprecise information.   This approach is illustrated through a number of case studies using examples from the food, fashion and electronics sectors to cover a range of applications including supply chain management, green product design and green initiatives. These practical examples explore how this method can be adapted and fine-tuned to fit other industries as well.   Supported by an extensive literature review, Fuzzy Hierarchical Model for Risk Assessment comprehensively introduces a new method for project managers across all industries as well as researchers in risk management.

  2. A Hierarchical Risk Assessment Model Using the Evidential Reasoning Rule

    Directory of Open Access Journals (Sweden)

    Xiaoxiao Ji

    2017-02-01

    Full Text Available This paper aims to develop a hierarchical risk assessment model using the newly-developed evidential reasoning (ER) rule, which constitutes a generic conjunctive probabilistic reasoning process. In this paper, we first provide a brief introduction to the basics of the ER rule and emphasize its strengths in representing and aggregating uncertain information from multiple experts and sources. Further, we discuss the key steps of developing the hierarchical risk assessment framework systematically, including (1) formulation of the risk assessment hierarchy; (2) representation of both qualitative and quantitative information; (3) elicitation of attribute weights and information reliabilities; (4) aggregation of assessment information using the ER rule; and (5) quantification and ranking of risks using utility-based transformation. The proposed hierarchical risk assessment framework can potentially be applied to various complex and uncertain systems. A case study on the fire/explosion risk assessment of marine vessels demonstrates the applicability of the proposed risk assessment model.

  3. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.

    Science.gov (United States)

    Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana

    2016-01-01

    The omnipresent need for optimisation requires constant improvements of companies' business processes (BPs). Minimising the risk of inappropriate BPs being implemented is usually performed by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effective business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria that include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking of BPSS based on their technical characteristics by employing DEX and qualitative to quantitative (QQ) methodology. Consequently, the decision expert feeds the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria in the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS with respect to currently available results.

  4. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  5. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data-analytic results from three regression…

  6. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    Science.gov (United States)

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including
    * occurrence or occupancy models for estimating species distribution
    * abundance models based on many sampling protocols, including distance sampling
    * capture-recapture models with individual effects
    * spatial capture-recapture models based on camera trapping and related methods
    * population and metapopulation dynamic models
    * models of biodiversity, community structure and dynamics.
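
    As a minimal, self-contained illustration of the hierarchical idea behind the first item above (an explicit observation model layered on an ecological state model), the sketch below simulates a basic occupancy model and fits it by maximum likelihood; all parameter values are invented for demonstration.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import binom

        rng = np.random.default_rng(1)
        psi_true, p_true, n_sites, n_visits = 0.6, 0.3, 200, 5

        z = rng.binomial(1, psi_true, n_sites)      # latent occupancy states
        y = rng.binomial(n_visits, p_true * z)      # observed detection counts

        def nll(theta):
            psi, p = 1 / (1 + np.exp(-theta))       # logit -> probability
            lik = psi * binom.pmf(y, n_visits, p) + (1 - psi) * (y == 0)
            return -np.log(lik).sum()

        fit = minimize(nll, x0=np.zeros(2))
        psi_hat = 1 / (1 + np.exp(-fit.x[0]))
        print((y > 0).mean(), psi_hat)              # naive occupancy < ML estimate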

  7. Bottom-up learning of hierarchical models in a class of deterministic POMDP environments

    Directory of Open Access Journals (Sweden)

    Itoh Hideaki

    2015-09-01

    Full Text Available The theory of partially observable Markov decision processes (POMDPs) is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches for building such agents when the environments of the agents are unknown and large. To learn hierarchical models, bottom-up learning methods, in which learning takes place in a layer-by-layer manner from the lowest to the highest layer, are already extensively used in some research fields such as hidden Markov models and neural networks. However, little attention has been paid to bottom-up approaches for learning POMDP models. In this paper, we present a novel bottom-up learning algorithm for hierarchical POMDP models and prove that, by using this algorithm, a perfect model (i.e., a model that can perfectly predict future observations) can be learned, at least in a class of deterministic POMDP environments.

  8. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  9. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them a 3D FE model of hybrid composites with a nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers, and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them the investigation of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels, and of the use of nanoreinforcement effects. The main future directions...

  10. The Revised Hierarchical Model: A critical review and assessment

    NARCIS (Netherlands)

    Kroll, J.F.; Hell, J.G. van; Tokowicz, N.; Green, D.W.

    2010-01-01

    Brysbaert and Duyck (this issue) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on non-selective access in bilingual word recognition.

  11. Hierarchical modeling of cluster size in wildlife surveys

    Science.gov (United States)

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
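
    The bias mechanism is easy to reproduce numerically. In the toy simulation below (a hypothetical detection model in which each member of a cluster is detected independently, so detectability grows with cluster size), the average size of detected clusters exceeds the population average, which is exactly the cluster-size bias the hierarchical model corrects.

        import numpy as np

        rng = np.random.default_rng(7)
        sizes = rng.poisson(3, 10_000) + 1         # cluster sizes in the population
        p_detect = 1 - (1 - 0.25) ** sizes         # detectability increases with size
        seen = rng.random(10_000) < p_detect

        print(sizes.mean(), sizes[seen].mean())    # sample mean > population mean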

  12. Introduction to Hierarchical Bayesian Modeling for Ecological Data

    CERN Document Server

    Parent, Eric

    2012-01-01

    Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden states variables. When fitting the models to data, the authors gradually present the concepts…

  13. Action detection by double hierarchical multi-structure space-time statistical matching model

    Science.gov (United States)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To handle the complex information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) in temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Moreover, the multi-scale composite template extends the model to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  15. A hierarchical set of models for species response analysis

    NARCIS (Netherlands)

    Huisman, J.; Olff, H.; Fresco, L.F.M.

    1993-01-01

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  16. Statistical shear lag model - unraveling the size effect in hierarchical composites.

    Science.gov (United States)

    Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D

    2015-05-01

    Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents in their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogeneous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  17. Time to failure of hierarchical load-transfer models of fracture

    DEFF Research Database (Denmark)

    Vázquez-Prada, M; Gómez, J B; Moreno, Y

    1999-01-01

    The time to failure, T, of dynamical models of fracture for a hierarchical load-transfer geometry is studied. Using a probabilistic strategy and juxtaposing hierarchical structures of height n, we devise an exact method to compute T for structures of height n+1. Bounding T for large n, we are able to deduce that the time to failure tends to a nonzero value when n tends to infinity. This numerical conclusion is deduced for both power law and exponential breakdown rules.

  18. Hierarchical Controlled Grid-Connected Microgrid based on a Novel Autonomous Current Sharing Controller

    DEFF Research Database (Denmark)

    Guan, Yajuan; Quintero, Juan Carlos Vasquez; Guerrero, Josep M.

    2015-01-01

    In this paper, a hierarchical control system based on a novel autonomous current sharing controller for grid-connected microgrids (MGs) is presented. A three-level hierarchical control system is implemented to guarantee the power sharing performance among voltage-controlled parallel inverters, while providing the required active and reactive power to the utility grid. A communication link is used to transmit the control signal from the tertiary and secondary control levels to the primary control. Simulation results from a MG based on two grid-connected parallel inverters are shown in order...

  19. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  20. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  1. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  2. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    …values alone are propagated through layers of combat models, the final results will likely be biased, and risk underestimated. An air-to-air engagement…

  3. Transformation of renormalization groups in 2N-component fermion hierarchical model

    International Nuclear Information System (INIS)

    Stepanov, R.G.

    2006-01-01

    The 2N-component fermion model on the hierarchical lattice is studied. Explicit formulae for the renormalization group transformation in the space of coefficients setting the Grassmannian-significant density of the free measure are presented. The inverse transformation of the renormalization group is calculated. The determination of fixed points of the renormalization group is reduced to solving a set of algebraic equations. An interesting connection between renormalization group transformations in boson and fermion hierarchical models is found: one transformation is obtained from the other by substituting -N for N [ru

  4. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.

  5. Recognizing Chinese characters in digital ink from non-native language writers using hierarchical models

    Science.gov (United States)

    Bai, Hao; Zhang, Xi-wen

    2017-06-01

    When Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, together with their complex relations. Chinese characters in digital ink from non-native writers are seriously deformed, so global recognition approaches perform poorly. Therefore, a progressive bottom-up approach based on hierarchical models is presented. Hierarchical information includes strokes and hierarchical components. Each Chinese character is modeled as a hierarchical tree. Strokes in an ink character are classified with hidden Markov models and concatenated into a stroke symbol sequence. Then the structure of components in the ink character is extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored. Finally, the recognition candidates are listed in descending order of score. The method is validated on 19,815 handwritten Chinese character samples written by foreign students.

  6. Hierarchically Ordered Nanopatterns for Spatial Control of Biomolecules

    Science.gov (United States)

    2015-01-01

    The development and study of a benchtop, high-throughput, and inexpensive fabrication strategy to obtain hierarchical patterns of biomolecules with sub-50 nm resolution is presented. A diblock copolymer of polystyrene-b-poly(ethylene oxide), PS-b-PEO, is synthesized with biotin capping the PEO block and 4-bromostyrene copolymerized within the polystyrene block at 5 wt %. These two handles allow thin films of the block copolymer to be postfunctionalized with biotinylated biomolecules of interest and to obtain micropatterns of nanoscale-ordered films via photolithography. The design of this single polymer further allows access to two distinct superficial nanopatterns (lines and dots), where the PEO cylinders are oriented parallel or perpendicular to the substrate. Moreover, we present a strategy to obtain hierarchical mixed morphologies: a thin-film coating of cylinders both parallel and perpendicular to the substrate can be obtained by tuning the solvent annealing and irradiation conditions. PMID:25363506

  7. Control of discrete event systems modeled as hierarchical state machines

    Science.gov (United States)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
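
    Reachability itself is a plain graph search once a transition relation is in hand; the value of the AHSM representation is that it avoids enumerating the flat product state space. The sketch below shows only the baseline search on an already-flattened toy machine (states and transitions are invented), to make explicit what the paper's hierarchical method improves upon.

        from collections import deque

        # Hypothetical AHSM flattened to (component, state) pairs for illustration.
        transitions = {
            ("A", "idle"): [("A", "busy")],
            ("A", "busy"): [("A", "idle"), ("B", "start")],
            ("B", "start"): [("B", "done")],
        }

        def reachable(src, dst):
            """Breadth-first reachability test over the transition relation."""
            seen, frontier = {src}, deque([src])
            while frontier:
                s = frontier.popleft()
                if s == dst:
                    return True
                for t in transitions.get(s, []):
                    if t not in seen:
                        seen.add(t)
                        frontier.append(t)
            return False

        print(reachable(("A", "idle"), ("B", "done")))   # True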

  8. Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study.

    Science.gov (United States)

    Rijsdijk, Frühling V; Vernon, P A; Boomsma, Dorret I

    2002-05-01

    Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard Progressive Matrices test score, collected in 194 18-year-old Dutch twin pairs. We investigated whether first-order group factors possess genetic and environmental variance independent of the higher-order general factor and whether the hierarchical structure is significant for all sources of variance. A hierarchical model with the 3 Cohen group factors (verbal comprehension, perceptual organisation and freedom-from-distractibility) and a higher-order g factor showed the best fit to the phenotypic data and to additive genetic influences (A), whereas the unique environmental source of variance (E) could be modeled by a single general factor and specifics. There was no evidence for common environmental influences. The covariation among the WAIS group factors and the covariation between the group factors and the Raven is predominantly influenced by a second-order genetic factor and strongly supports the notion of a biological basis of g.

  9. A Direct Elliptic Solver Based on Hierarchically Low-Rank Schur Complements

    KAUST Repository

    Chávez, Gustavo

    2017-03-17

    A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between Cyclic Reduction and Hierarchical matrix arithmetic operations result in a solver with O(N log² N) arithmetic complexity and O(N log N) memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the $\mathcal{H}$-LU factorization and algebraic multigrid within a shared-memory parallel environment that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on Hierarchical Matrices such as $\mathcal{H}$-LU and that it can tackle problems where algebraic multigrid fails to converge.
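
    The divide-and-conquer structure underlying such solvers can be seen in a dense toy version: split the unknowns into two halves plus a one-point separator, solve the halves recursively, and eliminate the separator through a (here 1x1) Schur complement. This is only a pedagogical sketch of the recursion, without the hierarchical low-rank compression that gives the paper's method its complexity.

        import numpy as np

        def solve_nd(A, b):
            """Recursive substructuring solve of a tridiagonal system (dense toy)."""
            n = len(b)
            if n <= 3:
                return np.linalg.solve(A, b)
            s = n // 2                                # separator index
            i1, i2 = slice(0, s), slice(s + 1, n)
            u1 = solve_nd(A[i1, i1], b[i1])           # interior solves (recursive)
            u2 = solve_nd(A[i2, i2], b[i2])
            v1 = solve_nd(A[i1, i1], A[i1, [s]].ravel())
            v2 = solve_nd(A[i2, i2], A[i2, [s]].ravel())
            S = A[s, s] - A[[s], i1] @ v1 - A[[s], i2] @ v2   # 1x1 Schur complement
            xs = (b[s] - A[[s], i1] @ u1 - A[[s], i2] @ u2) / S
            return np.concatenate([u1 - v1 * xs, xs, u2 - v2 * xs])

        n = 31
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1))
        b = np.ones(n)
        print(np.allclose(solve_nd(A, b), np.linalg.solve(A, b)))   # True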

  10. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures.

    Science.gov (United States)

    Schargott, M

    2009-06-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface.

  11. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures

    Energy Technology Data Exchange (ETDEWEB)

    Schargott, M [Institute of Mechanics, Technische Universitaet Berlin, Strd 17 Juni 135, 10623 Berlin (Germany)], E-mail: martin.schargott@tu-berlin.de

    2009-06-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface.

  12. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures

    International Nuclear Information System (INIS)

    Schargott, M

    2009-01-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface

  13. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    At present, commercial parallel computer systems with distributed-memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with explicit communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly eases parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming techniques.
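
    The target of such a precompiler, SPMD code in which every process runs the same program and communication is explicit, looks roughly like the following sketch (written with mpi4py, which is assumed available, rather than the transputer toolchain of the paper). Each rank updates a "shared" value through an explicit reduction, the kind of statement SVOPP generates automatically.

        # Run with e.g.: mpiexec -n 4 python spmd_sum.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        local = sum(range(rank * 1000, (rank + 1) * 1000))   # each rank's partial work
        total = comm.allreduce(local, op=MPI.SUM)            # explicit communication

        if rank == 0:
            print(total)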

  14. Hierarchical virtual screening approaches in small molecule drug discovery.

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y J

    2015-01-01

    Virtual screening has played a significant role in the discovery of small molecule inhibitors of therapeutic targets in the last two decades. Various ligand and structure-based virtual screening approaches are employed to identify small molecule ligands for proteins of interest. These approaches are often combined in either a hierarchical or a parallel manner to take advantage of the strengths and avoid the limitations associated with individual methods. Hierarchical combination of ligand and structure-based virtual screening approaches has received noteworthy success in numerous drug discovery campaigns. In hierarchical virtual screening, several filters using ligand and structure-based approaches are sequentially applied to reduce a large screening library to a number small enough for experimental testing. In this review, we focus on different hierarchical virtual screening strategies and their application in the discovery of small molecule modulators of important drug targets. Several virtual screening studies are discussed to demonstrate the successful application of hierarchical virtual screening in small molecule drug discovery. Copyright © 2014 Elsevier Inc. All rights reserved.
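
    A hierarchical screen is, structurally, a cascade of progressively more expensive filters over a shrinking candidate set. The sketch below shows that skeleton only; the scoring functions are random stand-ins for real ligand-based (e.g., fingerprint similarity) and structure-based (docking) scores, and the keep-fractions are arbitrary assumptions.

        import random

        def hierarchical_screen(library, stages):
            """Apply (score_fn, keep_fraction) stages in sequence."""
            candidates = list(library)
            for score_fn, keep in stages:
                candidates.sort(key=score_fn, reverse=True)
                candidates = candidates[: max(1, int(keep * len(candidates)))]
            return candidates

        random.seed(0)
        library = [f"mol_{i}" for i in range(100_000)]
        cheap = lambda m: random.random()    # stand-in: cheap ligand-based score
        costly = lambda m: random.random()   # stand-in: expensive docking score

        hits = hierarchical_screen(library, [(cheap, 0.01), (costly, 0.1)])
        print(len(hits))                     # 100 candidates left for assays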

  15. Parallel Processing and Applied Mathematics. 10th International Conference, PPAM 2013. Revised Selected Papers

    DEFF Research Database (Denmark)

    The following topics are dealt with: parallel scientific computing; numerical algorithms; parallel nonnumerical algorithms; cloud computing; evolutionary computing; metaheuristics; applied mathematics; GPU computing; multicore systems; hybrid architectures; hierarchical parallelism; HPC systems......; power monitoring; energy monitoring; and distributed computing....

  16. Accelerated Hierarchical Collision Detection for Simulation using CUDA

    DEFF Research Database (Denmark)

    Jørgensen, Jimmy Alison; Fugl, Andreas Rune; Petersen, Henrik Gordon

    2011-01-01

    The hierarchical nature of the bounding volume structure complicates an efficient implementation on massively parallel architectures such as modern graphics cards, and we therefore propose a hybrid method where only box and triangle overlap tests and transformations are offloaded to the graphics card. When...

  17. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.
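
    The generative structure being simplified can be sketched by simulation: correlated person-level ability and speed parameters (the hierarchical part), item responses from an IRT model, and log response times from a lognormal model. All distributions and values below are invented for illustration, and the response model is reduced to a Rasch form for brevity.

        import numpy as np

        rng = np.random.default_rng(3)
        n_persons, n_items = 500, 20

        # Correlated person parameters: ability (theta) and speed (tau)
        theta, tau = rng.multivariate_normal(
            [0.0, 0.0], [[1.0, 0.4], [0.4, 1.0]], n_persons).T

        b = rng.normal(0.0, 1.0, n_items)      # item difficulties
        beta = rng.normal(4.0, 0.3, n_items)   # item time intensities

        p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
        responses = rng.random((n_persons, n_items)) < p
        log_rt = rng.normal(beta[None, :] - tau[:, None], 0.5)

        # Faster persons tend to be more accurate via the theta-tau correlation
        print(np.corrcoef(responses.mean(1), -log_rt.mean(1))[0, 1])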

  18. Galactic chemical evolution in hierarchical formation models

    Science.gov (United States)

    Arrigoni, Matias

    2010-10-01

    The chemical properties and abundance ratios of galaxies provide important information about their formation histories. Galactic chemical evolution has been modelled in detail within the monolithic collapse scenario. These models have successfully described the abundance distributions in our Galaxy and other spiral discs, as well as the trends of metallicity and abundance ratios observed in early-type galaxies. In the last three decades, however, the paradigm of hierarchical assembly in a Cold Dark Matter (CDM) cosmology has revised the picture of how structure in the Universe forms and evolves. In this scenario, galaxies form when gas radiatively cools and condenses inside dark matter haloes, which themselves follow dissipationless gravitational collapse. The CDM picture has been successful at predicting many observed properties of galaxies (for example, the luminosity and stellar mass function of galaxies, color-magnitude or star formation rate vs. stellar mass distributions, relative numbers of early and late-type galaxies, gas fractions and size distributions of spiral galaxies, and the global star formation history), though many potential problems and open questions remain. It is therefore interesting to see whether chemical evolution models, when implemented within this modern cosmological context, are able to correctly predict the observed chemical properties of galaxies. With the advent of more powerful telescopes and detectors, precise observations of chemical abundances and abundance ratios in various phases (stellar, ISM, ICM) offer the opportunity to obtain strong constraints on galaxy formation histories and the physics that shapes them. However, in order to take advantage of these observations, it is necessary to implement detailed modeling of chemical evolution into a modern cosmological model of hierarchical assembly.

  19. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  20. A Hierarchical Agency Model of Deposit Insurance

    OpenAIRE

    Jonathan Carroll; Shino Takayama

    2010-01-01

    This paper develops a hierarchical agency model of deposit insurance. The main purpose is to undertake a game theoretic analysis of the consequences of deposit insurance schemes and their effects on monitoring incentives for banks. Using this simple framework, we analyze both risk-independent and risk-dependent premium schemes along with reserve requirement constraints. The results provide policymakers with not only a better understanding of the effects of deposit insurance on welfare and th...

  1. ESPRIT-Forest: Parallel clustering of massive amplicon sequence data in subquadratic time.

    Science.gov (United States)

    Cai, Yunpeng; Zheng, Wei; Yao, Jin; Yang, Yujie; Mai, Volker; Mao, Qi; Sun, Yijun

    2017-04-01

    The rapid development of sequencing technology has led to an explosive accumulation of genomic sequence data. Clustering is often the first step to perform in sequence analysis, and hierarchical clustering is one of the most commonly used approaches for this purpose. However, it is currently computationally expensive to perform hierarchical clustering of extremely large sequence datasets due to its quadratic time and space complexities. In this paper we developed a new algorithm called ESPRIT-Forest for parallel hierarchical clustering of sequences. The algorithm achieves subquadratic time and space complexity and maintains a high clustering accuracy comparable to the standard method. The basic idea is to organize sequences into a pseudo-metric based partitioning tree for sub-linear time searching of nearest neighbors, and then use a new multiple-pair merging criterion to construct clusters in parallel using multiple threads. The new algorithm was tested on the human microbiome project (HMP) dataset, currently one of the largest published microbial 16S rRNA sequence datasets. Our experiment demonstrated that with the power of parallel computing it is now computationally feasible to perform hierarchical clustering analysis of tens of millions of sequences. The software is available at http://www.acsu.buffalo.edu/∼yijunsun/lab/ESPRIT-Forest.html.
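
    For contrast with the subquadratic algorithm, the standard approach it improves on computes all pairwise distances (here parallelized naively across processes) and then runs ordinary average linkage. The toy sequences and Hamming distance below are illustrative only; this baseline is exactly the O(N²) bottleneck that ESPRIT-Forest avoids.

        from itertools import combinations
        from multiprocessing import Pool

        from scipy.cluster.hierarchy import linkage

        def hamming(pair):
            a, b = pair
            return sum(x != y for x, y in zip(a, b)) / len(a)

        seqs = ["ACGTACGT", "ACGTACGA", "TTGTACGA", "ACGAACGA"]

        if __name__ == "__main__":
            with Pool() as pool:                         # parallel distance computation
                condensed = pool.map(hamming, combinations(seqs, 2))
            print(linkage(condensed, method="average"))  # quadratic-time baseline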

  2. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  4. A hierarchical modeling methodology for the definition and selection of requirements

    Science.gov (United States)

    Dufresne, Stephane

    This dissertation describes the development of a requirements analysis methodology that takes into account the concept of operations and the hierarchical decomposition of aerospace systems. At the core of the methodology, the Analytic Network Process (ANP) is used to ensure the traceability between the qualitative and quantitative information present in the hierarchical model. The proposed methodology is applied to the requirements definition of a hurricane tracker Unmanned Aerial Vehicle. Three research objectives are identified in this work: (1) improve the requirements mapping process by matching the stakeholder expectations with the concept of operations, systems and available resources; (2) reduce the epistemic uncertainty surrounding the requirements and requirements mapping; and (3) improve the requirements down-selection process by taking into account the level of importance of the criteria and the available resources. Several challenges are associated with the identification and definition of requirements. The complexity of the system implies that a large number of requirements are needed to define the systems. These requirements are defined early in the conceptual design, where the level of knowledge is relatively low and the level of uncertainty is large. The proposed methodology intends to increase the level of knowledge and reduce the level of uncertainty by guiding the design team through a structured process. To address these challenges, a new methodology is created to flow down the requirements from the stakeholder expectations to the systems alternatives. A taxonomy of requirements is created to classify the information gathered during the problem definition. Subsequently, the operational and systems functions and measures of effectiveness are integrated into a hierarchical model to allow the traceability of the information. Monte Carlo methods are used to evaluate the variations of the hierarchical model elements and consequently reduce the

  5. Internet advertising effectiveness by using hierarchical model

    OpenAIRE

    RAHMANI, Samaneh

    2015-01-01

    Abstract. This paper examines Internet advertising effectiveness using a hierarchical model. Presenting the question: today, the Internet is an important channel in marketing and advertising; the reason for this could be the ability of the Internet to reduce costs and give people access to online services [1]. Advertisers can also easily reach a multitude of users and communicate with them at low cost [9]. On the other hand, compared to traditional advertising, interne...

  6. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
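
    The coordination device is a surrogate cost curve: each distribution network is optimized separately at a few boundary setpoints, and the master transmission problem sees only a fitted curve. The numbers below are invented; the sketch shows the curve-fitting step only, not the full optimization.

        import numpy as np

        # Hypothetical samples: optimal distribution-network objective f at
        # several boundary voltage setpoints v.
        v = np.array([0.95, 0.975, 1.0, 1.025, 1.05])
        f = np.array([12.4, 10.1, 9.0, 9.3, 10.9])

        surrogate = np.poly1d(np.polyfit(v, f, deg=2))   # quadratic cost surrogate

        # The master problem evaluates the surrogate instead of re-solving the
        # distribution network at every iteration.
        print(surrogate(1.01))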

  7. Parallelization Issues and Particle-In-Cell Codes.

    Science.gov (United States)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on

  8. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Science.gov (United States)

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multi-level structured. Cities, city based schools, school based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a structure going as far as individuals,…

  9. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in the literature. In other application areas, like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.

  10. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of using both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm by fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
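
    The planning schedule described above (local model while its prediction error stays below a threshold, global model once per episode) can be sketched as follows. Every name here (env, agent, local_model, global_model) is a hypothetical interface standing in for the paper's components, and the threshold value is illustrative.

        def run_episode(env, agent, local_model, global_model,
                        error_threshold=0.1, plan_steps=10):
            state = env.reset()
            done = False
            while not done:
                action = agent.act(state)
                next_state, reward, done = env.step(action)
                agent.update(state, action, reward, next_state)      # model-free AC step
                local_model.fit(state, action, next_state, reward)   # LLR update
                global_model.fit(state, action, next_state, reward)  # LFA update
                # Plan with the local model only while it remains accurate.
                if local_model.prediction_error(state, action, next_state) < error_threshold:
                    agent.plan(local_model, plan_steps)
                state = next_state
            agent.plan(global_model, plan_steps)  # global planning at episode end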

  11. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models, comprising multi-level hierarchical Markov processes and Bayesian updating, have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper, and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds, it is concluded that decision support concerning sow replacement only makes sense with parameters…

  12. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using a terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce the full geometry of architecture based on a novel combination of facade structure detection, detailed window propagation, and hierarchical model consolidation. Our method highlights the automatic generation of geometric models that fit the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region, extracted by projecting the points into its corresponding two-dimensional plane, is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond accurately to the architectural design information, but also satisfy the requirements for visualization and analysis.

  13. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
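
    A typical HDDM session along the lines described above looks like the sketch below. The calls follow the toolbox's documented interface; the CSV path and the condition column 'stim' are hypothetical placeholders.

        import hddm

        # One row per trial, with 'rt' and 'response' columns (HDDM's convention).
        data = hddm.load_csv('experiment_data.csv')

        # Let the drift rate v vary by stimulus condition; individual subject
        # parameters are drawn from a shared group distribution.
        model = hddm.HDDM(data, depends_on={'v': 'stim'})
        model.sample(2000, burn=200)   # MCMC sampling of the joint posterior
        model.print_stats()            # posterior means and credible intervals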

  14. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs

  15. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the conception of integrating VR scenes and power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which the levels are designed to target different abstraction modules such as transaction, engine, computation, control and storage. The traditionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  16. The Hierarchical Trend Model for property valuation and local price indices

    NARCIS (Netherlands)

    Francke, M.K.; Vos, G.A.

    2002-01-01

    This paper presents a hierarchical trend model (HTM) for selling prices of houses, addressing three main problems: the spatial and temporal dependence of selling prices and the dependency of price index changes on housing quality. In this model the general price trend, cluster-level price trends,…

  17. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
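
    As a hedged illustration of the hierarchical error model referenced above, a first-order autoregressive version (the order and the symbols here are generic, not necessarily the paper's) treats the residuals between observed and predicted data as correlated:

        r_i = d_i - d_i^{\mathrm{pred}}(\mathbf{m}), \qquad
        r_i = a\, r_{i-1} + \varepsilon_i, \qquad
        \varepsilon_i \sim \mathcal{N}(0, \sigma^2),

    where the AR coefficient a and the variance σ² are sampled as hyperparameters alongside the seabed model m, so the error statistics themselves carry posterior uncertainty.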

  18. Cache-aware data structure model for parallelism and dynamic load balancing

    International Nuclear Information System (INIS)

    Sridi, Marwa

    2016-01-01

    This PhD thesis is dedicated to the implementation of innovative parallel methods in the framework of fast transient fluid-structure dynamics. It improves existing methods within the EUROPLEXUS software in order to optimize the shared memory parallel strategy, complementary to the original distributed memory approach, with both brought together into a global hybrid strategy for clusters of multi-core nodes. Starting from a sound analysis of the state of the art concerning data structuring techniques correlated to the hierarchical memory organization of current multi-processor architectures, the proposed work introduces an approach suitable for an explicit time integration (i.e. with no linear system to solve at each step). A data structure of type 'Structure of arrays' is conserved for the global data storage, providing flexibility and efficiency for common operations on kinematic fields (displacement, velocity and acceleration). On the contrary, in the particular case of elementary operations (generic internal force computations, as well as flux computations between cell faces for fluid models), which are particularly time consuming but localized in the program, a temporary data structure of type 'Array of structures' is used instead, to force an efficient filling of the cache memory and increase the performance of the resolution, for both serial and shared memory parallel processing. Switching from the global structure to the temporary one is based on a cell grouping strategy, following classic cache-blocking principles but specifically handling, for this work, the neighboring data necessary for the efficient treatment of ALE fluxes for cells on the group boundaries. The proposed approach is extensively tested, from the points of view of both computation time and cache-memory misses, comparing the gains obtained within the elementary operations against the potential overhead generated by the data structure switch. Obtained results are very satisfactory, especially…
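
    The SoA-to-AoS switch described above can be sketched in a few lines. The snippet below is a conceptual numpy stand-in, not EUROPLEXUS code: the field names, group size and placeholder kernel are all illustrative.

        import numpy as np

        n_cells = 1000
        # Global storage: Structure of Arrays (one array per kinematic field).
        disp = np.random.rand(n_cells, 3)
        vel = np.random.rand(n_cells, 3)

        group = np.arange(100, 164)  # one cache-blocked cell group

        # Temporary Array of Structures: the group's fields packed contiguously.
        aos = np.empty(len(group), dtype=[('disp', 'f8', 3), ('vel', 'f8', 3)])
        aos['disp'] = disp[group]
        aos['vel'] = vel[group]

        # The elementary operation sweeps the compact AoS buffer (cache friendly)...
        internal_force = 0.5 * aos['disp'] + 0.1 * aos['vel']  # placeholder kernel

        # ...and the results are scattered back to the global SoA storage.
        disp[group] += internal_force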

  19. Frequency Stability of Hierarchically Controlled Hybrid Photovoltaic-Battery-Hydropower Microgrids

    DEFF Research Database (Denmark)

    Guan, Yajuan; Quintero, Juan Carlos Vasquez; Guerrero, Josep M.

    2015-01-01

    In this paper, a hierarchical controller for a hybrid PV-battery-hydropower microgrid is proposed in order to achieve the parallel operation of hydropower and PV-battery systems with different rates, and to guarantee power sharing performance among PV voltage-controlled inverters while the required power to the hydropower-based local grid is supplied. In this case, the PV-battery system will operate as a PQ bus to inject the desired active and reactive powers into the local grid, while the hydropower station will act as a slack bus which maintains its voltage amplitude and frequency. An integrated small-signal state-space model…

  20. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tails of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients…
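
    For orientation, all three members of the Poisson-hierarchical family mentioned above share the same generic two-level form; the notation below is a hedged sketch rather than the paper's exact specification:

        y_i \mid \lambda_i, \epsilon_i \sim \mathrm{Poisson}(\lambda_i \epsilon_i), \qquad
        \lambda_i = \exp(\mathbf{x}_i^{\top} \boldsymbol{\beta}),

    where the multiplicative mixing term ε_i follows a gamma (PG), lognormal (PLN), or inverse-gamma (PIGam) distribution, and the tail weight of that mixing distribution governs how the model accommodates over-dispersion.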

  1. Agent-based distributed hierarchical control of dc microgrid systems

    DEFF Research Database (Denmark)

    Meng, Lexuan; Vasquez, Juan Carlos; Guerrero, Josep M.

    2014-01-01

    In order to enable distributed control and management for microgrids, this paper explores the application of information consensus and local decision-making methods, formulating an agent-based distributed hierarchical control system. A droop-controlled paralleled DC/DC converter system is taken as a case study. The objective is to enhance the system efficiency by finding the optimal sharing ratio of the load current. Virtual resistances in local control systems are taken as decision variables. Consensus algorithms are applied for global information discovery and local control system coordination. A standard genetic algorithm is applied in each local control system in order to search for a global optimum. Hardware-in-Loop simulation results are shown to demonstrate the effectiveness of the method.
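
    The consensus step used for global information discovery admits a compact illustration. The sketch below assumes a simple line communication topology and an illustrative gain; agents repeatedly average with their neighbors so that local measurements converge to the global mean.

        import numpy as np

        x = np.array([1.0, 3.0, 2.0, 6.0])  # one local measurement per agent
        neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # line topology
        eps = 0.3  # consensus gain (must stay below 1 / max node degree)

        for _ in range(200):
            x = x + eps * np.array(
                [sum(x[j] - x[i] for j in neighbors[i]) for i in range(len(x))])

        print(x)  # every entry approaches the global average, 3.0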

  2. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM…

  3. Performance Tuning and Evaluation of a Parallel Community Climate Model

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Worley, P.H.; Hammond, S.

    1999-11-13

    The Parallel Community Climate Model (PCCM) is a message-passing parallelization of version 2.1 of the Community Climate Model (CCM) developed by researchers at Argonne and Oak Ridge National Laboratories and at the National Center for Atmospheric Research in the early to mid 1990s. In preparation for use in the Department of Energy's Parallel Climate Model (PCM), PCCM has recently been updated with new physics routines from version 3.2 of the CCM, improvements to the parallel implementation, and ports to the SGI/Cray Research T3E and Origin 2000. We describe our experience in porting and tuning PCCM on these new platforms, evaluating the performance of different parallel algorithm options and comparing performance between the T3E and Origin 2000.

  4. Intelligent spatial ecosystem modeling using parallel processors

    International Nuclear Information System (INIS)

    Maxwell, T.; Costanza, R.

    1993-01-01

    Spatial modeling of ecosystems is essential if one's modeling goals include developing a relatively realistic description of past behavior and predictions of the impacts of alternative management policies on future ecosystem behavior. Development of these models has been limited in the past by the large amount of input data required and the difficulty of even large mainframe serial computers in dealing with large spatial arrays. These two limitations have begun to erode with the increasing availability of remote sensing data and GIS systems to manipulate it, and the development of parallel computer systems which allow computation of large, complex, spatial arrays. Although many forms of dynamic spatial modeling are highly amenable to parallel processing, the primary focus in this project is on process-based landscape models. These models simulate spatial structure by first compartmentalizing the landscape into some geometric design and then describing flows within compartments and spatial processes between compartments according to location-specific algorithms. The authors are currently building and running parallel spatial models at the regional scale for the Patuxent River region in Maryland, the Everglades in Florida, and Barataria Basin in Louisiana. The authors are also planning a project to construct a series of spatially explicit linked ecological and economic simulation models aimed at assessing the long-term potential impacts of global climate change

  5. Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    National Research Council Canada - National Science Library

    Rodriguez, June F

    2008-01-01

    … More specifically, investigating how to accurately aggregate hierarchical lower-level (higher resolution) models into the next higher level in order to reduce the complexity of the overall simulation model…

  6. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  7. Linguistic steganography on Twitter: hierarchical language modeling with manual interaction

    Science.gov (United States)

    Wilson, Alex; Blunsom, Phil; Ker, Andrew D.

    2014-02-01

    This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.

  8. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    Science.gov (United States)

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…

  9. INFOGRAPHIC MODELING OF THE HIERARCHICAL STRUCTURE OF THE MANAGEMENT SYSTEM EXPOSED TO AN INNOVATIVE CONFLICT

    Directory of Open Access Journals (Sweden)

    Chulkov Vitaliy Olegovich

    2012-12-01

    Full Text Available This article deals with the infographic modeling of hierarchical management systems exposed to innovative conflicts. The authors analyze the facts that serve as conflict drivers in the construction management environment. The reasons for innovative conflicts include changes in hierarchical structures of management systems, adjustment of workers to new management conditions, changes in the ideology, etc. Conflicts under consideration may involve contradictions between requests placed by customers and the legislation, any risks that may originate from the above contradiction, conflicts arising from any failure to comply with any accepted standards of conduct, etc. One of the main objectives of the theory of hierarchical structures is to develop a model capable of projecting potential innovative conflicts. Models described in the paper reflect dynamic changes in patterns of external impacts within the conflict area. The simplest model element is a monad, or an indivisible set of characteristics of participants at the pre-set level. Interaction between two monads forms a dyad. Modeling of situations that involve a different number of monads, dyads, resources and impacts can improve methods used to control and manage hierarchical structures in the construction industry. However, in the absence of any mathematical models employed to simulate conflict-related events, processes and situations, any research into, projection of, and management of interpersonal and group-to-group conflicts are to be performed in the legal environment.

  10. Scalable Hierarchical Algorithms for stochastic PDEs and UQ

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    H-matrices and the Fast Multipole Method (FMM) are powerful methods to approximate linear operators coming from partial differential and integral equations, and to reduce the computational cost from quadratic or cubic to log-linear (O(n log n)), where n is the number of degrees of freedom in the discretization. The storage is likewise reduced to log-linear. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by Kriemann [1,2]. Since 2005, the area of parallel architectures and software has been developing very fast. Progress in GPUs and Many-Core Systems (e.g. XeonPhi with 64 cores) motivated us to extend work started in [1,2,7,8].

  11. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast-growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  12. Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation

    Directory of Open Access Journals (Sweden)

    Maowei He

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), for multilevel threshold image segmentation, which employs a pool of optimal foraging strategies to extend the classical artificial bee colony framework in a cooperative and hierarchical fashion. In the proposed hierarchical model, the higher-level species incorporates an enhanced information exchange mechanism based on the crossover operator to improve the global search ability between species. At the bottom level, following the divide-and-conquer approach, each subpopulation runs the original ABC method in parallel on its part of the dimensions, and the resulting part-dimensional optima are aggregated into a complete solution for the upper level. The experimental results comparing HABC with several successful EA and SI algorithms on a set of benchmarks demonstrated the effectiveness of the proposed algorithm. Furthermore, we applied HABC to the multilevel image segmentation problem. Experimental results of the new algorithm on a variety of images demonstrated the performance superiority of the proposed algorithm.

  13. TU-FG-209-12: Treatment Site and View Recognition in X-Ray Images with Hierarchical Multiclass Recognition Models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, X; Mazur, T; Yang, D [Washington University in St Louis, St Louis, MO (United States)

    2016-06-15

    Purpose: To investigate an approach of automatically recognizing anatomical sites and imaging views (the orientation of the image acquisition) in 2D X-ray images. Methods: A hierarchical (binary tree) multiclass recognition model was developed to recognize the treatment sites and views in x-ray images. From top to bottom of the tree, the treatment sites are grouped hierarchically from more general to more specific. Each node in the hierarchical model was designed to assign images to one of two categories of anatomical sites. The binary image classification function of each node in the hierarchical model is implemented by using a PCA transformation and a support vector machine (SVM) model. The optimal PCA transformation matrices and SVM models are obtained by learning from a set of sample images. Alternatives of the hierarchical model were developed to support three scenarios of site recognition that may happen in radiotherapy clinics, including two or one X-ray images with or without view information. The performance of the approach was tested with images of 120 patients from six treatment sites – brain, head-neck, breast, lung, abdomen and pelvis – with 20 patients per site and two views (AP and RT) per patient. Results: Given two images in known orthogonal views (AP and RT), the hierarchical model achieved a 99% average F1 score in recognizing the six sites. Site-specific view recognition models achieved 100% accuracy. The computation time to process a new patient case (preprocessing, site and view recognition) is 0.02 seconds. Conclusion: The proposed hierarchical model of site and view recognition is effective and computationally efficient. It could be useful to automatically and independently confirm the treatment sites and views in daily setup x-ray 2D images. It could also be applied to guide subsequent image processing tasks, e.g. site- and view-dependent contrast enhancement and image registration. The senior author received research grants from View…
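
    A single node of the binary-tree recognition model described above can be sketched with scikit-learn. This is a hedged illustration of the PCA + binary SVM node, with illustrative component counts and kernel choice; it is not the authors' implementation.

        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def make_node_classifier(n_components=50):
            """One tree node: PCA feature reduction followed by a binary SVM."""
            return make_pipeline(PCA(n_components=n_components), SVC(kernel='rbf'))

        # node = make_node_classifier()
        # node.fit(X_train, y_binary)  # y_binary: one of two anatomical-site groups
        # Recognition descends the tree from general to specific sites,
        # calling node.predict(x) at each level.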

  14. TU-FG-209-12: Treatment Site and View Recognition in X-Ray Images with Hierarchical Multiclass Recognition Models

    International Nuclear Information System (INIS)

    Chang, X; Mazur, T; Yang, D

    2016-01-01

    Purpose: To investigate an approach of automatically recognizing anatomical sites and imaging views (the orientation of the image acquisition) in 2D X-ray images. Methods: A hierarchical (binary tree) multiclass recognition model was developed to recognize the treatment sites and views in x-ray images. From top to bottom of the tree, the treatment sites are grouped hierarchically from more general to more specific. Each node in the hierarchical model was designed to assign images to one of two categories of anatomical sites. The binary image classification function of each node in the hierarchical model is implemented by using a PCA transformation and a support vector machine (SVM) model. The optimal PCA transformation matrices and SVM models are obtained by learning from a set of sample images. Alternatives of the hierarchical model were developed to support three scenarios of site recognition that may happen in radiotherapy clinics, including two or one X-ray images with or without view information. The performance of the approach was tested with images of 120 patients from six treatment sites – brain, head-neck, breast, lung, abdomen and pelvis – with 20 patients per site and two views (AP and RT) per patient. Results: Given two images in known orthogonal views (AP and RT), the hierarchical model achieved a 99% average F1 score in recognizing the six sites. Site-specific view recognition models achieved 100% accuracy. The computation time to process a new patient case (preprocessing, site and view recognition) is 0.02 seconds. Conclusion: The proposed hierarchical model of site and view recognition is effective and computationally efficient. It could be useful to automatically and independently confirm the treatment sites and views in daily setup x-ray 2D images. It could also be applied to guide subsequent image processing tasks, e.g. site- and view-dependent contrast enhancement and image registration. The senior author received research grants from View…

  15. A new parallelization algorithm of ocean model with explicit scheme

    Science.gov (United States)

    Fu, X. D.

    2017-08-01

    This paper focuses on the parallelization of an ocean model with an explicit scheme, which is one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple and that the value at a given grid point depends only on values from the previous time step, which means that one does not need to solve sparse linear equations when integrating the governing equations of the ocean model. Exploiting these characteristics, this paper designs a parallel algorithm named halo cells update, which requires only tiny modifications of the original ocean model and leaves its space step and time step essentially unchanged; the model is parallelized by designing a transmission module between sub-domains. The paper takes the GRGO (Global Reduced Gravity Ocean model) as an example and implements its parallelization with the halo update. The results demonstrate that high speedups can be achieved at different problem sizes.
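
    The halo (ghost) cell idea can be illustrated without MPI machinery. The serial numpy sketch below stands in for the transmission module: each subdomain carries an extra ring of ghost cells that is refreshed from its neighbors before the explicit stencil update. The array shapes, the stencil and the 1D decomposition are illustrative assumptions.

        import numpy as np

        def exchange_halos(subdomains):
            """Refresh ghost columns between horizontally adjacent subdomains."""
            for left, right in zip(subdomains[:-1], subdomains[1:]):
                left[:, -1] = right[:, 1]    # neighbor's first interior column
                right[:, 0] = left[:, -2]    # neighbor's last interior column

        def explicit_step(u):
            """Explicit update: new interior values depend only on the previous step."""
            u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])

        # Four subdomains, each padded with one halo column on each side.
        subdomains = [np.random.rand(16, 10) for _ in range(4)]
        for _ in range(100):
            exchange_halos(subdomains)   # in MPI, this would be the send/receive step
            for u in subdomains:
                explicit_step(u)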

  16. Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data

    Science.gov (United States)

    Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong

    2018-06-01

    This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.

  17. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

    Full Text Available Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture local features of phrases as well as global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long dependency feature extraction model based on the bidirectional long–short-term memory (BiLSTM) neural network model is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish the training, validation and test datasets, containing 72,532, 18,133, and 2679 mechanical patent documents, respectively, and then evaluate the performance of the HFEM. Finally, we compare the results of the proposed HFEM with those of three other single neural network models, namely CNN, long–short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.
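
    The CNN-then-BiLSTM stacking described above can be approximated in a few lines of Keras. This is a rough, hedged sketch: the vocabulary size, filter counts, label count and all other hyperparameters are illustrative, and the paper's exact architecture may differ.

        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Embedding(input_dim=50000, output_dim=128),      # patent vocabulary
            layers.Conv1D(256, kernel_size=3, activation='relu'),   # local n-gram features
            layers.MaxPooling1D(pool_size=2),
            layers.Bidirectional(layers.LSTM(128)),                 # long-range dependencies
            layers.Dense(96, activation='sigmoid'),                 # multi-label outputs
        ])
        model.compile(optimizer='adam', loss='binary_crossentropy')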

  18. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.

  19. A hierarchical community occurrence model for North Carolina stream fish

    Science.gov (United States)

    Midway, S.R.; Wagner, Tyler; Tracy, B.H.

    2016-01-01

    The southeastern USA is home to one of the richest—and most imperiled and threatened—freshwater fish assemblages in North America. For many of these rare and threatened species, conservation efforts are often limited by a lack of data. Drawing on a unique and extensive data set spanning over 20 years, we modeled occurrence probabilities of 126 stream fish species sampled throughout North Carolina, many of which occur more broadly in the southeastern USA. Specifically, we developed species-specific occurrence probabilities from hierarchical Bayesian multispecies models that were based on common land use and land cover covariates. We also used index of biotic integrity tolerance classifications as a second level in the model hierarchy; we identify this level as informative for our work, but it is flexible for future model applications. Based on the partial-pooling property of the models, we were able to generate occurrence probabilities for many imperiled and data-poor species in addition to highlighting a considerable amount of occurrence heterogeneity that supports species-specific investigations whenever possible. Our results provide critical species-level information on many threatened and imperiled species as well as information that may assist with re-evaluation of existing management strategies, such as the use of surrogate species. Finally, we highlight the use of a relatively simple hierarchical model that can easily be generalized for similar situations in which conventional models fail to provide reliable estimates for data-poor groups.

  20. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.X.; Wang, X.; Gao, Y.W., E-mail: ywgao@lzu.edu.cn; Zhou, Y.H.

    2013-11-15

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • The numerical model is set up through ABAQUS to verify and enhance the theoretical model. • We calculate two concerned mechanical responses: the global displacement–load curve and the local axial strain distribution. • Elastic–plasticity is the main character of the loading curve, and the friction between adjacent strands plays a significant role in the distribution map. -- Abstract: An unexpected degradation frequently occurs in superconducting cable (CICC) due to the mechanical response (deformation) when suffering from electromagnetic load and thermal load during operation. Because of the cable's hierarchical twisted configuration, it is difficult to quantitatively model the mechanical response. In addition, local mechanical characteristics such as the strain distribution could hardly be monitored via experimental methods. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows the algorithm advancing successively from the n + 1 stage (e.g. 3 × 3 × 5 subcable) to the n stage (e.g. 3 × 3 subcable). There are no complicated numerical procedures required in this model. Meanwhile, the numerical model is set up through ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two concerned mechanical responses: the global displacement–load curve and the local axial strain distribution. We find that in the global displacement–load curve, elastic–plasticity is the main character, and the higher-level cable shows enhanced nonlinear characteristics. As for the local distribution, the friction among adjacent strands plays a significant role in this map. The magnitude of friction strongly influences the regularity of the distribution at different twisted stages. More detailed results are presented in this paper.

  1. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    International Nuclear Information System (INIS)

    Li, Y.X.; Wang, X.; Gao, Y.W.; Zhou, Y.H.

    2013-01-01

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • The numerical model is set up through ABAQUS to verify and enhance the theoretical model. • We calculate two concerned mechanical responses: the global displacement–load curve and the local axial strain distribution. • Elastic–plasticity is the main character of the loading curve, and the friction between adjacent strands plays a significant role in the distribution map. -- Abstract: An unexpected degradation frequently occurs in superconducting cable (CICC) due to the mechanical response (deformation) when suffering from electromagnetic load and thermal load during operation. Because of the cable's hierarchical twisted configuration, it is difficult to quantitatively model the mechanical response. In addition, local mechanical characteristics such as the strain distribution could hardly be monitored via experimental methods. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows the algorithm advancing successively from the n + 1 stage (e.g. 3 × 3 × 5 subcable) to the n stage (e.g. 3 × 3 subcable). There are no complicated numerical procedures required in this model. Meanwhile, the numerical model is set up through ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two concerned mechanical responses: the global displacement–load curve and the local axial strain distribution. We find that in the global displacement–load curve, elastic–plasticity is the main character, and the higher-level cable shows enhanced nonlinear characteristics. As for the local distribution, the friction among adjacent strands plays a significant role in this map. The magnitude of friction strongly influences the regularity of the distribution at different twisted stages. More detailed results are presented in this paper.

  2. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling.

    Science.gov (United States)

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
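
    For reference, a widely used one-parameter member of the (inverted) S-shaped weighting family discussed above is the Tversky–Kahneman form; the paper may use a different variant, so this is only an orienting sketch:

        w(p) = \frac{p^{\gamma}}{\left( p^{\gamma} + (1-p)^{\gamma} \right)^{1/\gamma}},

    where γ < 1 yields the inverted-S shape (overweighting of small probabilities, underweighting of large ones). In the hierarchical Bayesian setting, individual parameters γ_i would be drawn from a weakly informative group-level prior, e.g. γ_i ~ N⁺(μ_γ, σ_γ²).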

  3. Multilevel Hierarchical Modeling of Benthic Macroinvertebrate Responses to Urbanization in Nine Metropolitan Regions across the Conterminous United States

    Science.gov (United States)

    Kashuba, Roxolana; Cha, YoonKyung; Alameddine, Ibrahim; Lee, Boknam; Cuffney, Thomas F.

    2010-01-01

    Multilevel hierarchical modeling methodology has been developed for use in ecological data analysis. The effect of urbanization on stream macroinvertebrate communities was measured across a gradient of basins in each of nine metropolitan regions across the conterminous United States. The hierarchical nature of this dataset was harnessed in a multi-tiered model structure, predicting both invertebrate response at the basin scale and differences in invertebrate response at the region scale. Ordination site scores, total taxa richness, Ephemeroptera, Plecoptera, Trichoptera (EPT) taxa richness, and richness-weighted mean tolerance of organisms at a site were used to describe invertebrate responses. Percentage of urban land cover was used as a basin-level predictor variable. Regional mean precipitation, air temperature, and antecedent agriculture were used as region-level predictor variables. Multilevel hierarchical models were fit to both levels of data simultaneously, borrowing statistical strength from the complete dataset to reduce uncertainty in regional coefficient estimates. Additionally, whereas non-hierarchical regressions were only able to show differing relations between invertebrate responses and urban intensity separately for each region, the multilevel hierarchical regressions were able to explain and quantify those differences within a single model. In this way, this modeling approach directly establishes the importance of antecedent agricultural conditions in masking the response of invertebrates to urbanization in metropolitan regions such as Milwaukee-Green Bay, Wisconsin; Denver, Colorado; and Dallas-Fort Worth, Texas. Also, these models show that regions with high precipitation, such as Atlanta, Georgia; Birmingham, Alabama; and Portland, Oregon, start out with better regional background conditions of invertebrates prior to urbanization but experience faster negative rates of change with urbanization. Ultimately, this urbanization…
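
    The multi-tiered structure described above corresponds to a standard two-level hierarchical regression; the sketch below uses generic symbols (invertebrate response y for basin i in region j) and illustrative region-level covariates, not the study's exact specification:

        y_{ij} = \alpha_j + \beta_j \, \mathrm{urban}_{ij} + \varepsilon_{ij}, \qquad
        \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2),

        \beta_j = \gamma_0 + \gamma_1 \, \mathrm{precip}_j + \gamma_2 \, \mathrm{ag}_j + \eta_j, \qquad
        \eta_j \sim \mathcal{N}(0, \tau^2),

    so the basin-level slope of response against urban intensity is itself modeled as a function of regional precipitation and antecedent agriculture, which is how the model borrows strength across regions.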

  4. Concurrent and convergent validity of the mobility- and multidimensional-hierarchical disability categorization models with physical performance in community older adults.

    Science.gov (United States)

    Hu, Ming-Hsia; Yeh, Chih-Jun; Chen, Tou-Rong; Wang, Ching-Yi

    2014-01-01

    A valid, time-efficient and easy-to-use instrument is important for busy clinical settings, large-scale surveys, or community screening use. The purpose of this study was to validate the mobility hierarchical disability categorization model (an abbreviated model) by investigating its concurrent validity with the multidimensional hierarchical disability categorization model (a comprehensive model) and by triangulating both models with physical performance measures in older adults. 604 community-dwelling older adults aged at least 60 years volunteered to participate. Self-reported function in the mobility, instrumental activities of daily living (IADL) and activities of daily living (ADL) domains was recorded, and disability status was then determined based on both the multidimensional hierarchical categorization model and the mobility hierarchical categorization model. The physical performance measures, consisting of grip strength and usual and fastest gait speeds (UGS, FGS), were collected on the same day. The two categorization models were highly correlated (γs = 0.92). The results of multiple regression analysis indicated that both models individually explain a similar amount of variance in all physical performance measures, with adjustments for age, sex, and number of comorbidities. Our results indicate that the mobility hierarchical disability categorization model is a valid and time-efficient tool for large surveys or screening use.

  5. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  6. Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    H-matrices and the Fast Multipole Method (FMM) are powerful methods to approximate linear operators coming from partial differential and integral equations, and to reduce the computational cost from quadratic or cubic to log-linear (O(n log n)), where n is the number of degrees of freedom in the discretization. The storage is likewise reduced to log-linear. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by R. Kriemann, 2005. Since 2005, the area of parallel architectures and software has been developing very fast. Progress in GPUs and Many-Core Systems (e.g. XeonPhi with 64 cores) motivated us to extend work started in [1,2,7,8].

  7. Measuring Service Quality in Higher Education: Development of a Hierarchical Model (HESQUAL)

    Science.gov (United States)

    Teeroovengadum, Viraiyan; Kamalanabhan, T. J.; Seebaluck, Ashley Keshwar

    2016-01-01

    Purpose: This paper aims to develop and empirically test a hierarchical model for measuring service quality in higher education. Design/methodology/approach: The first phase of the study consisted of qualitative research methods and a comprehensive literature review, which allowed the development of a conceptual model comprising 53 service quality…

  8. Large-scale model of flow in heterogeneous and hierarchical porous media

    Science.gov (United States)

    Chabanon, Morgan; Valdés-Parada, Francisco J.; Ochoa-Tapia, J. Alberto; Goyeau, Benoît

    2017-11-01

    Heterogeneous porous structures are very often encountered in natural environments and in bioremediation processes, among many others. Reliable models for momentum transport are crucial whenever mass transport or convective heat transfer occurs in these systems. In this work, we derive a large-scale average model for incompressible single-phase flow in heterogeneous and hierarchical soil porous media composed of two distinct porous regions embedding a solid impermeable structure. The model, based on the local mechanical equilibrium assumption between the porous regions, results in a unique momentum transport equation where the global effective permeability naturally depends on the permeabilities at the intermediate mesoscopic scales and therefore includes the complex hierarchical structure of the soil. The associated closure problem is numerically solved for various configurations and properties of the heterogeneous medium. The results clearly show that the effective permeability increases with the volume fraction of the most permeable porous region. It is also shown that the effective permeability is sensitive to the dimensionality and spatial arrangement of the porous regions and, in particular, depends on the contact between the impermeable solid and the two porous regions.
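
    In sketch form, and with generic notation that may differ from the paper's, the resulting large-scale momentum equation is of Darcy type:

        \langle \mathbf{v} \rangle = -\frac{1}{\mu}\, \mathbf{K}^{*} \cdot \left( \nabla \langle p \rangle - \rho \mathbf{g} \right),

    where ⟨v⟩ and ⟨p⟩ are the large-scale averaged velocity and pressure, and the global effective permeability tensor K* is obtained from the closure problem and inherits the mesoscale permeabilities of the two porous regions.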

  9. A Hierarchical Bayesian Model to Predict Self-Thinning Line for Chinese Fir in Southern China.

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    Full Text Available Self-thinning is a dynamic equilibrium between forest growth and mortality at full site occupancy. Parameters of the self-thinning lines are often confounded by differences across various stand and site conditions. To overcome the problem of hierarchical and repeated measures, we used a hierarchical Bayesian method to estimate the self-thinning line. The results showed that the self-thinning line for Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantations was not sensitive to the initial planting density. The uncertainty of model predictions was mostly due to within-subject variability. The simulation precision of the hierarchical Bayesian method was better than that of the stochastic frontier function (SFF). The hierarchical Bayesian method provided a reasonable explanation of the impact of other variables (site quality, soil type, aspect, etc.) on the self-thinning line, and gave us the posterior distribution of the parameters of the self-thinning line. Research on the self-thinning relationship could benefit from the use of hierarchical Bayesian methods.

  10. Hierarchical relaxation dynamics in a tilted two-band Bose-Hubbard model

    Science.gov (United States)

    Cosme, Jayson G.

    2018-04-01

    We numerically examine slow and hierarchical relaxation dynamics of interacting bosons described by a tilted two-band Bose-Hubbard model. The system is found to exhibit signatures of quantum chaos within the spectrum, and the validity of the eigenstate thermalization hypothesis for relevant physical observables is demonstrated for certain parameter regimes. Using the truncated Wigner representation in the semiclassical limit of the system, the dynamics of relevant observables reveal hierarchical relaxation, and the appearance of prethermalized states is studied from the perspective of statistics of the underlying mean-field trajectories. The observed prethermalization scenario can be attributed to different stages of glassy dynamics in the mode-time configuration space due to a dynamical phase transition between ergodic and nonergodic trajectories.

  11. Hierarchical functional model for automobile development; Jidosha kaihatsu no tame no kaisogata kino model

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, S [U-shin Ltd., Tokyo (Japan); Nagamatsu, M; Maruyama, K [Hokkaido Institute of Technology, Sapporo (Japan); Hiramatsu, S [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

    A new approach to modeling is put forward in order to compose the virtual prototype which is indispensable for fully computer-integrated concurrent development of automobile products. A basic concept of the hierarchical functional model is proposed as the concrete form of this new modeling technology. This model is used mainly for explaining and simulating the functions and efficiencies of both the parts and the total automobile product. All engineers engaged in the design and development of automobiles can collaborate with one another using this model. Some application examples are shown, and the usefulness of this model is demonstrated. 5 refs., 5 figs.

  12. Implementation of a parallel version of a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gerstengarbe, F.W. [ed.; Kuecken, M. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany); Schaettler, U. [Deutscher Wetterdienst, Offenbach am Main (Germany). Geschaeftsbereich Forschung und Entwicklung

    1997-10-01

    A regional climate model developed by the Max Planck Institute for Meteorology and the German Climate Computing Centre in Hamburg, based on the 'Europa' and 'Deutschland' models of the German Weather Service, has been parallelized and implemented on the IBM RS/6000 SP computer system of the Potsdam Institute for Climate Impact Research, including parallel input/output processing, the explicit Eulerian time-step, the semi-implicit corrections, the normal-mode initialization and the physical parameterizations of the German Weather Service. The implementation utilizes Fortran 90 and the Message Passing Interface. The parallelization strategy used is a 2D domain decomposition. This report describes the parallelization strategy, the parallel I/O organization, the influence of different domain decomposition approaches for static and dynamic load imbalances and first numerical results. (orig.)
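
    In a 2D domain decomposition of this kind, each subdomain carries a ring of ghost (halo) points that must be refreshed from neighbouring subdomains before every explicit time step. A toy single-process sketch of the exchange logic for a 1-D strip of subdomains (the MPI send/receive pairs of the real code are replaced by array copies):

        import numpy as np

        nsub, n = 4, 16                    # 4 subdomains, n x n interior each
        # Each subdomain stores its interior plus one ghost column per side.
        subs = [np.zeros((n, n + 2)) for _ in range(nsub)]
        for r, u in enumerate(subs):
            u[:, 1:-1] = r                 # mark interior with the owner id

        def halo_exchange(subs):
            # Refresh ghost columns from the neighbours' interiors; in the
            # parallel code each assignment is a message between processors.
            for r in range(len(subs) - 1):
                subs[r][:, -1] = subs[r + 1][:, 1]    # right ghost column
                subs[r + 1][:, 0] = subs[r][:, -2]    # neighbour's left ghost

        halo_exchange(subs)
        # A 5-point stencil can now be applied to every interior point
        # without touching data owned by another subdomain.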

  13. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  14. Modelling and parallel calculation of a kinetic boundary layer

    International Nuclear Information System (INIS)

    Perlat, Jean Philippe

    1998-01-01

    This research thesis aims at addressing reliability and cost issues in the calculation by numerical simulation of flows in the transition regime. The first step has been to reduce calculation cost and memory space for the Monte Carlo method, which is known to provide performance and reliability for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine has been used which implements parallel calculation at different levels of parallelization. Parallelization procedures have been adapted, and results showed that parallelization by calculation domain decomposition was far more efficient. Due to reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems have therefore been studied. One is chosen which allows thermodynamic values (density, average velocity, temperature, deformation tensor, heat flow) present in Navier-Stokes equations to be determined, and the equations of evolution of thermodynamic values are described for the mono-atomic case. Numerical resolution of this system is reported. A kinetic scheme is developed which complies with the structure of all systems, and which naturally expresses boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows [fr]

  15. Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.

    Science.gov (United States)

    Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J

    2010-12-01

    Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km^2 hexagons), can increase the relevance of habitat models to multispecies
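
    For readers unfamiliar with the conditional autoregressive (CAR) construction, the precision matrix of a proper CAR prior has a simple closed form; a minimal numpy sketch of the generic machinery (not the authors' full occupancy model):

        import numpy as np

        def car_precision(W, tau=1.0, rho=0.9):
            # Precision matrix Q = tau * (D - rho * W) of a proper CAR prior.
            # W is the symmetric 0/1 neighbourhood matrix of the lattice,
            # tau the overall precision, rho the spatial dependence (|rho| < 1).
            D = np.diag(W.sum(axis=1))
            return tau * (D - rho * W)

        # 4-cell chain graph 0-1-2-3 as a tiny stand-in for a hexagon lattice
        W = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        Q = car_precision(W)
        cov = np.linalg.inv(Q)     # implied spatial covariance of the effects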

  16. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  17. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    Science.gov (United States)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
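
    The contrast between the three formulations can be made concrete with a toy example. A minimal sketch, assuming the PyMC library (not the software used by the authors) and a single rate parameter observed in three datasets; the hierarchical variant pools the dataset-level rates through shared hyperparameters:

        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(1)
        y = [rng.normal(loc, 0.3, size=20) for loc in (0.8, 1.0, 1.3)]

        with pm.Model():
            # Hyperparameters define the shared distribution of the rates.
            mu = pm.Normal("mu", 0.0, 1.0)
            sigma = pm.HalfNormal("sigma", 1.0)
            # Dataset-level rates drawn from the shared distribution; a
            # global analysis would use one theta, a separate analysis
            # three unrelated thetas.
            theta = pm.Normal("theta", mu, sigma, shape=3)
            for i, yi in enumerate(y):
                pm.Normal(f"obs_{i}", theta[i], 0.3, observed=yi)
            trace = pm.sample(1000, tune=1000)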

  18. A Hybrid PO - Higher-Order Hierarchical MoM Formulation using Curvilinear Geometry Modeling

    DEFF Research Database (Denmark)

    Jørgensen, E.; Meincke, Peter; Breinbjerg, Olav

    2003-01-01

    which implies a very modest memory requirement. Nevertheless, the hierarchical feature of the basis functions maintains the ability to treat small geometrical details efficiently. In addition, the scatterer is modelled with higher-order curved patches which allows accurate modelling of curved surfaces...

  19. Probabilistic inference: Task dependency and individual differences of probability weighting revealed by hierarchical Bayesian modelling

    Directory of Open Access Journals (Sweden)

    Moritz eBoos

    2016-05-01

    Full Text Available Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
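
    One widely used member of the inverted-S-shaped family referred to above is the Tversky-Kahneman weighting function; a short sketch (a standard choice from the literature, not necessarily the parametrization used in the paper):

        import numpy as np

        def tk_weight(p, gamma):
            # Tversky-Kahneman (1992) probability weighting function;
            # gamma < 1 yields the inverted-S shape in which small
            # probabilities are overweighted and large ones underweighted.
            p = np.asarray(p, dtype=float)
            return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

        p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
        print(tk_weight(p, gamma=0.6))
        # In a hierarchical analysis, each participant's gamma would be
        # drawn from a weakly informative population-level distribution.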

  20. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  1. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    Science.gov (United States)

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Principal-subordinate hierarchical multi-objective programming model of initial water rights allocation

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2009-06-01

    Full Text Available The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.

  3. Parallelization of elliptic solver for solving 1D Boussinesq model

    Science.gov (United States)

    Tarwidi, D.; Adytia, D.

    2018-03-01

    In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered grid scheme to the continuity, momentum, and elliptic equations of the Boussinesq model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two test cases of numerical experiment, i.e. propagation of solitary and standing waves, are proposed to evaluate the parallel program. The numerical results are verified against the analytical solutions for solitary and standing waves. The best speedup for the solitary and standing wave test cases is about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both executed using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
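
    Cyclic reduction eliminates the odd-indexed unknowns level by level, and within each level the row updates are mutually independent, which is exactly what the OpenMP loops exploit. A serial reference sketch (assuming n = 2^k - 1 unknowns with zero boundary coefficients; an illustration, not the authors' code):

        import numpy as np

        def cyclic_reduction(a, b, c, d):
            # Solve a tridiagonal system with n = 2**k - 1 unknowns.
            # a: sub-diagonal (a[0] = 0), b: diagonal,
            # c: super-diagonal (c[-1] = 0), d: right-hand side.
            a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
            n = len(b)
            k = int(np.log2(n + 1))

            # Forward reduction: rows within one level are independent,
            # so each inner loop is a natural parallel-for.
            for level in range(1, k):
                h, step = 2**(level - 1), 2**level
                for i in range(step - 1, n, step):
                    al, ga = a[i] / b[i - h], c[i] / b[i + h]
                    b[i] -= al * c[i - h] + ga * a[i + h]
                    d[i] -= al * d[i - h] + ga * d[i + h]
                    a[i], c[i] = -al * a[i - h], -ga * c[i + h]

            # Back substitution from the coarsest level downwards.
            x = np.zeros(n)
            for level in range(k, 0, -1):
                h, step = 2**(level - 1), 2**level
                for i in range(h - 1, n, step):
                    xl = x[i - h] if i - h >= 0 else 0.0
                    xr = x[i + h] if i + h < n else 0.0
                    x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
            return x

        n = 2**10 - 1
        a = np.r_[0.0, -np.ones(n - 1)]
        c = np.r_[-np.ones(n - 1), 0.0]
        b = 2.0 * np.ones(n)
        d = np.random.rand(n)
        x = cyclic_reduction(a, b, c, d)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        assert np.allclose(A @ x, d)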

  4. A hybrid parallel framework for the cellular Potts model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yi [Los Alamos National Laboratory; He, Kejing [SOUTH CHINA UNIV; Dong, Shoubin [SOUTH CHINA UNIV

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximate, and cannot be used for large-scale, complex 3D simulation. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
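
    The reason the lattice update parallelizes well is that a Metropolis move touches only nearest neighbours, so all sites of one checkerboard parity can be updated concurrently. A toy sketch for a plain q-state Potts model (the full CPM adds cell volume and adhesion terms on top of this):

        import numpy as np

        rng = np.random.default_rng(0)
        q, L, beta = 5, 64, 1.0                  # states, lattice size, 1/kT
        s = rng.integers(q, size=(L, L))
        parity = np.add.outer(np.arange(L), np.arange(L)) % 2

        def local_matches(s, values):
            # Agreements between `values` at each site and s's 4 neighbours.
            m = np.zeros(s.shape, dtype=int)
            for ax in (0, 1):
                for shift in (1, -1):
                    m += (values == np.roll(s, shift, axis=ax))
            return m

        def checkerboard_sweep(s):
            # Same-parity sites share no bonds, so each half-sweep is a
            # safe unit of shared-memory (OpenMP-style) parallelism.
            for p in (0, 1):
                proposal = rng.integers(q, size=s.shape)
                dE = local_matches(s, s) - local_matches(s, proposal)  # J = 1
                accept = (parity == p) & (rng.random(s.shape) < np.exp(-beta * dE))
                s[accept] = proposal[accept]

        for _ in range(100):
            checkerboard_sweep(s)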

  5. Latent Variable Regression 4-Level Hierarchical Model Using Multisite Multiple-Cohorts Longitudinal Data. CRESST Report 801

    Science.gov (United States)

    Choi, Kilchan

    2011-01-01

    This report explores a new latent variable regression 4-level hierarchical model for monitoring school performance over time using multisite multiple-cohorts longitudinal data. This kind of data set has a 4-level hierarchical structure: time-series observation nested within students who are nested within different cohorts of students. These…

  6. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on persistent storage of an integrated model. The integrated model refers to a set of MDDT modeling graphics systems that can give a multi-angle, multi-level and multi-stage description of aerospace general-purpose embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is an object model and the storage model is a binary stream. Multi-person parallel modeling addresses the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.

  7. A model of shape memory materials with hierarchical twinning: statics and dynamics

    International Nuclear Information System (INIS)

    Saxena, A.; Bishop, A.R.; Wu, Y.; Lookman, T.

    1995-01-01

    We consider a model of shape memory materials in which hierarchical twinning near the habit plane (austenite-martensite interface) is a new and crucial ingredient. The model includes (1) a triple-well potential (φ^6 model) in the local shear strain, (2) strain gradient terms up to second order in strain and fourth order in gradient, and (3) all symmetry-allowed compositional fluctuation-induced strain gradient terms. The last term favors hierarchy, which enables communication between macroscopic (cm) and microscopic (Å) regions essential for shape memory. Hierarchy also stabilizes tweed formation (criss-cross patterns of twins). External stress or pressure modulates ("patterns") the spacing of domain walls. Therefore the "pattern" is encoded in the modulated hierarchical variation of the depth and width of the twins. This hierarchy of length scales provides a related hierarchy of time scales and thus the possibility of non-exponential decay. The four processes of the complete shape memory cycle (write, record, erase and recall) are explained within this model. Preliminary results based on 2D molecular dynamics are shown for tweed and hierarchy formation. (orig.)

  8. Intraclass Correlation Coefficients in Hierarchical Designs: Evaluation Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko

    2011-01-01

    Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…

  9. Masking effects of speech and music: does the masker's hierarchical structure matter?

    Science.gov (United States)

    Shi, Lu-Feng; Law, Yvonne

    2010-04-01

    Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.

  10. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  11. Oscillatory Critical Amplitudes in Hierarchical Models and the Harris Function of Branching Processes

    Science.gov (United States)

    Costin, Ovidiu; Giacomin, Giambattista

    2013-02-01

    Oscillatory critical amplitudes have been repeatedly observed in hierarchical models and, in the cases that have been taken into consideration, these oscillations are so small as to be hardly detectable. Hierarchical models are tightly related to iteration of maps and, in fact, very similar phenomena have been repeatedly reported in many fields of mathematics, like combinatorial evaluations and discrete branching processes. It is precisely in the context of branching processes with bounded offspring that T. Harris, in 1948, first set forth the possibility that the logarithm of the moment generating function of the rescaled population size, in the super-critical regime, does not grow near infinity as a power, but has an oscillatory prefactor (the Harris function). These oscillations have been observed numerically only much later and, while their origin is clearly tied to the discrete character of the iteration, the amplitude size is not so well understood. The purpose of this note is to reconsider the issue for hierarchical models in what is arguably the most elementary setting, the pinning model, which actually just boils down to iteration of polynomial maps (and, notably, quadratic maps). In this note we show that the oscillatory critical amplitude for pinning models and the Harris function coincide. Moreover we make explicit the link between these oscillatory functions and the geometry of the Julia set of the map, thus making rigorous and quantitative some ideas set forth in Derrida et al. (Commun. Math. Phys. 94:115-132, 1984).

  12. Emotional intelligence is a second-stratum factor of intelligence: evidence from hierarchical and bifactor models.

    Science.gov (United States)

    MacCann, Carolyn; Joseph, Dana L; Newman, Daniel A; Roberts, Richard D

    2014-04-01

    This article examines the status of emotional intelligence (EI) within the structure of human cognitive abilities. To evaluate whether EI is a 2nd-stratum factor of intelligence, data were fit to a series of structural models involving 3 indicators each for fluid intelligence, crystallized intelligence, quantitative reasoning, visual processing, and broad retrieval ability, as well as 2 indicators each for emotion perception, emotion understanding, and emotion management. Unidimensional, multidimensional, hierarchical, and bifactor solutions were estimated in a sample of 688 college and community college students. Results suggest adequate fit for 2 models: (a) an oblique 8-factor model (with 5 traditional cognitive ability factors and 3 EI factors) and (b) a hierarchical solution (with cognitive g at the highest level and EI representing a 2nd-stratum factor that loads onto g at λ = .80). The acceptable relative fit of the hierarchical model confirms the notion that EI is a group factor of cognitive ability, marking the expression of intelligence in the emotion domain. The discussion proposes a possible expansion of Cattell-Horn-Carroll theory to include EI as a 2nd-stratum factor of similar standing to factors such as fluid intelligence and visual processing.

  13. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.

  14. Hierarchical Model Predictive Control for Plug-and-Play Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2012-01-01

    This chapter deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, on the other hand from natural variations in power production e.g. from wind turbines. The proposed method can also be applied to supply chain management systems, where the challenge is to balance demand and supply, using a number of storages each with a maximal…

  15. Learning Hierarchical User Interest Models from Web Pages

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We propose an algorithm for learning hierarchical user interest models according to the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum. In some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so this representation more faithfully reflects users' interests. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models and prune a user interest tree when the number of nodes in it exceeds a given value. Finally, we show the experimental results on a Chinese Web site.

  16. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  17. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  18. How hierarchical is language use?

    Science.gov (United States)

    Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.

    2012-01-01

    It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157

  19. Hierarchical models for informing general biomass equations with felled tree data

    Science.gov (United States)

    Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke

    2015-01-01

    We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...

  20. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the improved VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the already required information on the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
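
    The core of hierarchical kriging is to use a scaled low-fidelity prediction as the trend of the high-fidelity model. A deliberately simplified two-fidelity sketch, assuming scikit-learn's Gaussian process regressor and omitting the paper's polynomial scaling function and adaptive sampling loop:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def f_lo(x):                 # cheap, biased low-fidelity model
            return 0.5 * np.sin(8 * x) + 0.2 * x

        def f_hi(x):                 # expensive high-fidelity model
            return np.sin(8 * x) + x

        X_lo = np.linspace(0, 1, 40)[:, None]      # many cheap samples
        X_hi = np.linspace(0, 1, 6)[:, None]       # few expensive samples

        gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(X_lo, f_lo(X_lo[:, 0]))

        # Least-squares scale factor, then a GP on the high-fidelity residual.
        lo_at_hi = gp_lo.predict(X_hi)
        y_hi = f_hi(X_hi[:, 0])
        rho = (lo_at_hi @ y_hi) / (lo_at_hi @ lo_at_hi)
        gp_res = GaussianProcessRegressor(RBF(0.1)).fit(X_hi, y_hi - rho * lo_at_hi)

        X = np.linspace(0, 1, 200)[:, None]
        vf_pred = rho * gp_lo.predict(X) + gp_res.predict(X)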

  1. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  2. Integration experiences and performance studies of A COTS parallel archive systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsing-bung [Los Alamos National Laboratory; Scott, Cody [Los Alamos National Laboratory; Grider, Bary [Los Alamos National Laboratory; Torres, Aaron [Los Alamos National Laboratory; Turley, Milton [Los Alamos National Laboratory; Sanchez, Kathy [Los Alamos National Laboratory; Bremer, John [Los Alamos National Laboratory

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of

  3. Integration experiments and performance studies of a COTS parallel archive system

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsing-bung [Los Alamos National Laboratory; Scott, Cody [Los Alamos National Laboratory; Grider, Gary [Los Alamos National Laboratory; Torres, Aaron [Los Alamos National Laboratory; Turley, Milton [Los Alamos National Laboratory; Sanchez, Kathy [Los Alamos National Laboratory; Bremer, John [Los Alamos National Laboratory

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address

  4. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. (authors)

  5. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.

  6. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  7. Parallelized Genetic Identification of the Thermal-Electrochemical Model for Lithium-Ion Battery

    Directory of Open Access Journals (Sweden)

    Liqiang Zhang

    2013-01-01

    Full Text Available The parameters of a well-predicted model can be used as health characteristics for a Lithium-ion battery. This article reports a parallelized parameter identification of the thermal-electrochemical model, which significantly reduces the time consumption of parameter identification. Since the P2D model has the most predictability, it is chosen for further research and expanded into the thermal-electrochemical model by coupling the thermal effect and temperature-dependent parameters. A Genetic Algorithm is then used for parameter identification, but it takes too much time because of the long simulation time of the model. For this reason, a computer cluster was built from surplus computing resources in our laboratory based on the Parallel Computing Toolbox and Distributed Computing Server in MATLAB. The performance of two parallelized methods, namely Single Program Multiple Data (SPMD) and parallel FOR loop (PARFOR), is investigated, and then the parallelized GA identification is proposed. With this method, model simulations run in parallel, the parameter identification can be sped up more than a dozen times, and the identification result is better than that from serial GA. This conclusion is validated by model parameter identification of a real LiFePO4 battery.

  8. Badlands: A parallel basin and landscape dynamics model

    Directory of Open Access Journals (Sweden)

    T. Salles

    2016-01-01

    Full Text Available Over more than three decades, a number of numerical landscape evolution models (LEMs) have been developed to study the combined effects of climate, sea-level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed memory parallel environment. Badlands (an acronym for BAsin anD LANdscape DynamicS) is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.

  9. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
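
    In the spirit of the article's MATLAB and R examples, the same embarrassingly parallel pattern in Python (a generic sketch, not the article's code) farms independent Monte Carlo replications out to worker processes:

        import numpy as np
        from multiprocessing import Pool

        def one_replication(seed):
            # One independent simulation run; there is no shared state,
            # which is what makes the workload embarrassingly parallel.
            rng = np.random.default_rng(seed)
            losses = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
            return np.quantile(losses, 0.99)       # e.g. a tail-risk metric

        if __name__ == "__main__":
            with Pool() as pool:                   # defaults to all cores
                q99 = pool.map(one_replication, range(1000))
            print(np.mean(q99), np.std(q99))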

  10. Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

    Science.gov (United States)

    Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.

    2010-01-01

    Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.

  11. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers, so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of processor counts reasonable for this architecture. Furthermore, the modeled efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers.
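
    A minimal sketch of the kind of performance model the paper constructs: execution time decomposed into serial, parallel, and communication components, from which speedup and efficiency follow. The coefficients are illustrative assumptions, not the validated iPSC/860 or Balance 8000 fits.

```python
import numpy as np

# Generic performance model T(P) = serial + parallel/P + communication(P).
# Message passing pays per-processor communication costs; shared memory pays
# only light synchronization. Coefficients below are assumed for illustration.

def time_message_passing(P, t_serial=1.0, t_par=99.0, alpha=0.05, beta=0.4):
    # communication grows with P (e.g., global reductions each iteration)
    return t_serial + t_par / P + alpha * P + beta * np.log2(P)

def time_shared_memory(P, t_serial=1.0, t_par=99.0, gamma=0.02):
    # nearly flat synchronization cost
    return t_serial + t_par / P + gamma * np.log2(P)

T1 = 100.0  # single-processor time
for P in (2, 8, 32, 128):
    for name, model in (("message passing", time_message_passing),
                        ("shared memory  ", time_shared_memory)):
        S = T1 / model(P)
        print(f"{name} P={P:4d}  speedup={S:6.1f}  efficiency={S / P:5.2f}")
```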

  12. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  13. Experiences Using Hybrid MPI/OpenMP in the Real World: Parallelization of a 3D CFD Solver for Multi-Core Node Clusters

    Directory of Open Access Journals (Sweden)

    Gabriele Jost

    2010-01-01

    Full Text Available Today most systems in high-performance computing (HPC) feature a hierarchical hardware design: shared-memory nodes with several multi-core CPUs are connected via a network infrastructure. When parallelizing an application for these architectures it seems natural to employ a hierarchical programming model such as combining MPI and OpenMP. Nevertheless, there is the general lore that pure MPI outperforms the hybrid MPI/OpenMP approach. In this paper, we describe the hybrid MPI/OpenMP parallelization of IR3D (Incompressible Realistic 3-D code), a full-scale real-world application, which simulates the environmental effects on the evolution of vortices trailing behind control surfaces of underwater vehicles. We discuss performance, scalability and limitations of the pure MPI version of the code on a variety of hardware platforms and show how the hybrid approach can help to overcome certain limitations.

  14. iHAT: interactive Hierarchical Aggregation Table for Genetic Association Data

    Directory of Open Access Journals (Sweden)

    Heinrich Julian

    2012-05-01

    Full Text Available Abstract In the search for single-nucleotide polymorphisms which influence the observable phenotype, genome-wide association studies have become an important technique for the identification of associations between genotype and phenotype of a diverse set of sequence-based data. We present a methodology for the visual assessment of single-nucleotide polymorphisms using interactive hierarchical aggregation techniques combined with methods known from traditional sequence browsers and cluster heatmaps. Our tool, the interactive Hierarchical Aggregation Table (iHAT), facilitates the visualization of multiple sequence alignments, associated metadata, and hierarchical clusterings. Different color maps and aggregation strategies as well as filtering options support the user in finding correlations between sequences and metadata. Similar to other visualizations such as parallel coordinates or heatmaps, iHAT relies on the human pattern-recognition ability for spotting patterns that might indicate correlation or anticorrelation. We demonstrate iHAT using artificial and real-world datasets for DNA and protein association studies as well as expression Quantitative Trait Locus data.

  15. A hierarchical lattice spring model to simulate the mechanics of 2-D materials-based composites

    Directory of Open Access Journals (Sweden)

    Lucas eBrely

    2015-07-01

    Full Text Available In the field of engineering materials, strength and toughness are typically two mutually exclusive properties. Structural biological materials such as bone, tendon or dentin have resolved this conflict and show unprecedented damage tolerance, toughness and strength levels. The common feature of these materials is their hierarchical heterogeneous structure, which contributes to increased energy dissipation before failure, occurring at different scale levels. These structural properties are the key to the exceptional mechanical properties of bioinspired materials, in particular nanocomposites. Here, we develop a numerical model in order to simulate the mechanisms involved in damage progression and energy dissipation at different size scales in nano- and macro-composites, which depend both on the heterogeneity of the material and on the type of hierarchical structure. Both these aspects have been incorporated into a 2-dimensional model based on a Lattice Spring Model, accounting for geometrical nonlinearities and including statistically-based fracture phenomena. The model has been validated by comparing numerical results to continuum and fracture mechanics results as well as finite element simulations, and then employed to study how structural aspects impact hierarchical composite material properties. Results obtained with the numerical code highlight the dependence of stress distributions on matrix properties and reinforcement dispersion, geometry and properties, and how failure of sacrificial elements is directly involved in the damage tolerance of the material. Thanks to the rapidly developing field of nanocomposite manufacture, it is already possible to artificially create materials with multi-scale hierarchical reinforcements. The developed code could be a valuable support in the design and optimization of these advanced materials, drawing inspiration from and going beyond biological materials with exceptional mechanical properties.

  16. Performance of a fine-grained parallel model for multi-group nodal-transport calculations in three-dimensional pin-by-pin reactor geometry

    International Nuclear Information System (INIS)

    Masahiro, Tatsumi; Akio, Yamamoto

    2003-01-01

    A production code SCOPE2 was developed based on the fine-grained parallel algorithm by the red/black iterative method, targeting parallel computing environments such as a PC cluster. It can perform a depletion calculation in a few hours using a PC cluster with the model based on a 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry for in-core fuel management of commercial PWRs. The present algorithm guarantees the same convergence process as that in serial execution, which is very important from the viewpoint of quality management. The fine-mesh geometry is constructed by hierarchical decomposition with the introduction of an intermediate management layer, a block, which is a quarter piece of a fuel assembly in the radial direction. A combination of a mesh division scheme forcing even meshes on each edge and a latency-hiding communication algorithm provided simplicity and efficiency to message passing to enhance parallel performance. Inter-processor communication and parallel I/O access were realized using the MPI functions. Parallel performance was measured for depletion calculations by the 9-group nodal-SP3 transport method in 3-dimensional pin-by-pin geometry with 340 x 340 x 26 meshes for full core geometry and 170 x 170 x 26 for quarter core geometry. A PC cluster that consists of 24 Pentium-4 processors connected by Fast Ethernet was used for the performance measurement. Calculations in full core geometry gave better speedups compared to those in quarter core geometry because of larger granularity. Fine-mesh sweep and feedback calculation parts gave almost perfect scalability since granularity is large enough, while 1-group coarse-mesh diffusion acceleration gave only around 80%. The speedup and parallel efficiency for total computation time were 22.6 and 94%, respectively, for the calculation in full core geometry with 24 processors. (authors)
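
    The red/black idea can be illustrated on a simpler problem: each half-sweep touches only cells of one color, so updates within a half-sweep are order-independent and the iteration converges identically however the mesh is partitioned. The sketch below applies it to a 2-D Poisson problem, not the nodal-SP3 transport equations.

```python
import numpy as np

# Red/black Gauss-Seidel for -laplace(u) = f on the unit square with
# homogeneous Dirichlet boundaries. Within a color, every update uses only
# values of the other color, so each half-sweep can be computed in any order
# (or in parallel) with bit-identical results to a serial sweep.

n = 64
u = np.zeros((n, n))
f = np.ones((n, n))
h2 = (1.0 / n) ** 2
red = (np.add.outer(np.arange(n), np.arange(n)) % 2) == 0

for _ in range(500):
    for color in (red, ~red):
        mask = color.copy()
        mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False  # boundary
        nbr = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = 0.25 * (nbr[mask] + h2 * f[mask])

print(f"center value after sweeps: {u[n // 2, n // 2]:.4e}")
```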

  17. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    Science.gov (United States)

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of students' Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on the proportion of variance explained at the teacher-level model. We employed a two-level hierarchical linear model (HLM) with student and teacher data at the level-1 and level-2 models, respectively.…
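
    A hedged sketch of the same two-level structure on synthetic data, using a random-intercept mixed model; the variable names and effect sizes are stand-ins for the study's predictors, and the intraclass correlation plays the role of "proportion of variance explained at the teacher level".

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Two-level HLM sketch: students (level 1) nested in teachers (level 2),
# fit as a random-intercept mixed model. Data are synthetic assumptions.

rng = np.random.default_rng(0)
n_teachers, n_students = 40, 25
teacher = np.repeat(np.arange(n_teachers), n_students)
u = rng.normal(0, 0.3, n_teachers)            # teacher-level random intercepts
ses = rng.normal(0, 1, n_teachers * n_students)
gpa = 2.8 + 0.2 * ses + u[teacher] + rng.normal(0, 0.5, teacher.size)
df = pd.DataFrame({"gpa": gpa, "ses": ses, "teacher": teacher})

result = smf.mixedlm("gpa ~ ses", df, groups=df["teacher"]).fit()
var_teacher = float(result.cov_re.iloc[0, 0])  # level-2 variance component
var_student = result.scale                     # level-1 residual variance
icc = var_teacher / (var_teacher + var_student)
print(result.summary())
print(f"proportion of variance at the teacher level (ICC): {icc:.2f}")
```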

  18. The Potsdam Parallel Ice Sheet Model (PISM-PIK) - Part 1: Model description

    Science.gov (United States)

    Winkelmann, R.; Martin, M. A.; Haseloff, M.; Albrecht, T.; Bueler, E.; Khroulev, C.; Levermann, A.

    2011-09-01

    We present the Potsdam Parallel Ice Sheet Model (PISM-PIK), developed at the Potsdam Institute for Climate Impact Research to be used for simulations of large-scale ice sheet-shelf systems. It is derived from the Parallel Ice Sheet Model (Bueler and Brown, 2009). Velocities are calculated by superposition of two shallow stress balance approximations within the entire ice covered region: the shallow ice approximation (SIA) is dominant in grounded regions and accounts for shear deformation parallel to the geoid. The plug-flow type shallow shelf approximation (SSA) dominates the velocity field in ice shelf regions and serves as a basal sliding velocity in grounded regions. Ice streams can be identified diagnostically as regions with a significant contribution of membrane stresses to the local momentum balance. All lateral boundaries in PISM-PIK are free to evolve, including the grounding line and ice fronts. Ice shelf margins in particular are modeled using Neumann boundary conditions for the SSA equations, reflecting a hydrostatic stress imbalance along the vertical calving face. The ice front position is modeled using a subgrid-scale representation of calving front motion (Albrecht et al., 2011) and a physically-motivated calving law based on horizontal spreading rates. The model is tested in experiments from the Marine Ice Sheet Model Intercomparison Project (MISMIP). A dynamic equilibrium simulation of Antarctica under present-day conditions is presented in Martin et al. (2011).

  19. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    Full Text Available We consider the bounded parallel-batch scheduling problem with two models of deterioration, in which the processing time of a job in the first model is p_j = a_j + αt and in the second model is p_j = a + α_j t. The objective is to minimize the makespan. We present O(n log n) time algorithms for the two single-machine problems, respectively. We also propose fully polynomial-time approximation schemes to solve the identical-parallel-machine problem and the uniform-parallel-machine problem, respectively.
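
    The sketch below shows a plausible O(n log n) rule of the full-batch longest-processing-time (FBLPT) family for the first model, p_j = a_j + αt: sort jobs by non-increasing basic processing time, fill batches of size b consecutively, and process the batches back to back. This batching rule is an assumption typical for such problems, not a transcription of the paper's algorithms.

```python
# FBLPT-style sketch for a single bounded parallel-batch machine with linear
# deterioration p_j = a_j + alpha * t. All jobs in a batch start together, so
# a batch started at time t takes max(a_j in batch) + alpha * t. Sorting
# dominates the running time, giving O(n log n).

def batch_makespan(a, b, alpha):
    """Sort by non-increasing a_j, batch b consecutive jobs, run batches
    sequentially; return the makespan."""
    jobs = sorted(a, reverse=True)
    batches = [jobs[i:i + b] for i in range(0, len(jobs), b)]
    t = 0.0
    for batch in batches:
        t += max(batch) + alpha * t   # completion time of this batch
    return t

a = [5.0, 9.0, 2.0, 7.0, 4.0, 8.0]    # basic processing times (assumed)
print(batch_makespan(a, b=2, alpha=0.1))
```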

  20. Optimal hierarchical control of the nuclear power plant using the PPA

    International Nuclear Information System (INIS)

    Yamagishi, Yasuaki; Tsuji, Masashi; Ogawa, Yuichi

    1986-01-01

    Optimization of a large-scale system is a very difficult task due to the requirement of an extremely large amount of calculation. One effective way to avoid the difficulty is to divide the system into a number of subsystems and carry out the optimization calculation within each subsystem, allowing mutual cooperation among the subsystems to meet the need to optimize the whole system. When an algorithm based on this dividing method is realized on a parallel multi-processor system instead of a conventional serial computer, the efficiency of calculation will be largely improved. In this paper, using the PPA (Parallel Processor Array) located in the "General Purpose Simulator Facility" of this school, and utilizing the hierarchical structure which is furnished within the PPA system, we prepare software for the multilevel optimization system and confirm the effectiveness of the hierarchical algorithm. We then, using the prepared algorithm, design the optimal control of the primary system of a two-loop PWR power plant, and confirm the effectiveness of the designed optimal control. (author)

  1. The Case for a Hierarchical Cosmology

    Science.gov (United States)

    Vaucouleurs, G. de

    1970-01-01

    The development of modern theoretical cosmology is presented and some questionable assumptions of orthodox cosmology are pointed out. Suggests that recent observations indicate that hierarchical clustering is a basic factor in cosmology. The implications of hierarchical models of the universe are considered. Bibliography. (LC)

  2. Market Competitiveness Evaluation of Mechanical Equipment with a Pairwise Comparisons Hierarchical Model.

    Science.gov (United States)

    Hou, Fujun

    2016-01-01

    This paper provides a description of how market competitiveness evaluations concerning mechanical equipment can be made in the context of multi-criteria decision environments. It is assumed that, when we are evaluating market competitiveness, there is a limited number of candidates with some required qualifications, and the alternatives will be pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. Illustration and comparison show that the PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper in applications of PCbHDM.
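
    To make the ratio-scale pairwise comparison step concrete, the sketch below computes priority weights as the principal eigenvector of a reciprocal comparison matrix, with Saaty's consistency index; this illustrates the generic AHP-family machinery on one criterion, not the specific aggregation rules of PCbHDM. The comparison matrix is a made-up example.

```python
import numpy as np

# Ratio-scale pairwise comparison scoring via the principal eigenvector.
# A[i, j] expresses how strongly alternative i beats j; A[j, i] = 1/A[i, j].

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # normalized priority weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)    # Saaty consistency index
print("weights:", np.round(w, 3), " CI:", round(CI, 3))
```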

  3. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    Science.gov (United States)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that
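
    A minimal sketch of the adaptive MCMC ingredient (in the spirit of Haario et al.'s adaptive Metropolis), where the proposal covariance is re-estimated from the chain history; the target below is a toy banana-shaped density standing in for the sqHIMMELI likelihood.

```python
import numpy as np

# Adaptive Metropolis sketch: after a burn-in period, the random-walk
# proposal covariance is taken from the empirical covariance of the chain
# so far, scaled by the classic 2.38^2/d factor.

rng = np.random.default_rng(0)

def log_target(x):
    # toy banana-shaped density, a stand-in for a real model likelihood
    return -0.5 * (x[0]**2 / 10.0 + (x[1] + 0.5 * x[0]**2)**2)

def adaptive_metropolis(n_iter=20_000, adapt_start=1000, eps=1e-8):
    d = 2
    x = np.zeros(d)
    lp = log_target(x)
    chain = np.empty((n_iter, d))
    cov = np.eye(d) * 0.1
    scale = 2.38**2 / d
    for i in range(n_iter):
        if i > adapt_start:                          # adapt from history
            cov = np.cov(chain[:i].T) + eps * np.eye(d)
        prop = rng.multivariate_normal(x, scale * cov)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:      # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = adaptive_metropolis()
print("posterior mean:", chain[len(chain) // 2:].mean(axis=0).round(2))
```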

  4. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    Science.gov (United States)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
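
    The Equal Load Sharing picture can be made concrete with a classic fiber-bundle sketch: N parallel wires with Weibull-distributed failure strains, loaded in strain control, with broken wires shedding load uniformly onto the survivors, which gives σ(ε) = Eε(1 − F(ε)) in the large-N limit. Helical geometry and hierarchy, which the paper adds, are not modeled here; parameter values are assumptions.

```python
import numpy as np

# Equal Load Sharing (ELS) fiber-bundle sketch: bundle stress at strain eps
# equals E * eps times the surviving fraction of wires, producing the
# characteristic nonlinear, post-elastic stress-strain curve.

rng = np.random.default_rng(0)
N, E = 10_000, 1.0                  # number of wires, elastic modulus
shape, scale = 4.0, 0.02            # Weibull parameters (assumed)
strength = scale * rng.weibull(shape, N)    # failure strains of the wires

eps = np.linspace(0.0, 0.05, 200)
sigma = np.array([E * e * (strength > e).mean() for e in eps])

peak = sigma.argmax()
print(f"peak stress {sigma[peak]:.4f} at strain {eps[peak]:.4f}")
```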

  5. Quantum Ising model on hierarchical structures

    International Nuclear Information System (INIS)

    Lin Zhifang; Tao Ruibao.

    1989-11-01

    A quantum Ising chain with both the exchange couplings and the transverse fields arranged in a hierarchical way is considered. Exact analytical results for the critical line and energy gap are obtained. It is shown that when R_1 ≠ R_2, where R_1 and R_2 are the hierarchical parameters for the exchange couplings and the transverse fields, respectively, the system undergoes a phase transition in a different universality class from the pure quantum Ising chain with R_1 = R_2 = 1. On the other hand, when R_1 = R_2 = R, there exists a critical value R_c dependent on the furcating number of the hierarchy. For R > R_c, the system is shown to exhibit an Ising-like critical point with the same critical behaviour as in the pure case, while for R < R_c the system belongs to another universality class. (author). 19 refs, 2 figs

  6. Unified Singularity Modeling and Reconfiguration of 3rTPS Metamorphic Parallel Mechanisms with Parallel Constraint Screws

    Directory of Open Access Journals (Sweden)

    Yufeng Zhuang

    2015-01-01

    Full Text Available This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies by altering the limb phases, with mobility change among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of the class of metamorphic parallel mechanisms with parallel constraint screws, which shows simple geometric constraints with potential simple kinematics and dynamics properties.

  7. Co-simulation of dynamic systems in parallel and serial model configurations

    International Nuclear Information System (INIS)

    Sweafford, Trevor; Yoon, Hwan Sik

    2013-01-01

    Recent advancements in simulation software and computation hardware make it feasible to simulate complex dynamic systems comprised of multiple submodels developed in different modeling languages. The so-called co-simulation enables one to study various aspects of a complex dynamic system with heterogeneous submodels in a cost-effective manner. Among several different model configurations for co-simulation, the synchronized parallel configuration is regarded to expedite the simulation process by simulating multiple submodels concurrently on a multi-core processor. In this paper, computational accuracy as well as computation time is studied for three different co-simulation frameworks: integrated, serial, and parallel. For this purpose, analytical evaluations of the three different methods are made using the explicit Euler method, and then they are applied to two-DOF mass-spring systems. The results show that while the parallel simulation configuration produces the same accurate results as the integrated configuration, results of the serial configuration show a slight deviation. It is also shown that the computation time can be reduced by running the simulation in the parallel configuration. Therefore, it can be concluded that the synchronized parallel simulation methodology is the best for both simulation accuracy and time efficiency.
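
    The qualitative finding can be reproduced with a few lines of explicit Euler on a two-DOF mass-spring system: a parallel (Jacobi-type) exchange, in which both subsystems use each other's previous-step states, coincides with monolithic integration, while a serial (Gauss-Seidel-type) exchange deviates slightly. Parameter values below are assumptions.

```python
import numpy as np

# Co-simulation sketch: subsystem 1 is (m1, k1) coupled to subsystem 2 (m2)
# through spring k2. With explicit Euler, exchanging previous-step coupling
# states ("parallel") is algebraically identical to the integrated scheme;
# using the freshly updated state ("serial") introduces a small deviation.

m1 = m2 = 1.0
k1 = k2 = 10.0
dt, n = 1e-3, 20_000

def accel1(x1, x2):
    return (-k1 * x1 + k2 * (x2 - x1)) / m1

def accel2(x1, x2):
    return (-k2 * (x2 - x1)) / m2

def simulate(mode):
    x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
    out = np.empty(n)
    for i in range(n):
        a1 = accel1(x1, x2)
        x1_new, v1_new = x1 + dt * v1, v1 + dt * a1
        # serial: subsystem 2 already sees subsystem 1's updated position
        x1_for_2 = x1_new if mode == "serial" else x1
        a2 = accel2(x1_for_2, x2)
        x2, v2 = x2 + dt * v2, v2 + dt * a2
        x1, v1 = x1_new, v1_new
        out[i] = x2
    return out

ref = simulate("parallel")   # identical to monolithic explicit Euler
dev = np.abs(simulate("serial") - ref).max()
print(f"max deviation of serial vs parallel/integrated: {dev:.3e}")
```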

  8. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  9. Parallelization of simulation code for liquid-gas model of lattice-gas fluid

    International Nuclear Information System (INIS)

    Kawai, Wataru; Ebihara, Kenichi; Kume, Etsuo; Watanabe, Tadashi

    2000-03-01

    A simulation code for hydrodynamical phenomena based on the liquid-gas model of lattice-gas fluid is parallelized using the MPI (Message Passing Interface) library. The parallelized code can be applied to larger simulations than the non-parallelized code. The calculation times of the parallelized code on the VPP500 (a vector-parallel supercomputer with distributed memory), the AP3000 (a scalar-parallel server with distributed memory), and a workstation cluster decreased in inverse proportion to the number of processors. (author)

  10. The Hierarchical Factor Model of ADHD: Invariant across Age and National Groupings?

    Science.gov (United States)

    Toplak, Maggie E.; Sorge, Geoff B.; Flora, David B.; Chen, Wai; Banaschewski, Tobias; Buitelaar, Jan; Ebstein, Richard; Eisenberg, Jacques; Franke, Barbara; Gill, Michael; Miranda, Ana; Oades, Robert D.; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph; Sonuga-Barke, Edmund; Steinhausen, Hans-Christoph; Thompson, Margaret; Tannock, Rosemary; Asherson, Philip; Faraone, Stephen V.

    2012-01-01

    Objective: To examine the factor structure of attention-deficit/hyperactivity disorder (ADHD) in a clinical sample of 1,373 children and adolescents with ADHD and their 1,772 unselected siblings recruited from different countries across a large age range. Hierarchical and correlated factor analytic models were compared separately in the ADHD and…

  11. Perfect observables for the hierarchical non-linear O(N)-invariant σ-model

    International Nuclear Information System (INIS)

    Wieczerkowski, C.; Xylander, Y.

    1995-05-01

    We compute moving eigenvalues and the eigenvectors of the linear renormalization group transformation for observables along the renormalized trajectory of the hierarchical non-linear O(N)-invariant σ-model by means of perturbation theory in the running coupling constant. Moving eigenvectors are defined as solutions to a Callan-Symanzik type equation. (orig.)

  12. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows the use of these models for the detection of human poses in a project for gait analysis, and for activity detection in the design of environments for ageing, identifying activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  13. Galactic chemical evolution in hierarchical formation models - I. Early-type galaxies in the local Universe

    NARCIS (Netherlands)

    Arrigoni, Matías; Trager, Scott C.; Somerville, Rachel S.; Gibson, Brad K.

    We study the metallicities and abundance ratios of early-type galaxies in cosmological semi-analytic models (SAMs) within the hierarchical galaxy formation paradigm. To achieve this we implemented a detailed galactic chemical evolution model and can now predict abundances of individual elements for

  14. Galactic chemical evolution in hierarchical formation models : I. Early-type galaxies in the local Universe

    NARCIS (Netherlands)

    Arrigoni, Matias; Trager, Scott C.; Somerville, Rachel S.; Gibson, Brad K.

    2010-01-01

    We study the metallicities and abundance ratios of early-type galaxies in cosmological semi-analytic models (SAMs) within the hierarchical galaxy formation paradigm. To achieve this we implemented a detailed galactic chemical evolution model and can now predict abundances of individual elements for

  15. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Full Text Available Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional (e.g., ∼6,000 in a leading model) feature space and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models, which is to categorize objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures and statistical operations on these PDs may underlie the human ability to rapidly categorize scenes.

  16. Modeling and Control of Primary Parallel Isolated Boost Converter

    DEFF Research Database (Denmark)

    Mira Albert, Maria del Carmen; Hernandez Botella, Juan Carlos; Sen, Gökhan

    2012-01-01

    In this paper, state-space modeling and closed-loop controlled operation have been presented for the primary parallel isolated boost converter (PPIBC) topology as a battery charging unit. Parasitic resistances have been included to obtain an accurate dynamic model. The accuracy of the model has been...

  17. Modeling evolutionary dynamics of epigenetic mutations in hierarchically organized tumors.

    Directory of Open Access Journals (Sweden)

    Andrea Sottoriva

    2011-05-01

    Full Text Available The cancer stem cell (CSC) concept is a highly debated topic in cancer research. While experimental evidence in favor of the cancer stem cell theory is apparently abundant, the results are often criticized as being difficult to interpret. An important reason for this is that most experimental data that support this model rely on transplantation studies. In this study, we use a novel cellular Potts model to elucidate the dynamics of established malignancies that are driven by a small subset of CSCs. Our results demonstrate that epigenetic mutations that occur during mitosis display highly altered dynamics in CSC-driven malignancies compared to a classical, non-hierarchical model of growth. In particular, the heterogeneity observed in CSC-driven tumors is considerably higher. We speculate that this feature could be used in combination with epigenetic (methylation) sequencing studies of human malignancies to prove or refute the CSC hypothesis in established tumors without the need for transplantation. Moreover, our tumor growth simulations indicate that CSC-driven tumors display evolutionary features that can be considered beneficial during tumor progression. Besides an increased heterogeneity, they also exhibit properties that allow the escape of clones from local fitness peaks. This leads to more aggressive phenotypes in the long run and makes the neoplasm more adaptable to stringent selective forces such as cancer treatment. Indeed, when therapy is applied, the clone landscape of the regrown tumor is more aggressive with respect to the primary tumor, whereas the classical model demonstrated similar patterns before and after therapy. Understanding these often counter-intuitive fundamental properties of (non-)hierarchically organized malignancies is a crucial step in validating the CSC concept as well as providing insight into the therapeutical consequences of this model.

  18. Hierarchical Self Assembly of Patterns from the Robinson Tilings: DNA Tile Design in an Enhanced Tile Assembly Model.

    Science.gov (United States)

    Padilla, Jennifer E; Liu, Wenyan; Seeman, Nadrian C

    2012-06-01

    We introduce a hierarchical self assembly algorithm that produces the quasiperiodic patterns found in the Robinson tilings and suggest a practical implementation of this algorithm using DNA origami tiles. We modify the abstract Tile Assembly Model (aTAM) to include active signaling and glue activation in response to signals to coordinate the hierarchical assembly of Robinson patterns of arbitrary size from a small set of tiles according to the tile substitution algorithm that generates them. Enabling coordinated hierarchical assembly in the aTAM makes possible the efficient encoding of the recursive process of tile substitution.

  19. Deep hierarchical attention network for video description

    Science.gov (United States)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene to a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to the encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms state-of-the-art techniques.
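
    The two-level attention idea can be sketched compactly: attention pooling first within short chunks of frame features, then over the resulting chunk summaries. The dot-product scoring and the shapes below are assumptions for illustration, not the paper's exact network.

```python
import numpy as np

# Hierarchical attention sketch: a decoder-state query first pools frames
# within each chunk into a summary, then pools the chunk summaries into a
# single global context vector.

rng = np.random.default_rng(0)

def attend(query, keys):
    """Scaled dot-product attention pooling: weighted sum of key vectors."""
    scores = keys @ query / np.sqrt(query.size)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ keys

d, frames_per_chunk, n_chunks = 64, 16, 8
frames = rng.normal(size=(n_chunks, frames_per_chunk, d))  # CNN frame features
q_frame = rng.normal(size=d)     # decoder state as the frame-level query
q_chunk = rng.normal(size=d)     # decoder state as the chunk-level query

chunk_summaries = np.stack([attend(q_frame, chunk) for chunk in frames])
context = attend(q_chunk, chunk_summaries)   # global context for the decoder
print("context vector shape:", context.shape)
```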

  20. TYPE Ia SUPERNOVA LIGHT CURVE INFERENCE: HIERARCHICAL MODELS IN THE OPTICAL AND NEAR-INFRARED

    International Nuclear Information System (INIS)

    Mandel, Kaisey S.; Narayan, Gautham; Kirshner, Robert P.

    2011-01-01

    We have constructed a comprehensive statistical model for Type Ia supernova (SN Ia) light curves spanning optical through near-infrared (NIR) data. A hierarchical framework coherently models multiple random and uncertain effects, including intrinsic supernova (SN) light curve covariances, dust extinction and reddening, and distances. An improved BAYESN Markov Chain Monte Carlo code computes probabilistic inferences for the hierarchical model by sampling the global probability density of parameters describing individual SNe and the population. We have applied this hierarchical model to optical and NIR data of 127 SNe Ia from PAIRITEL, CfA3, Carnegie Supernova Project, and the literature. We find an apparent population correlation between the host galaxy extinction A_V and the ratio of total-to-selective dust absorption R_V. For SNe with low dust extinction, A_V ≲ 0.4, we find R_V ≈ 2.5-2.9, while at high extinctions, A_V ≳ 1, low values of R_V < 2 are favored. The NIR luminosities are excellent standard candles and are less sensitive to dust extinction. They exhibit low correlation with optical peak luminosities, and thus provide independent information on distances. The combination of NIR and optical data constrains the dust extinction and improves the predictive precision of individual SN Ia distances by about 60%. Using cross-validation, we estimate an rms distance modulus prediction error of 0.11 mag for SNe with optical and NIR data versus 0.15 mag for SNe with optical data alone. Continued study of SNe Ia in the NIR is important for improving their utility as precise and accurate cosmological distance indicators.

  1. Hierarchical Models for Type Ia Supernova Light Curves in the Optical and Near Infrared

    Science.gov (United States)

    Mandel, Kaisey; Narayan, G.; Kirshner, R. P.

    2011-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova optical and near infrared light curves. Since the near infrared light curves are excellent standard candles and are less sensitive to dust extinction and reddening, the combination of near infrared and optical data better constrains the host galaxy extinction and improves the precision of distance predictions to SN Ia. A hierarchical probabilistic model coherently accounts for multiple random and uncertain effects, including photometric error, intrinsic supernova light curve variations and correlations across phase and wavelength, dust extinction and reddening, peculiar velocity dispersion and distances. An improved BayeSN MCMC code is implemented for computing probabilistic inferences for individual supernovae and the SN Ia and host galaxy dust populations. I use this hierarchical model to analyze nearby Type Ia supernovae with optical and near infrared data from the PAIRITEL, CfA3, and CSP samples and the literature. Using cross-validation to test the robustness of the model predictions, I find that the rms Hubble diagram scatter of predicted distance moduli is 0.11 mag for SN with optical and near infrared data versus 0.15 mag for SN with only optical data. Accounting for the dispersion expected from random peculiar velocities, the rms intrinsic prediction error is 0.08-0.10 mag for SN with both optical and near infrared light curves. I discuss results for the inferred intrinsic correlation structures of the optical-NIR SN Ia light curves and the host galaxy dust distribution captured by the hierarchical model. The continued observation and analysis of Type Ia SN in the optical and near infrared is important for improving their utility as precise and accurate cosmological distance indicators.

  2. Discrete and Continuous Optimization Based on Hierarchical Artificial Bee Colony Optimizer

    Directory of Open Access Journals (Sweden)

    Lianbo Ma

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), to tackle complex high-dimensional problems. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. At the bottom level, each subpopulation employs the canonical ABC method to search its part of the dimensions in parallel; the partial solutions are then combined into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 20 continuous and discrete benchmark problems. The experimental results demonstrate the remarkable performance of the HABC algorithm when compared with six other evolutionary algorithms.

  3. Hierarchical Bayesian modeling of the space - time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Osei, F.B.; Duker, Alfred A.; Stein, A.

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  4. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is a deep learning algorithm that is used in this project, whose runtime performance is improved through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing where directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.
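
    For reference, a plain NumPy sketch of one CD-1 update for a binary RBM, the serial baseline that OpenACC, OpenMP, or CUDA versions accelerate (the matrix products dominate the runtime); layer sizes are toy assumptions.

```python
import numpy as np

# One Contrastive Divergence (CD-1) update for a binary RBM: positive phase
# from the data, one Gibbs step for the negative phase, then a gradient step
# on the weights and biases.

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 784, 128, 0.1
W = rng.normal(0, 0.01, (n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step on a mini-batch v0 of shape (batch, n_vis)."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                   # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)                 # reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)                  # negative phase
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    return np.mean((v0 - pv1) ** 2)               # reconstruction error

v0 = (rng.random((64, n_vis)) < 0.3).astype(float)
print("reconstruction MSE:", cd1_update(v0))
```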

  5. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.; Khan, Ayaz H.

    2017-01-01

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is a deep learning algorithm that is used in this project, whose runtime performance is improved through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing where directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available using different models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.

  6. Improving Hierarchical Models Using Historical Data with Applications in High-Throughput Genomics Data Analysis.

    Science.gov (United States)

    Li, Ben; Li, Yunxiao; Qin, Zhaohui S

    2017-06-01

    Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, thus the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available which should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrated superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.

  7. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Chad Babcock; Andrew O. Finley; John B. Bradford; Randy Kolka; Richard Birdsey; Michael G. Ryan

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both...

  8. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  9. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    Science.gov (United States)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  10. Parallelization of a hydrological model using the message passing interface

    Science.gov (United States)

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement diminishes due to the accompanying increase in message-passing procedures between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
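
    A hedged sketch of the master/worker split described for P-SWAT, using mpi4py: rank 0 keeps a configurable share of the work and scatters the rest across the remaining ranks. The "subbasin" computation is a placeholder, not actual SWAT routing; the file name in the usage note is hypothetical. Run with, e.g., mpiexec -n 4 python pswat_sketch.py.

```python
from mpi4py import MPI

# Master/worker work distribution: rank 0 keeps master_share of the tasks
# (the tunable percentage the study optimizes) and scatters the remainder
# round-robin to the other ranks; results are gathered back on rank 0.

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def simulate_subbasin(i):
    return i * i              # placeholder for one subbasin's computation

subbasins = list(range(100))
master_share = 0.1            # fraction of work kept on the master process

if rank == 0:
    n0 = int(master_share * len(subbasins))
    rest = subbasins[n0:]
    if size > 1:
        chunks = [subbasins[:n0]] + [rest[i::size - 1] for i in range(size - 1)]
    else:
        chunks = [subbasins]
else:
    chunks = None

mine = comm.scatter(chunks, root=0)
partial = [simulate_subbasin(i) for i in mine]
results = comm.gather(partial, root=0)

if rank == 0:
    flat = [r for part in results for r in part]
    print(f"processed {len(flat)} subbasins on {size} ranks")
```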

  11. A Programming Model for Massive Data Parallelism with Data Dependencies

    International Nuclear Information System (INIS)

    Cui, Xiaohui; Mueller, Frank; Potok, Thomas E.; Zhang, Yongpeng

    2009-01-01

    Accelerating processors can often be more cost and energy effective for a wide range of data-parallel computing problems than general-purpose processors. For graphics processor units (GPUs), this is particularly the case when program development is aided by environments such as NVIDIA's Compute Unified Device Architecture (CUDA), which dramatically reduces the gap between domain-specific architectures and general purpose programming. Nonetheless, general-purpose GPU (GPGPU) programming remains subject to several restrictions. Most significantly, the separation of host (CPU) and accelerator (GPU) address spaces requires explicit management of GPU memory resources, especially for massive data parallelism that well exceeds the memory capacity of GPUs. One solution to this problem is to transfer data between the GPU and host memories frequently. In this work, we investigate another approach: we run massively data-parallel applications on GPU clusters. We further propose a programming model for massive data parallelism with data dependencies for this scenario. Experience from micro benchmarks and real-world applications shows that our model provides not only ease of programming but also significant performance gains.

  12. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

    Full Text Available The large-scale adoption of electric vehicles (EVs) and hybrid renewable energy systems (HRESs), together with increasing loads, will bring significant challenges to the microgrid. A methodology to model microgrids with high EV and HRES penetration is the key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, no single previous modelling approach is sufficient. Therefore, in this paper, a methodology named the Hierarchical Agent-based Integrated Modelling Approach (HAIMA) is proposed. With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the assessment of the proposed model and the impact of EV adoption are achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  13. Development of Parallel Code for the Alaska Tsunami Forecast Model

    Science.gov (United States)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  14. Final Report of Optimization Algorithms for Hierarchical Problems, with Applications to Nanoporous Materials

    Energy Technology Data Exchange (ETDEWEB)

    Nash, Stephen G.

    2013-11-11

    The research focused on the modeling and optimization of nanoporous materials. In the systems with hierarchical structure that we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research was to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research had two major thrusts. The first was hierarchical modeling: we developed and studied hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyzed the general hierarchical model, as well as the specific application models, to determine their properties, particularly those properties that are relevant to the hierarchical optimization algorithms. The second thrust was to develop, analyze, and implement a class of hierarchical optimization algorithms, and apply them to the hierarchical models we have developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for

  15. Anti-hierarchical evolution of the active galactic nucleus space density in a hierarchical universe

    International Nuclear Information System (INIS)

    Enoki, Motohiro; Ishiyama, Tomoaki; Kobayashi, Masakazu A. R.; Nagashima, Masahiro

    2014-01-01

    Recent observations show that the space density of luminous active galactic nuclei (AGNs) peaks at higher redshifts than that of faint AGNs. This downsizing trend in the AGN evolution seems to be contradictory to the hierarchical structure formation scenario. In this study, we present the AGN space density evolution predicted by a semi-analytic model of galaxy and AGN formation based on the hierarchical structure formation scenario. We demonstrate that our model can reproduce the downsizing trend of the AGN space density evolution. The reason for the downsizing trend in our model is a combination of the cold gas depletion as a consequence of star formation, the gas cooling suppression in massive halos, and the AGN lifetime scaling with the dynamical timescale. We assume that a major merger of galaxies causes a starburst, spheroid formation, and cold gas accretion onto a supermassive black hole (SMBH). We also assume that this cold gas accretion triggers AGN activity. Since the cold gas is mainly depleted by star formation and gas cooling is suppressed in massive dark halos, the amount of cold gas accreted onto SMBHs decreases with cosmic time. Moreover, AGN lifetime increases with cosmic time. Thus, at low redshifts, major mergers do not always lead to luminous AGNs. Because the luminosity of AGNs is correlated with the mass of accreted gas onto SMBHs, the space density of luminous AGNs decreases more quickly than that of faint AGNs. We conclude that the anti-hierarchical evolution of the AGN space density is not contradictory to the hierarchical structure formation scenario.

  16. Anti-hierarchical evolution of the active galactic nucleus space density in a hierarchical universe

    Energy Technology Data Exchange (ETDEWEB)

    Enoki, Motohiro [Faculty of Business Administration, Tokyo Keizai University, Kokubunji, Tokyo 185-8502 (Japan); Ishiyama, Tomoaki [Center for Computational Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Kobayashi, Masakazu A. R. [Research Center for Space and Cosmic Evolution, Ehime University, Matsuyama, Ehime 790-8577 (Japan); Nagashima, Masahiro, E-mail: enokimt@tku.ac.jp [Faculty of Education, Nagasaki University, Nagasaki, Nagasaki 852-8521 (Japan)

    2014-10-10

    Recent observations show that the space density of luminous active galactic nuclei (AGNs) peaks at higher redshifts than that of faint AGNs. This downsizing trend in the AGN evolution seems to be contradictory to the hierarchical structure formation scenario. In this study, we present the AGN space density evolution predicted by a semi-analytic model of galaxy and AGN formation based on the hierarchical structure formation scenario. We demonstrate that our model can reproduce the downsizing trend of the AGN space density evolution. The reason for the downsizing trend in our model is a combination of the cold gas depletion as a consequence of star formation, the gas cooling suppression in massive halos, and the AGN lifetime scaling with the dynamical timescale. We assume that a major merger of galaxies causes a starburst, spheroid formation, and cold gas accretion onto a supermassive black hole (SMBH). We also assume that this cold gas accretion triggers AGN activity. Since the cold gas is mainly depleted by star formation and gas cooling is suppressed in massive dark halos, the amount of cold gas accreted onto SMBHs decreases with cosmic time. Moreover, AGN lifetime increases with cosmic time. Thus, at low redshifts, major mergers do not always lead to luminous AGNs. Because the luminosity of AGNs is correlated with the mass of accreted gas onto SMBHs, the space density of luminous AGNs decreases more quickly than that of faint AGNs. We conclude that the anti-hierarchical evolution of the AGN space density is not contradictory to the hierarchical structure formation scenario.

  17. The hierarchical nature of the spin alignment of dark matter haloes in filaments

    Science.gov (United States)

    Aragon-Calvo, M. A.; Yang, Lin Forrest

    2014-05-01

    Dark matter haloes in cosmological filaments and walls have (on average) their spin vector aligned with their host structure. While haloes in walls are aligned with the plane of the wall independently of their mass, haloes in filaments present a mass-dependent two-regime orientation. Here, we show that the transition mass determining the change in the alignment regime (from parallel to perpendicular) depends on the hierarchical level in which the halo is located, reflecting the hierarchical nature of the Cosmic Web. By explicitly exposing the hierarchical structure of the Cosmic Web, we are able to identify the contributions of different components of the filament network to the alignment signal. We propose a unifying picture of angular momentum acquisition that is based on the results presented here and previous results found by other authors. In order to perform a hierarchical characterization of the Cosmic Web, we introduce a new implementation of the multiscale morphology filter, the MMF-2, that significantly improves the identification of structures and explicitly describes their hierarchy.

  18. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  19. Discovering hierarchical structure in normal relational data

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Herlau, Tue; Mørup, Morten

    2014-01-01

    We propose a non-parametric generative model for hierarchical clustering of similarity based on multifurcating Gibbs fragmentation trees. This allows us to infer and display the posterior distribution of hierarchical structures that comply with the data. We demonstrate the utility of our method on synthetic data and data of functional...

  20. Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

    KAUST Repository

    Halim Boukaram, Wajih

    2017-09-14

    We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.
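
    The building block batched on the GPU is the one-sided Jacobi SVD of a small matrix; a plain serial NumPy sketch of that kernel (illustrative, not the KAUST implementation) looks like this:

```python
# One-sided Jacobi SVD: rotate column pairs until they are mutually orthogonal.
# Serial sketch for small full-rank matrices; singular values are left
# unsorted and rank deficiency is not handled.
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    A = A.astype(float).copy()
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                t = (1.0 if zeta == 0.0
                     else np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta**2)))
                c = 1.0 / np.sqrt(1.0 + t**2)
                s = c * t
                G = np.array([[c, s], [-s, c]])      # Jacobi plane rotation
                A[:, [p, q]] = A[:, [p, q]] @ G
                V[:, [p, q]] = V[:, [p, q]] @ G
        if off < tol:
            break
    sigma = np.linalg.norm(A, axis=0)   # column norms are the singular values
    return A / sigma, sigma, V          # U, sigma, V

A = np.random.default_rng(0).normal(size=(6, 4))
U, s, V = one_sided_jacobi_svd(A)
assert np.allclose((U * s) @ V.T, A)
```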

  1. Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

    KAUST Repository

    Halim Boukaram, Wajih; Turkiyyah, George; Ltaief, Hatem; Keyes, David E.

    2017-01-01

    We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.

  2. Hierarchical modelling of line commutated power systems used in particle accelerators using Saber

    International Nuclear Information System (INIS)

    Reimund, J.A.

    1993-01-01

    This paper discusses the use of hierarchical simulation models using the program Saber trademark for the prediction of magnet ripple currents generated by the power supply/output filter combination. Modeling of an entire power system connected to output filters and particle accelerator ring magnets will be presented. Special emphasis is placed on the modeling of power source imbalances caused by transformer impedance imbalances and utility variances. The effect that these imbalances have on the harmonic content of ripple current is also investigated.

  3. The Potsdam Parallel Ice Sheet Model (PISM-PIK) – Part 1: Model description

    Directory of Open Access Journals (Sweden)

    R. Winkelmann

    2011-09-01

    We present the Potsdam Parallel Ice Sheet Model (PISM-PIK), developed at the Potsdam Institute for Climate Impact Research to be used for simulations of large-scale ice sheet-shelf systems. It is derived from the Parallel Ice Sheet Model (Bueler and Brown, 2009). Velocities are calculated by superposition of two shallow stress balance approximations within the entire ice covered region: the shallow ice approximation (SIA) is dominant in grounded regions and accounts for shear deformation parallel to the geoid. The plug-flow type shallow shelf approximation (SSA) dominates the velocity field in ice shelf regions and serves as a basal sliding velocity in grounded regions. Ice streams can be identified diagnostically as regions with a significant contribution of membrane stresses to the local momentum balance. All lateral boundaries in PISM-PIK are free to evolve, including the grounding line and ice fronts. Ice shelf margins in particular are modeled using Neumann boundary conditions for the SSA equations, reflecting a hydrostatic stress imbalance along the vertical calving face. The ice front position is modeled using a subgrid-scale representation of calving front motion (Albrecht et al., 2011) and a physically-motivated calving law based on horizontal spreading rates. The model is tested in experiments from the Marine Ice Sheet Model Intercomparison Project (MISMIP). A dynamic equilibrium simulation of Antarctica under present-day conditions is presented in Martin et al. (2011).

  4. Hierarchical modeling of plasma and transport phenomena in a dielectric barrier discharge reactor

    Science.gov (United States)

    Bali, N.; Aggelopoulos, C. A.; Skouras, E. D.; Tsakiroglou, C. D.; Burganos, V. N.

    2017-12-01

    A novel dual-time hierarchical approach is developed to link the plasma process to macroscopic transport phenomena in the interior of a dielectric barrier discharge (DBD) reactor that has been used for soil remediation (Aggelopoulos et al 2016 Chem. Eng. J. 301 353-61). The generation of active species by plasma reactions is simulated at the microsecond (µs) timescale, whereas convection and thermal conduction are simulated at the macroscopic (minutes) timescale. This hierarchical model is implemented in order to investigate the influence of the plasma DBD process on the transport and reaction mechanisms during remediation of polluted soil. In the microscopic model, the variables of interest include the plasma-induced reactive concentrations, while in the macroscopic approach they include the temperature distribution and the velocity field, both inside the discharge gap and within the polluted soil. For the latter model, the Navier-Stokes and Darcy-Brinkman equations for the transport phenomena in the porous domain are solved numerically using FEM software. Effective medium theory is employed to provide estimates of the effective, time-evolving, three-phase transport properties in the soil sample. Model predictions of the temporal evolution of the plasma remediation process are presented and compared with corresponding experimental data.
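
    The dual-time idea can be summarized in a few lines: integrate the fast chemistry to a quasi-steady state inside each slow step, then feed the result to the macroscopic transport update. All rates, scales, and boundary treatments below are hypothetical placeholders, not the paper's equations.

```python
# Dual-timescale operator splitting: microsecond chemistry inside minute-scale
# transport steps. Rates, geometry, and periodic boundaries are illustrative.
import numpy as np

def micro_plasma_step(c, production=1e3, loss=5e2, dt=1e-6, n=2000):
    for _ in range(n):                      # microsecond-scale kinetics
        c += dt * (production - loss * c)
    return c                                # drives toward quasi-steady value

def macro_transport_step(pollutant, oxidant, D=1e-2, k=0.05, dt=1.0):
    lap = np.roll(pollutant, 1) + np.roll(pollutant, -1) - 2 * pollutant
    return pollutant + dt * (D * lap - k * oxidant * pollutant)

pollutant = np.ones(50)                     # 1D soil column, arbitrary units
oxidant = 0.0
for _ in range(60):                         # one minute of macroscopic time
    oxidant = micro_plasma_step(oxidant)    # fast scale, frozen slow fields
    pollutant = macro_transport_step(pollutant, oxidant)
```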

  5. Fear of Failure, 2x2 Achievement Goal and Self-Handicapping: An Examination of the Hierarchical Model of Achievement Motivation in Physical Education

    Science.gov (United States)

    Chen, Lung Hung; Wu, Chia-Huei; Kee, Ying Hwa; Lin, Meng-Shyan; Shui, Shang-Hsueh

    2009-01-01

    In this study, the hierarchical model of achievement motivation [Elliot, A. J. (1997). Integrating the "classic" and "contemporary" approaches to achievement motivation: A hierarchical model of approach and avoidance achievement motivation. In P. Pintrich & M. Maehr (Eds.), "Advances in motivation and achievement"…

  6. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
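
    The communication pattern at the heart of such a parallel cellular automaton is a ghost-cell (halo) exchange between neighboring subdomains. A minimal 1D sketch with mpi4py follows; the actual model uses 3D decomposed grids, grain-capture rules, and dynamic load balancing, none of which are reproduced here.

```python
# Halo-exchange skeleton for a parallel CA (run with: mpiexec -n 4 python ...).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                                # interior cells per rank
state = np.zeros(n_local + 2, dtype=int)     # two ghost cells at the ends
if rank == 0:
    state[1] = 1                             # hypothetical nucleated grain id

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(200):
    # exchange ghost cells with neighbors before each update
    comm.Sendrecv(state[1:2], dest=left, recvbuf=state[-1:], source=right)
    comm.Sendrecv(state[-2:-1], dest=right, recvbuf=state[0:1], source=left)
    grown = state.copy()
    for i in range(1, n_local + 1):          # empty cells captured by neighbors
        if state[i] == 0 and (state[i - 1] != 0 or state[i + 1] != 0):
            grown[i] = max(state[i - 1], state[i + 1])
    state = grown
```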

  7. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    Science.gov (United States)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  8. Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.

    Science.gov (United States)

    Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T

    2017-07-01

    Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
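
    The decision rule reduces to a few lines once posterior draws of the safety signal are available: average each action's loss over the draws and take the action with the smallest posterior (Bayes) risk. The loss functions below are hypothetical stand-ins, not the elicited regulator losses.

```python
# Pick the action minimizing posterior expected loss (Bayes risk).
import numpy as np

def bayes_risk_decision(signal_draws, losses):
    """signal_draws: posterior samples of the safety signal (e.g. observed
    minus expected AMDE rate). losses: action -> vectorized loss function."""
    risks = {a: float(np.mean(L(signal_draws))) for a, L in losses.items()}
    return min(risks, key=risks.get), risks

draws = np.random.normal(0.02, 0.01, 10_000)        # toy posterior of the signal
losses = {
    "no_action": lambda s: 100 * np.maximum(s, 0),  # cost of ignoring harm
    "warn": lambda s: 1 + 20 * np.maximum(s, 0),
    "recall": lambda s: 10 + 5 * np.maximum(-s, 0), # cost of overreacting
}
action, risks = bayes_risk_decision(draws, losses)
```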

  9. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. The model’s low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized. The original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of computation time compared to serial computing. The parallel speedup and parallel efficiency are approximately 1.45 and 72%, respectively. The parallel computing method contributes to the wider adoption of non-hydrostatic models.

  10. Boltzmann machines as a model for parallel annealing

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.

    1991-01-01

    The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial optimization problems.

  11. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    Science.gov (United States)

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

  12. Bayesian Hierarchical Distributed Lag Models for Summer Ozone Exposure and Cardio-Respiratory Mortality

    OpenAIRE

    Yi Huang; Francesca Dominici; Michelle Bell

    2004-01-01

    In this paper, we develop Bayesian hierarchical distributed lag models for estimating associations between daily variations in summer ozone levels and daily variations in cardiovascular and respiratory (CVDRESP) mortality counts for 19 U.S. large cities included in the National Morbidity Mortality Air Pollution Study (NMMAPS) for the period 1987 - 1994. At the first stage, we define a semi-parametric distributed lag Poisson regression model to estimate city-specific relative rates of CVDRESP ...
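
    A minimal first-stage sketch, assuming synthetic data and unconstrained lag coefficients (the paper's distributed lag model adds structure across lags and then pools the city-specific estimates hierarchically):

```python
# City-level Poisson regression of daily mortality on lagged ozone (lags 0-2).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 400
ozone = rng.gamma(4.0, 10.0, T)                       # synthetic daily ozone
lags = np.column_stack([np.roll(ozone, l) for l in range(3)])[3:]
deaths = rng.poisson(np.exp(2.0 + 0.001 * lags.sum(axis=1)))

fit = sm.GLM(deaths, sm.add_constant(lags),
             family=sm.families.Poisson()).fit()
cumulative_effect = fit.params[1:].sum()              # summed lag coefficients
```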

  13. Road Network Selection Based on Road Hierarchical Structure Control

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2015-04-01

    A new road network selection method based on hierarchical structure is studied. Firstly, the road network is built as strokes, which are then classified into hierarchical collections according to the criterion of betweenness centrality value (BC value). Secondly, the hierarchical structure of the strokes is enhanced using a structural characteristic identification technique. Thirdly, an importance calculation model is established according to the relationships among the hierarchical structure of the strokes. Finally, the importance values of the strokes are obtained through the model's hierarchical calculation, and with them the road network is selected. Tests are done to verify the advantage of this method by comparing it with other common stroke-oriented methods using three kinds of typical road network data. Comparison of the results shows that this method needs little semantic data and can well eliminate the negative influence of edge strokes caused by the BC-value criterion. It is therefore better at maintaining the global hierarchical structure of the road network, and is suitable for the selection of various kinds of road networks.

  14. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  15. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication when run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute the drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
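
    To see why the drainage area dominates communication, consider the serial accumulation that any parallel version must reproduce: each node's area is the sum over all cells upstream of it, computed by sweeping nodes from high to low elevation. A toy four-node stream net (node i drains to receiver[i]; the outlet drains to itself) makes this concrete.

```python
# Serial drainage-area accumulation on a toy stream net.
import numpy as np

receiver = np.array([1, 2, 3, 3])        # hypothetical receivers; node 3 is outlet
elev = np.array([3.0, 2.0, 1.0, 0.5])    # elevations define the sweep order
area = np.ones(4)                        # each cell contributes unit area

for i in np.argsort(-elev):              # sweep from highest to lowest node
    if receiver[i] != i:                 # the outlet keeps its own accumulation
        area[receiver[i]] += area[i]
# area is now [1, 2, 3, 4]: every upstream cell has been passed downstream
```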

  16. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.
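
    Schematically, and with notation assumed here rather than taken from the paper, the hierarchical error structure lets perceived travel time differ from measured travel time inside the choice model:

```latex
\begin{align*}
  \tilde{t}_i &= t_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0,\sigma^2)
      && \text{perceived travel time}\\
  U_i &= \beta_0 + \beta_1 \tilde{t}_i + \boldsymbol{\beta}_2^{\top} \mathbf{x}_i
      && \text{utility of public transport}\\
  \Pr(y_i = 1) &= \operatorname{logit}^{-1}(U_i)
      && \text{choice probability}
\end{align*}
```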

  17. HPC parallel programming model for gyrokinetic MHD simulation

    International Nuclear Information System (INIS)

    Naitou, Hiroshi; Yamada, Yusuke; Tokuda, Shinji; Ishii, Yasutomo; Yagi, Masatoshi

    2011-01-01

    The 3-dimensional gyrokinetic PIC (particle-in-cell) code for MHD simulation, Gpic-MHD, was installed on the SR16000 (“Plasma Simulator”), a scalar cluster system consisting of 8,192 logical cores. The Gpic-MHD code advances particle and field quantities in time. In order to distribute calculations over a large number of logical cores, the total simulation domain in cylindrical geometry was broken up into N_DD-r × N_DD-z (number of radial decompositions times number of axial decompositions) small domains containing approximately the same number of particles. The axial direction was uniformly decomposed, while the radial direction was non-uniformly decomposed. N_RP replicas (copies) of each decomposed domain were used (“particle decomposition”). A hybrid parallelization model of multi-threads and multi-processes was employed: threads were parallelized by auto-parallelization, and the N_DD-r × N_DD-z × N_RP processes were parallelized by MPI (message-passing interface). The parallelization performance of Gpic-MHD was investigated for a medium-size system of N_r × N_θ × N_z = 1025 × 128 × 128 mesh with 4.196 or 8.192 billion particles. The highest speed for a fixed number of logical cores was obtained for two threads, the maximum number of axial domains N_DD-z, and the optimum combination of N_DD-r and N_RP. The observed optimum speeds demonstrated good scaling up to 8,192 logical cores. (author)

  18. Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.

    Science.gov (United States)

    Kundeti, Vamsi; Rajasekaran, Sanguthevar

    2011-11-01

    In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in the time bounds because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Secondly, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty-sequence idea with the standard R-way merge and show that it can significantly reduce the number of parallel I/Os required to sort on the PDM.

  19. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    Science.gov (United States)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach allows glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it also allows for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model built around the exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful posterior distributions and predictions for the physical parameters. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the correction, by a statistical model, of errors induced by the numerical solution. This error-correcting process models numerical errors that accumulate forward in time, as well as the spatial variation of numerical errors between the dome, interior, and margin of a glacier.

  20. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    /backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw..., dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model both in Polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error...

  1. Tractography segmentation using a hierarchical Dirichlet processes mixture model.

    Science.gov (United States)

    Wang, Xiaogang; Grimson, W Eric L; Westin, Carl-Fredrik

    2011-01-01

    In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is automatically learned driven by data with a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers.

  2. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    Science.gov (United States)

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory. (Contains…

  3. Group-level self-definition and self-investment: a hierarchical (multicomponent) model of in-group identification.

    Science.gov (United States)

    Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell

    2008-07-01

    Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.

  4. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability of P holding is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
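
    The Wilson interval at the core of the stopping rule is a standard closed-form expression; for k satisfying runs out of n it is better behaved than the normal approximation when the estimated probability is near 0 or 1:

```python
# Wilson score interval for a binomial proportion (z = 1.96 for 95%).
from math import sqrt

def wilson_interval(k, n, z=1.96):
    p = k / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(k=190, n=200)   # e.g. 190 of 200 runs satisfy P
```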

  5. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations.
    Program Files doi: http://dx.doi.org/10.17632/6j2cj29t3r.1
    Licensing provisions: MIT
    Programming language: CUDA/C++
    Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate.
    Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU.
    Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  6. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-01-01

    matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k < 25$) and $p$ is the number of processors...

  7. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main focus of interest.

  8. A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.

    Directory of Open Access Journals (Sweden)

    Guillaume Bal

    Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding the ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts, as it does not disentangle seasonality and long-term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time-varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) an emotive simulated example and (ii) application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach fits the data better and does not exhibit forecasting bias in long-term trends, contrary to the linear regression. The new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around forecasting. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.

  9. Modeling and optimization of parallel and distributed embedded systems

    CERN Document Server

    Munir, Arslan; Ranka, Sanjay

    2016-01-01

    This book introduces the state-of-the-art in research in parallel and distributed embedded systems, which have been enabled by developments in silicon technology, micro-electro-mechanical systems (MEMS), wireless communications, computer networking, and digital electronics. These systems have diverse applications in domains including military and defense, medical, automotive, and unmanned autonomous vehicles. The emphasis of the book is on the modeling and optimization of emerging parallel and distributed embedded systems in relation to the three key design metrics of performance, power and dependability.

  10. Data driven parallelism in experimental high energy physics applications

    International Nuclear Information System (INIS)

    Pohl, M.

    1987-01-01

    I present global design principles for the implementation of high energy physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of high energy physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme makes it possible to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture. It thus makes it possible to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms). (orig.)

  11. Data driven parallelism in experimental high energy physics applications

    Science.gov (United States)

    Pohl, Martin

    1987-08-01

    I present global design principles for the implementation of High Energy Physics data analysis code on sequential and parallel processors with mixed shared and local memory. Potential parallelism in the structure of High Energy Physics tasks is identified with granularity varying from a few times 10^8 instructions all the way down to a few times 10^4 instructions. It follows the hierarchical structure of detector and data acquisition systems. To take advantage of this - yet preserving the necessary portability of the code - I propose a computational model with purely data driven concurrency in Single Program Multiple Data (SPMD) mode. The task granularity is defined by varying the granularity of the central data structure manipulated. Concurrent processes coordinate themselves asynchronously using simple lock constructs on parts of the data structure. Load balancing among processes occurs naturally. The scheme makes it possible to map the internal layout of the data structure closely onto the layout of local and shared memory in a parallel architecture. It thus makes it possible to optimize the application with respect to synchronization as well as data transport overheads. I present a coarse top level design for a portable implementation of this scheme on sequential machines, multiprocessor mainframes (e.g. IBM 3090), tightly coupled multiprocessors (e.g. RP-3) and loosely coupled processor arrays (e.g. LCAP, Emulating Processor Farms).

  12. Hierarchical Rhetorical Sentence Categorization for Scientific Papers

    Science.gov (United States)

    Rachman, G. H.; Khodra, M. L.; Widyantoro, D. H.

    2018-03-01

    Important information in scientific papers can be conveyed by rhetorical sentences that are structured into certain categories. To extract this information, text categorization must be performed. Previous work on this task has employed word frequency, semantic similarity of words, hierarchical classification, and other techniques. This paper therefore presents rhetorical sentence categorization of scientific papers, employing TF-IDF and Word2Vec to capture word frequency and semantic word similarity, and employing hierarchical classification. Every experiment is tested with two classifiers, namely Naïve Bayes and linear SVM. This paper shows that the hierarchical classifier is better than the flat classifier when employing either TF-IDF or Word2Vec, although accuracy increases by only about 2 percentage points, from 27.82% with the flat classifier to 29.61% with the hierarchical classifier. It also shows that a different learning model can be built for each child category by the hierarchical classifier.
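
    A hierarchical ("local classifier per parent node") setup of the kind described can be sketched in a few lines; the categories, toy sentences, and two-level hierarchy below are hypothetical, not the paper's label set.

```python
# Two-level rhetorical categorization: a root classifier picks the coarse
# group, then a per-group classifier picks the fine category.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

data = [  # (sentence, parent category, child category) -- toy labels
    ("previous studies employed word frequency", "background", "related_work"),
    ("sentence categorization has long been studied", "background", "context"),
    ("we propose a hierarchical classifier", "own_work", "contribution"),
    ("our model improves accuracy by two percent", "own_work", "result"),
]
sents, parents, children = zip(*data)

root = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(sents, parents)
per_parent = {
    p: make_pipeline(TfidfVectorizer(), LinearSVC()).fit(
        [s for s, pp, _ in data if pp == p],
        [c for _, pp, c in data if pp == p],
    )
    for p in set(parents)
}

def classify(sentence):
    parent = root.predict([sentence])[0]
    return parent, per_parent[parent].predict([sentence])[0]
```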

  13. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
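
    The "smoothing by roughening" estimator admits a compact sketch: start from a smooth prior on a grid of effect sizes and take a limited number of EM steps toward the nonparametric MLE, stopping early so the estimate stays smooth. A normal likelihood and all settings below are assumptions for illustration.

```python
# Laird-Louis "smoothing by roughening": early-stopped EM for a grid prior.
import numpy as np

def smooth_by_roughening(y, se, grid, n_steps=20):
    g = np.full(grid.size, 1.0 / grid.size)      # flat starting prior
    for _ in range(n_steps):
        lik = np.exp(-0.5 * ((y[:, None] - grid[None, :]) / se[:, None]) ** 2)
        post = lik * g                           # unnormalized posteriors
        post /= post.sum(axis=1, keepdims=True)  # one row per gene
        g = post.mean(axis=0)                    # EM update of prior masses
    return g

rng = np.random.default_rng(2)
effects = np.concatenate([np.zeros(80), np.full(20, 2.0)])  # toy effect sizes
y = effects + rng.normal(0.0, 0.5, 100)                     # observed estimates
prior = smooth_by_roughening(y, np.full(100, 0.5), np.linspace(-2, 4, 61))
```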

  14. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  15. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    Science.gov (United States)

    Bi, Xunqiang

    1997-12-01

    In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massive parallel computer made by National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputation are also discussed.

  16. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

    Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.

  17. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would enable simulation of the thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computing power, and the resulting improvement in resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid spacing falls below a few kilometers: we shall have to find other models. The expertise in numerical analysis gained at the Center of Limeil-Valenton (CEL-V) will be used again to design global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale

  18. Hierarchical decision making for flood risk reduction

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    2013-01-01

    In current practice, structures are often optimized individually without considering the benefits of having a hierarchy of protection structures. It is argued here that the joint consideration of hierarchically integrated protection structures is beneficial. A hierarchical decision model is utilized to analyze and compare the benefit of large upstream protection structures and local downstream protection structures with regard to epistemic uncertainty parameters. Results suggest that epistemic uncertainty influences the outcome of the decision model and that, depending on the magnitude of epistemic uncertainty...

  19. Control Strategies for Islanded Microgrid using Enhanced Hierarchical Control Structure with Multiple Current-Loop Damping Schemes

    DEFF Research Database (Denmark)

    Han, Yang; Shen, Pan; Zhao, Xin

    2017-01-01

    In this paper, the modeling, controller design, and stability analysis of an islanded microgrid (MG) using an enhanced hierarchical control structure with multiple current-loop damping schemes are presented. The islanded MG consists of parallel-connected voltage source inverters using LCL output filters, and the proposed control structure includes: the primary control with an additional phase-shift loop; the secondary control for voltage amplitude and frequency restoration; the virtual impedance loops, which contain virtual positive- and negative-sequence impedance loops at the fundamental frequency and a virtual variable harmonic impedance loop at harmonic frequencies; and the inner voltage and current loop controllers. A small-signal model for the primary and secondary controls with the additional phase-shift loop is presented, which shows an over-damped feature from eigenvalue analysis...

  20. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  1. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis on the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  2. Use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio

    Directory of Open Access Journals (Sweden)

    Fidel Ernesto Castro Morales

    2016-03-01

    Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within up to 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, even allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.

  3. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  4. Hierarchical neural network model of the visual system determining figure/ground relation

    Science.gov (United States)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from the background region extending behind the objects. Previously the author proposed a neural network model of figure/ground separation built on the premise that local geometric features such as curvatures and outer angles at corners are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines the figure/ground relation within a short period (Zhou et al., 2000). In order to speed up the determination of figure/ground, this study incorporates a hierarchical architecture into the previous model and confirms, by simulation, the effect of the hierarchization on computation time. As the number of layers increased, the required computation time decreased. However, this speed-up effect saturated once the number of layers grew beyond a certain point. This study explains the saturation effect using the notion of average distance between vertices from the field of complex networks, and succeeded in reproducing the saturation effect by computer simulation.

  5. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang; Yucel, Abdulkadir; Bagci, Hakan; Michielssen, Eric

    2015-01-01

    of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost

  6. Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.

    Directory of Open Access Journals (Sweden)

    Andrew Cron

    Full Text Available Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a...
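
    The authors' open-source software is not reproduced here. As a rough single-sample analogue, scikit-learn's truncated Dirichlet-process Gaussian mixture can cluster one cytometry-like sample; the hierarchical sharing across samples that defines HDPGMM is beyond this sketch, and the data and parameter choices below are illustrative.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(1)
        # Illustrative stand-in for one sample: a rare subset (0.1%) in a large background.
        background = rng.normal(0.0, 1.0, size=(19980, 2))
        rare = rng.normal(4.0, 0.2, size=(20, 2))
        X = np.vstack([background, rare])

        # Truncated Dirichlet-process mixture: unused components get near-zero weight.
        dpgmm = BayesianGaussianMixture(
            n_components=20,
            weight_concentration_prior_type="dirichlet_process",
            covariance_type="full",
            max_iter=500,
            random_state=0,
        ).fit(X)

        labels = dpgmm.predict(X)
        print("component weights:", np.round(dpgmm.weights_, 4))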

  7. Epigenetic change detection and pattern recognition via Bayesian hierarchical hidden Markov models.

    Science.gov (United States)

    Wang, Xinlei; Zang, Miao; Xiao, Guanghua

    2013-06-15

    Epigenetics is the study of changes to the genome that can switch genes on or off and determine which proteins are transcribed without altering the DNA sequence. Recently, epigenetic changes have been linked to the development and progression of diseases such as psychiatric disorders. High-throughput epigenetic experiments have enabled researchers to measure genome-wide epigenetic profiles and yield data consisting of intensity ratios of immunoprecipitation versus reference samples. The intensity ratios can provide a view of genomic regions where protein binding occurs under one experimental condition and further allow us to detect epigenetic alterations through comparison between two different conditions. However, such experiments can be expensive, with only a few replicates available. Moreover, epigenetic data are often spatially correlated with high noise levels. In this paper, we develop a Bayesian hierarchical model, combined with hidden Markov processes with four states for modeling spatial dependence, to detect genomic sites with epigenetic changes from two-sample experiments with paired internal control. One attractive feature of the proposed method is that the four states of the hidden Markov process have well-defined biological meanings and allow us to directly call the change patterns based on the corresponding posterior probabilities. In contrast, none of the existing methods can offer this advantage. In addition, the proposed method offers great power in statistical inference by spatial smoothing (via hidden Markov modeling) and information pooling (via hierarchical modeling). Both simulation studies and real data analysis in a cocaine addiction study illustrate the reliability and success of this method. Copyright © 2012 John Wiley & Sons, Ltd.
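
    The paper combines hierarchical priors with a four-state hidden Markov process; the sketch below shows only the generic scaled forward-backward recursion by which per-site posterior state probabilities are obtained for a fixed-parameter Gaussian-emission HMM. The emission and transition settings are invented for illustration and are not the paper's fitted values.

        import numpy as np

        def forward_backward(obs, A, means, sd, pi):
            """Posterior state probabilities for a Gaussian-emission HMM with
            transition matrix A, state means, common sd, and initial dist pi."""
            T, K = len(obs), len(means)
            em = np.exp(-0.5 * ((obs[:, None] - means) / sd) ** 2)  # up to a constant
            alpha, beta = np.zeros((T, K)), np.ones((T, K))
            c = np.zeros(T)                                  # scaling factors
            alpha[0] = pi * em[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * em[t]
                c[t] = alpha[t].sum(); alpha[t] /= c[t]
            for t in range(T - 2, -1, -1):
                beta[t] = (A @ (em[t + 1] * beta[t + 1])) / c[t + 1]
            post = alpha * beta
            return post / post.sum(axis=1, keepdims=True)

        # Four illustrative states, e.g. (no change, binding, loss, gain) of signal.
        A = np.full((4, 4), 0.02) + np.eye(4) * 0.92         # "sticky" transitions
        post = forward_backward(np.random.default_rng(2).normal(size=200),
                                A, means=np.array([-1.0, 1.0, -2.0, 2.0]),
                                sd=1.0, pi=np.full(4, 0.25))
        print(post.shape)  # (200, 4): per-site posterior probability of each state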

  8. Speed-Accuracy Trade-Off in Skilled Typewriting: Decomposing the Contributions of Hierarchical Control Loops

    Science.gov (United States)

    Yamaguchi, Motonori; Crump, Matthew J. C.; Logan, Gordon D.

    2013-01-01

    Typing performance involves hierarchically structured control systems: At the higher level, an outer loop generates a word or a series of words to be typed; at the lower level, an inner loop activates the keystrokes comprising the word in parallel and executes them in the correct order. The present experiments examined contributions of the outer-…

  9. Self-assembled biomimetic superhydrophobic hierarchical arrays.

    Science.gov (United States)

    Yang, Hongta; Dou, Xuan; Fang, Yin; Jiang, Peng

    2013-09-01

    Here, we report a simple and inexpensive bottom-up technology for fabricating superhydrophobic coatings with hierarchical micro-/nano-structures, which are inspired by the binary periodic structure found on the superhydrophobic compound eyes of some insects (e.g., mosquitoes and moths). Binary colloidal arrays consisting of exemplary large (4 and 30 μm) and small (300 nm) silica spheres are first assembled by a scalable Langmuir-Blodgett (LB) technology in a layer-by-layer manner. After surface modification with fluorosilanes, the self-assembled hierarchical particle arrays become superhydrophobic with an apparent water contact angle (CA) larger than 150°. The throughput of the resulting superhydrophobic coatings with hierarchical structures can be significantly improved by templating the binary periodic structures of the LB-assembled colloidal arrays into UV-curable fluoropolymers by a soft lithography approach. Superhydrophobic perfluoroether acrylate hierarchical arrays with large CAs and small CA hysteresis can be faithfully replicated onto various substrates. Both experiments and theoretical calculations based on the Cassie's dewetting model demonstrate the importance of the hierarchical structure in achieving the final superhydrophobic surface states. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. How does aging affect recognition-based inference? A hierarchical Bayesian modeling approach.

    Science.gov (United States)

    Horn, Sebastian S; Pachur, Thorsten; Mata, Rui

    2015-01-01

    The recognition heuristic (RH) is a simple strategy for probabilistic inference according to which recognized objects are judged to score higher on a criterion than unrecognized objects. In this article, a hierarchical Bayesian extension of the multinomial r-model is applied to measure use of the RH on the individual participant level and to re-evaluate differences between younger and older adults' strategy reliance across environments. Further, it is explored how individual r-model parameters relate to alternative measures of the use of recognition and other knowledge, such as adherence rates and indices from signal-detection theory (SDT). Both younger and older adults used the RH substantially more often in an environment with high than low recognition validity, reflecting adaptivity in strategy use across environments. In extension of previous analyses (based on adherence rates), hierarchical modeling revealed that in an environment with low recognition validity, (a) older adults had a stronger tendency than younger adults to rely on the RH and (b) variability in RH use between individuals was larger than in an environment with high recognition validity; variability did not differ between age groups. Further, the r-model parameters correlated moderately with an SDT measure expressing how well people can discriminate cases where the RH leads to a correct vs. incorrect inference; this suggests that the r-model and the SDT measures may offer complementary insights into the use of recognition in decision making. In conclusion, younger and older adults are largely adaptive in their application of the RH, but cognitive aging may be associated with an increased tendency to rely on this strategy. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Analysis and Modeling of Circulating Current in Two Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Gohil, Ghanshyamsinh Vijaysinh; Bede, Lorand

    2015-01-01

    Parallel-connected inverters are gaining attention for high power applications because of the limited power handling capability of the power modules. Moreover, the parallel-connected inverters may have low total harmonic distortion of the ac current if they are operated with the interleaved pulse-width modulation (PWM). However, the interleaved PWM causes a circulating current between the inverters, which in turn causes additional losses. A model describing the dynamics of the circulating current is presented in this study, which shows that the circulating current depends on the common-mode voltage. Using this model, the circulating current between two parallel-connected inverters is analysed. The peak and root mean square (rms) values of the normalised circulating current are calculated for different PWM methods, which makes this analysis a valuable tool to design a filter for the circulating...
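
    As a hedged illustration (a textbook-style formulation under the assumption of two identical inverters with per-phase output inductance L sharing dc and ac buses, not necessarily the exact model of the paper), the circulating current is often defined from the phase-current difference and is driven by the difference of the two common-mode voltages:

        \[ i_c = \frac{i_1 - i_2}{2}, \qquad 2L\,\frac{\mathrm{d}i_c}{\mathrm{d}t} = v_{\mathrm{cm},1} - v_{\mathrm{cm},2} \]

    Written this way, it is explicit why interleaving, which misaligns the common-mode voltages of the two inverters, excites the circulating current, and why peak and rms values of i_c are the natural quantities for sizing a circulating-current filter.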

  12. Multiperiod Hierarchical Location Problem of Transit Hub in Urban Agglomeration Area

    Directory of Open Access Journals (Sweden)

    Ting-ting Li

    2017-01-01

    Full Text Available With the rapid urbanization in developing countries, urban agglomeration areas (UAAs) form. Also, transportation demand in UAAs grows rapidly and presents hierarchical features. Therefore, it is imperative to develop models for transit hubs to guide the development of UAAs and better meet the time-varying and hierarchical transportation demand. In this paper, the multiperiod hierarchical location problem of transit hubs in an urban agglomeration area (THUAA) is studied. A hierarchical service network of THUAA with a multiflow, nested, and noncoherent structure is described. Then a multiperiod hierarchical mathematical programming model is proposed, aiming at minimizing the total demand weighted travel time. Moreover, an improved adaptive clonal selection algorithm is presented to solve the model. Both the model and algorithm are verified by the application to a real-life problem of the Beijing-Tianjin-Hebei Region in China. The results of different scenarios in the case show that urban population migration has a great impact on the THUAA location scheme. Sustained and appropriate urban population migration helps to reduce travel time for urban residents.

  13. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme

    2015-05-22

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers with several thousands to millions of cores. In this paper, we present a hierarchical data structure for massive parallel computations that supports the coupling of a Navier–Stokes-based fluid flow code with the Boussinesq approximation in order to address complex thermal scenarios for energy-related assessments. The data structure is specifically designed with interactive data exploration and visualization during runtime of the simulation code in mind, addressing a major shortcoming of traditional high-performance computing (HPC) simulation codes. We further show and discuss speed-up values obtained on one of Germany's top-ranked supercomputers with up to 140,000 processes and present simulation results for different engineering-based thermal problems.

  14. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
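
    A minimal sketch of the kind of two-part model described: a parallel-portion term built from a per-cell/angle/group grind time, plus a latency-bandwidth communication term. All constants, names, and the functional form below are hypothetical placeholders, not THOR's calibrated model.

        def predicted_runtime(cells, angles, groups, procs,
                              t_grind=5e-8,        # s per cell-angle-group (hypothetical)
                              latency=2e-6,        # s per message (hypothetical)
                              inv_bandwidth=1e-9,  # s per byte (hypothetical)
                              msg_bytes=8192, msgs_per_sweep=64):
            """Two-term performance model: perfectly divisible compute work
            plus a per-sweep communication cost that does not shrink with p."""
            compute = t_grind * cells * angles * groups / procs
            comm = msgs_per_sweep * (latency + inv_bandwidth * msg_bytes)
            return compute + comm

        for p in (1, 64, 1024):
            print(p, predicted_runtime(1_000_000, 80, 10, p))

    A model of this shape makes performance bottlenecks visible directly: once the compute term falls below the communication term, adding processors no longer pays off.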

  15. Renormalization group analysis of a simple hierarchical fermion model

    International Nuclear Information System (INIS)

    Dorlas, T.C.

    1991-01-01

    A simple hierarchical fermion model is constructed which gives rise to an exact renormalization transformation in a 2-dimensional parameter space. The behaviour of this transformation is studied. It has two hyperbolic fixed points for which the existence of a global critical line is proven. The asymptotic behaviour of the transformation is used to prove the existence of the thermodynamic limit in a certain domain in parameter space. Also the existence of a continuum limit for these theories is investigated using information about the asymptotic renormalization behaviour. It turns out that the 'trivial' fixed point gives rise to a two-parameter family of continuum limits corresponding to that part of parameter space where the renormalization trajectories originate at this fixed point. Although the model is not very realistic it serves as a simple example of the application of the renormalization group to proving the existence of the thermodynamic limit and the continuum limit of lattice models. Moreover, it illustrates possible complications that can arise in global renormalization group behaviour, and that might also be present in other models where no global analysis of the renormalization transformation has yet been achieved. (orig.)

  16. Predicting Examination Performance Using an Expanded Integrated Hierarchical Model of Test Emotions and Achievement Goals

    Science.gov (United States)

    Putwain, Dave; Deveney, Carolyn

    2009-01-01

    The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…

  17. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

    Full Text Available Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group...

  18. A hierarchical stochastic model for bistable perception.

    Science.gov (United States)

    Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby

    2017-11-01

    Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to
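
    A small simulation sketch of the core HBM idea as described in the abstract: the difference in activity between two competing populations follows a Brownian motion with drift, and a percept switch is triggered when that difference crosses a threshold. All parameter values are arbitrary illustrations, not the paper's fitted values.

        import numpy as np

        def simulate_percept_durations(drift=0.5, noise=1.0, threshold=5.0,
                                       dt=0.01, t_max=600.0, seed=3):
            """Dominance durations from a drift-diffusion race between two percepts."""
            rng = np.random.default_rng(seed)
            x, t, t_last, sign = 0.0, 0.0, 0.0, 1.0
            durations = []
            while t < t_max:
                # differential activity drifts toward the currently suppressed percept
                x += -sign * drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
                if abs(x) > threshold:        # crossing: the other percept takes over
                    durations.append(t - t_last)
                    t_last, sign, x = t, float(np.sign(x)), 0.0
            return np.array(durations)

        d = simulate_percept_durations()
        print(len(d), "switches; mean dominance duration:", round(float(d.mean()), 2))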

  19. 3D hierarchical computational model of wood as a cellular material with fibril reinforced, heterogeneous multiple layers

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A 3D hierarchical computational model of deformation and stiffness of wood, which takes into account the structures of wood at several scale levels (cellularity, multilayered nature of cell walls, composite-like structures of the wall layers), is developed. At the mesoscale, the softwood cell is presented as a 3D hexagon-shape tube with multilayered walls. The layers in the softwood cell are considered as composites reinforced by microfibrils (celluloses). The elastic properties of the layers are determined with Halpin–Tsai equations, and introduced into the mesoscale finite element cellular model. With the use of the developed hierarchical model, the influence of the microstructure, including microfibril angles (MFAs, which characterize the orientation of the cellulose fibrils with respect to the cell axis), the thickness of the cell wall, and the shape of the cell cross...
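
    For reference, a sketch of the Halpin–Tsai estimate mentioned, in its standard form; the reinforcement-geometry factor xi and the numerical values below are illustrative assumptions, not the paper's inputs.

        def halpin_tsai(E_f, E_m, V_f, xi=2.0):
            """Effective modulus of a matrix (E_m) reinforced by fibrils (E_f)
            at volume fraction V_f; xi encodes the reinforcement geometry."""
            eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
            return E_m * (1.0 + xi * eta * V_f) / (1.0 - eta * V_f)

        # Illustrative values: stiff cellulose microfibrils (~134 GPa) in a
        # compliant hemicellulose/lignin matrix (~2 GPa) at 45% volume fraction.
        print(round(halpin_tsai(E_f=134.0, E_m=2.0, V_f=0.45), 1), "GPa")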

  20. Bayesian Hierarchical Random Effects Models in Forensic Science

    Directory of Open Access Journals (Sweden)

    Colin G. G. Aitken

    2018-04-01

    Full Text Available Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  1. Bayesian Hierarchical Random Effects Models in Forensic Science.

    Science.gov (United States)

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  2. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD

  3. Molecular simulation of adsorption and transport in hierarchical porous materials.

    Science.gov (United States)

    Coasne, Benoit; Galarneau, Anne; Gerardin, Corine; Fajula, François; Villemot, François

    2013-06-25

    Adsorption and transport in hierarchical porous solids with micro- (~1 nm) and mesoporosities (>2 nm) are investigated by molecular simulation. Two models of hierarchical solids are considered: microporous materials in which mesopores are carved out (model A) and mesoporous materials in which microporous nanoparticles are inserted (model B). Adsorption isotherms for model A can be described as a linear combination of the adsorption isotherms for pure mesoporous and microporous solids. In contrast, adsorption in model B departs from adsorption in pure microporous and mesoporous solids; the inserted microporous particles act as defects, which help nucleate the liquid phase within the mesopore and shift capillary condensation toward lower pressures. As far as transport under a pressure gradient is concerned, the flux in hierarchical materials consisting of microporous solids in which mesopores are carved out obeys the Navier-Stokes equation so that Darcy's law is verified within the mesopore. Moreover, the flow in such materials is larger than in a single mesopore, due to the transfer between micropores and mesopores. This nonzero velocity at the mesopore surface implies that transport in such hierarchical materials involves slippage at the mesopore surface, although the adsorbate has a strong affinity for the surface. In contrast to model A, flux in model B is smaller than in a single mesopore, as the nanoparticles act as constrictions that hinder transport. By a subtle effect arising from fast transport in the mesopores, the presence of mesopores increases the number of molecules in the microporosity in hierarchical materials and, hence, decreases the flow in the micropores (due to mass conservation). As a result, we do not observe faster diffusion in the micropores of hierarchical materials upon flow but slower diffusion, which increases the contact time between the adsorbate and the surface of the microporosity.

  4. Interneuronal Mechanism for Tinbergen’s Hierarchical Model of Behavioral Choice

    Science.gov (United States)

    Pirger, Zsolt; Crossley, Michael; László, Zita; Naskar, Souvik; Kemenes, György; O’Shea, Michael; Benjamin, Paul R.; Kemenes, Ildikó

    2014-01-01

    Summary Recent studies of behavioral choice support the notion that the decision to carry out one behavior rather than another depends on the reconfiguration of shared interneuronal networks [1]. We investigated another decision-making strategy, derived from the classical ethological literature [2, 3], which proposes that behavioral choice depends on competition between autonomous networks. According to this model, behavioral choice depends on inhibitory interactions between incompatible hierarchically organized behaviors. We provide evidence for this by investigating the interneuronal mechanisms mediating behavioral choice between two autonomous circuits that underlie whole-body withdrawal [4, 5] and feeding [6] in the pond snail Lymnaea. Whole-body withdrawal is a defensive reflex that is initiated by tactile contact with predators. As predicted by the hierarchical model, tactile stimuli that evoke whole-body withdrawal responses also inhibit ongoing feeding in the presence of feeding stimuli. By recording neurons from the feeding and withdrawal networks, we found no direct synaptic connections between the interneuronal and motoneuronal elements that generate the two behaviors. Instead, we discovered that behavioral choice depends on the interaction between two unique types of interneurons with asymmetrical synaptic connectivity that allows withdrawal to override feeding. One type of interneuron, the Pleuro-Buccal (PlB), is an extrinsic modulatory neuron of the feeding network that completely inhibits feeding when excited by touch-induced monosynaptic input from the second type of interneuron, Pedal-Dorsal12 (PeD12). PeD12 plays a critical role in behavioral choice by providing a synaptic pathway joining the two behavioral networks that underlies the competitive dominance of whole-body withdrawal over feeding. PMID:25155505

  5. A robust H∞ control-based hierarchical mode transition control system for plug-in hybrid electric vehicle

    Science.gov (United States)

    Yang, Chao; Jiao, Xiaohong; Li, Liang; Zhang, Yuanbo; Chen, Zheng

    2018-01-01

    To realize a fast and smooth operating mode transition from electric driving mode to engine-on driving mode, this paper presents a novel robust hierarchical mode transition control method for a plug-in hybrid electric bus (PHEB) with a pre-transmission parallel hybrid powertrain. Firstly, the mode transition process is divided into five stages to clearly describe the powertrain dynamics. Based on the dynamics models of the powertrain and the clutch actuating mechanism, a hierarchical control structure including two robust H∞ controllers in the upper and lower layers is proposed. In the upper layer, the demand clutch torque is calculated by a robust H∞ controller considering the clutch engaging time and the vehicle jerk, while in the lower layer a robust tracking controller with L2-gain is designed to perform accurate position tracking control, especially when parameter uncertainties and external disturbances occur in the clutch actuating mechanism. Simulation and hardware-in-the-loop (HIL) tests are carried out in a traditional driving condition of the PHEB. Results show that the proposed hierarchical control approach achieves good control performance: mode transition time is greatly reduced with acceptable jerk. Meanwhile, the designed control system shows clear robustness to parameter uncertainties and disturbances. Therefore, the proposed approach may offer a theoretical reference for the actual vehicle controller.

  6. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
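
    For contrast with the O(log n)-time, O(n)-processor EREW-PRAM algorithms surveyed, the sketch below gives the classical sequential greedy solution to one of the listed problems, finding a largest subset of pairwise nonoverlapping intervals. This is the standard textbook algorithm, not the paper's parallel one; intervals touching only at endpoints are treated as nonoverlapping here.

        def max_nonoverlapping(intervals):
            """Largest set of pairwise nonoverlapping intervals:
            sort by right endpoint, then greedily keep compatible intervals."""
            chosen, last_end = [], float("-inf")
            for left, right in sorted(intervals, key=lambda iv: iv[1]):
                if left >= last_end:          # does not overlap the last chosen one
                    chosen.append((left, right))
                    last_end = right
            return chosen

        print(max_nonoverlapping([(1, 4), (2, 3), (3, 5), (6, 8), (5, 7)]))
        # [(2, 3), (3, 5), (5, 7)]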

  7. Symptom structure of PTSD: support for a hierarchical model separating core PTSD symptoms from dysphoria

    NARCIS (Netherlands)

    Rademaker, Arthur R.; van Minnen, Agnes; Ebberink, Freek; van Zuiden, Mirjam; Hagenaars, Muriel A.; Geuze, Elbert

    2012-01-01

    As yet, no consensus has been reached regarding the precise factor structure of posttraumatic stress disorder (PTSD). Several alternative factor models have been proposed in recent decades. The current study examined the fit of a hierarchical adaptation of the Simms et al. (2002)...

  8. Top-down feedback in an HMAX-like cortical model of object perception based on hierarchical Bayesian networks and belief propagation.

    Directory of Open Access Journals (Sweden)

    Salvador Dura-Bernal

    Full Text Available Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance). Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an established model of object recognition to include high-level feedback modulation, based on state-of-the-art probabilistic approaches. The methodology employed, consistent with evidence from the visual cortex, can be potentially generalized to build models of hierarchical perceptual organization that include top-down and bottom-up...

  9. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    Science.gov (United States)

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  10. Investigation of Mediational Processes Using Parallel Process Latent Growth Curve Modeling

    Science.gov (United States)

    Cheong, JeeWon; MacKinnon, David P.; Khoo, Siek Toon

    2010-01-01

    This study investigated a method to evaluate mediational processes using latent growth curve modeling. The mediator and the outcome measured across multiple time points were viewed as 2 separate parallel processes. The mediational process was defined as the independent variable influencing the growth of the mediator, which, in turn, affected the growth of the outcome. To illustrate modeling procedures, empirical data from a longitudinal drug prevention program, Adolescents Training and Learning to Avoid Steroids, were used. The program effects on the growth of the mediator and the growth of the outcome were examined first in a 2-group structural equation model. The mediational process was then modeled and tested in a parallel process latent growth curve model by relating the prevention program condition, the growth rate factor of the mediator, and the growth rate factor of the outcome. PMID:20157639

  11. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang

    2015-12-17

    A scalable parallel plane-wave time-domain (PWTD) algorithm for efficient and accurate analysis of transient scattering from electrically large objects is presented. The algorithm produces scalable communication patterns on very large numbers of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost and memory requirement of the communications between the processors. The efficiency and accuracy of the algorithm are demonstrated through its applications to the analysis of transient scattering from a perfect electrically conducting (PEC) sphere with a diameter of 70 wavelengths and a PEC square plate with a dimension of 160 wavelengths. Furthermore, the proposed algorithm is used to analyze transient fields scattered from realistic airplane and helicopter models under high frequency excitation.

  12. Process-based modelling of tree and stand growth: towards a hierarchical treatment of multiscale processes

    International Nuclear Information System (INIS)

    Makela, A.

    2003-01-01

    A generally accepted method has not emerged for managing the different temporal and spatial scales in a forest ecosystem. This paper reviews a hierarchical-modular modelling tradition, with the main focus on individual tree growth throughout the rotation. At this scale, model performance requires (i) realistic long-term dynamic properties, (ii) realistic responses of growth and mortality of competing individuals, and (iii) realistic responses to ecophysiological inputs. Model development and validation are illustrated through allocation patterns, height growth, and size-related feedbacks. Empirical work to test the approach is reviewed. In this approach, finer scale effects are embedded in parameters calculated using more detailed, interacting modules. This is exemplified by (i) the within-year effect of weather on annual photosynthesis, (ii) the effects of fast soil processes on carbon allocation and photosynthesis, and (iii) the utilization of detailed stem structure to predict wood quality. Prevailing management paradigms are reflected in growth modelling. A shift of emphasis has occurred from productivity in homogeneous canopies towards, e.g., wood quality versus total yield, spatially more explicit models, and growth decline in old-growth forests. The new problems emphasize the hierarchy of the system and interscale interactions, suggesting that the hierarchical-modular approach could prove constructive. (author)

  13. Evolutionary-Hierarchical Bases of the Formation of Cluster Model of Innovation Economic Development

    Directory of Open Access Journals (Sweden)

    Yuliya Vladimirovna Dubrovskaya

    2016-10-01

    Full Text Available The functioning of a modern economic system is based on the interaction of objects of different hierarchical levels. Thus, the problem of studying innovation processes while taking into account the mutual influence of the activities of these economic actors becomes important. The paper examines the evolutionary basis for the formation of models of innovation development on the basis of micro- and macroeconomic analysis. Most of the concepts recognize that, despite the large number of diverse models, the coordination of relations between economic agents is of crucial importance for successful innovation development. According to the results of the evolutionary-hierarchical analysis, the authors reveal key phases in the development of forms of cooperation between business, science and government in the domestic economy. This has become the starting point for conceptualizing the characteristics of interaction in cluster models of innovation development of the economy. Considerable expectations for improvement of the national innovation system are connected with the development of cluster and network structures. The main objective of government authorities is the formation of mechanisms and institutions that will foster cooperation between members of the clusters. The article explains that clusters cannot become factors in the growth of the national economy without being an effective tool for interaction between the actors of the regional innovation systems.

  14. Static and dynamic friction of hierarchical surfaces.

    Science.gov (United States)

    Costagliola, Gianluca; Bosia, Federico; Pugno, Nicola M

    2016-12-01

    Hierarchical structures are very common in nature, but only recently have they been systematically studied in materials science, in order to understand the specific effects they can have on the mechanical properties of various systems. Structural hierarchy provides a way to tune and optimize macroscopic mechanical properties starting from simple base constituents and new materials are nowadays designed exploiting this possibility. This can be true also in the field of tribology. In this paper we study the effect of hierarchical patterned surfaces on the static and dynamic friction coefficients of an elastic material. Our results are obtained by means of numerical simulations using a one-dimensional spring-block model, which has previously been used to investigate various aspects of friction. Despite the simplicity of the model, we highlight some possible mechanisms that explain how hierarchical structures can significantly modify the friction coefficients of a material, providing a means to achieve tunability.
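
    A minimal one-dimensional spring-block sketch in the spirit described, not the authors' code: blocks are attached through shear springs to a rigid driven slider and coupled to their neighbors, with a static/dynamic friction threshold at the interface. All parameters and the explicit time-stepping scheme are illustrative assumptions.

        import numpy as np

        def spring_block_step(x, v, x_drive, k_d, k_i, mu_s, mu_d, N_press, dt, m=1.0):
            """One explicit time step of a 1D chain of blocks under a driven slider.
            Blocks stick until the net spring force exceeds mu_s*N_press, then slide
            against a dynamic friction force mu_d*N_press opposing the motion."""
            f = k_d * (x_drive - x)                  # driving springs to the slider
            f[1:] += k_i * (x[:-1] - x[1:])          # left-neighbor springs
            f[:-1] += k_i * (x[1:] - x[:-1])         # right-neighbor springs
            moving = np.abs(v) > 1e-12
            breaking = ~moving & (np.abs(f) > mu_s * N_press)   # static threshold
            f_fric = np.where(moving | breaking,
                              -np.sign(np.where(moving, v, f)) * mu_d * N_press, -f)
            a = (f + f_fric) / m
            a[~(moving | breaking)] = 0.0            # stuck blocks stay at rest
            return x + v * dt, v + a * dt

        x, v = np.zeros(50), np.zeros(50)
        for step in range(10000):                    # slowly drive the slider
            x, v = spring_block_step(x, v, x_drive=1e-4 * step, k_d=1.0, k_i=10.0,
                                     mu_s=0.6, mu_d=0.4, N_press=1.0, dt=1e-3)
        print(round(float(x.mean()), 4))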

  15. The island dynamics model on parallel quadtree grids

    Science.gov (United States)

    Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic

    2018-05-01

    We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.

  16. Modeling when people quit: Bayesian censored geometric models with hierarchical and latent-mixture extensions.

    Science.gov (United States)

    Okada, Kensuke; Vandekerckhove, Joachim; Lee, Michael D

    2018-02-01

    People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
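
    A minimal sketch of the basic building block described, before the hierarchical and latent-mixture extensions: the likelihood of a geometric quitting process right-censored by a finite number m of available items. The parameterization (quit with probability theta after each collected item) and the toy data are illustrative assumptions.

        import numpy as np

        def censored_geometric_loglik(theta, counts, m):
            """Log-likelihood of per-item quit probability theta.
            counts: number of items each person collected; a count equal to m
            is treated as censored (the person exhausted the m available items)."""
            ll = 0.0
            for k in counts:
                if k < m:    # voluntary quit after k items: (1-theta)^(k-1) * theta
                    ll += np.log(theta) + (k - 1) * np.log(1 - theta)
                else:        # censored: survived the first m-1 quit opportunities
                    ll += (m - 1) * np.log(1 - theta)
            return ll

        thetas = np.linspace(0.01, 0.99, 99)
        data, m = [2, 5, 10, 10, 3, 1, 10], 10
        best = thetas[np.argmax([censored_geometric_loglik(t, data, m) for t in thetas])]
        print("MLE of quit probability:", round(float(best), 3))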

  17. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  18. Control of parallel-connected bidirectional AC-DC converters in stationary frame for microgrid application

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Guerrero, Josep M.; Teodorescu, Remus

    2011-01-01

    With the penetration of renewable energy into modern power systems, the microgrid has become a popular application worldwide. In this paper, parallel-connected bidirectional converters for AC and DC hybrid microgrid application are proposed as an efficient interface. To reach the goal of bidirectional power conversion, both rectifier and inverter modes are analyzed. In order to achieve high performance operation, a hierarchical control system is accomplished. The control system is designed in the stationary frame, with harmonic compensation in parallel and no coupled terms between axes. In this control...

  19. Hierarchical Planning Methodology for a Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Virna ORTIZ-ARAYA

    2012-01-01

    Full Text Available Hierarchical production planning is a widely utilized methodology for real world capacitated production planning systems with the aim of establishing different decision–making levels of the planning issues on the time horizon considered. This paper presents a hierarchical approach proposed to a company that produces reusable shopping bags in Chile and Perú, to determine the optimal allocation of resources at the tactical level as well as over the most immediate planning horizon to meet customer demands for the next weeks. Starting from an aggregated production planning model, the aggregated decisions are disaggregated into refined decisions in two levels, using a couple of optimization models that impose appropriate constraints to keep coherence of the plan on the production system. The main features of the hierarchical solution approach are presented.

  20. Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM

    Directory of Open Access Journals (Sweden)

    Nalan Baştürk

    2016-03-01

    Full Text Available This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover the speed performance of the Graphics Processing Unit version is much higher than the Central Processing Unit one.
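
    MitISEM itself fits an adaptive mixture of Student-t candidates; the sketch below shows only the final use of such a candidate in self-normalized importance sampling for a posterior kernel, with a single Student-t standing in for the fitted mixture. The target kernel and all settings are illustrative assumptions.

        import numpy as np
        from scipy import stats

        def log_kernel(x):
            """Illustrative non-elliptical target kernel (bimodal shape)."""
            return np.logaddexp(stats.norm.logpdf(x, -2.0, 0.7),
                                stats.norm.logpdf(x, 2.5, 1.2))

        cand = stats.t(df=5, loc=0.0, scale=3.0)       # heavy-tailed candidate
        draws = cand.rvs(size=50_000, random_state=4)
        logw = log_kernel(draws) - cand.logpdf(draws)  # unnormalized log-weights
        w = np.exp(logw - logw.max()); w /= w.sum()

        post_mean = np.sum(w * draws)                  # self-normalized IS estimate
        ess = 1.0 / np.sum(w ** 2)                     # effective sample size
        print(round(float(post_mean), 3), round(float(ess)))

    The per-draw weight evaluations are independent, which is what makes the algorithm straightforward to parallelize on GPUs and multi-core CPUs, as the paper reports.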

  1. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-11-01

    Full Text Available With high speed and accuracy, parallel manipulators have wide application in industry, but many difficulties remain in the actual control process because of time-varying parameters and coupling. Unfortunately, present-day commercial controllers cannot provide satisfactory performance, since they offer single-axis linear control only. Therefore, for a novel 2-DOF (Degree of Freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced, according to equivalent torques between the mechanical structure and the PM (Permanent Magnetism) servomotor. Second, the computed torque controller is described in detail for the above proposed model. At last, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify the favourable tracking ability and robustness.

  2. Mechatronic Model Based Computed Torque Control of a Parallel Manipulator

    Directory of Open Access Journals (Sweden)

    Zhiyong Yang

    2008-03-01

    Full Text Available With high speed and accuracy, parallel manipulators have wide application in industry, but many difficulties remain in the actual control process because of time-varying parameters and coupling. Unfortunately, present-day commercial controllers cannot provide satisfactory performance, since they offer single-axis linear control only. Therefore, for a novel 2-DOF (Degree of Freedom) parallel manipulator called Diamond 600, a motor-mechanism coupling dynamic model based control scheme employing the computed torque control algorithm is presented in this paper. First, the integrated dynamic coupling model is deduced, according to equivalent torques between the mechanical structure and the PM (Permanent Magnetism) servomotor. Second, the computed torque controller is described in detail for the above proposed model. At last, a series of numerical simulations and experiments are carried out to test the effectiveness of the system, and the results verify the favourable tracking ability and robustness.
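
    A generic sketch of the computed torque law referred to, in its textbook form for a rigid-body model rather than the Diamond 600 motor-mechanism coupling model: the control cancels the modeled dynamics and imposes linear error dynamics through PD gains. All matrices, gains, and the 2-DOF placeholder model are assumptions.

        import numpy as np

        def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
            """tau = M(q)(qdd_des + Kd(qd_des - qd) + Kp(q_des - q)) + C(q,qd)qd + g(q).
            M, C, g are callables returning the inertia matrix, Coriolis matrix,
            and gravity vector of the (possibly motor-coupled) dynamic model."""
            e, ed = q_des - q, qd_des - qd
            v = qdd_des + Kd @ ed + Kp @ e        # stabilized reference acceleration
            return M(q) @ v + C(q, qd) @ qd + g(q)

        # 2-DOF placeholder model: constant inertia, no Coriolis, no gravity.
        tau = computed_torque(
            q=np.zeros(2), qd=np.zeros(2),
            q_des=np.array([0.1, -0.2]), qd_des=np.zeros(2), qdd_des=np.zeros(2),
            M=lambda q: np.diag([1.5, 0.9]), C=lambda q, qd: np.zeros((2, 2)),
            g=lambda q: np.zeros(2), Kp=np.diag([100.0, 100.0]), Kd=np.diag([20.0, 20.0]))
        print(tau)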

  3. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. The algorithms can also be applied to shared-memory SIMD (single instruction stream, multiple data stream) computers, in which the whole sequence to be sorted can fit in the...
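
    As one classical example of the linear-array algorithms such a text covers, here is odd-even transposition sort, which sorts n items in n phases on a linear array of n processors; the code below is a sequential emulation of the parallel compare-exchange phases, not taken from the book.

        def odd_even_transposition_sort(a):
            """Emulation of sorting on a linear array of len(a) processors:
            in each phase, disjoint neighbor pairs compare-exchange in parallel."""
            a = list(a)
            n = len(a)
            for phase in range(n):
                start = phase % 2                     # alternate even/odd pairings
                for i in range(start, n - 1, 2):      # these swaps are independent
                    if a[i] > a[i + 1]:
                        a[i], a[i + 1] = a[i + 1], a[i]
            return a

        print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]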

  4. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar

    2018-04-17

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.

  6. Reduced Rank Mixed Effects Models for Spatially Correlated Hierarchical Functional Data

    KAUST Repository

    Zhou, Lan

    2010-03-01

    Hierarchical functional data are widely seen in complex studies where sub-units are nested within units, which in turn are nested within treatment groups. We propose a general framework of functional mixed effects model for such data: within unit and within sub-unit variations are modeled through two separate sets of principal components; the sub-unit level functions are allowed to be correlated. Penalized splines are used to model both the mean functions and the principal components functions, where roughness penalties are used to regularize the spline fit. An EM algorithm is developed to fit the model, while the specific covariance structure of the model is utilized for computational efficiency to avoid storage and inversion of large matrices. Our dimension reduction with principal components provides an effective solution to the difficult tasks of modeling the covariance kernel of a random function and modeling the correlation between functions. The proposed methodology is illustrated using simulations and an empirical data set from a colon carcinogenesis study. Supplemental materials are available online.
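
    The penalized spline fits used for the mean and principal component functions reduce to ridge-type algebra: minimize ||y − Bc||² + λ c'Pc, giving c = (B'B + λP)⁻¹B'y. A small sketch with a truncated-power basis (a simpler basis than in the paper, for illustration only):

        import numpy as np

        def penalized_spline_fit(x, y, knots, lam):
            """Roughness-penalized fit: only the knot (jump) coefficients are
            shrunk, so lam trades smoothness against fidelity."""
            B = np.column_stack([np.ones_like(x), x] +
                                [np.maximum(x - k, 0.0) for k in knots])
            P = np.zeros((B.shape[1], B.shape[1]))
            P[2:, 2:] = np.eye(len(knots))      # penalize knot terms only
            coef = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
            return B @ coef

        x = np.linspace(0.0, 1.0, 200)
        y = np.sin(2 * np.pi * x) + np.random.default_rng(1).normal(0, 0.3, 200)
        smooth = penalized_spline_fit(x, y, knots=np.linspace(0.1, 0.9, 9), lam=1.0)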

  7. Fast robot kinematics modeling by using a parallel simulator (PSIM)

    International Nuclear Information System (INIS)

    El-Gazzar, H.M.; Ayad, N.M.A.

    2002-01-01

    High-speed computers are strongly needed not only for solving scientific and engineering problems, but also for numerous industrial applications. Such applications include computer-aided design, oil exploration, weather prediction, space applications and safety of nuclear reactors. The rapid development in VLSI technology makes it possible to implement time-consuming algorithms in real-time situations. Parallel processing approaches can now be used to reduce the processing time for models of very high mathematical structure, such as the kinematics modeling of robot manipulators. This system is used to construct and evaluate the performance and cost effectiveness of several proposed methods for computing the Jacobian. Parallelism is introduced to the algorithms by using different task allocations and dividing the whole job into subtasks. Detailed analysis is performed and results are obtained for the case of a six-DOF (degree of freedom) robot arm (the Stanford Arm). Execution-time comparisons between von Neumann (uniprocessor) and parallel processor architectures, obtained using the parallel simulator package (PSIM), are presented. The results favour the parallel techniques, with improvements of at least fifty percent. Further studies are needed to determine the most convenient and optimum number of processors.
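
    One natural task allocation for the Jacobian is by column, since column j depends only on joint j; a toy numerical version with a made-up planar forward-kinematics function (not the PSIM implementation):

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from functools import partial

        def fk(q):
            """Toy forward kinematics: end-effector position of a planar arm."""
            angles = np.cumsum(q)
            return np.array([np.cos(angles).sum(), np.sin(angles).sum()])

        def jac_column(q, j, h=1e-6):
            dq = np.zeros_like(q); dq[j] = h
            return (fk(q + dq) - fk(q - dq)) / (2 * h)   # central difference

        def jacobian_parallel(q):
            with ProcessPoolExecutor() as ex:            # one task per column
                cols = list(ex.map(partial(jac_column, q), range(len(q))))
            return np.column_stack(cols)

        if __name__ == "__main__":
            print(jacobian_parallel(np.linspace(0.1, 0.6, 6)))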

  9. Resolving Neighbourhood Relations in a Parallel Fluid Dynamic Solver

    KAUST Repository

    Frisch, Jerome

    2012-06-01

    Computational Fluid Dynamics simulations require an enormous computational effort if physically reasonable accuracy is to be reached. Therefore, a parallel implementation is inevitable. This paper describes the basics of our implemented fluid solver, with particular attention to the hierarchical data structure, unique cell and grid identification, and the neighbourhood relations between grids on different processes. A special server concept keeps track of every grid over all processes while minimising data transfer between the nodes. © 2012 IEEE.

  10. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system: the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  11. Hierarchical modelling of temperature and habitat size effects on population dynamics of North Atlantic cod

    DEFF Research Database (Denmark)

    Mantzouni, Irene; Sørensen, Helle; O'Hara, Robert B.

    2010-01-01

    Understanding how temperature affects cod (Gadus morhua) ecology is important for forecasting how populations will develop as climate changes in future. The effects of spawning-season temperature and habitat size on cod recruitment dynamics have been investigated across the North Atlantic. Ricker and Beverton and Holt stock–recruitment (SR) models were extended by applying hierarchical methods, mixed-effects models, and Bayesian inference to incorporate the influence of these ecosystem factors on model parameters representing cod maximum reproductive rate and carrying capacity. We identified ...
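
    The Ricker SR curve underlying the analysis is R = αS·exp(−βS); a hierarchical extension lets each stock's α vary with temperature around a shared baseline (made-up numbers, not the paper's fitted model):

        import numpy as np

        def ricker(S, alpha, beta):
            """Ricker stock-recruitment curve: R = alpha * S * exp(-beta * S)."""
            return alpha * S * np.exp(-beta * S)

        # Hierarchical twist: each stock's log(alpha) is a common baseline plus
        # a temperature effect and a stock-level random deviation.
        rng = np.random.default_rng(1)
        temps = np.array([4.0, 6.5, 9.0])     # spawning-season temperature per stock
        log_alpha = 1.2 - 0.08 * temps + rng.normal(0.0, 0.1, 3)
        S = np.linspace(1.0, 100.0, 5)
        for a in np.exp(log_alpha):
            print(np.round(ricker(S, a, 0.02), 1))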

  12. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies caused by various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which is used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We illustrate the implementation of the HBGM approach on ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
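
    The final classification step reduces to Bayes' rule over class-conditional mixture densities; a toy two-class version (class names, mixture parameters and priors below are all invented for illustration):

        import numpy as np
        from scipy.stats import multivariate_normal as mvn

        # Toy class-conditional Gaussian mixtures standing in for forward-model
        # output distributions; "bonded"/"unbonded" are hypothetical labels.
        classes = {
            "bonded":   [(0.6, [0.0, 0.0], np.eye(2)), (0.4, [1.0, 1.0], 0.5 * np.eye(2))],
            "unbonded": [(1.0, [3.0, 0.5], np.eye(2))],
        }
        prior = {"bonded": 0.5, "unbonded": 0.5}

        def posterior(x):
            """Bayes' rule over mixture likelihoods: p(class | x)."""
            like = {c: sum(w * mvn.pdf(x, m, S) for w, m, S in comps)
                    for c, comps in classes.items()}
            z = sum(prior[c] * like[c] for c in classes)
            return {c: prior[c] * like[c] / z for c in classes}

        print(posterior(np.array([0.8, 0.9])))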

  13. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Directory of Open Access Journals (Sweden)

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
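
    The Chinese restaurant process at the heart of these models seats each new observation at an existing cluster with probability proportional to its size, or at a new cluster with probability proportional to a concentration parameter alpha (a generic unbounded CRP, not the finite-capacity variant used in the paper):

        import random

        def crp(n, alpha, seed=0):
            """Sample a partition of n items from the Chinese restaurant process."""
            rng = random.Random(seed)
            tables, assignment = [], []
            for i in range(n):
                r = rng.uniform(0, i + alpha)   # total weight so far is i + alpha
                acc, k = 0.0, len(tables)
                for j, size in enumerate(tables):
                    acc += size
                    if r < acc:
                        k = j
                        break
                if k == len(tables):
                    tables.append(1)            # open a new cluster
                else:
                    tables[k] += 1
                assignment.append(k)
            return assignment

        print(crp(20, alpha=1.0))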

  14. Modular networks with hierarchical organization

    Indian Academy of Sciences (India)

    Several networks occurring in real life have modular structures that are arranged in a hierarchical fashion. In this paper, we have proposed a model for such networks, using a stochastic generation method. Using this model we show that the scaling relation between the clustering and degree of the nodes is not a necessary ...

  15. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2017-11-22

    An intensive R&D and programming effort is required to accomplish the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
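
    The consistency-check idea can be illustrated generically: a vectorized sampler and its scalar counterpart should draw from the same distribution, which a two-sample Kolmogorov-Smirnov test can flag (a toy stand-in, not the GeantV test suite):

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(42)
        scalar_draws = np.array([rng.exponential(2.0) for _ in range(5000)])
        vector_draws = rng.exponential(2.0, 5000)
        stat, p = ks_2samp(scalar_draws, vector_draws)
        print(f"KS statistic = {stat:.4f}, p-value = {p:.3f}")  # large p: consistent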

  16. Genomic analysis of the hierarchical structure of regulatory networks

    Science.gov (United States)

    Yu, Haiyuan; Gerstein, Mark

    2006-01-01

    A fundamental question in biology is how the cell uses transcription factors (TFs) to coordinate the expression of thousands of genes in response to various stimuli. The relationships between TFs and their target genes can be modeled in terms of directed regulatory networks. These relationships, in turn, can be readily compared with commonplace “chain-of-command” structures in social networks, which have characteristic hierarchical layouts. Here, we develop algorithms for identifying generalized hierarchies (allowing for various loop structures) and use these approaches to illuminate extensive pyramid-shaped hierarchical structures existing in the regulatory networks of representative prokaryotes (Escherichia coli) and eukaryotes (Saccharomyces cerevisiae), with most TFs at the bottom levels and only a few master TFs on top. These masters are situated near the center of the protein–protein interaction network, a different type of network from the regulatory one, and they receive most of the input for the whole regulatory hierarchy through protein interactions. Moreover, they have maximal influence over other genes, in terms of affecting expression-level changes. Surprisingly, however, TFs at the bottom of the regulatory hierarchy are more essential to the viability of the cell. Finally, one might think master TFs achieve their wide influence through directly regulating many targets, but TFs with most direct targets are in the middle of the hierarchy. We find, in fact, that these midlevel TFs are “control bottlenecks” in the hierarchy, and this great degree of control for “middle managers” has parallels in efficient social structures in various corporate and governmental settings. PMID:17003135
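
    The notion of hierarchy level can be made concrete on a toy regulatory graph: layering nodes by longest path to their terminal targets puts a few masters on top and genes at the bottom (node names are hypothetical; the paper's algorithm additionally handles loop structures, which this sketch does not):

        from functools import lru_cache

        # Toy regulatory edges (TF -> targets); acyclic by construction here.
        edges = {"masterTF": ["midTF1", "midTF2"], "midTF1": ["geneA", "geneB"],
                 "midTF2": ["geneB", "geneC"], "geneA": [], "geneB": [], "geneC": []}

        @lru_cache(maxsize=None)
        def level(node):
            """Hierarchy level = longest path from node down to a terminal target."""
            return 0 if not edges[node] else 1 + max(level(t) for t in edges[node])

        for n in edges:
            print(n, level(n))   # masterTF at level 2, mid TFs at 1, genes at 0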

  17. Construction of a digital elevation model: methods and parallelization

    International Nuclear Information System (INIS)

    Mazzoni, Christophe

    1995-01-01

    The aim of this work is to reduce the computation time needed to produce Digital Elevation Models (DEM) by using a parallel machine. It was carried out in collaboration between the French 'Institut Geographique National' (IGN) and the Laboratoire d'Electronique de Technologie et d'Instrumentation (LETI) of the French Atomic Energy Commission (CEA). The IGN has developed a system which produces DEM used to make topographic maps. The kernel of this system is the correlator, a software component which automatically matches pairs of homologous points in a stereo-pair of photographs. Nevertheless, the correlator is expensive in computing time. In order to reduce computation time and to produce DEM with the same accuracy as the existing system, we have parallelized the IGN's correlator on the OPENVISION system. This hardware solution uses a SIMD (Single Instruction Multiple Data) parallel machine, SYMPATI-2, developed by the LETI, which is involved in parallel architecture and image processing. Our analysis of the implementation has demonstrated the difficulty of efficient coupling between scalar and parallel structures, so we propose solutions to reinforce this coupling. In order to accelerate the processing further, we evaluate SYMPHONIE, a SIMD calculator and successor of SYMPATI-2. On the other hand, we developed a multi-agent approach for which a MIMD (Multiple Instruction, Multiple Data) architecture is available. Finally, we describe a Multi-SIMD architecture that reconciles our two approaches. This architecture offers the capacity to handle multi-level image processing efficiently. It is flexible through its modularity, and its communication network supplies the reliability required by sensitive systems. (author) [fr
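
    The correlator's core operation, matching homologous points, can be sketched as normalized cross-correlation along an epipolar strip (a generic NCC matcher, not IGN's correlator):

        import numpy as np

        def best_disparity(left_patch, right_strip):
            """Slide the patch along the strip; return the offset with the
            highest normalized cross-correlation score."""
            h, w = left_patch.shape
            lp = (left_patch - left_patch.mean()) / (left_patch.std() + 1e-9)
            scores = []
            for d in range(right_strip.shape[1] - w + 1):
                rp = right_strip[:, d:d + w]
                rp = (rp - rp.mean()) / (rp.std() + 1e-9)
                scores.append(float((lp * rp).mean()))
            return int(np.argmax(scores))

        rng = np.random.default_rng(4)
        right = rng.normal(size=(9, 60))
        left = right[:, 23:32].copy()          # true disparity is 23
        print(best_disparity(left, right))     # -> 23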

  18. Modelling distribution of evaporating CO2 in parallel minichannels

    DEFF Research Database (Denmark)

    Brix, Wiebke; Kærn, Martin Ryhl; Elmegaard, Brian

    2010-01-01

    The effects of airflow non-uniformity and uneven inlet qualities on the performance of a minichannel evaporator with parallel channels, using CO2 as refrigerant, are investigated numerically. For this purpose a one-dimensional discretised steady-state model was developed, applying well-known empirical correlations. Non-uniform airflow is found to lead to maldistribution of the refrigerant and considerable capacity reduction of the evaporator. Uneven inlet qualities to the different channels show only minor effects on the refrigerant distribution and evaporator capacity as long as the channels are vertically oriented with CO2 flowing upwards. For horizontal channels, capacity reductions are found for both non-uniform airflow and uneven inlet qualities. For horizontal minichannels the results are very similar to those obtained using R134a as refrigerant.
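
    The coupling between parallel channels comes from a shared pressure drop; with a simple quadratic channel law the balanced distribution even has a closed form (a toy illustration of flow distribution, not the paper's discretised two-phase model):

        import numpy as np

        # All channels must see the same pressure drop dp = k_i * m_i**2 while
        # flows sum to the total; the balance gives m_i proportional to 1/sqrt(k_i).
        k = np.array([1.0, 1.3, 0.8, 2.0])      # illustrative channel resistances
        m_total = 1.0
        w = 1.0 / np.sqrt(k)
        m = m_total * w / w.sum()
        print("flows:", m.round(3))
        print("dp per channel:", (k * m**2).round(4))   # identical values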

  19. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
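
    The prior described, a weighted mixture of a point mass at zero and a positive exponential, is easy to sample, which makes its joint encoding of sparsity and positivity concrete (illustrative hyperparameters, not the paper's full Gibbs sampler):

        import numpy as np

        rng = np.random.default_rng(7)
        n, w, scale = 20, 0.2, 1.0       # w: prior probability a pixel is nonzero

        # Mixture prior: mass (1 - w) at zero, positive exponential otherwise.
        active = rng.random(n) < w
        image = np.where(active, rng.exponential(scale, n), 0.0)
        print(image.round(2))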

  20. A conceptual modeling framework for discrete event simulation using hierarchical control structures.

    Science.gov (United States)

    Furian, N; O'Sullivan, M; Walker, C; Vössner, S; Neubacher, D

    2015-08-01

    Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model-components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework that pays more attention to the identification of a model's system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.

  1. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto; Cheung, Sai Hung

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.
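
    Load balancing of sampler tasks, posed in the paper as a binary integer program, can be approximated greedily: always hand the next-largest task to the least-loaded processor (a longest-processing-time heuristic, not the exact BIP formulation):

        import heapq

        def lpt_balance(task_costs, n_workers):
            """Longest-processing-time heuristic: next-largest task goes to the
            currently least-loaded worker."""
            loads = [(0.0, w) for w in range(n_workers)]
            heapq.heapify(loads)
            assignment = {w: [] for w in range(n_workers)}
            for cost, task in sorted(((c, i) for i, c in enumerate(task_costs)),
                                     reverse=True):
                load, w = heapq.heappop(loads)
                assignment[w].append(task)
                heapq.heappush(loads, (load + cost, w))
            return assignment

        print(lpt_balance([5, 3, 8, 2, 7, 4], 2))   # -> {0: [2, 0, 1], 1: [4, 5, 3]}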

  2. Hierarchical State Machines as Modular Horn Clauses

    Directory of Open Access Journals (Sweden)

    Pierre-Loïc Garoche

    2016-07-01

    Full Text Available In model-based development, embedded systems are modeled using a mix of dataflow formalisms, which capture the flow of computation, and hierarchical state machines, which capture the modal behavior of the system. For safety analysis, existing approaches rely on a compilation scheme that transforms the original model (dataflow and state machines) into a pure dataflow formalism. Such compilation often results in loss of important structural information that captures the modal behavior of the system. In previous work we developed a compilation technique from a dataflow formalism into modular Horn clauses. In this paper, we present a novel technique that faithfully compiles hierarchical state machines into modular Horn clauses. Our compilation technique preserves the structural and modal behavior of the system, making the safety analysis of such models more tractable.

  3. High-accuracy critical exponents for O(N) hierarchical 3D sigma models

    International Nuclear Information System (INIS)

    Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.

    2006-01-01

    The critical exponent γ and its subleading exponent Δ in the 3D O(N) Dyson's hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ⃗·φ⃗ − 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, with at least 4 significant digits.

  4. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
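
    Self-scheduled task parallelism fits in a few lines: a shared pool hands tasks to whichever worker is free, so irregular task costs balance themselves (a generic sketch, not the ADLB library's API):

        from multiprocessing import Pool

        def work(n):
            # Stand-in for an irregular task: cost varies strongly with input.
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            tasks = [10_000, 2_000_000, 50_000, 1_500_000, 100] * 4
            with Pool(4) as pool:
                # imap_unordered hands out tasks as workers become free, so the
                # load balances itself without a precomputed schedule.
                results = list(pool.imap_unordered(work, tasks))
            print(len(results), "tasks done")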

  5. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well-established parallel programming models OpenMP and MPI both target lower-level parallelism and are meant to be as language-agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher-level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community in the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts, however, have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...

  6. Toward combining thematic information with hierarchical multiscale segmentations using tree Markov random field model

    Science.gov (United States)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi

    2017-09-01

    It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial-resolution remote sensing (HR) images. However, it remains a great challenge to automatically select the proper segmentation scale(s) solely according to the image information. In this study, we propose a novel way of information fusion at the object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined as closely as possible to the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; (2) exploiting the hierarchical context naturally embedded in the multiscale segmentations. HR images of both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework on these two aspects.

  7. Bayesian hierarchical modelling of North Atlantic windiness

    Directory of Open Access Journals (Sweden)

    E. Vanem

    2013-03-01

    Full Text Available Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause of, or a contributing factor to, maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea, and there is a need for stochastic models that can describe the variability in both space and time at various scales of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness, and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  9. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    Directory of Open Access Journals (Sweden)

    Yong Xia

    2015-01-01

    Full Text Available Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge for traditional CPU-based computing resources, which either cannot meet the requirements of the whole computation or are not easily available due to expensive costs. GPUs, as a parallel computing environment, therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: one is the single-cell model (a system of ordinary differential equations) and the other is the diffusion term of the monodomain model (a partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
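
    The ODE/PDE decoupling described is a classic operator-splitting step; a 1-D toy with a linear stand-in for the ionic cell model conveys the idea (parameters are illustrative and chosen to keep the explicit diffusion step stable):

        import numpy as np

        n, dt, dx, D = 200, 0.01, 0.2, 1.0
        v = np.zeros(n); v[:10] = 1.0           # initial excitation at one end

        def cell_step(v, dt):
            return v + dt * (-0.5 * v)           # toy linear "ionic" ODE per cell

        def diffusion_step(v, dt):
            lap = np.zeros_like(v)
            lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
            return v + dt * D * lap              # explicit monodomain diffusion

        for _ in range(100):                     # split update: ODE, then PDE
            v = diffusion_step(cell_step(v, dt), dt)
        print(round(float(v.max()), 4))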

  10. Type Ia Supernova Light Curve Inference: Hierarchical Models for Nearby SN Ia in the Optical and Near Infrared

    Science.gov (United States)

    Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.

    2010-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (sigma(MH) = 0.11 +/- 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework incorporates coherently multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, peculiar velocity dispersion and distances, for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov Chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.

  11. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This is an extension of earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data are packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.

  12. Monitoring Farmland Loss Caused by Urbanization in Beijing from Modis Time Series Using Hierarchical Hidden Markov Model

    Science.gov (United States)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we propose a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. A three-level HHMM is then constructed to model the multi-level semantic structure of the farmland change process. Once the HHMM is established, a change from farmland to built-up land can be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
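
    Inferring the most likely underlying state sequence is the Viterbi recursion; a flat two-state toy (farmland vs. built-up, with made-up transition and emission probabilities) conveys the idea, although the paper's model is hierarchical:

        import numpy as np

        def viterbi(obs, A, B, pi):
            """Most likely hidden-state path (log-domain Viterbi)."""
            T, N = len(obs), len(pi)
            logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
            delta = logpi + logB[:, obs[0]]
            back = np.zeros((T, N), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + logA       # scores[i, j]: i -> j
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + logB[:, obs[t]]
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        # States: 0 = farmland, 1 = built-up; observations: a discretized
        # vegetation index. All probabilities below are invented.
        A = np.array([[0.95, 0.05], [0.01, 0.99]])
        B = np.array([[0.8, 0.2], [0.3, 0.7]])
        obs = [0, 0, 1, 0, 1, 1, 1]
        print(viterbi(obs, A, B, np.array([0.9, 0.1])))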

  13. Parallel performance of TORT on the CRAY J90: Model and measurement

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1997-10-01

    A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of the code

  14. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    Science.gov (United States)

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not be sufficient to identify unintentional errors in spreadsheet-based models.

  16. Construction of 3D Arrays of Cylindrically Hierarchical Structures with ZnO Nanorods Hydrothermally Synthesized on Optical Fiber Cores

    Directory of Open Access Journals (Sweden)

    Weixuan Jing

    2014-01-01

    Full Text Available With ZnO nanorods hydrothermally synthesized on manually assembled arrays of optical fiber cores, 3D arrays of ZnO nanorod-based cylindrically hierarchical structures with a nominal pitch of 250 μm or 375 μm were constructed. Based on scanning electron microscopy micrographs and image processing operators of the MATLAB software, the 3D arrays of cylindrically hierarchical structures were quantitatively characterized. The values of the actual diameters, the actual pitches, and the parallelism errors suggest that the process capability of the manual assembly is sufficient and the quality of the 3D arrays of cylindrically hierarchical structures is acceptable. The values of characteristic parameters such as roughness, skewness, kurtosis, correlation length, and power spectral density show that the surface morphologies of the cylindrically hierarchical structures not only were affected significantly by the Zn2+ concentration of the growth solution but also were anisotropic due to the different curvature radii of the optical fiber core in side and front views.

  17. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
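
    The hierarchy here is additive in the covariance: a shared gene-level GP plus a replicate-level deviation GP, so within-replicate covariance is the sum of two kernels and irregular sampling times are handled naturally. A minimal sketch with made-up hyperparameters (not the authors' implementation):

        import numpy as np

        def rbf(x, var, ls):
            d = x[:, None] - x[None, :]
            return var * np.exp(-0.5 * (d / ls) ** 2)

        rng = np.random.default_rng(3)
        t = np.sort(rng.uniform(0, 10, 15))     # irregular sampling times
        # Shared gene-level kernel plus replicate-level deviation kernel.
        K = rbf(t, 1.0, 2.0) + rbf(t, 0.3, 1.0) + 1e-8 * np.eye(len(t))
        y = rng.multivariate_normal(np.zeros(len(t)), K)   # one replicate's curve
        print(y.round(2))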

  18. A parallel adaptive mesh refinement algorithm for predicting turbulent non-premixed combusting flows

    International Nuclear Information System (INIS)

    Gao, X.; Groth, C.P.T.

    2005-01-01

    A parallel adaptive mesh refinement (AMR) algorithm is proposed for predicting turbulent non-premixed combusting flows characteristic of gas turbine engine combustors. The Favre-averaged Navier-Stokes equations governing mixture and species transport for a reactive mixture of thermally perfect gases in two dimensions, the two transport equations of the κ-ψ turbulence model, and the time-averaged species transport equations, are all solved using a fully coupled finite-volume formulation. A flexible block-based hierarchical data structure is used to maintain the connectivity of the solution blocks in the multi-block mesh and facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. This AMR approach allows for anisotropic mesh refinement and the block-based data structure readily permits efficient and scalable implementations of the algorithm on multi-processor architectures. Numerical results for turbulent non-premixed diffusion flames, including cold- and hot-flow predictions for a bluff body burner, are described and compared to available experimental data. The numerical results demonstrate the validity and potential of the parallel AMR approach for predicting complex non-premixed turbulent combusting flows. (author)

  19. Analysis of clinical complication data for radiation hepatitis using a parallel architecture model

    International Nuclear Information System (INIS)

    Jackson, A.; Haken, R.K. ten; Robertson, J.M.; Kessler, M.L.; Kutcher, G.J.; Lawrence, T.S.

    1995-01-01

    Purpose: The detailed knowledge of dose volume distributions available from the three-dimensional (3D) conformal radiation treatment of tumors in the liver (reported elsewhere) offers new opportunities to quantify the effect of volume on the probability of producing radiation hepatitis. We aim to test a new parallel architecture model of normal tissue complication probability (NTCP) with these data. Methods and Materials: Complication data and dose volume histograms from a total of 93 patients with normal liver function, treated on a prospective protocol with 3D conformal radiation therapy and intraarterial hepatic fluorodeoxyuridine, were analyzed with a new parallel architecture model. Patient treatment fell into six categories differing in doses delivered and volumes irradiated. By modeling the radiosensitivity of liver subunits, we are able to use dose volume histograms to calculate the fraction of the liver damaged in each patient. A complication results if this fraction exceeds the patient's functional reserve. To determine the patient distribution of functional reserves and the subunit radiosensitivity, the maximum likelihood method was used to fit the observed complication data. Results: The parallel model fit the complication data well, although uncertainties on the functional reserve distribution and subunit radiosensitivity are highly correlated. Conclusion: The observed radiation hepatitis complications show a threshold effect that can be described well with a parallel architecture model. However, additional independent studies are required to better determine the parameters defining the functional reserve distribution and subunit radiosensitivity
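
    In a parallel architecture model the quantity of interest is the damaged subunit fraction computed from the dose volume histogram, with a complication predicted when it exceeds the functional reserve. A sketch with a logistic subunit response (all numbers are illustrative, not fitted parameters):

        import numpy as np

        def damaged_fraction(dose, vol, d50, k):
            """Weight each DVH bin's volume fraction by a logistic subunit
            dose-response; returns the fraction of subunits damaged."""
            p_subunit = 1.0 / (1.0 + (d50 / np.maximum(dose, 1e-9)) ** k)
            return float(np.sum(vol * p_subunit))

        dose = np.array([10.0, 25.0, 40.0, 60.0])   # toy differential DVH (Gy)
        vol = np.array([0.4, 0.3, 0.2, 0.1])        # volume fractions, sum to 1
        f = damaged_fraction(dose, vol, d50=35.0, k=3.0)
        reserve = 0.55                              # illustrative functional reserve
        print(f"damaged fraction = {f:.2f}; complication predicted: {f > reserve}")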

  20. Teaching Scientific Computing: A Model-Centered Approach to Pipeline and Parallel Programming with C

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2015-01-01

    Full Text Available The aim of this study is to present an approach to introducing pipeline and parallel computing, using a model of a multiphase queueing system. Pipeline computing, including software pipelines, is among the key concepts in modern computing and electronics engineering. Modern computer science and engineering education requires a comprehensive curriculum, so an introduction to pipeline and parallel computing is an essential topic to be included. At the same time, the topic is among the most motivating due to its comprehensive multidisciplinary and technical requirements. To enhance the educational process, the paper proposes a novel model-centered framework and develops the relevant learning objects. It allows implementing an educational platform for a constructivist learning process, thus enabling learners' experimentation with the provided programming models, building learners' competences in modern scientific research and computational thinking, and capturing the relevant technical knowledge. It also provides an integral platform that allows a simultaneous and comparative introduction to pipelining and parallel computing. The programming language C for developing programming models and the message passing interface (MPI) and OpenMP parallelization tools were chosen for the implementation.

  1. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restrictive assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In previous models, there is a restrictive assumption that all components of a subsystem must be homogeneous. • The presented model allows the subsystems' components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
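
    The fitness evaluation inside such a GA reduces to the series-parallel reliability formula, which accepts non-homogeneous component reliabilities directly (illustrative numbers):

        def system_reliability(subsystems):
            """Series of parallel subsystems: a subsystem works if any of its
            (possibly non-homogeneous) components works; the system needs every
            subsystem to work."""
            r_sys = 1.0
            for comps in subsystems:
                fail = 1.0
                for r in comps:          # component reliabilities may differ
                    fail *= (1.0 - r)
                r_sys *= (1.0 - fail)
            return r_sys

        # Subsystem 2 mixes component types, the flexibility the model adds.
        print(system_reliability([[0.9, 0.9], [0.85, 0.7, 0.7], [0.95]]))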

  2. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Purba, Victor [University of Minnesota; Jafarpour, Saber [University of California, Santa Barbara; Bullo, Francesco [University of California, Santa Barbara; Dhople, Sairaj [University of Minnesota

    2017-08-31

    Given that next-generation infrastructures will contain large numbers of grid-connected inverters and these interfaces will be satisfying a growing fraction of system load, it is imperative to analyze the impacts of power electronics on such systems. However, since each inverter model has a relatively large number of dynamic states, it would be impractical to execute complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. That is, we show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as an individual inverter in the paralleled system. Numerical simulations validate the reduced-order models.
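
    The aggregation rests on a simple impedance argument: if each of N parallel inverters scales its filter components with rating, the parallel combination of the N branches is itself one filter of the same structure. A numeric check of the inductive branch (illustrative values, not the paper's derivation for the full LCL and controller dynamics):

        import numpy as np

        N, L, w = 5, 2e-3, 2 * np.pi * 60         # illustrative values
        z_branch = 1j * w * L                      # one inverter's filter inductor
        z_parallel = z_branch / N                  # N identical branches in parallel
        z_aggregate = 1j * w * (L / N)             # one unit rated N times higher
        print(np.isclose(z_parallel, z_aggregate)) # True: structure is preserved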

  3. Accelerating Lattice QCD Multigrid on GPUs Using Fine-Grained Parallelization

    Energy Technology Data Exchange (ETDEWEB)

    Clark, M. A. [NVIDIA Corp., Santa Clara; Joó, Bálint [Jefferson Lab; Strelchenko, Alexei [Fermilab; Cheng, Michael [Boston U., Ctr. Comp. Sci.; Gambhir, Arjun [William-Mary Coll.; Brower, Richard [Boston U.

    2016-12-22

    The past decade has witnessed a dramatic acceleration of lattice quantum chromodynamics calculations in nuclear and particle physics. This has been due both to significant progress in accelerating the iterative linear solvers using multi-grid algorithms, and to the throughput improvements brought by GPUs. Deploying hierarchical algorithms optimally on GPUs is non-trivial owing to the lack of parallelism on the coarse grids, and as such, these advances have not proved multiplicative. Using the QUDA library, we demonstrate that by exposing all sources of parallelism that the underlying stencil problem possesses, and through appropriate mapping of this parallelism to the GPU architecture, we can achieve high efficiency even for the coarsest of grids. Results are presented for the Wilson-Clover discretization, where we demonstrate up to 10x speedup over present state-of-the-art GPU-accelerated methods on Titan. Finally, we look to the future and consider the software implications of our findings.

  4. Toward a model framework of generalized parallel componential processing of multi-symbol numbers.

    Science.gov (United States)

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-05-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in 2-digit number processing. Then, we evaluated whether the model is capable of accounting for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of 2 integers, we observed a reliable sign-decade compatibility effect with prolonged reaction times for incompatible (e.g., -97 vs. +53; in which the number with the larger decade digit has the smaller, i.e., negative polarity sign) as compared with sign-decade compatible number pairs (e.g., -53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing. (c) 2015 APA, all rights reserved.

  5. Comparison of a species distribution model and a process model from a hierarchical perspective to quantify effects of projected climate change on tree species

    Science.gov (United States)

    Jeffrey E. Schneiderman; Hong S. He; Frank R. Thompson; William D. Dijak; Jacob S. Fraser

    2015-01-01

    Tree species distribution and abundance are affected by forces operating across a hierarchy of ecological scales. Process and species distribution models have been developed emphasizing forces at different scales. Understanding model agreement across hierarchical scales provides perspective on prediction uncertainty and ultimately enables policy makers and managers to...

  6. New physics beyond the standard model of particle physics and parallel universes

    Energy Technology Data Exchange (ETDEWEB)

    Plaga, R. [Franzstr. 40, 53111 Bonn (Germany)]. E-mail: rainer.plaga@gmx.de

    2006-03-09

    It is shown that if, and only if, 'parallel universes' exist, an electroweak vacuum that is expected to have decayed since the big bang with high probability might exist. It would neither necessarily render our existence unlikely, nor could it be observed. In this special case, the observation of certain combinations of Higgs-boson and top-quark masses, for which the standard model predicts such a decay, cannot be interpreted as evidence for new physics at low energy scales. The question of whether parallel universes exist is of interest to our understanding of the standard model of particle physics.

  7. Hierarchical decision modeling essays in honor of Dundar F. Kocaoglu

    CERN Document Server

    2016-01-01

    This volume, developed in honor of Dr. Dundar F. Kocaoglu, aims to demonstrate the applications of the Hierarchical Decision Model (HDM) in different sectors and its capacity in decision analysis. It is comprised of essays from noted scholars, academics and researchers of engineering and technology management around the world. This book is organized into four parts: Technology Assessment, Strategic Planning, National Technology Planning and Decision Making Tools. Dr. Dundar F. Kocaoglu is one of the pioneers of multiple decision models using hierarchies, and creator of the HDM in decision analysis. HDM is a mission-oriented method for evaluation and/or selection among alternatives. A wide range of alternatives can be considered, including but not limited to, different technologies, projects, markets, jobs, products, cities to live in, houses to buy, apartments to rent, and schools to attend. Dr. Kocaoglu’s approach has been adopted for decision problems in many industrial sectors, including electronics rese...

  8. Parallel programming practical aspects, models and current limitations

    CERN Document Server

    Tarkov, Mikhail S

    2014-01-01

    Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes:
    1. Processing large data arrays (including processing images and signals in real time)
    2. Simulation of complex physical processes and chemical reactions
    For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...

  9. A Bose-Einstein model of particle multiplicity distributions

    Energy Technology Data Exchange (ETDEWEB)

    Mekjian, A.Z. [Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 (United States) and California Institute of Technology, Kellogg Radiation Lab., Pasadena, CA 91106 (United States) and MTA KFKI RMKI, 114 PO Box 49, H-1525 Budapest (Hungary)]. E-mail: amekjian@physics.rutgers.edu; Csoergoe, T. [MTA KFKI RMKI, 114 PO Box 49, H-1525 Budapest (Hungary); Hegyi, S. [MTA KFKI RMKI, 114 PO Box 49, H-1525 Budapest (Hungary)

    2007-03-01

    A model of particle production is developed based on a parallel with a theory of Bose-Einstein condensation and similarities with other critical phenomena such as critical opalescence. The role of a power law critical exponent τ and Levy index α are studied. Various features of this model are developed and compared with other commonly used models of particle production which are shown to differ by having different values for τ, α. While void scaling is a feature of this model, hierarchical structure is not a general property of it. The value of the exponent τ=2 is a transition point associated with void and hierarchical scaling features. An exponent γ is introduced to describe enhanced fluctuations near a critical point. Experimentally determined properties of the void scaling function can be used to determine τ.

  10. A Bose-Einstein model of particle multiplicity distributions

    International Nuclear Information System (INIS)

    Mekjian, A.Z.; Csoergoe, T.; Hegyi, S.

    2007-01-01

    A model of particle production is developed based on a parallel with a theory of Bose-Einstein condensation and similarities with other critical phenomena such as critical opalescence. The role of a power law critical exponent τ and Levy index α are studied. Various features of this model are developed and compared with other commonly used models of particle production which are shown to differ by having different values for τ, α. While void scaling is a feature of this model, hierarchical structure is not a general property of it. The value of the exponent τ=2 is a transition point associated with void and hierarchical scaling features. An exponent γ is introduced to describe enhanced fluctuations near a critical point. Experimentally determined properties of the void scaling function can be used to determine τ

  11. A Bose Einstein model of particle multiplicity distributions

    Science.gov (United States)

    Mekjian, A. Z.; Csörgö, T.; Hegyi, S.

    2007-03-01

    A model of particle production is developed based on a parallel with a theory of Bose-Einstein condensation and similarities with other critical phenomena such as critical opalescence. The role of a power law critical exponent τ and Levy index α are studied. Various features of this model are developed and compared with other commonly used models of particle production which are shown to differ by having different values for τ, α. While void scaling is a feature of this model, hierarchical structure is not a general property of it. The value of the exponent τ=2 is a transition point associated with void and hierarchical scaling features. An exponent γ is introduced to describe enhanced fluctuations near a critical point. Experimentally determined properties of the void scaling function can be used to determine τ.

  12. Connectionist Models and Parallelism in High Level Vision.

    Science.gov (United States)

    1985-01-01

    Feldman, Jerome A. (Grant N00014-82-K-0193). Only fragments of the report text survive extraction: 'Computer science is just beginning to look seriously at parallel computation: it may turn out that...' and, on connectionist models of high-level vision: 'The program includes intermediate level networks that compute more complex joints and ones that compute parallelograms in the image.'

  13. Review of Active and Reactive Power Sharing Strategies in Hierarchical Controlled Microgrids

    DEFF Research Database (Denmark)

    Han, Yang; Li, Hong; Shen, Pan

    2017-01-01

    Microgrids consist of multiple parallel-connected distributed generation (DG) units with coordinated control strategies, which are able to operate in both grid-connected and islanded mode. Microgrids are attracting more and more attention since they can alleviate the stress of main transmission...... systems, reduce feeder losses, and improve system power quality. When the islanded microgrids are concerned, it is important to maintain system stability and achieve load power sharing among the multiple parallel-connected DG units. However, the poor active and reactive power sharing problems due...... in this paper for active power sharing. Moreover, nonlinear and unbalanced loads could further affect the reactive power sharing when regulating the active power, and it is difficult to share the reactive power accurately only by using the enhanced virtual impedance method. Therefore, the hierarchical control...

  14. Prion Amplification and Hierarchical Bayesian Modeling Refine Detection of Prion Infection

    Science.gov (United States)

    Wyckoff, A. Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J.; Pulford, Bruce; Wild, Margaret; Antolin, Michael; Vercauteren, Kurt; Zabel, Mark

    2015-02-01

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.

  15. Prion amplification and hierarchical Bayesian modeling refine detection of prion infection.

    Science.gov (United States)

    Wyckoff, A Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J; Pulford, Bruce; Wild, Margaret; Antolin, Michael; VerCauteren, Kurt; Zabel, Mark

    2015-02-10

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.
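    The core statistical idea, estimating test sensitivity and specificity without a gold standard by treating the disease state as latent, can be sketched with a small Gibbs sampler on simulated data. The simulation below uses three tests and parameter values loosely inspired by the abstract (sPMCA and the two IHC tissues); the paper's hierarchical model is considerably richer, and all numbers here are illustrative.

```python
# Gibbs-sampler sketch: latent disease state z, prevalence, and per-test
# sensitivity/specificity, all with flat Beta(1,1) priors. Simulated data only.
import numpy as np

rng = np.random.default_rng(1)
n, true_prev = 500, 0.19
true_se = np.array([0.99, 0.77, 0.90])    # assumed "truth" for the simulation
true_sp = np.array([0.995, 0.98, 0.99])
z_true = rng.random(n) < true_prev
p_pos = np.where(z_true[:, None], true_se, 1.0 - true_sp)
y = (rng.random((n, 3)) < p_pos).astype(int)

prev, se, sp = 0.5, np.full(3, 0.9), np.full(3, 0.9)
draws = []
for it in range(4000):
    like1 = prev * np.prod(se ** y * (1 - se) ** (1 - y), axis=1)
    like0 = (1 - prev) * np.prod((1 - sp) ** y * sp ** (1 - y), axis=1)
    z = rng.random(n) < like1 / (like1 + like0)      # sample latent states
    prev = rng.beta(1 + z.sum(), 1 + n - z.sum())    # conjugate Beta updates
    se = rng.beta(1 + y[z].sum(axis=0), 1 + (1 - y[z]).sum(axis=0))
    sp = rng.beta(1 + (1 - y[~z]).sum(axis=0), 1 + y[~z].sum(axis=0))
    if it >= 1000:                                   # discard burn-in
        draws.append(np.concatenate(([prev], se, sp)))
print(np.mean(draws, axis=0))  # posterior means: prevalence, Se(1..3), Sp(1..3)
```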

  16. Hierarchical drivers of reef-fish metacommunity structure.

    Science.gov (United States)

    MacNeil, M Aaron; Graham, Nicholas A J; Polunin, Nicholas V C; Kulbicki, Michel; Galzin, René; Harmelin-Vivien, Mireille; Rushton, Steven P

    2009-01-01

    Coral reefs are highly complex ecological systems, where multiple processes interact across scales in space and time to create assemblages of exceptionally high biodiversity. Despite the increasing frequency of hierarchically structured sampling programs used in coral-reef science, little progress has been made in quantifying the relative importance of processes operating across multiple scales. The vast majority of reef studies are conducted, or at least analyzed, at a single spatial scale, ignoring the implicitly hierarchical structure of the overall system in favor of small-scale experiments or large-scale observations. Here we demonstrate how alpha diversity (mean local number of species), beta diversity (degree of species dissimilarity among local sites), and gamma diversity (overall species richness) vary with spatial scale, and using a hierarchical, information-theoretic approach, we evaluate the relative importance of site-, reef-, and atoll-level processes driving the fish metacommunity structure among 10 atolls in French Polynesia. Process-based models, representing well-established hypotheses about drivers of reef-fish community structure, were assembled into a candidate set of 12 hierarchical linear models. Variation in fish abundance, biomass, and species richness was unevenly distributed among transect, reef, and atoll levels, establishing the relative contribution of variation at these spatial scales to the structure of the metacommunity. Reef-fish biomass, species richness, and the abundance of most functional-groups corresponded primarily with transect-level habitat diversity and atoll-lagoon size, whereas detritivore and grazer abundances were largely correlated with potential covariates of larval dispersal. Our findings show that (1) within-transect and among-atoll factors primarily drive the relationship between alpha and gamma diversity in this reef-fish metacommunity; (2) habitat is the primary correlate with reef-fish metacommunity structure at

  17. Using Hadoop MapReduce for Parallel Genetic Algorithms: A Comparison of the Global, Grid and Island Models.

    Science.gov (United States)

    Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica

    2017-06-29

    The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store
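    A plain-Python sketch of the island model's structure may help: independent subpopulations evolve separately and periodically exchange their best individuals along a ring. This is a toy OneMax GA, not the Hadoop MapReduce implementation the paper evaluates; all settings are illustrative.

```python
# Island-model PGA sketch: independent evolution per island plus ring migration.
import random

GENES, ISLANDS, POP, EPOCHS, MIGRANTS = 32, 4, 30, 20, 2

def fitness(ind):              # OneMax: count of 1-bits
    return sum(ind)

def evolve(pop, gens=10):
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENES)
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < 0.1:
                child[random.randrange(GENES)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = parents + children
    return pop

islands = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
           for _ in range(ISLANDS)]
for _ in range(EPOCHS):
    islands = [evolve(p) for p in islands]   # independent epochs (parallelizable)
    for k, pop in enumerate(islands):        # ring migration of best individuals
        dest = islands[(k + 1) % ISLANDS]
        best = sorted(pop, key=fitness, reverse=True)[:MIGRANTS]
        dest.sort(key=fitness)               # replace the destination's worst
        dest[:MIGRANTS] = [b[:] for b in best]

print(max(fitness(i) for p in islands for i in p))
```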

  18. HIERARCHICAL REGULARIZATION OF POLYGONS FOR PHOTOGRAMMETRIC POINT CLOUDS OF OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Xie

    2017-05-01

    Full Text Available Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary-point extraction, e.g. using alpha shapes, the local level consolidates the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, a shared label indicating that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  19. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    Science.gov (United States)

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel, compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
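    The three platform steps, building a task dependency graph, ordering it topologically, and running ready tasks in parallel, can be sketched with Python's standard library. The task names below are illustrative, and PARAMO itself is implemented on Map-Reduce rather than threads.

```python
# Dependency-graph scheduling sketch using stdlib graphlib + a thread pool.
from graphlib import TopologicalSorter
from concurrent.futures import ThreadPoolExecutor

# (1) Dependency graph: task -> set of prerequisite tasks (hypothetical names).
graph = {
    "features": {"cohort"},
    "cv_split": {"features"},
    "select":   {"cv_split"},
    "classify": {"select"},
}

def run(task):
    print("running", task)    # stand-in for the real pipeline step

# (2) Topological ordering and (3) parallel execution of all ready tasks.
ts = TopologicalSorter(graph)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = list(ts.get_ready())
        for task, _ in zip(ready, pool.map(run, ready)):
            ts.done(task)
```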

  20. Using hierarchical linear growth models to evaluate protective mechanisms that mediate science achievement

    Science.gov (United States)

    von Secker, Clare Elaine

    The study of students at risk is a major topic of science education policy and discussion. Much research has focused on describing conditions and problems associated with the statistical risk of low science achievement among individuals who are members of groups characterized by problems such as poverty and social disadvantage. But outcomes attributed to these factors do not explain the nature and extent of mechanisms that account for differences in performance among individuals at risk. There is ample theoretical and empirical evidence that demographic differences should be conceptualized as social contexts, or collections of variables, that alter the psychological significance and social demands of life events, and affect subsequent relationships between risk and resilience. The hierarchical linear growth models used in this dissertation provide greater specification of the role of social context and the protective effects of attitude, expectations, parenting practices, peer influences, and learning opportunities on science achievement. While the individual influences of these protective factors on science achievement were small, their cumulative effect was substantial. Meta-analysis conducted on the effects associated with psychological and environmental processes that mediate risk mechanisms in sixteen social contexts revealed twenty-two significant differences between groups of students. Positive attitudes, high expectations, and more intense science course-taking had positive effects on achievement of all students, although these factors were not equally protective in all social contexts. In general, effects associated with authoritative parenting and peer influences were negative, regardless of social context. An evaluation comparing the performance and stability of hierarchical linear growth models with traditional repeated measures models is included as well.

  1. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University]

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  2. Interaction Admittance Based Modeling of Multi-Paralleled Grid-Connected Inverter with LCL-Filter

    DEFF Research Database (Denmark)

    Lu, Minghui; Blaabjerg, Frede; Wang, Xiongfei

    2016-01-01

    This paper investigates the mutual interaction and stability issues of multi-parallel LCL-filtered inverters. The stability and power quality of multiple grid-tied inverters are gaining more and more research attention as the penetration of renewables increases. In this paper, interactions...... and coupling effects among the multi-paralleled inverters and power grid are explicitly revealed. An Interaction Admittance concept is introduced to express and model the interaction through the physical admittances of the network. Compared to the existing modeling methods, the proposed analysis provides...

  3. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
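    The essence of parallelizing statistical model checking, running many independent simulations concurrently and pooling the verdicts into a probability estimate, can be shown with a toy property over a random walk. PVeStA itself checks PCTL/CSL and QuaTEx properties against Maude specifications; the model and property below are purely illustrative.

```python
# Parallel Monte Carlo estimate of P(property) over independent simulations.
import random
from multiprocessing import Pool

def sample(seed):
    rng = random.Random(seed)
    x = 0
    for _ in range(100):            # one run of a biased random walk
        x += 1 if rng.random() < 0.55 else -1
    return x > 0                    # toy property: walk ends on the positive side

if __name__ == "__main__":
    n = 100_000
    with Pool() as pool:
        hits = sum(pool.map(sample, range(n), chunksize=1000))
    print("P(property) ~= %.4f" % (hits / n))
```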

  4. Relating Memory To Functional Performance In Normal Aging to Dementia Using Hierarchical Bayesian Cognitive Processing Models

    Science.gov (United States)

    Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.

    2012-01-01

    Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
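    For readers unfamiliar with the signal detection theory parametrization, the lines below compute discriminability (the memory process) and response bias (the executive function) from assumed hit and false-alarm rates. The paper infers these latent parameters hierarchically across patients and FAST groups rather than point-estimating them per subject.

```python
# Point-estimate SDT computation from made-up hit/false-alarm rates.
from statistics import NormalDist

z = NormalDist().inv_cdf
hit, fa = 0.82, 0.24                  # hypothetical rates
d_prime = z(hit) - z(fa)              # discriminability (memory process)
criterion = -0.5 * (z(hit) + z(fa))   # response bias (executive function)
print("d' = %.3f, c = %.3f" % (d_prime, criterion))
```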

  5. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking which leads to efficient solutions to problems on trees, such as computing lowest...... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  6. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine

    Science.gov (United States)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.

  7. Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Purba, Victor [University of Minnesota]; Jafarpour, Saber [University of California Santa-Barbara]; Bullo, Francesco [University of California Santa-Barbara]; Dhople, Sairaj V. [University of Minnesota]

    2017-08-21

    Next-generation power networks will contain large numbers of grid-connected inverters satisfying a significant fraction of system load. Since each inverter model has a relatively large number of dynamic states, it is impractical to analyze complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model with lumped parameters for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. We show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as any individual inverter in the system. Numerical simulations validate the reduced-order model.

  8. Using hierarchical Bayesian methods to examine the tools of decision-making

    OpenAIRE

    Michael D. Lee; Benjamin R. Newell

    2011-01-01

    Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants). Two worked-examples show how hierarchical models can be developed to account for and ...

  9. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with the local as well as global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case, they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging: the proposed approach significantly improved all registrations, in terms of mean-square error, relative to approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS

  10. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping...... with changing and increasing demands. Two-layer networks consist of one backbone network, which interconnects cluster networks. The clusters consist of nodes and links, which connect the nodes. One node in each cluster is a hub node, and the backbone interconnects the hub nodes of each cluster and thus...... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks...

  11. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.

  12. GSMNet: A Hierarchical Graph Model for Moving Objects in Networks

    Directory of Open Access Journals (Sweden)

    Hengcai Zhang

    2017-03-01

    Full Text Available Existing data models for moving objects in networks are often limited in flexibly controlling the granularity of network representation and the cost of location updates, and they do not encompass semantic information such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap left by traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. A set of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic information. We then develop a prototype system with the native graph database system Neo4j to implement the proposed GSMNet model. In the experiment, we conduct a performance evaluation using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages in representing semantic information and controlling the cost of location updates.

  13. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2013-01-01

    Power electronics based MicroGrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of parallel connected three-phase VSIs are derived. The proposed voltage and current inner control loops...... control restores the frequency and amplitude deviations produced by the primary control. Also, a synchronization algorithm is presented in order to connect the MicroGrid to the grid. Experimental results are provided to validate the performance and robustness of the parallel VSI system control...

  14. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1992-01-01

    Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]. A mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are

  15. Methods and models for the construction of weakly parallel tests

    NARCIS (Netherlands)

    Adema, J.J.; Adema, Jos J.

    1990-01-01

    Methods are proposed for the construction of weakly parallel tests, that is, tests with the same test information function. A mathematical programing model for constructing tests with a prespecified test information function and a heuristic for assigning items to tests such that their information

  16. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, providing deep neural network models that have no critical points introduced by a hierarchical structure. Such models are considered to have good properties for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks, appearing as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks to have no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two implementation methods of the sufficient conditions are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called the avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called the avoidant neural network).

  17. Probabilistic daily ILI syndromic surveillance with a spatio-temporal Bayesian hierarchical model.

    Directory of Open Access Journals (Sweden)

    Ta-Chien Chan

    Full Text Available BACKGROUND: For daily syndromic surveillance to be effective, an efficient and sensible algorithm would be expected to detect aberrations in influenza illness, and alert public health workers prior to any impending epidemic. This detection or alert surely contains uncertainty, and thus should be evaluated with a proper probabilistic measure. However, traditional monitoring mechanisms simply provide a binary alert, failing to adequately address this uncertainty. METHODS AND FINDINGS: Based on the Bayesian posterior probability of influenza-like illness (ILI) visits, the intensity of an outbreak can be directly assessed. The numbers of daily emergency room ILI visits at five community hospitals in Taipei City during 2006-2007 were collected and fitted with a Bayesian hierarchical model containing meteorological factors such as temperature and vapor pressure, spatial interaction with a conditional autoregressive structure, weekend and holiday effects, seasonality factors, and previous ILI visits. The proposed algorithm recommends an alert for action if the posterior probability is larger than 70%. External data from January to February of 2008 were retained for validation. The decision rule detects successfully the peak in the validation period. When comparing the posterior probability evaluation with the modified Cusum method, results show that the proposed method is able to detect the signals 1-2 days prior to the rise of ILI visits. CONCLUSIONS: This Bayesian hierarchical model not only constitutes a dynamic surveillance system but also constructs a stochastic evaluation of the need to call for alert. The monitoring mechanism provides earlier detection as well as a complementary tool for current surveillance programs.

  18. Parallelization of a Quantum-Classic Hybrid Model For Nanoscale Semiconductor Devices

    Directory of Open Access Journals (Sweden)

    Oscar Salas

    2011-07-01

    Full Text Available The expensive reengineering of sequential software and the difficulty of parallel programming are two of the many technical and economic obstacles to the wide use of HPC. We investigate the opportunity to rapidly improve the performance of a numerical serial code for the simulation of the transport of charged carriers in a Double-Gate MOSFET. We introduce the Drift-Diffusion-Schrödinger-Poisson (DDSP) model and we study a rapid parallelization strategy for the numerical procedure on shared-memory architectures.

  19. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    Science.gov (United States)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  20. Hierarchical competition models with the Allee effect II: the case of immigration.

    Science.gov (United States)

    Assas, Laila; Dennis, Brian; Elaydi, Saber; Kwessi, Eddy; Livadiotis, George

    2015-01-01

    This is part II of an earlier paper that dealt with hierarchical models with the Allee effect but with no immigration. In this paper, we greatly simplify the proofs in part I and provide a proof of the global dynamics of the non-hyperbolic cases that were previously conjectured. Then, we show how immigration to one of the species or to both would, drastically, change the dynamics of the system. It is shown that if the level of immigration to one or to both species is above a specified level, then there will be no extinction region where both species go to extinction.

  1. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    OpenAIRE

    Miroslaw Luft; Elzbieta Szychta

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state-variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  2. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect-detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theory and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and the critical risks of defects. This paper further develops accident-causation network modeling methods and can provide guidance for specific maintenance measures.
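    A minimal Apriori-style pass over hypothetical defect records may clarify the kind of correlation mining involved. The defect labels and support threshold below are invented for illustration; the paper's improved algorithm and its data schema are not reproduced here.

```python
# Frequent-itemset mining sketch (classic Apriori level-wise search).
from itertools import combinations

records = [                       # hypothetical per-tunnel defect observations
    {"lining_crack", "leakage", "ballast_fouling"},
    {"lining_crack", "leakage"},
    {"leakage", "ballast_fouling"},
    {"lining_crack", "leakage", "cavity"},
]
min_support = 0.5

def frequent(itemsets):
    n = len(records)
    out = {}
    for s in itemsets:
        sup = sum(s <= r for r in records) / n   # fraction of records containing s
        if sup >= min_support:
            out[s] = sup
    return out

items = {frozenset([i]) for r in records for i in r}
level, all_frequent = frequent(items), {}
while level:
    all_frequent.update(level)
    keys = list(level)
    # Candidate (k+1)-itemsets from pairs of frequent k-itemsets.
    candidates = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    level = frequent(candidates)

for s, sup in sorted(all_frequent.items(), key=lambda kv: -kv[1]):
    print(sorted(s), round(sup, 2))
```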

  3. A model of breakdown in parallel-plate detectors

    International Nuclear Information System (INIS)

    Fonte, P.

    1996-01-01

    Parallel-plate avalanche chambers (PPACs) have many desirable properties as fast, large-area particle detectors. However, the maximum gain is limited by a form of violent breakdown that restricts the usefulness of this detector despite its other evident qualities. The exact nature of this phenomenon is not yet sufficiently clear to support possible improvements. A previous experimental study is complemented in the present work by a quantitative model of the breakdown phenomenon in PPACs, based on the streamer theory. The model reproduces well the peculiar behavior of the external current observed in PPACs and resistive-plate chambers. Other breakdown properties measured in PPACs are also well reproduced

  4. Parallel Algorithm for Solving TOV Equations for Sequence of Cold and Dense Nuclear Matter Models

    Science.gov (United States)

    Ayriyan, Alexander; Buša, Ján; Grigorian, Hovik; Poghosyan, Gevorg

    2018-04-01

    We introduce a parallel algorithm for simulating neutron star configurations over a set of equation-of-state (EoS) models. The performance of the parallel algorithm has been investigated on a testing set of EoS models on two computational systems. It scales when run with MPI on modern CPUs, and this investigation also allowed us to compare two different types of computational nodes.

  5. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    International Nuclear Information System (INIS)

    Manfredi, G; Hirstoaga, S; Devaux, S

    2011-01-01

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particles and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.

  6. Vlasov modelling of parallel transport in a tokamak scrape-off layer

    Energy Technology Data Exchange (ETDEWEB)

    Manfredi, G [Institut de Physique et Chimie des Materiaux, CNRS and Universite de Strasbourg, BP 43, F-67034 Strasbourg (France); Hirstoaga, S [INRIA Nancy Grand-Est and Institut de Recherche en Mathematiques Avancees, 7 rue Rene Descartes, F-67084 Strasbourg (France); Devaux, S, E-mail: Giovanni.Manfredi@ipcms.u-strasbg.f, E-mail: hirstoaga@math.unistra.f, E-mail: Stephane.Devaux@ccfe.ac.u [JET-EFDA, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom)

    2011-01-15

    A one-dimensional Vlasov-Poisson model is used to describe the parallel transport in a tokamak scrape-off layer. Thanks to a recently developed 'asymptotic-preserving' numerical scheme, it is possible to lift numerical constraints on the time step and grid spacing, which are no longer limited by, respectively, the electron plasma period and Debye length. The Vlasov approach provides a good velocity-space resolution even in regions of low density. The model is applied to the study of parallel transport during edge-localized modes, with particular emphasis on the particles and energy fluxes on the divertor plates. The numerical results are compared with analytical estimates based on a free-streaming model, with good general agreement. An interesting feature is the observation of an early electron energy flux, due to suprathermal electrons escaping the ions' attraction. In contrast, the long-time evolution is essentially quasi-neutral and dominated by the ion dynamics.

  7. Mathematical model of thyristor inverter including a series-parallel resonant circuit

    OpenAIRE

    Luft, M.; Szychta, E.

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state-variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  8. Parallel shooting methods for finding steady state solutions to engine simulation models

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2007-01-01

    Parallel single- and multiple shooting methods were tested for finding periodic steady state solutions to a Stirling engine model. The model was used to illustrate features of the methods and possibilities for optimisations. Performance was measured using simulation of an experimental data set...
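    A single-shooting sketch conveys the underlying idea: find an initial state that the period map returns unchanged. The toy model below is a forced linear oscillator standing in for the Stirling engine model, and SciPy is assumed to be available; multiple shooting splits the period into segments whose integrations can run in parallel.

```python
# Single shooting for a periodic steady state: solve x(T; x0) - x0 = 0.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

T = 2 * np.pi                     # forcing period

def rhs(t, y):
    x, v = y                      # damped oscillator with periodic forcing
    return [v, -0.2 * v - x + np.cos(t)]

def propagate(y0):
    # Integrate one full period from initial state y0.
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1]

residual = lambda y0: propagate(y0) - y0     # periodicity condition
y_star = fsolve(residual, [0.0, 0.0])
print("periodic initial state:", y_star)
```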

  9. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    Science.gov (United States)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of 3-Dimensional point clouds of insects from binocular stereo image pairs using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens among various other essential tasks. This information is important to the pesticide and the pharmaceutical industries among others. When considering the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, Entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects thus making it easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimen. We further optimize our results to incorporate multiview stereo which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.

  10. Hierarchical cellular designs for load-bearing biocomposite beams and plates

    International Nuclear Information System (INIS)

    Burgueno, Rigoberto; Quagliata, Mario J.; Mohanty, Amar K.; Mehta, Geeta; Drzal, Lawrence T.; Misra, Manjusri

    2005-01-01

    Scrutiny of the composition of natural, or biological, materials convincingly reveals that high material and structural efficiency can be attained, even with moderate-quality constituents, by hierarchical topologies, i.e., successively organized material levels or layers. The present study demonstrates that biologically inspired hierarchical designs can help improve the moderate properties of natural fiber polymer composites, or biocomposites, and allow them to compete with conventional materials for load-bearing applications. An overview of the mechanics concepts that allow hierarchical designs to achieve higher performance is presented, followed by observations and results from flexural tests on periodic and hierarchical cellular beams and plates made from industrial hemp fibers and unsaturated polyester resin biocomposites. The experimental data are shown to agree well with performance indices predicted by mechanics models. A procedure for the multi-scale integrated material/structural analysis of hierarchical cellular biocomposite components is presented and its advantages and limitations are discussed.

  11. A Hybrid Parallel Execution Model for Logic Based Requirement Specifications (Invited Paper)

    Directory of Open Access Journals (Sweden)

    Jeffrey J. P. Tsai

    1999-05-01

    Full Text Available It is well known that undiscovered errors in a requirements specification are extremely expensive to fix when discovered in the software maintenance phase. Errors in the requirements phase can be reduced through the validation and verification of the requirements specification. Many logic-based requirements specification languages have been developed to achieve these goals. However, the execution of and reasoning over a logic-based requirements specification can be very slow. An effective way to improve their performance is to execute and reason over the logic-based requirements specification in parallel. In this paper, we present a hybrid model to facilitate the parallel execution of a logic-based requirements specification language. A logic-based specification is first processed by a data dependency analysis technique which can find all the mode combinations that exist within a specification clause. This mode information is used to support a novel hybrid parallel execution model, which combines both top-down and bottom-up evaluation strategies. The new execution model can find the failure in the deepest node of the search tree at an early stage of the evaluation, and can thus reduce the total number of nodes searched in the tree, the total number of processes that need to be generated, and the total number of communication channels needed in the search process. A simulator has been implemented to analyze the execution behavior of the new model. Experiments show significant improvement based on several criteria.

  12. Hierarchical control of a nuclear reactor using uncertain dynamics techniques

    International Nuclear Information System (INIS)

    Rovere, L.A.; Otaduy, P.J.; Brittain, C.R.; Perez, R.B.

    1988-01-01

    Recent advances in the nonlinear optimal control area are opening new possibilities towards its implementation in process control. Algorithms for multivariate control, hierarchical decomposition, parameter tracking, model uncertainties, actuator saturation effects and physical limits on state variables can be implemented on the basis of a consistent mathematical formulation. In this paper, good agreement is shown between a centralized and a hierarchical implementation of a controller for a hypothetical nuclear power plant subject to multiple demands. The performance of the hierarchical distributed system in the presence of localized subsystem failures is analyzed. 4 refs., 13 figs

  13. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections; it has low computational complexity and relatively high spatial resolution, but few parallel implementations with a matched projection/backprojection pair have been reported for it. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtimes for the projection and backprojection operations with our model are approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  14. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    Science.gov (United States)

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
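
    The sketch below simulates the data-generating mechanism the abstract describes: a compound Poisson frailty places a point mass at zero (participants never exposed to HIV) and spreads risk unevenly among the rest, so the naive population-level incidence ratio understates individual-level effectiveness. All rates and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, shape, scale = 100_000, 1.5, 2.0, 0.5   # assumed frailty parameters
lam0, effectiveness, follow_up = 0.05, 0.6, 2.0

def simulate_arm(hazard_multiplier):
    N = rng.poisson(mu, size=n)                 # number of exposure "shocks"
    # Sum of N iid Gamma terms is Gamma(N*shape, scale); Z = 0 when N = 0.
    Z = rng.gamma(shape * np.maximum(N, 1), scale) * (N > 0)
    rate = lam0 * hazard_multiplier * Z         # individual seroconversion rate
    safe = np.where(rate > 0, rate, 1.0)
    t = rng.exponential(1.0 / safe)
    t[rate == 0] = np.inf                       # never-exposed: no event
    return (t <= follow_up).mean()              # observed incidence

inc_placebo = simulate_arm(1.0)
inc_active = simulate_arm(1.0 - effectiveness)
print("population-level estimate:", 1 - inc_active / inc_placebo)
print("true individual-level effectiveness:", effectiveness)
```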

  15. Combining self-affirmation with the extended parallel process model: the consequences for motivation to eat more fruit and vegetables.

    Science.gov (United States)

    Napper, Lucy E; Harris, Peter R; Klein, William M P

    2014-01-01

    There is potential for fruitful integration of research using the Extended Parallel Process Model (EPPM) with research using Self-affirmation Theory. However, to date no studies have attempted to do this. This article reports an experiment that tests whether (a) the effects of a self-affirmation manipulation add to those of EPPM variables in predicting intentions to improve a health behavior and (b) self-affirmation moderates the relationship between EPPM variables and intentions. Participants (N = 80) were randomized to either a self-affirmation or control condition prior to receiving personally relevant health information about the risks of not eating at least five portions of fruit and vegetables per day. A hierarchical regression model revealed that efficacy, threat × efficacy, self-affirmation, and self-affirmation × efficacy all uniquely contributed to the prediction of intentions to eat at least five portions per day. Self-affirmed participants and those with higher efficacy reported greater motivation to change. Threat predicted intentions at low levels of efficacy, but not at high levels. Efficacy had a stronger relationship with intentions in the nonaffirmed condition than in the self-affirmed condition. The findings indicate that self-affirmation processes can moderate the impact of variables in the EPPM and also add to the variance explained. We argue that there is potential for integration of the two traditions of research, to the benefit of both.

  16. Estimating temporal trend in the presence of spatial complexity: a Bayesian hierarchical model for a wetland plant population undergoing restoration.

    Directory of Open Access Journals (Sweden)

    Thomas J Rodhouse

    Full Text Available Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased the precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on the strength of statistical shrinkage, autocorrelation and topographic heterogeneity--a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.

  17. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    Directory of Open Access Journals (Sweden)

    Lianbo Ma

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. In the bottom level, each subpopulation, employing the canonical ABC method, searches for a part-dimensional optimum in parallel; these partial solutions can then be assembled into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most of the chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness.
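
    The bottom level of this multilevel scheme can be pictured as cooperative coevolution: each subpopulation searches one slice of the dimensions and the slices are assembled into a complete "context" solution for the upper level. The sketch below shows only that decomposition, with a simple perturbation search standing in for the canonical ABC and a sphere benchmark as the assumed objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                         # assumed benchmark objective
    return float(np.sum(x ** 2))

D, n_groups, iters = 20, 4, 200
slices = np.array_split(np.arange(D), n_groups)  # one slice per subpopulation
context = rng.uniform(-5, 5, size=D)             # complete solution shared upward

for _ in range(iters):
    for idx in slices:                 # each subpopulation improves its slice
        trial = context.copy()
        trial[idx] += rng.normal(0.0, 0.5, size=idx.size)
        if sphere(trial) < sphere(context):
            context = trial            # accept the improved partial solution

print("best objective value:", sphere(context))
```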

  18. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    Science.gov (United States)

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. In the bottom level, each subpopulation, employing the canonical ABC method, searches for a part-dimensional optimum in parallel; these partial solutions can then be assembled into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most of the chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness.

  19. Parallel eigenanalysis of finite element models in a completely connected architecture

    Science.gov (United States)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(Phi) = (M)(Phi)(Omega), where (K) and (M) are of order N, and (Omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, the Cray X-MP. A finite element model is divided into m domains, each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effects of the number of domains, the number of degrees of freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
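
    For orientation, the generalized eigenproblem itself can be stated in a few lines; the sketch below solves (K)(Phi) = (M)(Phi)(Omega) for the q lowest modes with a dense solver, using small random symmetric positive-definite matrices as stand-ins for assembled stiffness and mass matrices. It requires a reasonably recent SciPy for the subset_by_index argument.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N, q = 50, 5                          # model order, number of modes wanted

A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)           # assumed SPD "stiffness" matrix
B = rng.standard_normal((N, N))
M = B @ B.T + N * np.eye(N)           # assumed SPD "mass" matrix

# Solve K phi = omega M phi for the q lowest eigenpairs, as a subspace
# method would approximate iteratively.
omega, Phi = eigh(K, M, subset_by_index=[0, q - 1])
print("lowest eigenvalues:", omega)
print("M-orthonormal modes:", np.allclose(Phi.T @ M @ Phi, np.eye(q)))
```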

  20. Lamb wave propagation modelling and simulation using parallel processing architecture and graphical cards

    International Nuclear Information System (INIS)

    Paćko, P; Bielak, T; Staszewski, W J; Uhl, T; Spencer, A B; Worden, K

    2012-01-01

    This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphics processing unit (GPU) and the compute unified device architecture (CUDA), available in low-cost graphics cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example. Other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance for very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering. (paper)

  1. Clinical time series prediction: towards a hierarchical dynamical system framework

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model in order to predict future time series values on the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance.

  2. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R^2 = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
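
    A minimal, non-Bayesian sketch of the central ingredient, a Gaussian-process regression of PM2.5 on AOD with a spatially correlated residual, is shown below using scikit-learn on synthetic data; the full model in the paper is hierarchical and fitted in a Bayesian framework, which this does not reproduce.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, DotProduct

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(0, 10, n),    # longitude (arbitrary units)
    rng.uniform(0, 10, n),    # latitude
    rng.uniform(0.1, 1.5, n), # AOD
])
# Synthetic PM2.5: smooth spatial field + linear AOD effect + noise.
pm25 = (20 * np.sin(X[:, 0] / 3) + 15 * np.cos(X[:, 1] / 3)
        + 30 * X[:, 2] + rng.normal(0, 3, n))

# Anisotropic RBF over (lon, lat, AOD) models the spatial random effect;
# DotProduct adds the linear AOD term; WhiteKernel absorbs noise.
kernel = RBF(length_scale=[2.0, 2.0, 1e3]) + DotProduct() + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, pm25)
print("in-sample R^2:", gp.score(X, pm25))
```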

  3. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    Directory of Open Access Journals (Sweden)

    Miroslaw Luft

    2008-01-01

    Full Text Available The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit, derived with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  4. Parallelizing the spectral transform method: A comparison of alternative parallel algorithms

    International Nuclear Information System (INIS)

    Foster, I.; Worley, P.H.

    1993-01-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on the sphere and is widely used in global climate modeling. In this paper, we outline different approaches to parallelizing the method and describe experiments that we are conducting to evaluate the efficiency of these approaches on parallel computers. The experiments are conducted using a testbed code that solves the nonlinear shallow water equations on a sphere, but are designed to permit evaluation in the context of a global model. They allow us to evaluate the relative merits of the approaches as a function of problem size and number of processors. The results of this study are guiding ongoing work on PCCM2, a parallel implementation of the Community Climate Model developed at the National Center for Atmospheric Research

  5. Loss Performance Modeling for Hierarchical Heterogeneous Wireless Networks With Speed-Sensitive Call Admission Control

    DEFF Research Database (Denmark)

    Huang, Qian; Huang, Yue-Cai; Ko, King-Tim

    2011-01-01

    A hierarchical overlay structure is an alternative solution that integrates existing and future heterogeneous wireless networks to provide subscribers with better mobile broadband services. Traffic loss performance in such integrated heterogeneous networks is necessary for an operator's network... The speed-sensitive call admission control approach avoids unnecessary and frequent handoff between cells and reduces signaling overheads. An approximation model with guaranteed accuracy and low computational complexity is presented for the loss performance of multiservice traffic. The accuracy of numerical results is validated by comparing...

  6. A bayesian hierarchical model for classification with selection of functional predictors.

    Science.gov (United States)

    Zhu, Hongxiao; Vannucci, Marina; Cox, Dennis D

    2010-06-01

    In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts, or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when predictors consist of multiple functions, some of which may be redundant. The above issues arise in a real data application where we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that takes into account random batch effects and selects effective functions among multiple functional predictors. Fixed effects or predictors in nonfunctional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling, we use a hybrid Metropolis-Hastings/Gibbs sampler, which suffers from slow mixing. An evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and a real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.

  7. Hierarchical model-based predictive control of a power plant portfolio

    DEFF Research Database (Denmark)

    Edlund, Kristian; Bendtsen, Jan Dimon; Jørgensen, John Bagterp

    2011-01-01

    One of the main difficulties in large-scale implementation of renewable energy in existing power systems is that the production from renewable sources is difficult to predict and control. For this reason, fast and efficient control of controllable power producing units – so-called “portfolio control” – is required. This paper presents a hierarchical model-based predictive control design for power system portfolio control, which aims specifically at meeting these demands. The design involves a two-layer hierarchical structure with clearly defined interfaces that facilitate an object-oriented implementation approach. The same hierarchical structure is reflected in the underlying optimisation problem, which is solved using Dantzig–Wolfe decomposition. This decomposition yields improved computational efficiency and better scalability compared to centralised methods. The proposed control scheme is compared to an existing, state-of-the-art portfolio control system (operated by DONG Energy)...

  8. Two Phase Flow Split Model for Parallel Channels | Iloeje | Nigerian ...

    African Journals Online (AJOL)

    The model and code are capable of handling single and two phase flows, steady states and transients, up to ten parallel flow paths, simple and complicated geometries, including the boilers of fossil steam generators and nuclear power plants. A test calculation has been made with a simplified three-channel system ...

  9. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  11. Functional annotation of hierarchical modularity.

    Directory of Open Access Journals (Sweden)

    Kanchana Padmanabhan

    Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure the functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that the functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our

  12. Measuring effectiveness of a university by a parallel network DEA model

    Science.gov (United States)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2017-11-01

    Universities contribute significantly to the development of human capital and the socio-economic improvement of a country. Because of this, Malaysian universities have carried out various initiatives to improve their performance. Most studies have used the Data Envelopment Analysis (DEA) model to measure efficiency rather than effectiveness, even though the measurement of effectiveness is important for understanding how effective a university is in achieving its ultimate goals. A university system has two major functions, namely teaching and research, and every function has different resources based on its emphasis. Therefore, a university is actually structured as a parallel production system, with its overall effectiveness being the aggregated effectiveness of teaching and research. Hence, this paper proposes a parallel network DEA model to measure the effectiveness of a university. This model takes the internal operations of both the teaching and research functions into account in computing the effectiveness of a university system. In the literature, graduates and the number of programs offered are defined as the outputs; the employed graduates and the number of programs accredited by professional bodies are then considered as the outcomes for measuring teaching effectiveness. The amount of grants is regarded as the output of research, while publications of different quality are considered the outcomes of research. A system is considered effective only if all functions are effective. This model has been tested using a hypothetical set of data consisting of 14 faculties at a public university in Malaysia. The results show that none of the faculties is relatively effective in overall performance. Three faculties are effective in teaching and two faculties are effective in research. The potential applications of the parallel network DEA model allow the top management of a university to identify weaknesses in any functions of their universities and take rational steps for improvement.
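
    As a point of reference, the basic DEA building block under the network model is still a small linear program per decision-making unit. The sketch below computes input-oriented CCR efficiency scores with SciPy's linprog on made-up faculty data; the paper's parallel network extension additionally separates teaching and research internally, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 30, 25, 40],     # input 1 (e.g. staff) per faculty
              [5,  8,  6,  9]])     # input 2 (e.g. budget)
Y = np.array([[100, 120, 90, 150],  # output 1 (e.g. graduates)
              [10,  25, 15,  20]])  # output 2 (e.g. grants)
n = X.shape[1]

def ccr_efficiency(o):
    # Decision variables v = [theta, lambda_1, ..., lambda_n];
    # minimize theta s.t. X lam <= theta * x_o and Y lam >= y_o.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_in = np.hstack([-X[:, [o]], X])
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    b = np.concatenate([np.zeros(X.shape[0]), -Y[:, o]])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for o in range(n):
    print(f"faculty {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```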

  13. Dynamic control of quadruped robot with hierarchical control structure

    International Nuclear Information System (INIS)

    Wang, Yu-Zhang; Furusho, Junji; Okajima, Yosuke.

    1988-01-01

    For moving on irregular terrain, such as the inside of a nuclear power plant and outer space, it is generally recognized that the multilegged walking robot is suitable. This paper proposes a hierarchical control structure for the dynamic control of quadruped walking robots. For this purpose, we present a reduced order model which can approximate the original higher order model very well. Since this reduced order model does not require much computational time, it can be used in the real-time control of a quadruped walking robot. A hierarchical control experiment is shown in which the optimal control algorithm using a reduced order model is calculated by one microprocessor, and the other control algorithm is calculated by another microprocessor. (author)

  14. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  15. Hierarchical self-assembly of two-length-scale multiblock copolymers

    International Nuclear Information System (INIS)

    Brinke, Gerrit ten; Loos, Katja; Vukovic, Ivana; Du Sart, Gerrit Gobius

    2011-01-01

    The self-assembly in diblock copolymer-based supramolecules, obtained by hydrogen bonding short side chains to one of the blocks, as well as in two-length-scale linear terpolymers results in hierarchical structure formation. The orientation of the different domains, e.g. layers in the case of a lamellar-in-lamellar structure, is determined by the molecular architecture, graft-like versus linear, and the relative magnitude of the interactions involved. In both cases parallel and perpendicular arrangements have been observed. The comb-shaped supramolecules approach is ideally suited for the preparation of nanoporous structures. A bicontinuous morphology with the supramolecular comb block forming the channels was finally achieved by extending the original approach to suitable triblock copolymer-based supramolecules.

  16. The Extended Parallel Process Model: Illuminating the Gaps in Research

    Science.gov (United States)

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  17. Parallel education: what is it?

    OpenAIRE

    Amos, Michelle Peta

    2017-01-01

    In the history of education it has long been discussed that single-sex and coeducation are the two models of education present in schools. With the introduction of parallel schools over the last 15 years, there has been very little research into this 'new model'. Many people do not understand what it means for a school to be parallel or they confuse a parallel model with co-education, due to the presence of both boys and girls within the one institution. Therefore, the main obj...

  18. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
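
    The key step, updating a priori equal age-model probabilities by how well each candidate chronology agrees with independently dated information, can be caricatured in a few lines of numpy. In the sketch below the "age models" are pure time shifts of a synthetic proxy and the likelihood is Gaussian; both are illustrative assumptions, far simpler than the paper's space-time covariance model.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)                        # years
climate = np.sin(t / 15.0)                # signal implied by well-dated records
true_shift = 7
proxy = np.roll(climate, true_shift) + rng.normal(0, 0.3, t.size)

shifts = np.arange(-15, 16)               # ensemble of candidate age models
log_like = np.array([
    -0.5 * np.sum((proxy - np.roll(climate, s)) ** 2) / 0.3 ** 2
    for s in shifts
])
# A priori all age models are equally probable; the data re-weight them.
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()
print("posterior-mode shift:", shifts[np.argmax(weights)])
print("posterior-mean shift:", float(np.sum(weights * shifts)))
```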

  19. A one-dimensional heat transfer model for parallel-plate thermoacoustic heat exchangers

    NARCIS (Netherlands)

    de Jong, Anne; Wijnant, Ysbrand H.; de Boer, Andries

    2014-01-01

    A one-dimensional (1D) laminar oscillating flow heat transfer model is derived and applied to parallel-plate thermoacoustic heat exchangers. The model can be used to estimate the heat transfer from the solid wall to the acoustic medium, which is required for the heat input/output of thermoacoustic

  20. Class hierarchical test case generation algorithm based on expanded EMDPN model

    Institute of Scientific and Technical Information of China (English)

    LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang

    2006-01-01

    A new model of event- and message-driven Petri networks (EMDPN), based on the characteristics of class interaction for message passing between two objects, was extended. Using the EMDPN interaction graph, a class hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems resulting from the class inheritance mechanism encountered in object-oriented software testing, such as oracles, message transfer errors, and unreachable statements. Finally, the testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases generated by the newly proposed automatic copath generation algorithm satisfy the synchronization message sequence testing criteria; therefore, the proposed algorithm achieves a good coverage rate.

  1. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  2. Catalysis with hierarchical zeolites

    DEFF Research Database (Denmark)

    Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten

    2011-01-01

    Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments, emphasizing different aspects of this research topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance, but this particular facet has not yet been reviewed in detail. Thus, the present paper summarizes and categorizes the catalytic studies utilizing hierarchical zeolites that have been reported hitherto. Prototypical examples from some of the different categories of catalytic reactions that have been studied using hierarchical zeolite catalysts are highlighted. This clearly illustrates the different ways that improved performance can be achieved with this family...

  3. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    Science.gov (United States)

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear Ursus arctos abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; and (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance, and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance and (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.

  4. Hierarchical Compliance Control of a Soft Ankle Rehabilitation Robot Actuated by Pneumatic Muscles.

    Science.gov (United States)

    Liu, Quan; Liu, Aiming; Meng, Wei; Ai, Qingsong; Xie, Sheng Q

    2017-01-01

    Traditional compliance control of a rehabilitation robot is implemented in task space by using impedance or admittance control algorithms. Soft robots actuated by pneumatic muscle actuators (PMAs) are becoming prominent for patients, as they enable the compliance to be adjusted in each active link, which, however, has not been reported in the literature. This paper proposes a new compliance control method for a soft ankle rehabilitation robot that is driven by four PMAs configured in parallel to enable three-degrees-of-freedom movement of the ankle joint. A new hierarchical compliance control structure, including a low-level compliance adjustment controller in joint space and a high-level admittance controller in task space, is designed. An adaptive compliance control paradigm is further developed by taking into account the patient's active contribution and movement ability during a previous period of time, in order to provide robot assistance only when it is necessarily required. Experiments on healthy and impaired human subjects were conducted to verify the adaptive hierarchical compliance control scheme. The results show that the robot's hierarchical compliance can be adjusted online according to the participant's assessment. The robot reduces its assistance output when participants contribute more, and vice versa, thus providing a potentially feasible solution to the patient-in-the-loop cooperative training strategy.
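
    The high-level, task-space half of such a structure is typically an admittance law: a virtual mass-damper-spring M x'' + B x' + K x = F_ext that converts measured interaction torque into a compliant trajectory correction. The discrete-time sketch below uses invented gains and a step torque profile; the paper's low-level joint-space compliance adjustment and its adaptation rule are not reproduced.

```python
import numpy as np

M, B, K = 1.0, 8.0, 40.0           # assumed virtual inertia, damping, stiffness
dt, steps = 0.005, 1000
x = v = 0.0                        # trajectory correction and its rate

peak = 0.0
for k in range(steps):
    F_ext = 2.0 if 200 <= k < 600 else 0.0  # assumed interaction torque [Nm]
    a = (F_ext - B * v - K * x) / M         # admittance dynamics
    v += a * dt                             # explicit Euler integration
    x += v * dt
    peak = max(peak, abs(x))

print("peak compliant deflection [rad]:", peak)
```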

  5. Production optimisation in the petrochemical industry by hierarchical multivariate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Magnus; Furusjoe, Erik; Jansson, Aasa

    2004-06-01

    This project demonstrates the advantages of applying hierarchical multivariate modelling in the petrochemical industry in order to increase knowledge of the total process. The models indicate possible ways to optimise the process regarding the use of energy and raw material, which is directly linked to the environmental impact of the process. The refinery of Nynaes Refining AB (Goeteborg, Sweden) has acted as a demonstration site in this project. The models developed for the demonstration site resulted in: Detection of an unknown process disturbance and suggestions of possible causes; Indications on how to increase the yield in combination with energy savings; The possibility to predict product quality from on-line process measurements, making the results available at a higher frequency than customary laboratory analysis; Quantification of the gradually lowered efficiency of heat transfer in the furnace and increased fuel consumption as an effect of soot build-up on the furnace coils; Increased knowledge of the relation between production rate and the efficiency of the heat exchangers. This report is one of two reports from the project. It contains a technical discussion of the result with some degree of detail. A shorter and more easily accessible report is also available, see IVL report B1586-A.

  6. Implementing O(N) N-Body Algorithms Efficiently in Data-Parallel Languages

    Directory of Open Access Journals (Sweden)

    Yu Hu

    1996-01-01

    Full Text Available The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Of the total execution time, communication accounts for about 10–20% of the total time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.

  7. Hierarchical prisoner’s dilemma in hierarchical game for resource competition

    Science.gov (United States)

    Fujimoto, Yuma; Sagawa, Takahiro; Kaneko, Kunihiko

    2017-07-01

    Dilemmas in cooperation are one of the major concerns in game theory. In a public goods game, each individual cooperates by paying a cost or defecting without paying it, and receives a reward from the group out of the collected cost. Thus, defecting is beneficial for each individual, while cooperation is beneficial for the group. Now, groups (say, countries) consisting of individuals also play games. To study such a multi-level game, we introduce a hierarchical game in which multiple groups compete for limited resources by utilizing the collected cost in each group, where the power to appropriate resources increases with the population of the group. Analyzing this hierarchical game, we found a hierarchical prisoner’s dilemma, in which groups choose the defecting policy (say, armament) as a Nash strategy to optimize each group’s benefit, while cooperation optimizes the total benefit. On the other hand, for each individual, refusing to pay the cost (say, tax) is a Nash strategy, which turns out to be a cooperation policy for the group, thus leading to a hierarchical dilemma. Here the group reward increases with the group size. However, we find that there exists an optimal group size that maximizes the individual payoff. Furthermore, when the population asymmetry between two groups is large, the smaller group will choose a cooperation policy (say, disarmament) to avoid excessive response from the larger group, and the prisoner’s dilemma between the groups is resolved. Accordingly, the relevance of this hierarchical game on policy selection in society and the optimal size of human or animal groups are discussed.

  8. Algorithm comparison and benchmarking using a parallel spectra transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  9. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
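
    The data model itself is easy to simulate, which also shows why it suits RNA-seq: counts drawn as Poisson conditional on a latent log-normal layer are overdispersed, and the latent precision matrix carries the network. The sketch below uses a tiny invented network; the paper's Lasso-penalized estimation of the precision matrix is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 4, 5000
# Latent precision encodes the network: genes 0-1 linked, genes 2-3 linked.
Prec = np.array([[ 2.0, -1.0,  0.0,  0.0],
                 [-1.0,  2.0,  0.0,  0.0],
                 [ 0.0,  0.0,  2.0, -1.0],
                 [ 0.0,  0.0, -1.0,  2.0]])
Z = rng.multivariate_normal(np.ones(p), np.linalg.inv(Prec), size=n)
counts = rng.poisson(np.exp(Z))           # RNA-seq-like overdispersed counts

print("variance/mean per gene:", (counts.var(0) / counts.mean(0)).round(2))
print("count correlations:\n", np.corrcoef(counts.T).round(2))
```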

  10. Greater expectations: using hierarchical linear modeling to examine expectancy for treatment outcome as a predictor of treatment response.

    Science.gov (United States)

    Price, Matthew; Anderson, Page; Henrich, Christopher C; Rothbaum, Barbara Olasov

    2008-12-01

    A client's expectation that therapy will be beneficial has long been considered an important factor contributing to therapeutic outcomes, but recent empirical work examining this hypothesis has primarily yielded null findings. The present study examined the contribution of expectancies for treatment outcome to actual treatment outcome from the start of therapy through 12-month follow-up in a clinical sample of individuals (n=72) treated for fear of flying with either in vivo exposure or virtual reality exposure therapy. Using a piecewise hierarchical linear model, outcome expectancy predicted treatment gains made during therapy but not during follow-up. Compared to lower levels, higher expectations for treatment outcome yielded stronger rates of symptom reduction from the beginning to the end of treatment on 2 standardized self-report questionnaires on fear of flying. The analytic approach of the current study is one potential reason that findings contrast with prior literature. The advantages of using hierarchical linear modeling to assess interindividual differences in longitudinal data are discussed.
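
    A piecewise growth model of this kind can be sketched with statsmodels: one time variable counts sessions during treatment, another counts follow-up time, and expectancy moderates the treatment slope. The data, variable names, and effect sizes below are simulated stand-ins, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for client in range(60):
    expectancy = rng.normal(0, 1)
    b_treat = -1.0 - 0.5 * expectancy + rng.normal(0, 0.3)  # steeper decline
    b_follow = rng.normal(0, 0.1)                           # flat follow-up
    for t in range(8):                    # occasions 0-3 treat, 4-7 follow-up
        t_treat, t_follow = min(t, 3), max(t - 3, 0)
        y = 10 + b_treat * t_treat + b_follow * t_follow + rng.normal(0, 1)
        rows.append((client, expectancy, t_treat, t_follow, y))

df = pd.DataFrame(rows, columns=["client", "expectancy",
                                 "t_treat", "t_follow", "fear"])
# Random intercept and random treatment slope per client.
fit = smf.mixedlm("fear ~ t_treat * expectancy + t_follow",
                  df, groups=df["client"], re_formula="~t_treat").fit()
print(fit.summary())
```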

  11. a Predator-Prey Model Based on the Fully Parallel Cellular Automata

    Science.gov (United States)

    He, Mingfeng; Ruan, Hongbo; Yu, Changliang

    We present a predator-prey lattice model containing moveable wolves and sheep, which are characterized by Penna double bit strings. Sexual reproduction and child-care strategies are considered. To implement this model in an efficient way, we build a fully parallel cellular automaton based on a new definition of the neighborhood. We show the roles played by the initial densities of the populations, the mutation rate and the linear size of the lattice in the evolution of this model.

  12. Dynamic modelling of a 3-CPU parallel robot via screw theory

    Directory of Open Access Journals (Sweden)

    L. Carbonari

    2013-04-01

    Full Text Available The article describes the dynamic modelling of I.Ca.Ro., a novel Cartesian parallel robot recently designed and prototyped by the robotics research group of the Polytechnic University of Marche. By means of screw theory and virtual work principle, a computationally efficient model has been built, with the final aim of realising advanced model based controllers. Then a dynamic analysis has been performed in order to point out possible model simplifications that could lead to a more efficient run time implementation.

  13. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
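
    The standard trick that exposes the parallelism is the checkerboard decomposition: spins on one sublattice have all their neighbours on the other, so each half-lattice can be updated simultaneously. The numpy sketch below mirrors what a GPU kernel would do; lattice size, temperature, and sweep count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, sweeps = 64, 0.44, 200
spins = rng.choice([-1, 1], size=(L, L))
ii, jj = np.indices((L, L))
masks = [(ii + jj) % 2 == c for c in (0, 1)]   # the two sublattices

for _ in range(sweeps):
    for mask in masks:
        # Sum of the four neighbours with periodic boundaries.
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2 * spins * nn                     # energy cost of a flip
        flip = (rng.random((L, L)) < np.exp(-beta * dE)) & mask
        spins = np.where(flip, -spins, spins)   # Metropolis acceptance

print("magnetization per spin:", spins.mean())
```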

  14. Hierarchical Matrices Method and Its Application in Electromagnetic Integral Equations

    Directory of Open Access Journals (Sweden)

    Han Guo

    2012-01-01

    Full Text Available The hierarchical (H-) matrices method is a general mathematical framework providing a highly compact representation and efficient numerical arithmetic. When applied in integral-equation- (IE-) based computational electromagnetics, H-matrices can be regarded as a fast algorithm; therefore, both the CPU time and the memory requirement are reduced significantly. Its kernel-independent feature also makes it suitable for any kind of integral equation. To solve H-matrices systems, Krylov iteration methods can be employed with appropriate preconditioners, and direct solvers based on the hierarchical structure of H-matrices are also available along with high efficiency and accuracy, which is a unique advantage compared to other fast algorithms. In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence rate of Krylov iterations for solving H-matrices systems in electromagnetic applications, and a group of parallel fast direct solvers are developed for dealing with multiple right-hand-side cases. Finally, numerical experiments are given to demonstrate the advantages of the proposed multilevel preconditioner compared to conventional "single level" preconditioners and the practicability of the fast direct solvers for arbitrarily complex structures.
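
    The compression that underlies H-matrices can be demonstrated directly: an admissible off-diagonal block, coupling two well-separated clusters through a smooth kernel, has rapidly decaying singular values and is stored as a low-rank factorization. The 1/|x-y| kernel and cluster geometry below are illustrative.

```python
import numpy as np

# Two well-separated point clusters give an admissible block.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 200)
block = 1.0 / np.abs(x[:, None] - y[None, :])   # kernel interactions

U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))               # numerical rank of the block
Uk, Vk = U[:, :k] * s[:k], Vt[:k, :]            # low-rank factors

err = np.linalg.norm(block - Uk @ Vk) / np.linalg.norm(block)
print(f"rank {k} of {block.shape[0]}, relative error {err:.1e}")
print("storage ratio:", round((Uk.size + Vk.size) / block.size, 3))
```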

  15. Spatial patterns of breeding success of grizzly bears derived from hierarchical multistate models.

    Science.gov (United States)

    Fisher, Jason T; Wheatley, Matthew; Mackenzie, Darryl

    2014-10-01

    Conservation programs often manage populations indirectly through the landscapes in which they live. Empirically, linking reproductive success with landscape structure and anthropogenic change is a first step in understanding and managing the spatial mechanisms that affect reproduction, but this link is not sufficiently informed by data. Hierarchical multistate occupancy models can forge these links by estimating spatial patterns of reproductive success across landscapes. To illustrate, we surveyed the occurrence of grizzly bears (Ursus arctos) in the Canadian Rocky Mountains, Alberta, Canada. We deployed camera traps for 6 weeks at 54 survey sites in different types of land cover. We used hierarchical multistate occupancy models to estimate the probability of detection, grizzly bear occupancy, and the probability of reproductive success at each site. Grizzly bear occupancy varied among cover types and was greater in herbaceous alpine ecotones than in low-elevation wetlands or mid-elevation conifer forests. The conditional probability of reproductive success given grizzly bear occupancy was 30% (SE = 0.14). Grizzly bears with cubs had a higher probability of detection than grizzly bears without cubs, yet based on the raw data alone, sites were correctly classified as occupied by breeding females only 49% of the time, so breeding occupancy would have been underestimated by half. Repeated surveys and multistate modeling reduced the probability of misclassifying sites occupied by breeders as unoccupied to <2%; the sketch below reproduces this effect of repeated visits. The probability of breeding grizzly bear occupancy varied across the landscape. The patches with the highest probabilities of breeding occupancy (herbaceous alpine ecotones) were small and highly dispersed and are projected to shrink as treelines advance due to climate warming. Understanding spatial correlates of breeding distribution is a key requirement for species conservation in the face of climate change and can help identify priorities for landscape management and protection.
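
    The benefit of repeated surveys is easy to reproduce: if a site in the breeding state is recognized as such with probability p on each independent visit, the chance of never observing breeding in K visits is (1 - p)^K. The per-visit probability below is an assumed round number, not the study's estimate:

    ```python
    # Probability that a site truly occupied by a female with cubs is never
    # classified as "breeding" across K independent survey occasions.
    p = 0.5                                  # per-visit detection (assumed)
    for K in (1, 3, 6):
        print(f"{K} visits -> missed as non-breeding {(1 - p) ** K:.1%}")
    ```

    With a single visit, roughly half of the breeding sites are missed, in the spirit of the 49% raw-data figure; by six visits the misclassification rate falls below 2%, as in the multistate analysis.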

  16. Hierarchical capillary adhesion of microcantilevers or hairs

    International Nuclear Information System (INIS)

    Liu Jianlin; Feng Xiqiao; Xia Re; Zhao Hongping

    2007-01-01

    As a result of capillary forces, animal hairs, carbon nanotubes or nanowires in periodically or randomly distributed arrays often assemble into hierarchical structures. In this paper, the energy method is adopted to analyse the capillary adhesion of micro-sized hairs, which are modelled as clamped microcantilevers wetted by liquids. The critical conditions for capillary adhesion of two hairs, three hairs or two bundles of hairs are derived in terms of Young's contact angle, the elastic modulus and the geometric sizes of the beams. Then, the hierarchical capillary adhesion of hairs is addressed. It is found that for multiple hairs or microcantilevers, the system tends to take a hierarchical structure as a result of the minimization of the total potential energy of the system. The level number of structural hierarchy increases with the number of hairs if they are sufficiently long. Additionally, we performed experiments to verify our theoretical solutions for the adhesion of microbeams.
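
    The energy-method argument can be reproduced in miniature: adhesion occurs when the capillary energy gained over the adhered segment outweighs the elastic bending energy stored in the free segment. The sketch below uses textbook small-deflection beam formulas (an assumed S-shaped deformation of the free length) and invented dimensions; it illustrates the method, not the paper's derivation:

    ```python
    import numpy as np

    # Two parallel micro-cantilevers with initial gap 2*d (values assumed).
    E = 2.0e9                      # Young's modulus [Pa]
    t, w, L = 2e-6, 5e-6, 200e-6   # thickness, width, length [m]
    d = 2e-6                       # deflection of each beam at contact [m]
    gamma = 0.072                  # surface tension [N/m], cos(theta) ~ 1
    I = w * t**3 / 12.0            # second moment of area

    def total_energy(s):
        """Energy as a function of the free (non-adhered) length s."""
        bending = 2.0 * 6.0 * E * I * d**2 / s**3   # both beams, S-shape
        wetting = -2.0 * gamma * w * (L - s)        # gain on adhered part
        return bending + wetting

    s = np.linspace(0.2 * L, L, 400)
    U = total_energy(s)
    if U.min() < total_energy(L):          # adhered state beats free state
        print(f"adheres, free length {s[np.argmin(U)] * 1e6:.1f} um")
    else:
        print("elastic energy wins: beams spring back")
    ```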

  17. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    Science.gov (United States)

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms already perform poorly on integer images with bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and speed-up keeps growing up to 64 threads.
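
    For readers unfamiliar with the structure being parallelized, the sketch below builds a max-tree sequentially with the classic union-find approach (pixels processed in decreasing gray order, then canonicalized); the paper's contribution is a flood-then-merge scheme that achieves the same result in parallel for high-dynamic-range data. This is a textbook-style baseline, not the authors' algorithm:

    ```python
    import numpy as np

    def neighbors(p, shape):
        # 4-connected neighbors of flat index p on a 2D grid
        h, w = shape
        y, x = divmod(p, w)
        if y > 0: yield p - w
        if y < h - 1: yield p + w
        if x > 0: yield p - 1
        if x < w - 1: yield p + 1

    def find(zpar, p):
        # union-find root with path compression
        root = p
        while zpar[root] != root:
            root = zpar[root]
        while zpar[p] != root:
            zpar[p], p = root, zpar[p]
        return root

    def max_tree(im):
        flat = im.ravel()
        order = np.argsort(flat, kind="stable")[::-1]   # decreasing value
        parent = np.full(flat.size, -1, dtype=np.int64)
        zpar = {}
        for p in map(int, order):
            parent[p] = p
            zpar[p] = p
            for n in neighbors(p, im.shape):
                if n in zpar:                # neighbor already processed
                    r = find(zpar, n)
                    if r != p:
                        parent[r] = p        # hang brighter subtree below p
                        zpar[r] = p
        for p in map(int, order[::-1]):      # canonicalize, increasing value
            q = parent[p]
            if flat[parent[q]] == flat[q]:
                parent[p] = parent[q]
        return parent                        # parent[i] = max-tree parent of i

    im = np.array([[0, 3, 2],
                   [1, 3, 0],
                   [0, 0, 4]], dtype=float)
    print(max_tree(im).reshape(im.shape))
    ```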

  18. Parallel processing and non-uniform grids in global air quality modeling

    NARCIS (Netherlands)

    Berkvens, P.J.F.; Bochev, Mikhail A.

    2002-01-01

    A large-scale global air quality model, running efficiently on a single vector processor, is enhanced to make more realistic and longer-term simulations feasible. Two strategies are combined: non-uniform grids and parallel processing. The communication through the hierarchy of non-uniform grids...
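
    Although the record is cut off, the two ingredients it names combine in a standard way: data must be moved between coarse and fine members of the grid hierarchy (prolongation and restriction) before the levels can be advanced in parallel. A generic 1D illustration under assumed grids, not the scheme of this paper:

    ```python
    import numpy as np

    # A coarse global grid with a finer nest over a region of interest.
    x_coarse = np.linspace(0.0, 360.0, 37)    # 10-degree grid (assumed)
    x_fine = np.linspace(90.0, 180.0, 91)     # 1-degree nest (assumed)
    c = np.sin(np.deg2rad(x_coarse))          # some tracer field

    # Prolongation: interpolate coarse values onto the fine nest.
    f = np.interp(x_fine, x_coarse, c)

    # Restriction: average fine values back into overlapping coarse cells.
    idx = np.searchsorted(x_coarse, x_fine, side="right") - 1
    sums = np.bincount(idx, weights=f, minlength=x_coarse.size)
    counts = np.bincount(idx, minlength=x_coarse.size)
    c[counts > 0] = sums[counts > 0] / counts[counts > 0]
    ```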

  19. MODELING THE RED SEQUENCE: HIERARCHICAL GROWTH YET SLOW LUMINOSITY EVOLUTION

    International Nuclear Information System (INIS)

    Skelton, Rosalind E.; Bell, Eric F.; Somerville, Rachel S.

    2012-01-01

    We explore the effects of mergers on the evolution of massive early-type galaxies by modeling the evolution of their stellar populations in a hierarchical context. We investigate how a realistic red sequence population set up by z ∼ 1 evolves under different assumptions for the merger and star formation histories, comparing changes in color, luminosity, and mass. The purely passive fading of existing red sequence galaxies, with no further mergers or star formation, results in dramatic changes at the bright end of the luminosity function and color-magnitude relation. Without mergers there is too much evolution in luminosity at a fixed space density compared to observations. The change in color and magnitude at a fixed mass resembles that of a passively evolving population that formed relatively recently, at z ∼ 2. Mergers among the red sequence population ('dry mergers') occurring after z = 1 build up mass, counteracting the fading of the existing stellar populations to give smaller changes in both color and luminosity for massive galaxies. By allowing some galaxies to migrate from the blue cloud onto the red sequence after z = 1 through gas-rich mergers, younger stellar populations are added to the red sequence. This manifestation of the progenitor bias increases the scatter in age and results in even smaller changes in color and luminosity between z = 1 and z = 0 at a fixed mass. The resultant evolution appears much slower, resembling the passive evolution of a population that formed at high redshift (z ∼ 3-5), and is in closer agreement with observations. We conclude that measurements of the luminosity and color evolution alone are not sufficient to distinguish between the purely passive evolution of an old population and cosmologically motivated hierarchical growth, although these scenarios have very different implications for the mass growth of early-type galaxies over the last half of cosmic history.

  20. Three-dimensional parallel edge-based finite element modeling of electromagnetic data with field redatuming

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Čuma, Martin; Zhdanov, Michael

    2015-01-01

    This paper presents a parallelized version of the edge-based finite element method with a novel post-processing approach for numerical modeling of an electromagnetic field in complex media. The method uses an unstructured tetrahedral mesh which can reduce the number of degrees of freedom significantly. The linear system of finite element equations is solved using parallel direct solvers, which are robust for ill-conditioned systems and efficient for multiple source electromagnetic (EM) modeling. We also introduce a novel approach to compute the scalar components of the electric field from the tangential components along each edge, based on field redatuming. The method can produce a more accurate result compared to the conventional approach. We have applied the developed algorithm to compute the EM response for a typical 3D anisotropic geoelectrical model of the off-shore HC reservoir with complex...
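
    The claim that direct solvers are efficient for multiple-source EM modeling rests on factoring the system matrix once and reusing the factorization for every source. A generic sketch with SciPy's sparse LU; the tridiagonal matrix is a stand-in for an assembled edge-element system, and the source count is arbitrary:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 1000
    # Stand-in for the assembled finite element matrix (sparse; direct
    # solvers stay robust even when such systems are ill-conditioned).
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    lu = splu(A)                  # factor once: the dominant cost

    # One right-hand side per transmitter position (64 sources assumed).
    B = np.random.default_rng(3).standard_normal((n, 64))
    X = lu.solve(B)               # cheap triangular solves, one per source

    print(np.allclose(A @ X, B))
    ```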