Hierarchical Bulk Synchronous Parallel Model and Performance Optimization
Institute of Scientific and Technical Information of China (English)
HUANG Linpeng; SUN Yongqiang; YUAN Wei
1999-01-01
Based on the framework of BSP, a Hierarchical Bulk Synchronous Parallel (HBSP) performance model is introduced in this paper to capture the performance optimization problem for various stages in parallel program development and to accurately predict the performance of a parallel program by considering factors causing variance at local computation and global communication. The related methodology has been applied to several real applications and the results show that HBSP is a suitable model for optimizing parallel programs.
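The core BSP cost formula on which HBSP-style models build can be sketched in a few lines; the superstep pairs (w, h) and the machine parameters g (per-message gap) and l (barrier latency) below are hypothetical values for illustration, not figures from the paper.

```python
def bsp_cost(supersteps, g, l):
    """Standard BSP cost: for each superstep, the maximum local work w,
    plus g * h (h = max messages sent or received by any processor),
    plus the barrier synchronization cost l."""
    return sum(w + g * h + l for (w, h) in supersteps)

# Hypothetical program with two supersteps on a machine with g=4, l=100:
cost = bsp_cost([(1000, 50), (800, 20)], g=4, l=100)
print(cost)  # (1000 + 200 + 100) + (800 + 80 + 100) = 2280
```

Hierarchical refinements of this model decompose w and h further to account for variance at local computation and global communication.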
Parallel hierarchical radiosity rendering
Energy Technology Data Exchange (ETDEWEB)
Carter, M.
1993-07-01
In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model
Directory of Open Access Journals (Sweden)
Xin Wang
2012-01-01
Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling the virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.
Parallel hierarchical global illumination
Energy Technology Data Exchange (ETDEWEB)
Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)
1997-10-08
Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recently published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
Hierarchical Parallel Evaluation of a Hamming Code
Directory of Open Access Journals (Sweden)
Shmuel T. Klein
2017-04-01
Full Text Available The Hamming code is a well-known error correction code and can correct a single error in an input vector of size n bits by adding log n parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in log n layers. All the processors perform similar simple tasks, and need only a few bytes of internal memory.
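As a sketch of the underlying computation: the Hamming syndrome is an XOR over the positions of set bits, which parallelizes naturally as a balanced reduction tree in log n layers, the kind of hierarchical structure the paper describes. The (7,4) codeword below is a hypothetical example, not taken from the paper.

```python
def hamming_syndrome(bits):
    """XOR of the 1-based positions of all set bits. For a valid Hamming
    codeword the syndrome is 0; after a single bit flip it equals the
    position of the flipped bit. The XOR reduction can be evaluated as a
    balanced tree of n processors in log n layers."""
    s = 0
    for pos, b in enumerate(bits, start=1):
        if b:
            s ^= pos
    return s

def correct_single_error(bits):
    bits = list(bits)
    s = hamming_syndrome(bits)
    if s:
        bits[s - 1] ^= 1   # flip the erroneous bit back
    return bits

codeword  = [0, 0, 1, 1, 0, 0, 1]   # valid (7,4) codeword: syndrome 0
corrupted = [0, 0, 1, 1, 0, 1, 1]   # bit at position 6 flipped
print(hamming_syndrome(corrupted))              # 6
print(correct_single_error(corrupted) == codeword)  # True
```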
Hierarchical Parallelization of Gene Differential Association Analysis
Directory of Open Access Journals (Sweden)
Dwarkadas Sandhya
2011-09-01
Full Text Available Abstract Background Microarray gene differential expression analysis is a widely used technique that deals with high-dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.
Research of Hierarchical Visual Modeling System for Parallel Programs
Institute of Scientific and Technical Information of China (English)
徐祯; 孙济洲; 于策; 孙超; 汤善江
2011-01-01
Although visual modeling technology can effectively reduce the difficulty of parallel program design, complex hardware architectures still pose new challenges for parallel program design methods at the software level. To address these issues, this paper proposes a visual modeling methodology based on the hierarchical idea and a hierarchical modeling scheme for parallel programs, and designs and implements a modeling system called e-ParaModel for multi-core cluster environments. A modeling example is completed to verify the system's feasibility and applicability.
SORM applied to hierarchical parallel system
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
2006-01-01
The old hierarchical stochastic load combination model of Ferry Borges and Castanheta and the corresponding problem of determining the distribution of the extreme random load effect are the inspiration for this paper. The evaluation of the distribution function of the extreme value by use of a particular first order reliability method (FORM) was first described in a celebrated paper by Rackwitz and Fiessler more than a quarter of a century ago. The method has become known as the Rackwitz-Fiessler algorithm. The original RF-algorithm as applied to a hierarchical random variable model is recapitulated so that a simple but quite effective accuracy-improving calculation can be explained. A limit state curvature correction factor on the probability approximation is obtained from the results at the final stop of the RF-algorithm. This correction factor is based on Breitung's asymptotic formula for second-order reliability methods (SORM).
Parallel hierarchical evaluation of transitive closure queries
Houtsma, M.A.W.; Cacace, F.; Ceri, S.
1991-01-01
Presents a new approach to parallel computation of transitive closure queries using a semantic data fragmentation. Tuples of a large base relation denote edges in a graph, which models a transportation network. A fragmentation algorithm is proposed which produces a partitioning of the base relation
Parallel hierarchical evaluation of transitive closure queries
Houtsma, M.A.W.; Cacace, F.; Ceri, S.
1991-01-01
Presents a new approach to parallel computation of transitive closure queries using a semantic data fragmentation. Tuples of a large base relation denote edges in a graph, which models a transportation network. A fragmentation algorithm is proposed which produces a partitioning of the base relation
Parallelism and Time in Hierarchical Self-Assembly
Chen, Ho-Lin
2011-01-01
We study the role that parallelism plays in time complexity of Winfree's abstract Tile Assembly Model (aTAM), a model of molecular algorithmic self-assembly. In the "hierarchical" aTAM, two assemblies, both consisting of multiple tiles, are allowed to aggregate together, whereas in the "seeded" aTAM, tiles attach one at a time to a growing assembly. Adleman, Cheng, Goel, and Huang ("Running Time and Program Size for Self-Assembled Squares", STOC 2001) showed how to assemble an n x n square in O(n) time in the seeded aTAM using O(log n / log log n) unique tile types, where both of these parameters are optimal. They asked whether the hierarchical aTAM could allow a tile system to use the ability to form large assemblies in parallel before they attach to break the Omega(n) lower bound for assembly time. We show that there is a tile system with the optimal O(log n / log log n) tile types that assembles an n x n square using O(log^2 n) parallel "stages", which is close to the optimal Omega(log n) stages, forming t...
Collaborative Hierarchical Sparse Modeling
Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina C
2010-01-01
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is done by solving an l_1-regularized linear regression problem, usually called Lasso. In this work we first combine the sparsity-inducing property of the Lasso model, at the individual feature level, with the block-sparsity property of the group Lasso model, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the hierarchical Lasso, which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower one. Signals then share the same active groups, or classes, but not necessarily the same active set. This is very well suited for applications such as source separation. An efficient optimization procedure, which guarantees convergence to the global opt...
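A minimal sketch of the hierarchical (sparse-group) Lasso penalty the abstract describes, combining an l_1 term at the individual feature level with a sum of group l_2 norms at the group level; the coefficients, grouping, and regularization weights below are hypothetical.

```python
import math

def hierarchical_lasso_penalty(beta, groups, lam1, lam2):
    """Hierarchical Lasso penalty: lam1 * ||beta||_1 encourages sparsity
    within features, lam2 * sum_g ||beta_g||_2 encourages sparsity
    across whole groups of features."""
    l1 = sum(abs(b) for b in beta)
    group_l2 = sum(math.sqrt(sum(beta[i] ** 2 for i in g)) for g in groups)
    return lam1 * l1 + lam2 * group_l2

beta = [0.0, 0.0, 3.0, 4.0]   # hypothetical coefficients: group 0 inactive
groups = [[0, 1], [2, 3]]     # two feature groups
print(hierarchical_lasso_penalty(beta, groups, lam1=1.0, lam2=2.0))
# l1 = 7, group norms = 0 + 5  ->  1*7 + 2*5 = 17.0
```

In the collaborative extension described above, several signals would share the active groups while each keeps its own active set within them.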
Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura
2016-09-01
The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. Direct parallel-hierarchical transformations are based on a cluster CPU- and GPU-oriented hardware platform. Mathematical models of training of the parallel-hierarchical (PH) network for the transformation are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most topical for problems of organizing high-performance computation over very large arrays of information designed to implement multi-stage sensing and processing as well as compaction and recognition of data in informational structures and computer devices. The method has such advantages as high performance through the use of recent advances in parallelization, the ability to work with images of very high dimension, ease of scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.
On the hierarchical parallelization of ab initio simulations
Ruiz-Barragan, Sergi; Shiga, Motoyuki
2016-01-01
A hierarchical parallelization has been implemented in a new unified code PIMD-SMASH for ab initio simulation where the replicas and the Born-Oppenheimer forces are parallelized. It is demonstrated that ab initio path integral molecular dynamics simulations can be carried out very efficiently for systems up to a few tens of water molecules. The code was then used to study a Diels-Alder reaction of cyclopentadiene and butenone by ab initio string method. A reduction in the reaction energy barrier is found in the presence of hydrogen-bonded water, in accordance with experiment.
Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus
Jelonek, M
2006-01-01
The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of modeling hierarchical linear equations and estimation based on MPlus software. I present my own model to illustrate the impact of different factors on the school acceptance level.
Parallel iterative solvers and preconditioners using approximate hierarchical methods
Energy Technology Data Exchange (ETDEWEB)
Grama, A.; Kumar, V.; Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)
1996-12-31
In this paper, we report results of the performance, convergence, and accuracy of a parallel GMRES solver for Boundary Element Methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders of magnitude in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on the truncated Green's function. Experimental results on a 256 processor Cray T3D are presented.
Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus
Jelonek, Magdalena
2006-01-01
The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences it is used to analyze many problems such as educational, organizational or market dilemma. This paper introduces the logic of m...
Olfactory functions are mediated by parallel and hierarchical processing.
Savic, I; Gulyas, B; Larsson, M; Roland, P
2000-06-01
How the human brain processes the perception, discrimination, and recognition of odors has not been systematically explored. Cerebral activations were therefore studied with PET during five different olfactory tasks: monorhinal smelling of odorless air (AS), single odors (OS), discrimination of odor intensity (OD-i), discrimination of odor quality (OD-q), and odor recognition memory (OM). OS activated amygdala-piriform, orbitofrontal, insular, and cingulate cortices and right thalamus. OD-i and OD-q both engaged left insula and right cerebellum. OD-q also involved other areas, including right caudate and subiculum. OM did not activate the insula, but instead, the piriform cortex. With the exception of caudate and subiculum, it shared the remaining activations with the OD-q, and engaged, in addition, the temporal and parietal cortices. These findings indicate that olfactory functions are organized in a parallel and hierarchical manner.
Architecture of the parallel hierarchical network for fast image recognition
Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule
2016-09-01
Multistage integration of visual information in the brain allows humans to respond quickly to most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects complexity of image data. Procedures of the temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates a structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.
Hierarchical Place Trees: A Portable Abstraction for Task Parallelism and Data Movement
Yan, Yonghong; Zhao, Jisheng; Guo, Yi; Sarkar, Vivek
Modern computer systems feature multiple homogeneous or heterogeneous computing units with deep memory hierarchies, and expect a high degree of thread-level parallelism from the software. Exploitation of data locality is critical to achieving scalable parallelism, but adds a significant dimension of complexity to performance optimization of parallel programs. This is especially true for programming models where locality is implicit and opaque to programmers. In this paper, we introduce the hierarchical place tree (HPT) model as a portable abstraction for task parallelism and data movement. The HPT model supports co-allocation of data and computation at multiple levels of a memory hierarchy. It can be viewed as a generalization of concepts from the Sequoia and X10 programming models, resulting in capabilities that are not supported by either. Compared to Sequoia, HPT supports three kinds of data movement in a memory hierarchy rather than just explicit data transfer between adjacent levels, as well as dynamic task scheduling rather than static task assignment. Compared to X10, HPT provides a hierarchical notion of places for both computation and data mapping. We describe our work-in-progress on implementing the HPT model in the Habanero-Java (HJ) compiler and runtime system. Preliminary results on general-purpose multicore processors and GPU accelerators indicate that the HPT model can be a promising portable abstraction for future multicore processors.
Tashiro, Tohru
2014-03-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit iPod sales data, and obtain better agreement than with the Bass model.
Tashiro, Tohru
2013-01-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit iPod sales data, and obtain better agreement than with the Bass model.
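For reference, the classic Bass model that the paper extends can be simulated with a few lines of discrete-time iteration; the parameters below (market size m, innovation rate p, imitation rate q) are hypothetical, and the memory effect proposed in the paper is not included.

```python
def bass_adoption(m, p, q, steps):
    """Discrete-time iteration of the classic Bass diffusion model:
    new adopters per period = (p + q * N / m) * (m - N), where N is the
    cumulative number of adopters, m the market size, p the innovation
    coefficient and q the imitation coefficient."""
    N = 0.0
    history = []
    for _ in range(steps):
        N += (p + q * N / m) * (m - N)
        history.append(N)
    return history

# Hypothetical parameters, not fitted to any real sales data.
sales = bass_adoption(m=1_000_000, p=0.03, q=0.38, steps=10)
```

Cumulative adoption rises monotonically toward m, producing the familiar S-shaped curve that memory-augmented variants modify.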
Parallel Software Model Checking
2015-01-08
Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213. Team members: Sagar Chaki, Arie Gurfinkel.
Hierarchical Cont-Bouchaud model
Paluch, Robert; Holyst, Janusz A
2015-01-01
We extend the well-known Cont-Bouchaud model to include a hierarchical topology of agents' interactions. The influence of hierarchy on system dynamics is investigated using two models. The first is based on a multi-level, nested Erdos-Renyi random graph, with individual decisions by agents made according to Potts dynamics. This approach does not lead to a broad return distribution outside a parameter regime close to the original Cont-Bouchaud model. In the second model we introduce a limited hierarchical Erdos-Renyi graph, where merging of clusters at a level h+1 involves only clusters that have merged at the previous level h, and we use the original Cont-Bouchaud agent dynamics on the resulting clusters. The second model leads to a heavy-tail distribution of cluster sizes and relative price changes in a wide range of connection densities, not only close to the percolation threshold.
Generic, hierarchical framework for massively parallel Wang-Landau sampling.
Vogel, Thomas; Li, Ying Wai; Wüst, Thomas; Landau, David P
2013-05-24
We introduce a parallel Wang-Landau method based on the replica-exchange framework for Monte Carlo simulations. To demonstrate its advantages and general applicability for simulations of complex systems, we apply it to different spin models including spin glasses, the Ising model, and the Potts model, lattice protein adsorption, and the self-assembly process in amphiphilic solutions. Without loss of accuracy, the method gives significant speed-up and potentially scales up to petaflop machines.
Generic, Hierarchical Framework for Massively Parallel Wang-Landau Sampling
Vogel, Thomas; Li, Ying Wai; Wüst, Thomas; Landau, David P.
2013-05-01
We introduce a parallel Wang-Landau method based on the replica-exchange framework for Monte Carlo simulations. To demonstrate its advantages and general applicability for simulations of complex systems, we apply it to different spin models including spin glasses, the Ising model, and the Potts model, lattice protein adsorption, and the self-assembly process in amphiphilic solutions. Without loss of accuracy, the method gives significant speed-up and potentially scales up to petaflop machines.
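A minimal single-walker Wang-Landau sketch, which the framework above parallelizes over replica-exchange windows, can be written for a toy system whose density of states is known exactly: L independent binary spins with E equal to the number of up spins, so g(E) = C(L, E). The sweep length and flatness threshold below are conventional choices, not values from the paper.

```python
import math
import random

def wang_landau_binomial(L=8, flat=0.8, ln_f_final=1e-4, seed=1):
    """Single-walker Wang-Landau random walk estimating the density of
    states g(E) for L independent binary spins."""
    random.seed(seed)
    state = [0] * L
    E = 0
    ln_g = [0.0] * (L + 1)   # running estimate of ln g(E)
    H = [0] * (L + 1)        # visit histogram for the flatness check
    ln_f = 1.0               # modification factor, halved when flat
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = random.randrange(L)          # propose one spin flip
            E_new = E + (1 - 2 * state[i])
            # accept with probability min(1, g(E) / g(E_new))
            if math.log(random.random() + 1e-300) < ln_g[E] - ln_g[E_new]:
                state[i] ^= 1
                E = E_new
            ln_g[E] += ln_f
            H[E] += 1
        if min(H) > flat * sum(H) / len(H):  # histogram "flat enough"?
            H = [0] * (L + 1)
            ln_f /= 2.0
    # normalize so g(0) = 1 (exactly one all-down state)
    return [math.exp(lg - ln_g[0]) for lg in ln_g]
```

For L = 8 the estimate should approach the binomial coefficients 1, 8, 28, 56, 70, 56, 28, 8, 1; the parallel framework splits the energy range into overlapping windows, one walker group per window, with replica exchange between neighbors.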
A hierarchical approach to reducing communication in parallel graph algorithms
Harshvardhan,
2015-01-01
Large-scale graph computing has become critical due to the ever-increasing size of data. However, distributed graph computations are limited in their scalability and performance due to the heavy communication inherent in such computations. This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each of the neighbors of a vertex. Our proposed approach recognizes this, and reduces communication performed by the algorithm without change to user-code, through a hierarchical machine model imposed upon the input graph. The hierarchical model takes advantage of locale information of the neighboring vertices to reduce communication, both in message volume and total number of bytes sent. It is also able to better exploit the machine hierarchy to further reduce the communication costs, by aggregating traffic between different levels of the machine hierarchy. Results of an implementation in the STAPL GL show improved scalability and performance over the traditional level-synchronous approach, with 2.5x-8x improvement for a variety of graph algorithms at 12,000+ cores.
Hierarchical model of matching
Pedrycz, Witold; Roventa, Eugene
1992-01-01
The issue of matching two fuzzy sets becomes an essential design aspect of many algorithms including fuzzy controllers, pattern classifiers, knowledge-based systems, etc. This paper introduces a new model of matching. Its principal features involve the following: (1) matching carried out with respect to the grades of membership of fuzzy sets as well as some functionals defined on them (like energy, entropy, transom); (2) concepts of hierarchies in the matching model leading to a straightforward distinction between 'local' and 'global' levels of matching; and (3) a distributed character of the model realized as a logic-based neural network.
Institute of Scientific and Technical Information of China (English)
HuangMiao-hua; JinGuo-dong
2003-01-01
Hierarchical Structure Fuzzy Logic Control (HSFLC) strategies for torque distribution in a Parallel Hybrid Electric Vehicle (PHEV) in the modes of operation of the vehicle, i.e., acceleration, cruise, deceleration, etc., have been studied. Using the secondarily developed hybrid vehicle simulation tool ADVISOR, the dynamic model of the PHEV has been set up in MATLAB/SIMULINK. The engine, motor and battery characteristics have been studied. Simulation results show that the proposed hierarchical structured fuzzy logic control strategy is effective over the entire operating range of the vehicle in terms of fuel economy. Based on analyses of the simulation results and drivers' experiences, a fuzzy controller is designed and developed to control the torque distribution. The controller is evaluated via a hardware-in-the-loop simulator (HILS). The results verify the value of the controller.
Hierarchical topic modeling with nested hierarchical Dirichlet process
Institute of Scientific and Technical Information of China (English)
Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN
2009-01-01
This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed to be fixed, while the number of topics on each level is unknown a priori and is to be inferred from data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and the topic distribution of each document. Our theoretical analysis and experiment results show that this model can produce a more compact hierarchical topic structure and captures more fine-grained topic relationships than the hierarchical latent Dirichlet allocation model.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.
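As an illustration of the kind of diagnostic involved, the variance inflation factor (VIF) is a standard multicollinearity measure; this two-predictor, single-level sketch is generic and is not the paper's HLM-specific top-down method. The data below are hypothetical.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x1, x2):
    """VIF = 1 / (1 - R^2); with a single other regressor, R^2 = r^2.
    Values near 1 indicate little collinearity; above ~10 is a common
    rule-of-thumb warning threshold."""
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r * r)

x1 = [1, 2, 3, 4, 5]
x2 = [2.1, 3.9, 6.2, 8.0, 9.8]   # nearly 2 * x1: strong collinearity
print(vif_two_predictors(x1, x2))  # well above the warning threshold of 10
```

In a two-level model the same check would be run separately on the Level-2 contextual predictors and then the Level-1 individual predictors, in line with the top-down order recommended above.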
Optimizing FORTRAN Programs for Hierarchical Memory Parallel Processing Systems
Institute of Scientific and Technical Information of China (English)
金国华; 陈福接
1993-01-01
Parallel loops account for the greatest amount of parallelism in numerical programs. Executing nested loops in parallel with low run-time overhead is thus very important for achieving high performance in parallel processing systems. However, in parallel processing systems with caches or local memories in memory hierarchies, the "thrashing problem" may arise whenever data move back and forth between the caches or local memories of different processors. Previous techniques can only deal with rather simple cases with one linear function in the perfectly nested loop. In this paper, we present a parallel program optimizing technique called hybrid loop interchange (HLI) for cases with multiple linear functions and loop-carried data dependences in the nested loop. With HLI we can easily eliminate or reduce the thrashing phenomena without reducing the program parallelism.
Institute of Scientific and Technical Information of China (English)
祝永志; 张丹丹; 曹宝香; 禹继国
2012-01-01
For multi-core SMP cluster systems, this paper discusses hybrid parallel programming techniques based on MPI and OpenMP. We propose a new hybrid parallel programming method that is aware of the architecture hierarchy of SMP cluster systems. We design a hierarchical parallel algorithm for the N-body problem and compare its performance with traditional hybrid parallel algorithms on the Dawning 5000A cluster. The results indicate that our hierarchical hybrid parallel algorithm has better scalability and speedup than the others.
A Model of Hierarchical Key Assignment Scheme
Institute of Scientific and Technical Information of China (English)
ZHANG Zhigang; ZHAO Jing; XU Maozhi
2006-01-01
A model of the hierarchical key assignment scheme is proposed in this paper, which can be used with any cryptographic algorithm. In addition, the optimal dynamic control property of a hierarchical key assignment scheme is defined. Our scheme model meets this property.
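A generic illustration of top-down hierarchical key derivation (not the specific scheme of the paper): each node's key is a one-way hash of its parent's key and its identifier, so an ancestor can derive every descendant key while descendants cannot climb back up the hierarchy. The identifiers and root secret below are hypothetical.

```python
import hashlib

def derive_child_key(parent_key: bytes, child_id: str) -> bytes:
    """One-way top-down derivation: SHA-256 of parent key plus child
    identifier. Inverting the hash to recover the parent is infeasible."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

root = b"hypothetical-root-secret"
k_dept = derive_child_key(root, "dept-A")
k_team = derive_child_key(k_dept, "team-1")
# The root holder can recompute any descendant key on demand.
assert derive_child_key(derive_child_key(root, "dept-A"), "team-1") == k_team
```

Dynamic control (adding or revoking a node) in such hash chains generally requires re-deriving the affected subtree, which is the kind of property the paper's optimal dynamic control condition formalizes.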
Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms
Hasanov, Khalid
2014-03-04
© 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
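The panel-based structure that Hierarchical SUMMA builds on can be sketched serially. The following is a hypothetical NumPy illustration (no MPI; the group loop only mimics the two-level communication scheme the paper introduces): SUMMA accumulates outer products of column/row panels, and the hierarchical variant processes those panels group by group.

```python
import numpy as np

def summa_like(A, B, panel=2):
    """One-level SUMMA-style multiply: accumulate outer products of
    successive panels of A's columns and B's rows."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for p in range(0, k, panel):
        C += A[:, p:p + panel] @ B[p:p + panel, :]
    return C

def hierarchical_summa_like(A, B, group=4, panel=2):
    """Two-level variant: panels are processed group by group, the serial
    analogue of introducing a hierarchy (groups ~ node-level broadcasts)
    into the communication scheme."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for g in range(0, k, group):
        for p in range(g, min(g + group, k), panel):
            C += A[:, p:p + panel] @ B[p:p + panel, :]
    return C
```

In the parallel setting the grouping changes only who broadcasts to whom; the arithmetic, and therefore the result, is identical to plain SUMMA.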
HIERARCHICAL OPTIMIZATION MODEL ON GEONETWORK
Directory of Open Access Journals (Sweden)
Z. Zha
2012-07-01
In the existing construction experience of Spatial Data Infrastructures (SDI), GeoNetwork, as an integrated geographical-information solution, is an effective way of building an SDI. When GeoNetwork serves as an internet application, however, several shortcomings are exposed. First, the time consumed by data loading increases considerably with the growth of the metadata count, so the efficiency of the query and search services decreases. Second, stability and robustness are both degraded by the huge amount of metadata. Finally, the requirement of multi-user concurrent access to massive data is not effectively satisfied on the internet. A novel approach, the Hierarchical Optimization Model (HOM), is presented in this paper to overcome GeoNetwork's inability to work with massive data. HOM optimizes GeoNetwork in several respects, including its internal procedures and external deployment strategies, and builds an efficient index for accessing huge volumes of metadata and supporting concurrent processes, so that services based on GeoNetwork remain stable while handling massive metadata. As an experiment, we deployed more than 30 GeoNetwork nodes and harvested nearly 1.1 million metadata records. Compared with the original software, the HOM-improved version makes indexing and retrieval faster, keeps the speed stable as the metadata volume increases, and remains stable under multi-user concurrent access to system services. The experiment achieved good results and proved that our optimization model is efficient and reliable.
Furuta, Atsuhiro; Mori, Hiroyuki
This paper proposes a hybrid method of hierarchical optimization and Parallel Tabu Search (PTS) for distribution system service restoration with distributed generators. The objective is to evaluate the optimal route to recover the service; the drive to improve power quality makes service restoration increasingly important. Distribution system service restoration is a complicated combinatorial optimization problem that is expressed as nonlinear mixed-integer programming. In this paper, an efficient method is proposed to restore the service through hierarchical optimization with Parallel Tabu Search. The proposed method is tested on a sample system.
A generic, hierarchical framework for massively parallel Wang Landau sampling
Energy Technology Data Exchange (ETDEWEB)
Vogel, Thomas [University of Georgia, Athens, GA; Li, Ying Wai [ORNL; Wuest, Thomas [Swiss Federal Research Institute, Switzerland; Landau, David P [University of Georgia, Athens, GA
2013-01-01
We introduce a parallel Wang-Landau method based on the replica-exchange framework for Monte Carlo simulations. To demonstrate its advantages and general applicability for simulations of complex systems, we apply it to the self-assembly process in amphiphilic solutions and to lattice protein adsorption. Without loss of accuracy, the method gives significant speed-up on small architectures like multi-core processors, and should be beneficial for petaflop machines.
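For readers unfamiliar with the underlying sampler, here is a toy *serial* Wang-Landau sketch (illustrative only; the paper's contribution is the replica-exchange parallelization, which is not shown). It estimates the density of states g(E) for the "energy" E = d1 + d2 of two six-sided dice, a system whose exact g is known (e.g. g(7)/g(2) = 6). The crude halving schedule for the modification factor stands in for the usual histogram-flatness check.

```python
import math
import random

def wang_landau_dice(sweeps=20, steps_per_sweep=20000, seed=7):
    rng = random.Random(seed)
    d = [1, 1]                              # current dice state
    ln_g = {e: 0.0 for e in range(2, 13)}   # log density-of-states estimate
    ln_f = 1.0                              # modification factor
    for _ in range(sweeps):
        for _ in range(steps_per_sweep):
            i = rng.randrange(2)
            old = d[i]
            e_old = sum(d)
            d[i] = rng.randint(1, 6)        # propose re-rolling one die
            e_new = sum(d)
            # accept with probability min(1, g(E_old)/g(E_new))
            if rng.random() >= math.exp(min(0.0, ln_g[e_old] - ln_g[e_new])):
                d[i] = old                  # reject: restore old state
            ln_g[sum(d)] += ln_f            # penalize the visited energy
        ln_f *= 0.5  # crude schedule; real WL checks histogram flatness
    return ln_g
```

After convergence, differences in ln_g reproduce the known ratios, e.g. ln_g[7] exceeds ln_g[2] because a sum of 7 can be realized in six ways versus one.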
Hierarchical modeling and analysis for spatial data
Banerjee, Sudipto; Gelfand, Alan E
2003-01-01
Among the many uses of hierarchical modeling, its application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and data analysis.
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
Arampatzis, Giorgos; Katsoulakis, Markos A.; Plechac, Petr; Taufer, Michela; Xu, Lifan
2011-01-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice Kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physiochemical processes with complex chemistry and transport micro-mechanisms. The algorithms can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processin...
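The serial kernel that such parallel KMC algorithms decompose can be sketched in a few lines. This is a hypothetical illustration of a standard lattice-KMC/SSA step (not the paper's fractional-step scheme): select an event with probability proportional to its rate, then draw the waiting time from an exponential distribution with the total rate.

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: returns (chosen event index, dt)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    # waiting time ~ Exp(total rate)
    dt = -math.log(rng.random()) / total
    return chosen, dt
```

The parallel fractional-step idea then amounts to running such kernels independently on sub-lattices for short time windows and synchronizing at the boundaries.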
A Model for Slicing JAVA Programs Hierarchically
Institute of Scientific and Technical Information of China (English)
Bi-Xin Li; Xiao-Cong Fan; Jun Pang; Jian-Jun Zhao
2004-01-01
Program slicing can be effectively used to debug, test, analyze, understand and maintain object-oriented software. In this paper, a new slicing model is proposed to slice Java programs based on their inherent hierarchical feature. The main idea of hierarchical slicing is to slice programs in a stepwise way, from the package level, to the class level, to the method level, and finally down to the statement level. The stepwise slicing algorithm and the related graph-reachability algorithms are presented, the architecture of the Java program Analyzing Tool (JATO) based on the hierarchical slicing model is provided, and applications and a small case study are also discussed.
Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)
2008-01-01
The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region growing segmentation in which the optimal image segmentation is found at N(sub R) regions, given a segmentation at N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.
When to Use Hierarchical Linear Modeling
National Research Council Canada - National Science Library
Veronika Huta
2014-01-01
Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis...
An introduction to hierarchical linear modeling
National Research Council Canada - National Science Library
Woltman, Heather; Feldstain, Andrea; MacKay, J. Christine; Rocchi, Meredith
2012-01-01
This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis...
Conservation Laws in the Hierarchical Model
Beijeren, H. van; Gallavotti, G.; Knops, H.
1974-01-01
An exposition of the renormalization-group equations for the hierarchical model is given. Attention is drawn to some properties of the spin distribution functions which are conserved under the action of the renormalization group.
Intrusion Detection System with Hierarchical Different Parallel Classification
Directory of Open Access Journals (Sweden)
Behrouz Safaiezadeh
2015-12-01
Today, our lives are integrated with networks and the internet, and needed information is transmitted through networks; an attacker may therefore attempt to abuse that information, exploiting network weaknesses to attack and make changes. An Intrusion Detection System is a system capable of detecting such attacks. The system detects attacks by constructing classifiers and inspecting IP traffic in the network. Recent research has shown that a single basic classifier cannot be effective on its own, because of its errors, whereas combining several classifiers provides better performance. The current study therefore designs three classifiers, a support vector machine, a multilayer perceptron neural network, and a parallel fuzzy system, each trained on the dataset and capable of distinguishing two classes. Finally, the decision on the type of attack is made by an intermediate network. The suggested system was tested on the KDD99 dataset, and the results indicate good performance, with an average detection rate of 99.71%.
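The fusion idea described above can be sketched with stand-in classifiers. This is a hypothetical illustration only: the base learners here are trivial threshold rules (the real system uses a trained SVM, a multilayer perceptron, and a parallel fuzzy system, and fuses them with an intermediate network rather than a plain majority vote).

```python
def majority_vote(predictions):
    """Fuse per-classifier labels (0 = normal, 1 = attack) by majority."""
    return 1 if sum(predictions) >= 2 else 0

def detect(sample, classifiers):
    """Run each base classifier on the sample and fuse their votes."""
    votes = [clf(sample) for clf in classifiers]
    return majority_vote(votes)

# Stand-in rules on made-up features (hypothetical, for illustration only).
def svm_like(s):
    return 1 if s["bytes"] > 1e6 else 0

def mlp_like(s):
    return 1 if s["failed_logins"] > 3 else 0

def fuzzy_like(s):
    return 1 if s["bytes"] > 5e5 and s["failed_logins"] > 1 else 0
```

The point of the ensemble is that no single rule has to be right on every sample; the fused decision tolerates individual errors.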
Institute of Scientific and Technical Information of China (English)
XIONG Hongkai; YU Songyu; YE Wei
2003-01-01
Because real-time compression and high-speed digital processing circuitry are crucial for digital high-definition television (HDTV) coding, parallel processing has become a feasible scheme in most applications so far. This paper presents a novel bit-allocation strategy for an HDTV encoder system with a parallel architecture, in which the original HDTV picture is divided into six horizontal sub-pictures. It is shown that the MPEG-2 Test Model 5 (TM5) rate-control scheme would not only give rise to inconsistent visual quality across the sub-pictures of a composite HDTV frame, but also make the coding quality degrade abruptly and the buffer underflow at scene changes. How to allocate bit-rates among sub-pictures has thus become a great challenge in the literature. The proposed strategy performs a hierarchical, jointly optimized bit-allocation based on the sub-pictures' average complexity and average bits measures and, moreover, is capable of alleviating serious inconsistency in picture quality at scene changes. The optimized bit-allocation and its complementary rate-adaptive procedures are formulated and described. In this paper, the proposed strategy is compared with independent coding, in which each sub-picture sequence is assigned the same proportion of the channel bandwidth. Experimental results demonstrate that the proposed scheme not only alleviates the boundary effect but also promises consistency of sub-picture quality.
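The core of complexity-proportional bit allocation can be sketched directly. This is a minimal hypothetical illustration (the function name and the integer rounding policy are ours, not the paper's): each sub-picture receives a share of the frame's bit budget proportional to its average complexity measure.

```python
def allocate_bits(total_bits, complexities):
    """Split a frame's bit budget among sub-pictures in proportion to
    their average complexity measures."""
    total_c = sum(complexities)
    raw = [total_bits * c / total_c for c in complexities]
    bits = [int(b) for b in raw]
    # give any bits lost to rounding to the most complex sub-picture
    bits[raw.index(max(raw))] += total_bits - sum(bits)
    return bits
```

Independent coding corresponds to the special case where all complexities are equal, so every sub-picture gets the same share regardless of content.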
Classification using Hierarchical Naive Bayes models
DEFF Research Database (Denmark)
Langseth, Helge; Dyhre Nielsen, Thomas
2006-01-01
Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing, sets of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice) it can reduce classification accuracy due to “information double-counting” and interaction omission. In this paper we focus on a relatively new set of models, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...
Analysis hierarchical model for discrete event systems
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical model based on discrete-event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete-event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local control and Petri model of one subsystem of the global robotic system. Because Petri models can readily be applied on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets; discrete-event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulating the proposed robotic system with timed Petri nets offers the opportunity to observe its timing: by measuring transport and transmission times on the spot, graphs showing the average time of the transport activity are obtained for each set of finished products.
Hierarchical Control of Parallel AC-DC Converter Interfaces for Hybrid Microgrids
DEFF Research Database (Denmark)
Lu, Xiaonan; Guerrero, Josep M.; Sun, Kai;
2014-01-01
In this paper, a hierarchical control system for parallel power electronics interfaces between the ac bus and dc bus in a hybrid microgrid is presented. Both standalone and grid-connected operation modes on the dc side of the microgrid are analyzed. Concretely, a three-level hierarchical control system is implemented: in the primary control level, decentralized control is realized by using the droop method, with a local ac-current proportional-resonant controller and a dc-voltage proportional-integral controller employed, and when the local load is connected to the dc bus, dc droop control is applied. A model of the three control levels is developed in order to adjust the main control parameters and study the system stability. Experimental results of a 2×2.2 kW parallel ac-dc converter system have shown satisfactory realization of the designed system.
Semiparametric Quantile Modelling of Hierarchical Data
Institute of Scientific and Technical Information of China (English)
Mao Zai TIAN; Man Lai TANG; Ping Shing CHAN
2009-01-01
The classic hierarchical linear model formulation provides considerable flexibility for modelling the random effects structure and a powerful tool for analyzing nested data that arise in various areas such as biology, economics and education. However, it assumes the within-group errors to be independently and identically distributed (i.i.d.) and the models at all levels to be linear. Most importantly, traditional hierarchical models (just like other ordinary mean regression methods) cannot characterize the entire conditional distribution of a dependent variable given a set of covariates and fail to yield robust estimators. In this article, we relax the aforementioned assumptions as well as normality, and develop so-called Hierarchical Semiparametric Quantile Regression Models in which the within-group errors may be heteroscedastic and the models at some levels are allowed to be nonparametric. We present the ideas with a 2-level model. The level-1 model is specified as a nonparametric model, whereas the level-2 model is set as a parametric model. Under the proposed semiparametric setting, the vector of partial derivatives of the nonparametric function in level 1 becomes the response variable vector in level 2. The proposed method allows us to model the fixed effects in the innermost level (i.e., level 2) as a function of the covariates instead of a constant effect. We outline some mild regularity conditions required for convergence and asymptotic normality of our estimators. We illustrate our methodology with a real hierarchical data set from a laboratory study and some simulation studies.
Hierarchical linear regression models for conditional quantiles
Institute of Scientific and Technical Information of China (English)
TIAN Maozai; CHEN Gemai
2006-01-01
Quantile regression has several useful features and is therefore gradually developing into a comprehensive approach to the statistical analysis of linear and nonlinear response models, but it cannot deal effectively with data that have a hierarchical structure. In practice, the existence of such data hierarchies is neither accidental nor ignorable; it is a common phenomenon. Ignoring this hierarchical data structure risks overlooking the importance of group effects, and may also render invalid many of the traditional statistical analysis techniques used for studying data relationships. On the other hand, hierarchical models take the hierarchical data structure into account and have many applications in statistics, ranging from overdispersion to the construction of min-max estimators. However, hierarchical models are essentially mean regression and therefore cannot be used to characterize the entire conditional distribution of a dependent variable given high-dimensional covariates. Furthermore, the estimated coefficient vector (marginal effects) is sensitive to an outlier observation on the dependent variable. In this article, a new approach is developed which is based on the Gauss-Seidel iteration and takes full advantage of both quantile regression and hierarchical models. On the theoretical front, we also consider the asymptotic properties of the new method, obtaining simple conditions for n^{1/2}-convergence and asymptotic normality. We also illustrate the use of the technique with real educational data, which are hierarchical, and show how the results can be explained.
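The building block shared by the two quantile-regression abstracts above is the "check" (pinball) loss rho_tau(u) = u(tau - 1{u<0}). As a minimal hypothetical illustration (far simpler than the hierarchical Gauss-Seidel estimators the papers develop), the following fits a constant tau-quantile of a sample by subgradient descent on that loss.

```python
def fit_quantile(y, tau, lr=0.1, iters=20000):
    """Fit a constant tau-quantile of the sample y by subgradient descent
    on the mean check loss (1/n) * sum_i rho_tau(y_i - q)."""
    n = len(y)
    q = sum(y) / n  # start at the sample mean
    for _ in range(iters):
        # subgradient w.r.t. q: (1 - tau) for y_i <= q, -tau for y_i > q
        g = sum((1.0 - tau) if yi <= q else -tau for yi in y) / n
        q -= lr * g
    return q
```

At the optimum, roughly a fraction tau of the observations lie below q, which is exactly the defining property of the tau-quantile.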
Hierarchical models and chaotic spin glasses
Berker, A. Nihat; McKay, Susan R.
1984-09-01
Renormalization-group studies in position space have led to the discovery of hierarchical models which are exactly solvable, exhibiting nonclassical critical behavior at finite temperature. Position-space renormalization-group approximations that had been widely and successfully used are in fact alternatively applicable as exact solutions of hierarchical models, this realizability guaranteeing important physical requirements. For example, a hierarchized version of the Sierpiński gasket is presented, corresponding to a renormalization-group approximation which has quantitatively yielded the multicritical phase diagrams of submonolayers on graphite. Hierarchical models are now being studied directly as a testing ground for new concepts. For example, with the introduction of frustration, chaotic renormalization-group trajectories were obtained for the first time. Thus, strong and weak correlations are randomly intermingled at successive length scales, and a new microscopic picture and mechanism for a spin glass emerges. An upper critical dimension occurs via a boundary crisis mechanism in cluster-hierarchical variants developed to have well-behaved susceptibilities.
Hierarchic Models of Turbulence, Superfluidity and Superconductivity
Kaivarainen, A
2000-01-01
New models of Turbulence, Superfluidity and Superconductivity, based on new Hierarchic theory, general for liquids and solids (physics/0102086), have been proposed. CONTENTS: 1 Turbulence. General description; 2 Mesoscopic mechanism of turbulence; 3 Superfluidity. General description; 4 Mesoscopic scenario of fluidity; 5 Superfluidity as a hierarchic self-organization process; 6 Superfluidity in 3He; 7 Superconductivity: General properties of metals and semiconductors; Plasma oscillations; Cyclotron resonance; Electroconductivity; 8. Microscopic theory of superconductivity (BCS); 9. Mesoscopic scenario of superconductivity: Interpretation of experimental data in the framework of mesoscopic model of superconductivity.
Strategic games on a hierarchical network model
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
Among complex network models, the hierarchical network model is the one closest to such real networks as the world trade web, metabolic networks, the WWW, actor networks, and so on. It has not only the property of power-law degree distribution, but also growth based on preferential attachment, showing the scale-free degree distribution property. In this paper, we study the evolution of cooperation on a hierarchical network model, adopting the prisoner's dilemma (PD) game and the snowdrift game (SG) as metaphors of the interplay between connected nodes. The BA model provides a unifying framework for the emergence of cooperation. But interestingly, we found that on the hierarchical model there is no sign of cooperation for the PD game, while the frequency of cooperation decreases as the common benefit decreases for the SG. By comparing the scaling clustering coefficient properties of the hierarchical network model with those of the BA model, we found that the former amplifies the effect of hubs. Considering the different performances of the PD game and the SG on complex networks, we also found that common benefit leads to cooperation in the evolution. Thus our study may shed light on the emergence of cooperation in both natural and social environments.
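The payoff accounting underlying such evolutionary-game studies is simple to sketch. Below is a hypothetical one-round illustration (the payoff values R=1, S=0, T=1.5, P=0 are a common weak-PD parameterization of our choosing, not the paper's): each node plays the prisoner's dilemma with every neighbour and accumulates payoffs, so a defector surrounded by cooperators collects the temptation payoff T per game.

```python
# Weak prisoner's dilemma payoffs: (row player, column player)
# R=1 (mutual cooperation), S=0, T=1.5 (temptation), P=0 (mutual defection)
PAYOFF = {("C", "C"): (1.0, 1.0), ("C", "D"): (0.0, 1.5),
          ("D", "C"): (1.5, 0.0), ("D", "D"): (0.0, 0.0)}

def round_payoffs(edges, strategy):
    """Accumulate one round of pairwise PD payoffs over a network."""
    score = {v: 0.0 for v in strategy}
    for u, v in edges:
        pu, pv = PAYOFF[(strategy[u], strategy[v])]
        score[u] += pu
        score[v] += pv
    return score
```

In a full simulation, nodes would then imitate higher-scoring neighbours round after round; on hubs this imitation step is what amplifies (or, for the PD on hierarchical networks, suppresses) cooperation.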
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at the different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level can improve event recognition performance, and jointly integrating the three levels of contexts through our hierarchical model achieves the best performance.
Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software
Tilton, James C.
2003-01-01
A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g. Horowitz and T. Pavlidis, [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic
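The HSWO idea of recording a segmentation at every merge step can be illustrated on a toy 1-D signal. This is a hypothetical, drastically simplified sketch (real HSWO works on 2-D images with spectral dissimilarity criteria and region adjacency; here "adjacent" just means neighbouring intervals and dissimilarity is the difference of region means): repeatedly merge the most similar pair of adjacent regions and keep the segmentation at each level of detail.

```python
def hswo_1d(values):
    """Return the hierarchy of segmentations of a 1-D signal, from one
    region per sample down to a single region."""
    regions = [[v] for v in values]
    hierarchy = [[list(r) for r in regions]]

    def mean(r):
        return sum(r) / len(r)

    while len(regions) > 1:
        # find the most similar pair of adjacent regions
        i = min(range(len(regions) - 1),
                key=lambda k: abs(mean(regions[k]) - mean(regions[k + 1])))
        regions[i] = regions[i] + regions.pop(i + 1)
        hierarchy.append([list(r) for r in regions])
    return hierarchy
```

Every coarser level is obtained from the previous one by a single merge, which is exactly the property that makes the full hierarchy cheap to store and mine.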
Directory of Open Access Journals (Sweden)
TIMCHENKO, L.
2012-11-01
Propositions necessary for the development of parallel-hierarchical (PH) network training methods are discussed in this article. Unlike already-known artificial neural network structures, where non-normalized (absolute) similarity criteria are used for comparison, the suggested structure uses a normalized criterion. Based on an analysis of training rules, the conclusion is drawn that two supervised training methods are optimal for PH network training: error-correction-based training and memory-based training. Mathematical models of training and a combined method of PH network training for recognition of static and dynamic patterns are developed.
Managing Clustered Data Using Hierarchical Linear Modeling
Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…
The Infinite Hierarchical Factor Regression Model
Rai, Piyush
2009-01-01
We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.
Hierarchical models in the brain.
Directory of Open Access Journals (Sweden)
Karl Friston
2008-11-01
This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among apparently diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
Hierarchical model of vulnerabilities for emotional disorders.
Norton, Peter J; Mehta, Paras D
2007-01-01
Clark and Watson's (1991) tripartite model of anxiety and depression has had a dramatic impact on our understanding of the dispositional variables underlying emotional disorders. More recently, calls have been made to examine not simply the influence of negative affectivity (NA) but also mediating factors that might better explain how NA influences anxious and depressive syndromes (e.g. Taylor, 1998; Watson, 2005). Extending preliminary projects, this study evaluated two hierarchical models of NA, mediating factors of anxiety sensitivity and intolerance of uncertainty, and specific emotional manifestations. Data provided a very good fit to a model elaborated from preliminary studies, lending further support to hierarchical models of emotional vulnerabilities. Implications for classification and diagnosis are discussed.
Bayesian hierarchical modeling of drug stability data.
Chen, Jie; Zhong, Jinglin; Nie, Lei
2008-06-15
Stability data are commonly analyzed using linear fixed or random effect model. The linear fixed effect model does not take into account the batch-to-batch variation, whereas the random effect model may suffer from the unreliable shelf-life estimates due to small sample size. Moreover, both methods do not utilize any prior information that might have been available. In this article, we propose a Bayesian hierarchical approach to modeling drug stability data. Under this hierarchical structure, we first use Bayes factor to test the poolability of batches. Given the decision on poolability of batches, we then estimate the shelf-life that applies to all batches. The approach is illustrated with two example data sets and its performance is compared in simulation studies with that of the commonly used frequentist methods. (c) 2008 John Wiley & Sons, Ltd.
Cellular automata a parallel model
Mazoyer, J
1999-01-01
Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.
Hierarchical Climate Modeling for Cosmoclimatology
Ohfuchi, Wataru
2010-05-01
It has been reported that there are correlations among solar activity, the flux of galactic cosmic rays, the amount of low clouds, and surface air temperature (Svensmark and Friis-Christensen, 1997). These correlations appear to hold for current climate change, the Little Ice Age, and climate changes on geological time scales. Some hypothetical mechanisms have been proposed for the correlations, but quantitative studies are still needed to understand them. To reduce uncertainties, only first principles, or laws very close to first principles, should be used. Our group at the Japan Agency for Marine-Earth Science and Technology has started a modeling effort to tackle this problem. We are constructing models spanning galactic-cosmic-ray-induced ionization, aerosol formation, cloud formation, and global climate. In this talk, we introduce our modeling activities. For aerosol formation, we use molecular dynamics. For cloud formation, we use a new cloud microphysics model called the "super droplet method". We are also working to couple a nonhydrostatic regional cloud-resolving atmospheric model with a hydrostatic atmospheric general circulation model.
Hierarchical Boltzmann simulations and model error estimation
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement successively improves the result toward the full Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept for such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the importance of stable boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Models of parallel computation :a survey and classification
Institute of Scientific and Technical Information of China (English)
ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun
2007-01-01
In this paper, the state of the art in parallel computational model research is reviewed. We introduce various models developed during the past decades and, according to the features of their target architectures, especially memory organization, classify these parallel computational models into three generations. The models and their characteristics are discussed on the basis of this three-generation classification. We believe that, with the ever-increasing speed gap between CPUs and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms is becoming more and more complicated, and describing this hierarchy in future computational models becomes correspondingly important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the complexity of model analysis, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features to consider in future model design and research.
Hierarchical mixture models for assessing fingerprint individuality
Dass, Sarat C.; Li, Mingfei
2009-01-01
The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individua...
Semantic Image Segmentation with Contextual Hierarchical Models.
Seyedhosseini, Mojtaba; Tasdizen, Tolga
2016-05-01
Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
Magnetic susceptibilities of cluster-hierarchical models
McKay, Susan R.; Berker, A. Nihat
1984-02-01
The exact magnetic susceptibilities of hierarchical models are calculated near and away from criticality, in both the ordered and disordered phases. The mechanism and phenomenology are discussed for models with susceptibilities that are physically sensible, e.g., nondivergent away from criticality. Such models are found based upon the Niemeijer-van Leeuwen cluster renormalization. A recursion-matrix method is presented for the renormalization-group evaluation of response functions. Diagonalization of this matrix at fixed points provides simple criteria for well-behaved densities and response functions.
Three Layer Hierarchical Model for Chord
Directory of Open Access Journals (Sweden)
Waqas A. Imtiaz
2012-12-01
Full Text Available The increasing popularity of decentralized Peer-to-Peer (P2P) architectures emphasizes the need for an overlay structure that can provide efficient content discovery, accommodate high churn rates, and adapt to failures in the presence of heterogeneity among peers. Traditional P2P systems incorporate distributed client-server communication to find the peer that stores a desired data item with minimum delay and reduced overhead. However, traditional models cannot solve the problems of scalability and high churn rates. Hierarchical models were introduced to provide better fault isolation, effective bandwidth utilization, superior adaptation to the underlying physical network, and a reduction of the lookup path length as additional advantages. They are more efficient and easier to manage than traditional P2P networks. This paper discusses a further step in the P2P hierarchy via a 3-layer hierarchical model with a distributed database architecture in each layer, connected through its root. The peers are divided into three categories according to their physical stability and strength: Ultra Super-peers, Super-peers, and Ordinary Peers, assigned to the first, second, and third levels of the hierarchy, respectively. Peers in a group in the lower layer have their own local database, held by an associated super-peer in the middle layer, and the database is accessed among the peers through user queries. In our 3-layer hierarchical model for DHT algorithms, we use an advanced Chord algorithm with an optimized finger table that removes redundant entries from the finger table in the upper layer, which reduces lookup latency. Our research shows that the model provides faster search, since the network lookup latency is decreased by reducing the number of hops. The peers in such a network can then contribute with improved functionality and can perform well in
Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms
Quintin, Jean-Noel
2013-10-01
Matrix multiplication is a very important computation kernel, both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm, which dates back to 1969, was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However, this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced; SUMMA overcomes the shortcomings of Cannon's algorithm, as it can be used on a non-square number of processors as well. Since then, the number of processors in HPC platforms has increased by two orders of magnitude, making the contribution of communication to the overall execution time more significant. Therefore, state-of-the-art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate a reduction of communication cost of up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.
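The outer-product structure that SUMMA (and by extension HSUMMA) parallelizes can be sketched serially: at each step, a block column of A and the matching block row of B are broadcast along processor rows and columns, and every processor accumulates their product into its tile of C. The block size and matrices below are arbitrary; this single-process emulation shows only the numerical pattern, not the communication scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nb = 8, 2                      # matrix size and panel (block) width, assumed
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

C = np.zeros((n, n))
for k in range(0, n, nb):
    # In SUMMA, A[:, k:k+nb] is broadcast along processor rows and
    # B[k:k+nb, :] along processor columns; each processor then does
    # this rank-nb update on its local tile of C.
    C += A[:, k:k+nb] @ B[k:k+nb, :]

assert np.allclose(C, A @ B)
print("panel-wise accumulation matches A @ B")
```

HSUMMA's contribution is to organize these broadcasts over a two-level hierarchy of processor groups, which changes the communication cost but not the arithmetic shown here.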
An introduction to hierarchical linear modeling
Directory of Open Access Journals (Sweden)
Heather Woltman
2012-02-01
Full Text Available This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis. The first section of the tutorial defines HLM, clarifies its purpose, and states its advantages. The second section explains the mathematical theory, equations, and conditions underlying HLM. HLM hypothesis testing is performed in the third section. Finally, the fourth section provides a practical example of running HLM, with which readers can follow along. Throughout this tutorial, emphasis is placed on providing a straightforward overview of the basic principles of HLM.
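One of the "key factors to consider before conducting this analysis" is how much outcome variance lies between groups rather than within them, summarized by the intraclass correlation (ICC). A minimal sketch, using simulated data and the one-way ANOVA moment estimator (not any specific procedure from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_per = 20, 30
# Simulated two-level data: students nested in schools, say.
# Between-group sd = 2, within-group sd = 1, so the true ICC is 4/5 = 0.8.
group_means = rng.normal(50.0, 2.0, n_groups)
y = group_means[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))

grand = y.mean()
msb = n_per * np.sum((y.mean(axis=1) - grand) ** 2) / (n_groups - 1)
msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_groups * (n_per - 1))
icc = (msb - msw) / (msb + (n_per - 1) * msw)   # ANOVA estimator of ICC(1)
print(f"estimated ICC = {icc:.2f}")
```

A substantial ICC, as here, indicates the clustering that makes single-level regression standard errors unreliable and HLM appropriate.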
Universality: Accurate Checks in Dyson's Hierarchical Model
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
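The "linear fitting" step can be sketched on idealized data: near criticality the susceptibility behaves as χ ~ C·(βc − β)^(−γ), so log χ is linear in log(βc − β) with slope −γ. The data below are synthetic and noise-free, generated purely to show the fit; they are not the talk's actual measurements.

```python
import numpy as np

gamma_true = 1.2991            # illustrative exponent, close to the quoted value
t = np.geomspace(1e-4, 1e-2, 30)        # reduced distance from criticality
chi = 2.5 * t ** (-gamma_true)          # idealized susceptibility values

# log chi = log C - gamma * log t, so a straight-line fit recovers gamma
slope, _ = np.polyfit(np.log(t), np.log(chi), 1)
gamma_est = -slope
print(f"fitted gamma = {gamma_est:.4f}")
```

With real data the subleading exponent Δ enters as a correction term (1 + a·t^Δ), which is why the talk fits both exponents jointly.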
A Hierarchical Bayesian Model for Crowd Emotions
Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias
2016-01-01
Estimation of emotions is an essential aspect of developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity with which human emotions are manifested and the capability of a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and explores the relation between behavior and emotions to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. This model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366
When to Use Hierarchical Linear Modeling
Directory of Open Access Journals (Sweden)
Veronika Huta
2014-04-01
Full Text Available Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis: Does HLM apply to one's data and research question? And if it does apply, how does one choose between HLM and other methods sometimes used in these circumstances, including multiple regression, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis? The purpose of this tutorial is to briefly introduce HLM and then to review some of the considerations that are helpful in answering these questions, including the nature of the data, the model to be tested, and the information desired in the output. Some examples of how the same analysis could be performed in HLM, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis are also provided.
Litvinenko, Alexander
2017-09-26
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic due to large and dense covariance matrices. Therefore, the covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
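The reason covariance matrices admit small ranks $k$ in the $\mathcal{H}$-format can be sketched directly: the off-diagonal block coupling two well-separated clusters of locations is numerically low-rank, so a rank-$k$ factorization stores it in O(kn) rather than O(n²). The sketch below uses an exponential kernel as a hypothetical stand-in for the Matérn family, with invented cluster geometry, and truncated SVD in place of the faster algebra HLIBpro uses.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, (200, 2))   # cluster 1 of locations
y = rng.uniform(5.0, 6.0, (200, 2))   # cluster 2, well separated from cluster 1

# Covariance block between the two clusters, exponential kernel, unit length scale
dist = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
block = np.exp(-dist)

u, s, vt = np.linalg.svd(block)
k = 10                                # small rank, consistent with k < 25 above
approx = (u[:, :k] * s[:k]) @ vt[:k, :]
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
print(f"rank-{k} relative error of the 200x200 block: {rel_err:.2e}")
```

The smoother the kernel and the better the cluster separation, the faster the singular values decay, which is the admissibility condition behind the hierarchical block partitioning.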
A hierarchical model of temporal perception.
Pöppel, E
1997-05-01
Temporal perception comprises subjective phenomena such as simultaneity, successiveness, temporal order, subjective present, temporal continuity and subjective duration. These elementary temporal experiences are hierarchically related to each other. Functional system states with a duration of 30 ms are implemented by neuronal oscillations and they provide a mechanism to define successiveness. These system states are also responsible for the identification of basic events. For a sequential representation of several events time tags are allocated, resulting in an ordinal representation of such events. A mechanism of temporal integration binds successive events into perceptual units of 3 s duration. Such temporal integration, which is automatic and presemantic, is also operative in movement control and other cognitive activities. Because of the omnipresence of this integration mechanism it is used for a pragmatic definition of the subjective present. Temporal continuity is the result of a semantic connection between successive integration intervals. Subjective duration is known to depend on mental load and attentional demand, high load resulting in long time estimates. In the hierarchical model proposed, system states of 30 ms and integration intervals of 3 s, together with a memory store, provide an explanatory neuro-cognitive machinery for differential subjective duration.
Antiferromagnetic Ising Model in Hierarchical Networks
Cheng, Xiang; Boettcher, Stefan
2015-03-01
The Ising antiferromagnet is a convenient model of glassy dynamics. It can introduce geometric frustration and may give rise to a spin glass phase and glassy relaxation at low temperatures [1]. We apply the antiferromagnetic Ising model to three hierarchical networks which share features of both small-world networks and regular lattices. Their recursive and fixed structures make them suitable for exact renormalization group analysis as well as numerical simulations. We first explore the dynamical behaviors using simulated annealing and discover an extremely slow relaxation at low temperatures. Then we employ the Wang-Landau algorithm to investigate the energy landscape and the corresponding equilibrium behaviors for different system sizes. Besides the Monte Carlo methods, the renormalization group [2] is used to study the equilibrium properties in the thermodynamic limit and to compare with the results from simulated annealing and Wang-Landau sampling. Supported through NSF Grant DMR-1207431.
A hierarchical nest survival model integrating incomplete temporally varying covariates
Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.
2013-01-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
Parallel computing in enterprise modeling.
Energy Technology Data Exchange (ETDEWEB)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Hierarchical Data Structures, Institutional Research, and Multilevel Modeling
O'Connell, Ann A.; Reed, Sandra J.
2012-01-01
Multilevel modeling (MLM), also referred to as hierarchical linear modeling (HLM) or mixed models, provides a powerful analytical framework through which to study colleges and universities and their impact on students. Due to the natural hierarchical structure of data obtained from students or faculty in colleges and universities, MLM offers many…
Entrepreneurial intention modeling using hierarchical multiple regression
Directory of Open Access Journals (Sweden)
Marina Jeger
2014-12-01
Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
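The core move in hierarchical multiple regression is entering predictor blocks in a chosen order and reporting the increment in R² attributable to each block over and above the earlier ones. A minimal sketch with simulated data (the predictors and effect sizes below are invented, standing in for the planned-behavior elements and the effectuation dimensions):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
x1, x2, x3 = rng.standard_normal((3, n))   # x1, x2: block 1; x3: block 2
y = 0.5 * x1 + 0.4 * x2 + 0.3 * x3 + rng.standard_normal(n)

def r_squared(predictors, y):
    """R^2 of an OLS fit with intercept on the given list of predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_block1 = r_squared([x1, x2], y)         # established predictors entered first
r2_block2 = r_squared([x1, x2, x3], y)     # new dimension entered second
print(f"R2 block 1 = {r2_block1:.3f}, delta R2 = {r2_block2 - r2_block1:.3f}")
```

Because R² never decreases when predictors are added, the interesting quantity is whether ΔR² is large (and significant) — and, as the abstract notes, ΔR² for a block depends on the entry order when predictors are correlated.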
A generic, hierarchical framework for massively parallel Wang-Landau sampling
Vogel, Thomas; Wüst, Thomas; Landau, David P
2013-01-01
We introduce a parallel Wang-Landau method based on the replica-exchange framework for Monte Carlo simulations. To demonstrate its advantages and general applicability for simulations of complex systems, we apply it to different spin models including spin glasses, the Ising model and the Potts model, lattice protein adsorption, and the self-assembly process in amphiphilic solutions. Without loss of accuracy, the method gives significant speed-up and potentially scales up to petaflop machines.
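The serial core that each walker in such a framework runs can be sketched on a toy model whose density of states is known exactly: N non-interacting spins with E = number of up spins, so g(E) = C(N, E). This is a single-walker sketch with a fixed sweep count and the flatness check omitted; the paper's scheme additionally splits the energy range across replica-exchange walkers.

```python
import numpy as np
from math import comb, log

rng = np.random.default_rng(3)
N = 10
ln_g = np.zeros(N + 1)            # running estimate of ln g(E)
state = rng.integers(0, 2, N)
f = 1.0                           # Wang-Landau modification increment (ln f)

while f > 1e-4:
    for _ in range(10000):
        i = rng.integers(N)
        e_old = int(state.sum())
        e_new = e_old - 1 if state[i] else e_old + 1
        # accept the flip with probability min(1, g(E_old)/g(E_new))
        if rng.random() < np.exp(min(0.0, ln_g[e_old] - ln_g[e_new])):
            state[i] ^= 1
        ln_g[int(state.sum())] += f
    f /= 2.0                      # reduce the increment each stage

ln_g -= ln_g[0]                   # normalize so that ln g(0) = ln C(N, 0) = 0
exact = np.array([log(comb(N, k)) for k in range(N + 1)])
print("max |error| in ln g:", np.abs(ln_g - exact).max())
```

Because each walker only needs its own ln g estimate plus occasional configuration exchanges, this inner loop is what makes the method amenable to massive parallelism.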
Hierarchical spatiotemporal matrix models for characterizing invasions.
Hooten, Mevin B; Wikle, Christopher K; Dorazio, Robert M; Royle, J Andrew
2007-06-01
The growth and dispersal of biotic organisms is an important subject in ecology. Ecologists are able to accurately describe survival and fecundity in plant and animal populations and have developed quantitative approaches to study the dynamics of dispersal and population size. Of particular interest are the dynamics of invasive species. Such nonindigenous animals and plants can levy significant impacts on native biotic communities. Effective models for relative abundance have been developed; however, a better understanding of the dynamics of actual population size (as opposed to relative abundance) in an invasion would be beneficial to all branches of ecology. In this article, we adopt a hierarchical Bayesian framework for modeling the invasion of such species while addressing the discrete nature of the data and uncertainty associated with the probability of detection. The nonlinear dynamics between discrete time points are intuitively modeled through an embedded deterministic population model with density-dependent growth and dispersal components. Additionally, we illustrate the importance of accommodating spatially varying dispersal rates. The method is applied to the specific case of the Eurasian Collared-Dove, an invasive species at mid-invasion in the United States at the time of this writing.
A Parallel Programming Model With Sequential Semantics
1996-01-01
Parallel programming is more difficult than sequential programming in part because of the complexity of reasoning, testing, and debugging in the context of concurrency. In the thesis, we present and investigate a parallel programming model that provides direct control of parallelism in a notation
Zhang, M.; Zhang, Y.; Lichtner, P. C.
2013-12-01
A high-resolution non-stationary hydraulic conductivity (K) model, or fully heterogeneous model (FHM), is generated from an experimental stratigraphy which exhibits realistic sedimentary heterogeneity at multiple scales. Based on this model, a set of hierarchical hydrostratigraphic models (HSMs) with decreasing heterogeneity resolution is created. These models contain 8, 3, and 1 stratigraphic unit(s), respectively, that are irregular in shape and hierarchical in structure. For all models, increasing system ln(K) variances (0.1, 1.0, 4.5) are tested, leading to a suite of 12 conceptual aquifer models. Using a numerical upscaling technique, equivalent K tensors are first computed for each unit of the HSMs. For all the variances tested, significant accuracy is achieved with the upscaled K in terms of capturing both the hydraulic head and the flow connectivity of the FHM, i.e., the mean relative error in head predictions ranges from 1% to 10% (higher error correlates with higher variance). Among the HSMs, the 8-unit model, given its higher stratigraphic resolution, is always the most accurate flow predictor. The same suite of HSMs is then subjected to a novel dispersivity scaling analysis in which upscaled dispersivities are computed with both stochastic and deterministic methods. For this analysis, a parallel random-walk particle tracking code (RWPT), which accounts for the divergence of the dispersion tensors, is developed and verified with 100,000 particles (Zhang & Zhang, 2013). This new code leads to significantly improved accuracy and efficiency in modeling transport. Interestingly, for all the HSMs, at all the variances tested, the effect of the divergence of the dispersion coefficient on solute plume migration and its spatial moments is negligible, suggesting that this term can be neglected in future simulations. When comparing the transport prediction of the FHM against those of the HSMs with upscaled dispersivities, plume trajectory, breakthrough curve, and the arrival
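A textbook special case gives intuition for the equivalent K computed per unit: for a layered block, flow parallel to the layers sees the arithmetic mean of K, flow perpendicular sees the harmonic mean, and any equivalent (upscaled) K lies between these bounds. The lognormal conductivities below are invented, using the largest ln(K) variance tested above.

```python
import numpy as np

rng = np.random.default_rng(11)
ln_K = rng.normal(0.0, np.sqrt(4.5), 1000)   # ln(K) variance 4.5, as in the study
K = np.exp(ln_K)

k_arith = K.mean()                    # upper bound: flow along the layers
k_harm = 1.0 / np.mean(1.0 / K)       # lower bound: flow across the layers
k_geom = np.exp(ln_K.mean())          # geometric mean, between the two bounds
print(f"harmonic {k_harm:.3f} <= geometric {k_geom:.3f} <= arithmetic {k_arith:.3f}")
```

The spread between the bounds grows rapidly with the ln(K) variance, which is one reason upscaling accuracy degrades (1% to 10% head error above) as the variance increases.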
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model predictions, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions for each uncertain model component. Second, systematic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are the geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
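The variance segregation at one level of the BMA tree is the law of total variance: with posterior model probabilities p_k, model mean predictions m_k and within-model variances v_k, the total predictive variance splits into a within-model and a between-model term. The numbers below are invented for illustration; HBMA applies this split recursively down the tree of uncertain components.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])      # hypothetical posterior model probabilities
m = np.array([2.0, 2.6, 1.5])      # each model's mean prediction
v = np.array([0.10, 0.20, 0.15])   # each model's within-model variance

mean_bma = np.sum(p * m)                       # BMA mean prediction
within = np.sum(p * v)                         # expected within-model variance
between = np.sum(p * (m - mean_bma) ** 2)      # variance of the model means
total = within + between                       # law of total variance
print(f"mean {mean_bma:.3f}, within {within:.3f}, "
      f"between {between:.3f}, total {total:.3f}")
```

The between-model term is exactly the quantity whose hierarchical representation lets HBMA rank which uncertain component (architecture, dip, boundary conditions, parameters) contributes most to the overall uncertainty.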
Classifying hospitals as mortality outliers: logistic versus hierarchical logistic models.
Alexandrescu, Roxana; Bottle, Alex; Jarman, Brian; Aylin, Paul
2014-05-01
The use of hierarchical logistic regression for provider profiling has been recommended due to the clustering of patients within hospitals, but has some associated difficulties. We assess changes in hospital outlier status based on standard logistic versus hierarchical logistic modelling of mortality. The study population consisted of all patients admitted to acute, non-specialist hospitals in England between 2007 and 2011 with a primary diagnosis of acute myocardial infarction, acute cerebrovascular disease or fracture of neck of femur or a primary procedure of coronary artery bypass graft or repair of abdominal aortic aneurysm. We compared standardised mortality ratios (SMRs) from non-hierarchical models with SMRs from hierarchical models, without and with shrinkage estimates of the predicted probabilities (Model 1 and Model 2). The SMRs from standard logistic and hierarchical models were highly statistically significantly correlated (r > 0.91, p = 0.01). More outliers were recorded in the standard logistic regression than hierarchical modelling only when using shrinkage estimates (Model 2): 21 hospitals (out of a cumulative number of 565 pairs of hospitals under study) changed from a low outlier and 8 hospitals changed from a high outlier based on the logistic regression to a not-an-outlier based on shrinkage estimates. Both standard logistic and hierarchical modelling have identified nearly the same hospitals as mortality outliers. The choice of methodological approach should, however, also consider whether the modelling aim is judgment or improvement, as shrinkage may be more appropriate for the former than the latter.
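Why shrinkage changes outlier status can be sketched in a few lines: small hospitals with extreme crude SMRs are pulled toward the overall mean in proportion to their sampling noise, so fewer of them cross control limits. This is a generic normal-normal empirical-Bayes approximation on log(SMR), not the paper's model; the counts and the between-hospital variance are invented.

```python
import numpy as np

expected = np.array([400.0, 30.0, 350.0, 25.0])   # expected deaths (hypothetical)
observed = np.array([480.0, 45.0, 300.0, 15.0])   # observed deaths (hypothetical)

log_smr = np.log(observed / expected)
se2 = 1.0 / observed               # approximate sampling variance of log(SMR)
tau2 = 0.01                        # assumed between-hospital variance

weight = tau2 / (tau2 + se2)       # reliability: near 1 for large hospitals
shrunk = np.exp(weight * log_smr)  # shrink toward the overall SMR of 1

for e, crude, s in zip(expected, observed / expected, shrunk):
    print(f"E={e:5.0f}  crude SMR={crude:.2f}  shrunken SMR={s:.2f}")
```

The large hospitals keep SMRs close to their crude values while the small ones move sharply toward 1, which matches the direction of the outlier-status changes reported above for Model 2.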
Higher-Order Item Response Models for Hierarchical Latent Traits
Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming
2013-01-01
Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…
On the renormalization group transformation for scalar hierarchical models
Energy Technology Data Exchange (ETDEWEB)
Koch, H. (Texas Univ., Austin (USA). Dept. of Mathematics); Wittwer, P. (Geneva Univ. (Switzerland). Dept. de Physique Theorique)
1991-06-01
We give a new proof for the existence of a non-Gaussian hierarchical renormalization group fixed point, using what could be called a beta-function for this problem. We also discuss the asymptotic behavior of this fixed point, and the connection between the hierarchical models of Dyson and Gallavotti. (orig.).
PDDP, A Data Parallel Programming Model
Directory of Open Access Journals (Sweden)
Karen H. Warren
1996-01-01
Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates code that is portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
Hierarchical Geometric Constraint Model for Parametric Feature Based Modeling
Institute of Scientific and Technical Information of China (English)
高曙明; 彭群生
1997-01-01
A new geometric constraint model is described, which is hierarchical and suitable for parametric feature-based modeling. In this model, different levels of geometric information are represented to support various stages of a design process. An efficient approach to parametric feature-based modeling is also presented, adopting the high-level geometric constraint model. The low-level geometric model, such as B-reps, can be derived automatically from the high-level geometric constraint model, enabling designers to perform their task of detailed design.
A Topological Model for Parallel Algorithm Design
1991-09-01
Dissertation AFIT/DS/ENG/91-02 by Jeffrey A. Simmers, Captain, USAF. Approved for public release; distribution unlimited.
What are hierarchical models and how do we analyze them?
Royle, Andy
2016-01-01
In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
Hierarchical Neural Regression Models for Customer Churn Prediction
Directory of Open Access Journals (Sweden)
Golshan Mohammadi
2013-01-01
Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques have been reported. This paper considers three hierarchical models, built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster data into two groups, churners and non-churners, and also to filter out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to the churner and non-churner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
A Privacy Data-Oriented Hierarchical MapReduce Programming Model
Directory of Open Access Journals (Sweden)
Haiwen Han
2013-08-01
Full Text Available To realize privacy data protection efficiently in hybrid cloud services, a multi-cluster MapReduce programming model based on a hierarchical control architecture (the Hierarchical MapReduce Model, HMR) is presented. Under this hierarchical control architecture, data isolation and placement across the private cloud and public clouds, according to the privacy characteristics of the data, are implemented by the control center in the private cloud. Then, to perform the corresponding distributed parallel computation correctly under the multi-cluster mode, which differs from the conventional single-cluster mode, a three-stage Map-Reduce-GlobalReduce scheduling process is designed. By limiting computation on private data to the private cloud while outsourcing computation on non-private data to public clouds as much as possible, HMR achieves both security and low cost.
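A minimal sketch of the three-stage flow (function names and the word-count-style job are hypothetical, not taken from the paper) clarifies the scheduling idea: an ordinary Map and Reduce run inside each cluster, and only the per-cluster partial results cross the private/public boundary for the final GlobalReduce:

```python
# Sketch of the Map-Reduce-GlobalReduce stages. Each cluster processes
# only its own shard; the global stage sees just the partial results.

def run_cluster(shard, map_fn, reduce_fn):
    return reduce_fn([map_fn(x) for x in shard])

def hmr_job(shards, map_fn, reduce_fn, global_reduce_fn):
    partials = [run_cluster(s, map_fn, reduce_fn) for s in shards]
    return global_reduce_fn(partials)

# Toy counting job: the private cloud holds the sensitive shard, public
# clouds hold the rest; only partial counts are combined globally.
shards = [["a", "b", "a"],       # private-cloud shard
          ["b", "b"], ["a"]]     # public-cloud shards
total = hmr_job(shards, map_fn=lambda w: 1,
                reduce_fn=sum, global_reduce_fn=sum)
```

The point of the structure is visible in `hmr_job`: raw records never leave their cluster, so the private shard's contents stay inside the private cloud.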
Study of chaos based on a hierarchical model
Energy Technology Data Exchange (ETDEWEB)
Yagi, Masatoshi; Itoh, Sanae-I. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics
2001-12-01
Study of chaos based on a hierarchical model is briefly reviewed. Here we categorize hierarchical model equations, i.e., (1) a model with a few degrees of freedom, e.g., the Lorenz model, (2) a model with intermediate degrees of freedom like a shell model, and (3) a model with many degrees of freedom such as a Navier-Stokes equation. We discuss the nature of chaos and turbulence described by these models via Lyapunov exponents. The interpretation of results observed in fundamental plasma experiments is also shown based on a shell model. (author)
An Unsupervised Model for Exploring Hierarchical Semantics from Social Annotations
Zhou, Mianwei; Bao, Shenghua; Wu, Xian; Yu, Yong
This paper deals with the problem of exploring hierarchical semantics from social annotations. Recently, social annotation services have become more and more popular on the Semantic Web. They allow users to arbitrarily annotate web resources, which greatly lowers the barrier to cooperation. Furthermore, by providing abundant meta-data resources, social annotation might become a key to the development of the Semantic Web. On the other hand, social annotation has its own apparent limitations, for instance, 1) ambiguity and synonymy phenomena and 2) lack of hierarchical information. In this paper, we propose an unsupervised model to automatically derive hierarchical semantics from social annotations. Using the social bookmarking service Del.icio.us as an example, we demonstrate that the derived hierarchical semantics can compensate for those shortcomings. We further apply our model to another data set from Flickr to test its applicability in different environments. The experimental results demonstrate our model's efficiency.
Modeling the deformation behavior of nanocrystalline alloy with hierarchical microstructures
Energy Technology Data Exchange (ETDEWEB)
Liu, Hongxi; Zhou, Jianqiu, E-mail: zhouj@njtech.edu.cn [Nanjing Tech University, Department of Mechanical Engineering (China); Zhao, Yonghao, E-mail: yhzhao@njust.edu.cn [Nanjing University of Science and Technology, Nanostructural Materials Research Center, School of Materials Science and Engineering (China)
2016-02-15
A mechanism-based plasticity model based on dislocation theory is developed to describe the mechanical behavior of hierarchical nanocrystalline alloys. The stress–strain relationship is derived by invoking the impeding effect of the intra-granular solute clusters and the inter-granular nanostructures on the dislocation movements along the sliding path. We found that the interaction between dislocations and the hierarchical microstructures contributes to the strain hardening property and greatly influences the ductility of nanocrystalline metals. The analysis indicates that the proposed model can successfully describe the enhanced strength of the nanocrystalline hierarchical alloy. Moreover, the strain hardening rate is sensitive to the volume fraction of the hierarchical microstructures. The present model provides a new perspective to design the microstructures for optimizing the mechanical properties in nanostructural metals.
Road network safety evaluation using Bayesian hierarchical joint model.
Wang, Jie; Huang, Helai
2016-05-01
Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is carefully selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling.
Parallel implementation of approximate atomistic models of the AMOEBA polarizable model
Demerdash, Omar; Head-Gordon, Teresa
2016-11-01
In this work we present a replicated-data hybrid OpenMP/MPI implementation of a hierarchical progression of approximate classical polarizable models that yields speedups of up to ∼10 compared to the standard OpenMP implementation of the exact parent AMOEBA polarizable model. In addition, our parallel implementation exhibits reasonable weak and strong scaling. The resulting parallel software will prove useful for those who are interested in how molecular properties converge in the condensed phase with respect to the MBE; it provides a fruitful test bed for exploring different electrostatic embedding schemes, and it offers an interesting possibility for future exascale computing paradigms.
Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models.
Liu, Ziyue; Cappola, Anne R; Crofford, Leslie J; Guo, Wensheng
2014-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexibility of the state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationships. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls.
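The computational efficiency of the state space approach rests on the Kalman recursion. A minimal scalar filter (a generic textbook sketch with invented parameters, not the paper's bivariate hormone model) shows the predict/update loop that such models apply, in multivariate form, to each hormone profile:

```python
# Minimal scalar Kalman filter: the state evolves as x' = a*x + noise(q)
# and is observed as y = x + noise(r). Each step predicts, then corrects
# by the Kalman gain k.

def kalman_filter(ys, a=1.0, q=0.1, r=1.0, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for y in ys:
        # predict step: propagate mean and variance
        x, p = a * x, a * a * p + q
        # update step: blend prediction with the new observation
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

xs = kalman_filter([1.0, 1.2, 0.9, 1.1])
```

Because each step costs constant time in the state dimension (cubic in the multivariate case), long hormone series can be filtered and smoothed efficiently, which is what makes the marginal-likelihood computations in the paper tractable.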
The Role of Prototype Learning in Hierarchical Models of Vision
Thomure, Michael David
2014-01-01
I conduct a study of learning in HMAX-like models, which are hierarchical models of visual processing in biological vision systems. Such models compute a new representation for an image based on the similarity of image sub-parts to a number of specific patterns, called prototypes. Despite being a central piece of the overall model, the issue of…
Free-Energy Bounds for Hierarchical Spin Models
Castellana, Michele; Barra, Adriano; Guerra, Francesco
2014-04-01
In this paper we study two non-mean-field (NMF) spin models built on a hierarchical lattice: the hierarchical Edwards-Anderson model (HEA) of a spin glass, and Dyson's hierarchical model (DHM) of a ferromagnet. For the HEA, we prove the existence of the thermodynamic limit of the free energy and the replica-symmetry-breaking (RSB) free-energy bounds previously derived for the Sherrington-Kirkpatrick model of a spin glass. These RSB mean-field bounds are exact only if the order-parameter fluctuations (OPF) vanish: given that such fluctuations are not negligible in NMF models, we develop a novel strategy to tackle part of OPF in hierarchical models. The method is based on absorbing part of OPF of a block of spins into an effective Hamiltonian of the underlying spin blocks. We illustrate this method for DHM and show that, compared to the mean-field bound for the free energy, it provides a tighter NMF bound, with a critical temperature closer to the exact one. To extend this method to the HEA model, a suitable generalization of Griffiths' correlation inequalities for Ising ferromagnets is needed: since correlation inequalities for spin glasses are still an open topic, we leave the extension of this method to hierarchical spin glasses as a future perspective.
Energy Technology Data Exchange (ETDEWEB)
Muray, L.P.; Anderson, E.H.; Boegli, V. [Ernest Orlando Lawrence Berkeley National Laboratory, M/S 2-400, Berkeley, California 94720 (United States)
1997-11-01
A farm of off-the-shelf microprocessors is evaluated for use as a real-time parallel postprocessing subsystem of the Lawrence Berkeley National Laboratory datapath, including backscatter proximity correction. The native data format is GDSII with embedded control. Data storage is fully hierarchical with no intermediate binary pattern data formats. Benchmarks of a four Pentium Pro™ farm, after optimization, demonstrate compatibility with exposure rates of 25 MHz for 32% area fill on a vector scan Gaussian beam e-beam tool. Scalability of the architecture is discussed in detail. © 1997 American Vacuum Society.
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
Modelling hierarchical and modular complex networks: division and independence
Kim, D.-H.; Rodgers, G. J.; Kahng, B.; Kim, D.
2005-06-01
We introduce a growing network model which generates both modular and hierarchical structure in a self-organized way. To this end, we modify the Barabási-Albert model into one evolving under the principles of division and independence as well as growth and preferential attachment (PA). A newly added vertex chooses one of the modules composed of existing vertices, and attaches edges to vertices belonging to that module following the PA rule. When a module reaches a certain size, it is divided into two, and a new module is created. The karate club network studied by Zachary is a simple version of the current model. We find that the model can reproduce both modular and hierarchical properties, characterized by the hierarchical clustering function of a vertex with degree k, C(k), being in good agreement with empirical measurements for real-world networks.
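The growth rule described above is simple enough to sketch directly. The following toy implementation (parameter values and function names are our own, not the authors' code) grows a network by choosing a module, attaching edges inside it by preferential attachment, and dividing any module that reaches a threshold size:

```python
import random

# Toy version of the growth-with-division rule: a new vertex picks a
# module, attaches m edges inside it by preferential attachment, and a
# module that reaches `split_size` is divided into two.

def grow(n, m=2, split_size=10, seed=1):
    rng = random.Random(seed)
    degree = {0: 0, 1: 0}
    modules = [{0, 1}]                        # start with one module
    for v in range(2, n):
        mod = rng.choice(modules)             # choose a module
        members = list(mod)
        weights = [degree[u] + 1 for u in members]   # PA (plus one to seed)
        targets = set()
        while len(targets) < min(m, len(members)):
            targets.add(rng.choices(members, weights=weights)[0])
        degree[v] = 0
        for u in targets:                     # attach edges inside the module
            degree[u] += 1
            degree[v] += 1
        mod.add(v)
        if len(mod) >= split_size:            # division: split the module
            members = list(mod)
            rng.shuffle(members)
            half = len(members) // 2
            modules.remove(mod)
            modules.append(set(members[:half]))
            modules.append(set(members[half:]))
    return degree, modules

degree, modules = grow(50)
```

Measuring the clustering function C(k) on the result would require tracking the edge list as well; the sketch only tracks degrees and module membership to keep the growth-and-division mechanics visible.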
Parallel Computing of Ocean General Circulation Model
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
This paper discusses the parallel computing of the third-generation Ocean General Circulation Model (OGCM) from the State Key Laboratory of Numerical Modeling for Atmospheric Science and Geophysical Fluid Dynamics (LASG), Institute of Atmospheric Physics (IAP). Meanwhile, several optimization strategies for parallel computing of OGCM (POGCM) on a Scalable Shared Memory Multiprocessor (S2MP) are presented. Using the Message Passing Interface (MPI), we obtain superlinear speedup on an SGI Origin 2000 for the parallel OGCM (POGCM) after optimization.
Multiple comparisons in genetic association studies: a hierarchical modeling approach.
Yi, Nengjun; Xu, Shizhong; Lou, Xiang-Yang; Mallick, Himel
2014-02-01
Multiple comparisons or multiple testing has been viewed as a thorny issue in genetic association studies aiming to detect disease-associated genetic variants from a large number of genotyped variants. We alleviate the problem of multiple comparisons by proposing a hierarchical modeling approach that is fundamentally different from the existing methods. The proposed hierarchical models simultaneously fit as many variables as possible and shrink unimportant effects towards zero. Thus, the hierarchical models yield more efficient estimates of parameters than the traditional methods that analyze genetic variants separately, and also coherently address the multiple comparisons problem due to largely reducing the effective number of genetic effects and the number of statistically "significant" effects. We develop a method for computing the effective number of genetic effects in hierarchical generalized linear models, and propose a new adjustment for multiple comparisons, the hierarchical Bonferroni correction, based on the effective number of genetic effects. Our approach not only increases the power to detect disease-associated variants but also controls the Type I error. We illustrate and evaluate our method with real and simulated data sets from genetic association studies. The method has been implemented in our freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
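The notion of an effective number of effects can be illustrated under a ridge-type shrinkage assumption (an illustration of the general idea only, not the estimator defined in the paper or in BhGLM): each shrunken effect contributes a fractional degree of freedom between 0 and 1, and the hierarchical Bonferroni threshold divides α by their sum rather than by the raw number of variants:

```python
# Illustration (not the paper's exact estimator): under ridge-type
# shrinkage each effect contributes an effective degree of freedom
# d_j = s_j / (s_j + lam) in (0, 1); their sum m_eff replaces the raw
# count m in the Bonferroni correction.

def effective_number(signal_vars, lam):
    return sum(s / (s + lam) for s in signal_vars)

def hierarchical_bonferroni(alpha, signal_vars, lam):
    m_eff = effective_number(signal_vars, lam)
    return alpha / m_eff

signal_vars = [4.0, 1.0, 0.25, 0.25]    # hypothetical per-variant signals
m_eff = effective_number(signal_vars, lam=1.0)        # well below m = 4
threshold = hierarchical_bonferroni(0.05, signal_vars, lam=1.0)
```

Because strongly shrunken effects contribute much less than one degree of freedom, m_eff is far smaller than m, so the adjusted threshold is less conservative than the classical α/m and power increases, as the abstract describes.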
Structured building model reduction toward parallel simulation
Energy Technology Data Exchange (ETDEWEB)
Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University
2013-08-26
Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
Modeling local item dependence with the hierarchical generalized linear model.
Jiao, Hong; Wang, Shudong; Kamata, Akihito
2005-01-01
Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.
The Revised Hierarchical Model: A critical review and assessment
Kroll, J.F.; Hell, J.G. van; Tokowicz, N.; Green, D.W.
2010-01-01
Brysbaert and Duyck (this issue) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on non-selective access in bilingual word recognition.
Parallel models of associative memory
Hinton, Geoffrey E
2014-01-01
This update of the 1981 classic on neural networks includes new commentaries by the authors that show how the original ideas are related to subsequent developments. As researchers continue to uncover ways of applying the complex information processing abilities of neural networks, they give these models an exciting future which may well involve revolutionary developments in understanding the brain and the mind -- developments that may allow researchers to build adaptive intelligent machines. The original chapters show where the ideas came from and the new commentaries show where they are going
Hierarchical Policy Model for Managing Heterogeneous Security Systems
Lee, Dong-Young; Kim, Minsoo
2007-12-01
The integrated security management becomes increasingly complex as security managers must take heterogeneous security systems, different networking technologies, and distributed applications into consideration. The task of managing these security systems and applications depends on various system- and vendor-specific issues. In this paper, we present a hierarchical policy model which is derived from the conceptual policy and specifies the means to enforce this behavior. The hierarchical policy model consists of five levels: the conceptual policy level, goal-oriented policy level, target policy level, process policy level, and low-level policy level.
Hierarchical analytical and simulation modelling of human-machine systems with interference
Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.
2017-01-01
The article considers the principles of building an analytical and simulation model of the human operator and of the industrial control system hardware and software. E-networks, an extension of Petri nets, are used as the mathematical apparatus. This approach allows simulating complex parallel distributed processes in human-machine systems. A structural and hierarchical approach is used to build the mathematical model of the human operator. The upper level of the human-operator model is represented by a logical dynamic model of decision making based on E-networks. The lower level reflects the psychophysiological characteristics of the human operator.
Iteration schemes for parallelizing models of superconductivity
Energy Technology Data Exchange (ETDEWEB)
Gray, P.A. [Michigan State Univ., East Lansing, MI (United States)
1996-12-31
The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-Tc superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.
Quick Web Services Lookup Model Based on Hierarchical Registration
Institute of Scientific and Technical Information of China (English)
谢山; 朱国进; 陈家训
2003-01-01
Quick Web Services Lookup (Q-WSL) is a new model for the registration and lookup of complex services on the Internet. The model is designed to quickly find complex Web services by using a hierarchical registration method. The basic concepts of Web services systems are introduced, and then the method of hierarchical registration of services is described. In particular, the paper concentrates on the service query document description and the service lookup procedure, and addresses how to look up services which are registered in the Web services system. Furthermore, an example design and an evaluation of its performance are presented. Specifically, it shows that the use of attribution-based service query documents and content-based hierarchical registration in Q-WSL allows service requesters to discover needed services more flexibly and rapidly. It is confirmed that Q-WSL is very suitable for Web services systems.
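The benefit of hierarchical registration is that a query document is matched only within the relevant part of the registry rather than against every registered service. A small sketch (the data structures and names are our own invention, not the Q-WSL specification) of register and lookup over a category tree:

```python
# Sketch of hierarchical registration and attribute-based lookup:
# services register under a category path; a query is matched only
# within the chosen subtree.

def register(registry, path, name, attrs):
    node = registry
    for part in path:
        node = node.setdefault(part, {})
    node.setdefault("_services", []).append((name, attrs))

def lookup(registry, path, query):
    node = registry
    for part in path:
        node = node.get(part, {})
    hits, stack = [], [node]
    while stack:                          # search only the chosen subtree
        cur = stack.pop()
        for name, attrs in cur.get("_services", []):
            if all(attrs.get(k) == v for k, v in query.items()):
                hits.append(name)
        stack.extend(v for k, v in cur.items() if k != "_services")
    return hits

reg = {}
register(reg, ["finance", "payment"], "PayFast", {"currency": "USD"})
register(reg, ["finance", "payment"], "EuroPay", {"currency": "EUR"})
register(reg, ["travel"], "FlyNow", {"currency": "USD"})
usd = lookup(reg, ["finance"], {"currency": "USD"})
```

A query scoped to `["finance"]` never touches the `travel` branch, which is the source of the speedup the abstract claims for hierarchical registration.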
Bayesian structural equation modeling method for hierarchical model validation
Energy Technology Data Exchange (ETDEWEB)
Jiang Xiaomo [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: xiaomo.jiang@vanderbilt.edu; Mahadevan, Sankaran [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: sankaran.mahadevan@vanderbilt.edu
2009-04-15
A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data becomes scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.
MULTILEVEL RECURRENT MODEL FOR HIERARCHICAL CONTROL OF COMPLEX REGIONAL SECURITY
Directory of Open Access Journals (Sweden)
Andrey V. Masloboev
2014-11-01
Full Text Available Subject of research. The research goal and scope are the development of methods and software for mathematical and computer modeling of regional security information support systems as multilevel hierarchical systems. Such systems are characterized by loose formalization, the multi-aspect nature of the underlying system processes and their interconnectivity, high dynamics and uncertainty. The research methodology is based on the functional-target approach and the principles of multilevel hierarchical system theory. The work considers the analysis and structural-algorithmic synthesis problems of multilevel computer-aided systems intended for management and decision-making information support in the field of regional security. Main results. A multilevel hierarchical control model of regional socio-economic system complex security has been developed. The model is based on the functional-target approach and provides both the formal statement and solution, and the practical implementation, of the synthesis problems for the automated information system structure and the control algorithms of regional security management that are optimal in terms of specified criteria. An approach to intralevel and interlevel coordination problem-solving in multilevel hierarchical systems has been proposed on the basis of the model. Coordination is achieved by satisfying the interconnection requirements between the functioning quality indexes (objective functions) optimized by the different elements of the multilevel system. This makes it possible to reach sufficient coherence of the local decisions made at the different control levels under decentralized decision-making and a highly dynamic external environment. Application of the recurrent model supports the formation of mathematical models of security control for regional socio-economic systems functioning under uncertainty. Practical relevance. The model implementation makes it possible to automate the synthesis realization of
Hierarchical Non-Emitting Markov Models
Ristad, E S; Ristad, Eric Sven; Thomas, Robert G.
1998-01-01
We describe a simple variant of the interpolated Markov model with non-emitting state transitions and prove that it is strictly more powerful than any Markov model. More importantly, the non-emitting model outperforms the classic interpolated model on natural language texts under a wide range of experimental conditions, with only a modest increase in computational requirements. The non-emitting model is also much less prone to overfitting. Keywords: Markov model, interpolated Markov model, hidden Markov model, mixture modeling, non-emitting state transitions, state-conditional interpolation, statistical language model, discrete time series, Brown corpus, Wall Street Journal.
Wu, Stephen; Angelikopoulos, Panagiotis; Tauriello, Gerardo; Papadimitriou, Costas; Koumoutsakos, Petros
2016-12-28
We propose a hierarchical Bayesian framework to systematically integrate heterogeneous data for the calibration of force fields in Molecular Dynamics (MD) simulations. Our approach enables the fusion of diverse experimental data sets of the physico-chemical properties of a system at different thermodynamic conditions. We demonstrate the value of this framework for the robust calibration of MD force-fields for water using experimental data of its diffusivity, radial distribution function, and density. In order to address the high computational cost associated with the hierarchical Bayesian models, we develop a novel surrogate model based on the empirical interpolation method. Further computational savings are achieved by implementing a highly parallel transitional Markov chain Monte Carlo technique. The present method bypasses possible subjective weightings of the experimental data in identifying MD force-field parameters.
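The data-fusion idea above — each dataset enters the posterior through its own likelihood with its own noise scale, so no subjective weighting is needed — can be sketched for a single scalar parameter. The Gaussian forms and names below are assumptions for illustration; the paper's full hierarchy, surrogate model, and transitional MCMC sampler are not shown.

```python
import math

def log_gauss(x, mu, sigma):
    """Log-density of a Gaussian with mean mu and std sigma."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def log_posterior(theta, datasets, prior_mu=0.0, prior_sigma=10.0):
    """Un-normalized log-posterior fusing independent datasets, each given
    as a pair (observations, noise_sigma). Noisier datasets automatically
    carry less weight in the posterior -- no manual weighting required."""
    lp = log_gauss(theta, prior_mu, prior_sigma)
    for obs, sigma in datasets:
        lp += sum(log_gauss(y, theta, sigma) for y in obs)
    return lp
```

With two equally noisy measurements of 1.0 and 3.0, the fused posterior peaks near 2.0, between the datasets.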
Conceptual hierarchical modeling to describe wetland plant community organization
Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.
2010-01-01
Using multivariate analysis, we created a hierarchical modeling process that describes how differently scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed this procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to the spatiotemporal scale of fluctuation, and 4) assemble the hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined that 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper-level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization and useful management models. © Society of Wetland Scientists 2009.
Update Legal Documents Using Hierarchical Ranking Models and Word Clustering
Pham, Minh Quang Nhat; Nguyen, Minh Le; Shimazu, Akira
2010-01-01
Our research addresses the task of updating legal documents when new information emerges. In this paper, we employ a hierarchical ranking model for the task of updating legal documents. Word clustering features are incorporated into the ranking models to exploit semantic relations between words. Experimental results on legal data built from the United States Code show that the hierarchical ranking model with word clustering outperforms baseline methods using the Vector Space Model, and word cluster-based ...
Hierarchical modelling for the environmental sciences statistical methods and applications
Clark, James S
2006-01-01
New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.
On the construction of hierarchic models
Out, D.-J.; Rikxoort, van R.P.; Bakker, R.R.
1994-01-01
One of the main problems in the field of model-based diagnosis of technical systems today is finding the most useful model or models of the system being diagnosed. Often, a model showing the physical components and the connections between them is all that is available. As systems grow larger and larger...
Modeling urban air pollution with optimized hierarchical fuzzy inference system.
Tashayo, Behnam; Alimohammadi, Abbas
2016-10-01
Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale and high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from the existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and hierarchical structure, are evaluated by comparing five different consecutive models in terms of accuracy and complexity criteria. Fivefold cross validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R²) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The obtained results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.
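The accuracy criteria reported above, RMSE and the coefficient of determination R², are standard and easy to state explicitly. This is a generic sketch of those two metrics, not code from the paper:

```python
def rmse(y, yhat):
    """Root mean squared error between observations y and predictions yhat."""
    return (sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)) ** 0.5

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot
```

A perfect prediction gives RMSE 0 and R² 1; the paper's pairs such as (8.13, 0.78) are read against these definitions.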
ECoS, a framework for modelling hierarchical spatial systems.
Harris, John R W; Gorley, Ray N
2003-10-01
A general framework for modelling hierarchical spatial systems has been developed and implemented as the ECoS3 software package. The structure of this framework is described and illustrated with representative examples. It allows the set-up and integration of sets of advection-diffusion equations representing multiple constituents interacting in a spatial context. Multiple spaces can be defined, with zero, one, or two dimensions, and can be nested and linked through constituent transfers. Model structure is generally object-oriented and hierarchical, reflecting the natural relations within its real-world analogue. Velocities, dispersions, and inter-constituent transfers, together with additional functions, are defined as properties of the constituents to which they apply. The resulting modular structure of ECoS models facilitates cut-and-paste model development, and template model components have been developed for the assembly of a range of estuarine water quality models. Published examples of applications to the geochemical dynamics of estuaries are listed.
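The advection-diffusion equations that ECoS integrates can be illustrated by a minimal explicit finite-difference update for one constituent on a 1-D grid (first-order upwind advection plus central diffusion). The scheme and names below are illustrative assumptions, not the ECoS3 API:

```python
def step_advection_diffusion(c, u, D, dx, dt):
    """One explicit time step of dc/dt = -u*dc/dx + D*d2c/dx2 on a 1-D grid.
    Upwind advection (assumes u >= 0) and central diffusion; boundary cells
    are held fixed. Stability requires small dt (CFL-type limits)."""
    new = list(c)
    for i in range(1, len(c) - 1):
        advection = -u * (c[i] - c[i - 1]) / dx
        diffusion = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
        new[i] = c[i] + dt * (advection + diffusion)
    return new

# Pure diffusion smooths an initial spike while conserving mass.
spike = [0.0, 0.0, 1.0, 0.0, 0.0]
smoothed = step_advection_diffusion(spike, u=0.0, D=1.0, dx=1.0, dt=0.1)
```

ECoS composes many such constituent equations across nested spaces; this sketch shows only the single-constituent kernel.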
A Scalable Prescriptive Parallel Debugging Model
DEFF Research Database (Denmark)
Jensen, Nicklas Bo; Quarfot Nielsen, Niklas; Lee, Gregory L.
2015-01-01
Debugging is a critical step in the development of any parallel program. However, the traditional interactive debugging model, where users manually step through code and inspect their application, does not scale well even for current supercomputers due to its centralized nature. While lightweight...
Synthetic models of distributed memory parallel programs
Energy Technology Data Exchange (ETDEWEB)
Poplawski, D.A. (Michigan Technological Univ., Houghton, MI (USA). Dept. of Computer Science)
1990-09-01
This paper deals with the construction and use of simple synthetic programs that model the behavior of more complex, real parallel programs. Synthetic programs can be used in many ways: to construct an easily ported suite of benchmark programs, to experiment with alternate parallel implementations of a program without actually writing them, and to predict the behavior and performance of an algorithm on a new or hypothetical machine. Synthetic programs are constructed easily from scratch, from existing programs, and can even be constructed using nothing but information obtained from traces of the real program's execution.
Inference in HIV dynamics models via hierarchical likelihood
2010-01-01
HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have an analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...
Modeling diurnal hormone profiles by hierarchical state space models.
Liu, Ziyue; Guo, Wensheng
2015-10-30
Adrenocorticotropic hormone (ACTH) diurnal patterns contain both smooth circadian rhythms and pulsatile activities. How to evaluate and compare them between different groups is a challenging statistical task. In particular, we are interested in testing (1) whether the smooth ACTH circadian rhythms in chronic fatigue syndrome and fibromyalgia patients differ from those in healthy controls and (2) whether the patterns of pulsatile activities are different. In this paper, a hierarchical state space model is proposed to extract these signals from noisy observations. The smooth circadian rhythms shared by a group of subjects are modeled by periodic smoothing splines. The subject level pulsatile activities are modeled by autoregressive processes. A functional random effect is adopted at the pair level to account for the matched pair design. Parameters are estimated by maximizing the marginal likelihood. Signals are extracted as posterior means. Computationally efficient Kalman filter algorithms are adopted for implementation. Application of the proposed model reveals that the smooth circadian rhythms are similar in the two groups but the pulsatile activities in patients are weaker than those in the healthy controls. Copyright © 2015 John Wiley & Sons, Ltd.
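Signal extraction in such state space models ultimately rests on the Kalman filter's predict/update recursion. A scalar sketch conveys the mechanism; the parameter names are illustrative, and the paper's model adds periodic splines, subject-level AR processes, and pair-level random effects on top of this kernel:

```python
def kalman_filter(ys, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the state space model
    x_t = phi * x_{t-1} + w_t (process variance q),
    y_t = x_t + v_t       (observation variance r).
    Returns the filtered state means."""
    x, p, means = x0, p0, []
    for y in ys:
        x, p = phi * x, phi * phi * p + q      # predict
        k = p / (p + r)                        # Kalman gain
        x = x + k * (y - x)                    # update with observation
        p = (1.0 - k) * p
        means.append(x)
    return means

# A constant signal observed in noise: the filtered mean converges to it.
est = kalman_filter([1.0] * 20, phi=1.0, q=0.01, r=0.1)
```

Posterior means of the smooth rhythm and pulsatile components in the paper are computed by this kind of recursion run over the full hierarchical model.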
Learning curve estimation in medical devices and procedures: hierarchical modeling.
Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S
2017-07-30
In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians and then modeled them within established methods that work with hierarchical data, such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models for ease of modeling and a substantially lower rate of model convergence failures. We then focused on using GEE and performed a separate simulation to vary the shape of the learning curve, as well as employed various smoothing methods on the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios in order to accurately model the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choices of smoothing method were negligibly different from one another. This was an important application to understand how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
Hierarchical Item Response Models for Cognitive Diagnosis
Hansen, Mark Patrick
2013-01-01
Cognitive diagnosis models (see, e.g., Rupp, Templin, & Henson, 2010) have received increasing attention within educational and psychological measurement. The popularity of these models may be largely due to their perceived ability to provide useful information concerning both examinees (classifying them according to their attribute profiles)…
Hierarchical model-based interferometric synthetic aperture radar image registration
Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing
2014-01-01
With the rapid development of spaceborne interferometric synthetic aperture radar technology, classical image registration methods cannot deliver the efficiency and accuracy required to process large volumes of real data. Based on this fact, we propose a new method. This method consists of two steps: coarse registration, realized by a cross-correlation algorithm, and fine registration, realized by a hierarchical model-based algorithm. The hierarchical model-based algorithm is a high-efficiency optimization algorithm. The key features of this algorithm are a global model that constrains the overall structure of the motion estimated, a local model that is used in the estimation process, and a coarse-to-fine refinement strategy. Experimental results from different kinds of simulated and real data have confirmed that the proposed method is very fast and has high accuracy. Compared with a conventional cross-correlation method, the proposed method provides markedly improved performance.
Concept Association and Hierarchical Hamming Clustering Model in Text Classification
Institute of Scientific and Technical Information of China (English)
Su Gui-yang; Li Jian-hua; Ma Ying-hua; Li Sheng-hong; Yin Zhong-hang
2004-01-01
We propose two models in this paper. A concept association model is introduced to capture the co-occurrence relationships among keywords in documents, and a hierarchical Hamming clustering model is used to reduce the dimensionality of the category feature vector space, addressing the extremely high dimensionality of document feature spaces. Experimental results indicate that the concept association model captures co-occurrence relations among keywords in documents, which effectively improves the recall of the classification system. The hierarchical Hamming clustering model reduces the dimensionality of the category feature vector efficiently: the resulting vector space is only about 10% of the original dimensionality.
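Clustering on Hamming distance can be sketched with a greedy single pass over binary feature vectors: a vector joins the first cluster whose representative is within a threshold, otherwise it starts a new cluster. This is a simplification for illustration, not the paper's full hierarchical procedure:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary vectors."""
    return sum(x != y for x, y in zip(a, b))

def hamming_cluster(vectors, threshold):
    """Greedy clustering: each vector joins the first cluster whose
    representative lies within `threshold` Hamming distance, else it
    founds a new cluster. Merging near-duplicate feature vectors this
    way shrinks the dimensionality of the category feature space."""
    clusters = []                       # list of (representative, members)
    for v in vectors:
        for rep, members in clusters:
            if hamming(rep, v) <= threshold:
                members.append(v)
                break
        else:
            clusters.append((v, [v]))
    return clusters

groups = hamming_cluster([(1, 0, 0), (1, 0, 1), (0, 1, 1)], threshold=1)
```

Each cluster of near-identical feature vectors can then be replaced by a single representative dimension.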
Dissecting magnetar variability with Bayesian hierarchical models
Huppenkothen, D; Hogg, D W; Murray, I; Frean, M; Elenbaas, C; Watts, A L; Levin, Y; van der Horst, A J; Kouveliotou, C
2015-01-01
Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behaviour, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favoured models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture afte...
Fractal Derivative Model for Air Permeability in Hierarchic Porous Media
Directory of Open Access Journals (Sweden)
Jie Fan
2012-01-01
Air permeability in hierarchic porous media does not obey Fick's equation or its modifications, because fractal objects have well-defined geometric properties that are discrete and discontinuous. We propose a theoretical model that, for the first time, treats this seemingly complex air permeability process using a fractal derivative method. The fractal derivative model has been successfully applied to explain the novel air permeability phenomenon of the cocoon. The theoretical analysis agrees with experimental results.
A hierarchical model for spatial capture-recapture data
Royle, J. Andrew; Young, K.V.
2008-01-01
Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.
A hierarchical model for ordinal matrix factorization
DEFF Research Database (Denmark)
Paquet, Ulrich; Thomson, Blaise; Winther, Ole
2012-01-01
their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation metric, results show that the suggested model outperforms alternative factorization techniques. Results also show how Gibbs sampling...
Hierarchical, model-based risk management of critical infrastructures
Energy Technology Data Exchange (ETDEWEB)
Baiardi, F. [Polo G.Marconi La Spezia, Universita di Pisa, Pisa (Italy); Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)], E-mail: f.baiardi@unipi.it; Telmon, C.; Sgandurra, D. [Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)
2009-09-15
Risk management is a process that includes several steps, from vulnerability analysis to the formulation of a risk mitigation plan that selects countermeasures to be adopted. With reference to an information infrastructure, we present a risk management strategy that considers a sequence of hierarchical models, each describing dependencies among infrastructure components. A dependency exists anytime a security-related attribute of a component depends upon the attributes of other components. We discuss how this notion supports the formal definition of risk mitigation plan and the evaluation of the infrastructure robustness. A hierarchical relation exists among models that are analyzed because each model increases the level of details of some components in a previous one. Since components and dependencies are modeled through a hypergraph, to increase the model detail level, some hypergraph nodes are replaced by more and more detailed hypergraphs. We show how critical information for the assessment can be automatically deduced from the hypergraph and define conditions that determine cases where a hierarchical decomposition simplifies the assessment. In these cases, the assessment has to analyze the hypergraph that replaces the component rather than applying again all the analyses to a more detailed, and hence larger, hypergraph. We also show how the proposed framework supports the definition of a risk mitigation plan and discuss some indicators of the overall infrastructure robustness. Lastly, the development of tools to support the assessment is discussed.
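Propagating the effect of a compromised component through the dependency structure reduces to reachability. The sketch below uses ordinary directed edges for illustration, whereas the paper's hypergraph edges connect sets of components; the component names are hypothetical:

```python
def impacted(deps, compromised):
    """deps maps each component to the components whose security-related
    attributes depend on it (reverse dependency edges). Returns every
    component whose attributes may be affected once `compromised` fails.
    Sketch only: real hyperedges group several tail nodes per edge."""
    seen, stack = set(), [compromised]
    while stack:
        c = stack.pop()
        for d in deps.get(c, ()):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# Hypothetical infrastructure: SCADA depends on power; pump and valve on SCADA.
deps = {"power": ["scada"], "scada": ["pump", "valve"]}
affected = impacted(deps, "power")
```

A risk mitigation plan can then be scored by how many reachable components each countermeasure removes from such sets.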
Electromagnetic Physics Models for Parallel Computing Architectures
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2016-10-01
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors including NVIDIA GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
Introduction to Hierarchical Bayesian Modeling for Ecological Data
Parent, Eric
2012-01-01
Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden states variables. When fitting the models to data, the authors gradually present the concepts a
A Hierarchical Probability Model of Colon Cancer
Kelly, Michael
2010-01-01
We consider a model of fixed size $N = 2^l$ in which there are $l$ generations of daughter cells and a stem cell. In each generation $i$ there are $2^{i-1}$ daughter cells. At each integral time unit the cells split so that the stem cell splits into a stem cell and generation 1 daughter cell and the generation $i$ daughter cells become two cells of generation $i+1$. The last generation is removed from the population. The stem cell gets first and second mutations at rates $u_1$ and $u_2$ and the daughter cells get first and second mutations at rates $v_1$ and $v_2$. We find the distribution for the time it takes to get two mutations as $N$ goes to infinity and the mutation rates go to 0. We also find the distribution for the location of the mutations. Several outcomes are possible depending on how fast the rates go to 0. The model considered has been proposed by Komarova (2007) as a model for colon cancer.
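The waiting-time structure for the stem cell can be sketched directly: with exponential arrival of each mutation, the time to the second mutation is a sum of two exponential variates. This is a stem-cell-only simplification for illustration; the full model also tracks the $2^{i-1}$ daughter cells per generation with their own rates $v_1$, $v_2$:

```python
import random

def time_to_second_mutation(u1, u2, rng):
    """Waiting time for a cell to acquire two successive mutations arriving
    at rates u1 then u2 (exponential waiting times). Stem-cell-only
    simplification of the paper's model."""
    return rng.expovariate(u1) + rng.expovariate(u2)

# Monte Carlo estimate of the mean waiting time (analytically 1/u1 + 1/u2).
rng = random.Random(42)
samples = [time_to_second_mutation(1.0, 2.0, rng) for _ in range(20000)]
mean_time = sum(samples) / len(samples)
```

The paper's limit theorems describe how this distribution behaves as $N \to \infty$ while the rates tend to 0.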
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous...... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.
Continuum damage modeling and simulation of hierarchical dental enamel
Ma, Songyun; Scheider, Ingo; Bargmann, Swantje
2016-05-01
Dental enamel exhibits high fracture toughness and stiffness due to a complex hierarchical and graded microstructure, optimally organized from nano- to macro-scale. In this study, a 3D representative volume element (RVE) model is adopted to study the deformation and damage behavior of the fibrous microstructure. A continuum damage mechanics model coupled to hyperelasticity is developed for modeling the initiation and evolution of damage in the mineral fibers as well as protein matrix. Moreover, debonding of the interface between mineral fiber and protein is captured by employing a cohesive zone model. The dependence of the failure mechanism on the aspect ratio of the mineral fibers is investigated. In addition, the effect of the interface strength on the damage behavior is studied with respect to geometric features of enamel. Further, the effect of an initial flaw on the overall mechanical properties is analyzed to understand the superior damage tolerance of dental enamel. The simulation results are validated by comparison to experimental data from micro-cantilever beam testing at two hierarchical levels. The transition of the failure mechanism at different hierarchical levels is also well reproduced in the simulations.
Bayesian Hierarchical Models to Augment the Mediterranean Forecast System
2016-06-07
year. Our goal is to develop an ensemble ocean forecast methodology using Bayesian Hierarchical Modelling (BHM) tools. The ocean ensemble forecast... (from above); i.e., we assume U_t ~ Z Λ_t^{1/2}. WORK COMPLETED: The prototype MFS-Wind-BHM was designed and implemented based on stochastic... coding refinements we implemented on the prototype surface wind BHM. A DWF event in February 2005, in the Gulf of Lions, was identified for reforecast
Emergence of a 'visual number sense' in hierarchical generative models.
Stoianov, Ivilin; Zorzi, Marco
2012-01-08
Numerosity estimation is phylogenetically ancient and foundational to human mathematical learning, but its computational bases remain controversial. Here we show that visual numerosity emerges as a statistical property of images in 'deep networks' that learn a hierarchical generative model of the sensory input. Emergent numerosity detectors had response profiles resembling those of monkey parietal neurons and supported numerosity estimation with the same behavioral signature shown by humans and animals.
Hierarchical animal movement models for population-level inference
Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.
2016-01-01
New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.
Coordinated Resource Management Models in Hierarchical Systems
Directory of Open Access Journals (Sweden)
Gabsi Mounir
2013-03-01
In response to the trend toward an efficient global economy, constructing global logistics models has garnered much attention from industry. Location selection is an important issue for international companies interested in building a global logistics management system. Infrastructures in developing countries are based on both classical and modern control technology, for which the most important components are professional levels of structural knowledge, dynamics and management processes, threats and interference, and external and internal attacks. The problem of controlling flows of energy and material resources in local and regional structures under normal, marginal, and emergency operation, provoked by information attacks or threats of flow failure, is all the more relevant given the low level of professional, psychological, and cognitive training of operational personnel and managers. Logistics strategies include business goal requirements, allowable decision tactics, and a vision for designing and operating a logistics system. This paper describes a module for selecting and coordinating flow management strategies based on the use of resources and logistics system concepts.
Hierarchical models and the analysis of bird survey information
Sauer, J.R.; Link, W.A.
2003-01-01
Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
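The stability measure described above, the posterior probability that a species' trend falls in a given interval, reduces to a fraction of MCMC samples. A minimal sketch (the sample values are illustrative, not Breeding Bird Survey output):

```python
def stability_probability(trend_samples, lo, hi):
    """Posterior probability that a trend lies in [lo, hi], estimated as
    the fraction of MCMC samples falling inside the interval."""
    inside = sum(lo <= t <= hi for t in trend_samples)
    return inside / len(trend_samples)

# Four hypothetical posterior draws of a species' trend (% change / year).
p_stable = stability_probability([-0.5, 0.1, 0.2, 1.3], -1.0, 1.0)
```

The same trick yields any derived statistic from MCMC output, e.g. the number of species with increasing trends, by counting over samples instead of plugging in point estimates.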
A new approach for modeling generalization gradients: A case for Hierarchical Models
Directory of Open Access Journals (Sweden)
Koen Vanbrabant
2015-05-01
A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions that are imposed by repeated measures analysis-of-variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a simulation study we demonstrate the dominance of hierarchical models over rANOVA. In addition, we show the lack of efficiency of Mauchly's sphericity test in sample sizes typical for generalization research, and confirm how violations of sphericity increase the probability of type I errors. A worked example of a hierarchical model is provided, with a specific emphasis on the interpretation of parameters relevant for generalization research.
A new approach for modeling generalization gradients: a case for hierarchical models.
Vanbrabant, Koen; Boddez, Yannick; Verduyn, Philippe; Mestdagh, Merijn; Hermans, Dirk; Raes, Filip
2015-01-01
A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions imposed by repeated measures analysis of variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a simulation study we demonstrate the dominance of hierarchical models over rANOVA. In addition, we show the lack of efficiency of Mauchly's sphericity test in sample sizes typical for generalization research, and confirm how violations of sphericity increase the probability of type I errors. A worked example of a hierarchical model is provided, with a specific emphasis on the interpretation of parameters relevant for generalization research.
Dynamic stiffness model of spherical parallel robots
Cammarata, Alessandro; Caliò, Ivo; D'Urso, Domenico; Greco, Annalisa; Lacagnina, Michele; Fichera, Gabriele
2016-12-01
A novel approach to study the elastodynamics of Spherical Parallel Robots is described through an exact dynamic model. Timoshenko arches are used to simulate flexible curved links while the base and mobile platforms are modelled as rigid bodies. Spatial joints are inherently included into the model without Lagrangian multipliers. At first, the equivalent dynamic stiffness matrix of each leg, made up of curved links joined by spatial joints, is derived; then these matrices are assembled to obtain the Global Dynamic Stiffness Matrix of the robot at a given pose. Actuator stiffness is also included into the model to verify its influence on vibrations and modes. The latter are found by applying the Wittrick-Williams algorithm. Finally, numerical simulations and direct comparison to commercial FE results are used to validate the proposed model.
Hierarchical Heteroclinics in Dynamical Model of Cognitive Processes: Chunking
Afraimovich, Valentin S.; Young, Todd R.; Rabinovich, Mikhail I.
Combining the results of brain imaging and nonlinear dynamics provides a new hierarchical vision of brain network functionality that is helpful in understanding the relationship of the network to different mental tasks. Using these ideas it is possible to build adequate models for the description and prediction of different cognitive activities in which the number of variables is usually small enough for analysis. The dynamical images of different mental processes depend on their temporal organization and, as a rule, cannot be just simple attractors since cognition is characterized by transient dynamics. The mathematical image for a robust transient is a stable heteroclinic channel consisting of a chain of saddles connected by unstable separatrices. We focus here on hierarchical chunking dynamics that can represent several cognitive activities. Chunking is the dynamical phenomenon that means dividing a long information chain into shorter items. Chunking is known to be important in many processes of perception, learning, memory and cognition. We prove that in the phase space of the model that describes chunking there exists a new mathematical object — heteroclinic sequence of heteroclinic cycles — using the technique of slow-fast approximations. This new object serves as a skeleton of motions reflecting sequential features of hierarchical chunking dynamics and is an adequate image of the chunking processing.
A parallel-pipelining software process model
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Software process is a framework for effective and timely delivery of software systems, and it plays a crucial role in software success. However, the development of large-scale software still faces the crisis of high risks, low quality, high costs and long cycle times. This paper proposes a three-phase parallel-pipelining software process model for improving speed and productivity, and for reducing software costs and risks without sacrificing software quality. In this model, two strategies are presented. One strategy, based on subsystem-cost priority, is used to prevent wasted software development cost and to reduce software complexity; the other strategy, used for balancing subsystem complexity, is designed to reduce software complexity in the later development stages. Moreover, the proposed function-detailed and workload-simplified subsystem pipelining software process model exhibits much higher parallelism than the concurrent incremental model. Finally, the component-based product line technology not only ensures software quality and further reduces cycle time, software costs, and software risks, but also sufficiently and rationally reuses previous software product resources and enhances the competitiveness of software development organizations.
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
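The size-biased sampling the abstract describes is easy to demonstrate: if detectability increases with cluster size, the mean cluster size in the sample exceeds the population mean. A small simulation, using a hypothetical logistic detection function (not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical population of clusters (e.g., duck flocks); sizes >= 1.
sizes = rng.poisson(5, 100_000) + 1

# Detection probability increases with cluster size (logistic in log-size);
# this functional form is an illustrative assumption.
p_detect = 1 / (1 + np.exp(-(0.8 * np.log(sizes) - 1.0)))
detected = rng.random(sizes.size) < p_detect

mean_pop = sizes.mean()
mean_sample = sizes[detected].mean()
print(f"population mean size {mean_pop:.2f}, sample mean size {mean_sample:.2f}")
```

A naive estimator that plugs the sample mean cluster size into an abundance calculation inherits this positive bias; the hierarchical framework corrects it by modeling detection conditional on size.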
Parallelization of the Coupled Earthquake Model
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, no other system had offered tsunami prediction over the Internet. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
A hierarchical community occurrence model for North Carolina stream fish
Midway, S.R.; Wagner, Tyler; Tracy, B.H.
2016-01-01
The southeastern USA is home to one of the richest—and most imperiled and threatened—freshwater fish assemblages in North America. For many of these rare and threatened species, conservation efforts are often limited by a lack of data. Drawing on a unique and extensive data set spanning over 20 years, we modeled occurrence probabilities of 126 stream fish species sampled throughout North Carolina, many of which occur more broadly in the southeastern USA. Specifically, we developed species-specific occurrence probabilities from hierarchical Bayesian multispecies models that were based on common land use and land cover covariates. We also used index of biotic integrity tolerance classifications as a second level in the model hierarchy; we identify this level as informative for our work, but it is flexible for future model applications. Based on the partial-pooling property of the models, we were able to generate occurrence probabilities for many imperiled and data-poor species in addition to highlighting a considerable amount of occurrence heterogeneity that supports species-specific investigations whenever possible. Our results provide critical species-level information on many threatened and imperiled species as well as information that may assist with re-evaluation of existing management strategies, such as the use of surrogate species. Finally, we highlight the use of a relatively simple hierarchical model that can easily be generalized for similar situations in which conventional models fail to provide reliable estimates for data-poor groups.
Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.
Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J
2010-12-01
Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km² hexagons), can increase the relevance of habitat models to multispecies conservation planning and monitoring.
Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy
2012-01-01
Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization terms […] The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.
Bayesian hierarchical modeling for detecting safety signals in clinical trials.
Xia, H Amy; Ma, Haijun; Carlin, Bradley P
2011-09-01
Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.
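The "borrowing strength" idea can be conveyed with the simplest conjugate analogue: a common Beta prior shrinking subgroup adverse-event rates toward a pooled background rate. The paper's actual model is a multilevel Bayesian mixture fit by MCMC; this sketch, with made-up counts and prior, only shows how extreme findings get moderated:

```python
import numpy as np

# Toy adverse-event counts across hypothetical subgroups (simulated data).
events = np.array([0, 1, 2, 8, 1])   # AE counts per subgroup
n      = np.array([50, 50, 50, 50, 50])

a, b = 2.0, 98.0                     # Beta prior centered on a 2% background rate
raw    = events / n
shrunk = (events + a) / (n + a + b)  # posterior means under Beta(a, b)

for r, s in zip(raw, shrunk):
    print(f"raw {r:.3f} -> shrunk {s:.3f}")
```

The extreme raw rate (8/50 = 0.16) is pulled strongly toward the prior mean, while zero counts are pulled up; in the full hierarchical model the prior itself is estimated from the other subgroups rather than fixed.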
An Extended Hierarchical Trusted Model for Wireless Sensor Networks
Institute of Scientific and Technical Information of China (English)
DU Ruiying; XU Mingdi; ZHANG Huanguo
2006-01-01
Cryptography and authentication are traditional approaches for providing network security. However, they are not sufficient to solve the problem of malicious nodes compromising the whole wireless sensor network, leading to invalid data transmission and wasted resources through malicious behavior. This paper puts forward an extended hierarchical trusted architecture for wireless sensor networks, and establishes trusted congregations through a three-tier framework. The method combines statistics and economics with encryption mechanisms to develop two trusted models, which evaluate cluster head nodes and common sensor nodes respectively. The models form a logical trusted link from the command node to the common sensor nodes and guarantee that the network can run in a secure and reliable environment.
Ensemble renormalization group for the random-field hierarchical model.
Decelle, Aurélien; Parisi, Giorgio; Rocchi, Jacopo
2014-03-01
The renormalization group (RG) methods are still far from being completely understood in quenched disordered systems. In order to gain insight into the nature of the phase transition of these systems, it is common to investigate simple models. In this work we study a real-space RG transformation on the Dyson hierarchical lattice with a random field, which leads to a reconstruction of the RG flow and to an evaluation of the critical exponents of the model at T=0. We show that this method gives very accurate estimations of the critical exponents by comparing our results with those obtained by some of us using an independent method.
A Network Model for Parallel Line Balancing Problem
Recep Benzer; Hadi Gökçen; Tahsin Çetinyokus; Hakan Çerçioglu
2007-01-01
Gökçen et al. (2006) have proposed several procedures and a mathematical model for the single-model (product) assembly line balancing (ALB) problem with parallel lines. In the parallel ALB problem, the goal is to balance more than one assembly line together. In this paper, a network model for the parallel ALB problem has been proposed and illustrated on a numerical example. This model is a new approach for parallel ALB and it provides a different point of view for i...
Advancing the extended parallel process model through the inclusion of response cost measures.
Rintamaki, Lance S; Yang, Z Janet
2014-01-01
This study advances the Extended Parallel Process Model through the inclusion of response cost measures, which are drawbacks associated with a proposed response to a health threat. A sample of 502 college students completed a questionnaire on perceptions regarding sexually transmitted infections and condom use after reading information from the Centers for Disease Control and Prevention on the health risks of sexually transmitted infections and the utility of latex condoms in preventing sexually transmitted infection transmission. The questionnaire included standard Extended Parallel Process Model assessments of perceived threat and efficacy, as well as questions pertaining to response costs associated with condom use. Results from hierarchical ordinary least squares regression demonstrated how the addition of response cost measures improved the predictive power of the Extended Parallel Process Model, supporting the inclusion of this variable in the model.
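The hierarchical (blockwise) regression used in the study can be sketched with simulated data: fit the standard threat/efficacy block first, then add response costs and check the gain in R². The data-generating process below is hypothetical, chosen so that costs genuinely matter; only the sample size matches the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 502  # same sample size as the study; the data are simulated

# Simulated predictors: perceived threat, efficacy, and response costs.
threat, efficacy, cost = rng.normal(size=(3, n))
# Hypothetical data-generating process for behavioral intention.
intent = 0.4 * threat + 0.5 * efficacy - 0.3 * cost + rng.normal(scale=1.0, size=n)

def r_squared(predictors, y):
    # OLS via least squares with an intercept column.
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared([threat, efficacy], intent)         # standard EPPM block
r2_step2 = r_squared([threat, efficacy, cost], intent)   # + response costs
print(f"R2 step 1 = {r2_step1:.3f}, step 2 = {r2_step2:.3f}, "
      f"delta = {r2_step2 - r2_step1:.3f}")
```

The logic mirrors the study's argument: if the second block yields a meaningful increment in explained variance, response costs add predictive power beyond the standard EPPM variables.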
Facial animation on an anatomy-based hierarchical face model
Zhang, Yu; Prakash, Edmond C.; Sung, Eric
2003-04-01
In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Owing to the presence of the skull model, our facial model achieves both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible and realistic facial expressions.
A Bisimulation-based Hierarchical Framework for Software Development Models
Directory of Open Access Journals (Sweden)
Ping Liang
2013-08-01
Software development models have matured since the emergence of software engineering: the waterfall model, V-model, spiral model, etc. To ensure the successful implementation of those models, various metrics for software products and development processes have been developed alongside them, such as CMMI, software metrics, and process re-engineering. These keep the quality of software products and processes as consistent as possible, so that the abstract integrity of a software product can be achieved. In reality, however, the maintenance cost of software products remains high, and even grows as software evolves, due to inconsistencies introduced by changes and by inherent errors in the products. It is better to build a robust software product that can sustain as many changes as possible. Therefore, this paper proposes a process algebra based hierarchical framework to extract an abstract equivalent of the deliverable at the end of each phase of a software product from its software development model. The process algebra equivalent of the deliverable is developed hierarchically along with the development of the software product, applying bisimulation to test-run the deliverables of phases so as to guarantee the consistency and integrity of the software development and product in a trivially mathematical way. An algorithm is also given to carry out the assessment of the phase deliverable in process algebra.
C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework
Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina
2010-01-01
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an L1-regularized linear regression problem, commonly referred to as Lasso or Basis Pursuit. In this work we combine the sparsity-inducing property of the Lasso model at the individual feature level, with the block-sparsity property of the Group Lasso model, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the Hierarchical Lasso (HiLasso), which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level, but not necessarily at the lower (inside the group) level, obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for ap...
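The HiLasso penalty combines an elementwise l1 term with a per-group l2 term. For that combined penalty the proximal operator has a known closed form (soft-threshold elementwise, then shrink each group toward zero as a block, as in the sparse-group lasso literature), which a proximal-gradient solver would apply at every iteration. A sketch with made-up groups and thresholds, not code from the paper:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding: the prox of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_hilasso(x, groups, lam1, lam2):
    """Prox of lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2.

    Closed form: soft-threshold elementwise, then shrink each group as a
    block; a group whose thresholded norm is below lam2 is zeroed entirely.
    """
    z = soft(x, lam1)
    out = np.zeros_like(z)
    for g in groups:
        ng = np.linalg.norm(z[g])
        if ng > lam2:
            out[g] = (1 - lam2 / ng) * z[g]
    return out

x = np.array([3.0, 0.1, -2.5, 0.05, 0.2, 4.0])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
w = prox_hilasso(x, groups, lam1=0.5, lam2=1.0)
print(w)
```

The result exhibits exactly the hierarchical sparsity pattern the abstract describes: whole groups can be switched off by the group term, while the l1 term keeps the surviving groups sparse inside.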
Institute of Scientific and Technical Information of China (English)
无
2010-01-01
Spherical molecular brushes with amphiphilic heteroarms were facilely synthesized by grafting arms of hydrophobic 2-azidoethyl palmitate and hydrophilic monoazide-terminated poly(ethylene glycol) onto a core of alkyne-modified hyperbranched polyglycerol (HPG) of high molecular weight (Mn = 122 kDa) via one-pot parallel click chemistry. The parallel click grafting strategy was demonstrated to be highly efficient (~100%), very fast (~2 h), and to give good control over the amphiphilicity of the molecular brushes. By adjusting the feeding ratio of hydrophobic and hydrophilic arms, a series of brushes with different arm ratios were readily obtained. The resulting miktoarm hyperbranched polymer brushes (HPG-g-C16/PEG350) were characterized by proton nuclear magnetic resonance (1H NMR), Fourier transform infrared (FT-IR) spectroscopy, gel permeation chromatography (GPC), and differential scanning calorimetry (DSC) measurements. The spherical molecular brushes showed high molecular weights up to 230 kDa, and thus could be visualized by atomic force microscopy (AFM). AFM and dynamic laser light scattering (DLS) were employed to investigate the self-assembly properties of amphiphilic molecular brushes with close proportions of hydrophobic and hydrophilic arms. The brushes could self-assemble hierarchically into spherical micelles, then network-like fibre structures, and again spherical micelles upon addition of n-hexane to a dichloromethane or chloroform solution of the brushes. In addition, this kind of miktoarm polymer brush also showed the ability to load dyes via host-guest encapsulation, which promises potential applications of spherical molecular brushes in supramolecular chemistry.
o-HETM: An Online Hierarchical Entity Topic Model for News Streams
2015-05-22
Published in: Cao et al. (Eds.): PAKDD 2015, Part I, LNAI 9077, pp. 696-707, 2015. DOI: 10.1007/978-3-319-18038-0_54
About wave field modeling in hierarchic medium with fractal inclusions
Hachay, Olga; Khachay, Andrey
2014-05-01
The processes of outworking oil and gas deposits are linked with the motion of polyphase multicomponent media, which are characterized by non-equilibrium and nonlinear rheological features. The real behavior of layered systems is defined by the complicated rheology of moving liquids and the structural morphology of porous media. These factors must be accounted for in any substantive description of the filtration processes, and synergetic effects must be accounted for as well. That allows suggesting new methods of control and management of complicated natural systems which exhibit these effects. Thus our research is directed to the layered system from which oil is to be extracted, which is a complicated hierarchic dynamical system with fractal inclusions. In this paper we suggest an algorithm for modeling the 2-D seismic field distribution in a heterogeneous medium with hierarchic inclusions. We also compare the 2-D integral representations of the seismic field in the frame of a local hierarchic heterogeneity with a porous inclusion and with a purely elastic inclusion, for the case when the Lamé parameter is equal to zero for the inclusions and the layered structure. In that case the problems for the transverse and longitudinal waves can be regarded independently; here we analyze the first case. The received results can be used for choosing criteria for joint seismic methods in research on highly complicated media. If the boundaries of the inclusion of rank k are fractal, the surface and contour integrals in the integral equations must be changed to repeated fractional integrals of Riemann-Liouville type. Using the previously developed 3-D method of induction electromagnetic frequency-geometric monitoring, we showed the possibility of defining physical and structural features of a hierarchic oil layer structure and of estimating the water saturation of crack inclusions. For visualization we elaborated algorithms and programs for constructing cross sections for two hierarchic structural
Parallel computing in atmospheric chemistry models
Energy Technology Data Exchange (ETDEWEB)
Rotman, D. [Lawrence Livermore National Lab., CA (United States). Atmospheric Sciences Div.
1996-02-01
Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.
A Parallel, High-Fidelity Radar Model
Horsley, M.; Fasenfest, B.
2010-09-01
Accurate modeling of Space Surveillance sensors is necessary for a variety of applications. Accurate models can be used to perform trade studies on sensor designs, locations, and scheduling. In addition, they can be used to predict the system-level response of the Space Surveillance Network to a collision or satellite break-up event. A high-fidelity physics-based radar simulator has been developed for Space Surveillance applications. This simulator is designed in a modular fashion, where each module describes a particular physical process or radar function (radio wave propagation and scattering, waveform generation, noise sources, etc.) involved in simulating the radar and its environment. For each of these modules, multiple versions are available in order to meet end-users' needs and requirements. For instance, the radar simulator supports different atmospheric models in order to facilitate different methods of simulating refraction of the radar beam. The radar model also has the capability to use highly accurate radar cross sections generated by the method of moments, accelerated by the fast multipole method. To accelerate this computationally expensive model, it is parallelized using MPI. As a testing framework for the radar model, it is incorporated into the Testbed Environment for Space Situational Awareness (TESSA). TESSA is based on a flexible, scalable architecture, designed to exploit high-performance computing resources and allow physics-based simulation of the SSA enterprise. In addition to the radar models, TESSA includes hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, optical brightness calculations, optical system models, object detection algorithms, orbit determination algorithms, and simulation analysis and visualization tools. Within this framework, observations and tracks generated by the new radar model are compared to results from a phenomenological radar model. In particular, the new model will be
Linguistic steganography on Twitter: hierarchical language modeling with manual interaction
Wilson, Alex; Blunsom, Phil; Ker, Andrew D.
2014-02-01
This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.
Finite Population Correction for Two-Level Hierarchical Linear Models.
Lai, Mark H C; Kwok, Oi-Man; Hsiao, Yu-Yu; Cao, Qian
2017-03-16
The research literature has paid little attention to the issue of finite population at a higher level in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in 2-level hierarchical linear models. When the finite population at Level-2 is incorrectly assumed as being infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring finite population correction is illustrated by using both a real data example and a simulation study with a random intercept model and a random slope model. Simulation results indicated that the bias in the unadjusted fixed-effect standard errors was substantial when the Level-2 sample size exceeded 10% of the Level-2 population size; the bias increased with a larger intraclass correlation, a larger number of clusters, and a larger average cluster size. We also found that the proposed adjustment produced unbiased standard errors, particularly when the number of clusters was at least 30 and the average cluster size was at least 10. We encourage researchers to consider the characteristics of the target population for their studies and adjust for finite population when appropriate.
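For intuition about the direction and rough size of the effect, the classical survey-sampling finite population correction can be sketched in a few lines. Note this is the textbook correction for a mean's standard error, not the article's derivation for Level-2 fixed effects in 2-level HLMs; the numbers are made up:

```python
import math

def fpc_se(se, n, N):
    """Classical finite population correction for a standard error:
    SE_adj = SE * sqrt((N - n) / (N - 1)).
    Illustrative only; the article derives the analogous Level-2
    adjustment for fixed effects in hierarchical linear models."""
    return se * math.sqrt((N - n) / (N - 1))

# 30 clusters sampled from a finite population of 100 clusters: a 30%
# sampling fraction, well past the 10% threshold at which the article
# found substantial bias from ignoring the correction.
se_naive = 0.20
se_adj = fpc_se(se_naive, n=30, N=100)
print(f"naive SE {se_naive:.3f} -> adjusted {se_adj:.3f}")
```

As the sampling fraction n/N grows, the multiplier shrinks toward zero, matching the article's finding that assuming an infinite population overstates standard errors most when a large share of the population is sampled.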
A Hierarchical Model for Continuous Gesture Recognition Using Kinect
DEFF Research Database (Denmark)
Jensen, Søren Kejser; Moesgaard, Christoffer; Nielsen, Christoffer Samuel
2013-01-01
Human gesture recognition is an area which has been studied thoroughly in recent years, and close to 100% recognition rates in restricted environments have been achieved, often either with single separated gestures in the input stream, or with computationally intensive systems. The results are unfortunately not as striking when it comes to a continuous stream of gestures. In this paper we introduce a hierarchical system for gesture recognition for use in a gaming setting, with a continuous stream of data. Layer 1 is based on Nearest Neighbor Search and layer 2 uses Hidden Markov Models. The system
Dynamical Properties of Potassium Ion Channels with a Hierarchical Model
Institute of Scientific and Technical Information of China (English)
ZHAN Yong; AN Hai-Long; YU Hui; ZHANG Su-Hua; HAN Ying-Rong
2006-01-01
It is well known that potassium ion channels have a remarkably high permeability to K ions: the permeation rate of a single K channel is about 10^8 ions per second. We develop a hierarchical model of potassium ion channel permeation combining ab initio quantum calculations and Brownian dynamics simulations, which can consistently explain a range of channel dynamics. The results show that the average velocity of K ions, the mean permeation time of K ions and the permeation rate of a single channel are about 0.92 nm/ns, 4.35 ns and 2.30×10^8 ions/s, respectively.
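As a quick consistency check on the quoted figures, a mean permeation time of 4.35 ns corresponds to a single-channel rate of roughly 2.3×10^8 ions per second:

```python
mean_time_ns = 4.35                        # mean permeation time per K ion
rate_per_s = 1.0 / (mean_time_ns * 1e-9)   # one ion every 4.35 ns
# rate_per_s is roughly 2.3e8, matching the quoted single-channel rate
```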
Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits
Directory of Open Access Journals (Sweden)
Leandro eWatanabe
2014-11-01
Full Text Available This paper describes a hierarchical stochastic simulation algorithm which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method.
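For readers unfamiliar with stochastic simulation of genetic circuits, a minimal Gillespie-style step looks like the sketch below, with each submodel's reactions kept in its own list rather than flattened into one global list. This is an illustrative sketch only, not iBioSim's implementation; all names are invented.

```python
import random

def gillespie_step(state, reactions, rng):
    """One step of the stochastic simulation algorithm (Gillespie's
    direct method).  `reactions` is a list of (propensity_fn,
    update_fn) pairs; a hierarchical simulator would keep one such
    list per submodel instead of flattening them."""
    props = [p(state) for p, _ in reactions]
    total = sum(props)
    if total == 0.0:
        return state, float('inf')        # no reaction can fire
    tau = rng.expovariate(total)          # waiting time to next event
    r = rng.uniform(0.0, total)
    acc = 0.0
    for prop, (_, update) in zip(props, reactions):
        acc += prop
        if r <= acc:
            return update(state), tau
    return reactions[-1][1](state), tau   # guard against float round-off

rng = random.Random(0)
state = {'X': 10}
decay = (lambda s: 0.5 * s['X'], lambda s: {'X': s['X'] - 1})  # X -> 0
t = 0.0
while state['X'] > 0:
    state, tau = gillespie_step(state, [decay], rng)
    t += tau
```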
A Hierarchical Model Architecture for Enterprise Integration in Chemical Industries
Institute of Scientific and Technical Information of China (English)
华贲; 周章玉; 成思危
2001-01-01
Towards integration of supply chain, manufacturing/production and investment decision making, this paper presents a hierarchical model architecture which contains six sub-models covering the areas of manufacturing control, production operation, design and revamp, production management, supply chain and investment decision making. Six types of flow (material, energy, information, humanware, partsware and capital) are classified. These flows connect enterprise components/subsystems to form the system topology and logical structure. Enterprise components/subsystems are abstracted into generic elementary and composite classes. Finally, the model architecture is applied to the management system of an integrated supply chain, and suggestions are made on the usage of the model architecture, its further development, and implementation issues.
Hierarchical Model for the Evolution of Cloud Complexes
Sánchez, N; Sanchez, Nestor; Parravano, Antonio
1999-01-01
The structure of cloud complexes appears to be well described by a "tree structure" representation when the image is partitioned into "clouds". In this representation, parent-child relationships are assigned according to containment. Based on this picture, a hierarchical model for the evolution of cloud complexes, including star formation, is constructed; it follows the mass evolution of each sub-structure by computing its mass exchange (evaporation or condensation) with its parent and children, which depends on the radiation density at the interface. For the set of parameters used as a reference model, the system produces IMFs with a maximum at too high a mass (~2 M_sun), and the characteristic times for evolution seem too long. We show that these properties can be improved by adjusting model parameters. However, the emphasis here is to illustrate some general properties of this nonlinear model for the star formation process. Notwithstanding the simplifications involved, the model reveals an essential fe...
A Parallel Lattice Boltzmann Model of a Carotid Artery
Boyd, J.; Ryan, S. J.; Buick, J. M.
2008-11-01
A parallel implementation of the lattice Boltzmann model is considered for a three dimensional model of the carotid artery. The computational method and its parallel implementation are described. The performance of the parallel implementation on a Beowulf cluster is presented, as are preliminary hemodynamic results.
Spatial Bayesian hierarchical modelling of extreme sea states
Clancy, Colm; O'Sullivan, John; Sweeney, Conor; Dias, Frédéric; Parnell, Andrew C.
2016-11-01
A Bayesian hierarchical framework is used to model extreme sea states, incorporating a latent spatial process to more effectively capture the spatial variation of the extremes. The model is applied to a 34-year hindcast of significant wave height off the west coast of Ireland. The generalised Pareto distribution is fitted to declustered peaks over a threshold given by the 99.8th percentile of the data. Return levels of significant wave height are computed and compared against those from a model based on the commonly-used maximum likelihood inference method. The Bayesian spatial model produces smoother maps of return levels. Furthermore, this approach greatly reduces the uncertainty in the estimates, thus providing information on extremes which is more useful for practical applications.
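A minimal peaks-over-threshold extraction, the non-spatial first step of such an analysis, can be sketched as follows. It uses a shape-zero (exponential) tail whose maximum-likelihood scale is just the mean excess; the paper fits a full generalised Pareto distribution inside a Bayesian spatial hierarchy, which this sketch does not attempt. Function and variable names are invented.

```python
import random
import statistics

def peaks_over_threshold(data, q=0.998):
    """Return the empirical q-quantile threshold and the excesses
    above it.  The paper fits a generalised Pareto distribution to
    such excesses; here the shape-zero (exponential) special case is
    used, whose maximum-likelihood scale is simply the mean excess."""
    xs = sorted(data)
    u = xs[int(q * (len(xs) - 1))]
    excesses = [x - u for x in xs if x > u]
    return u, excesses

rng = random.Random(42)
waves = [rng.expovariate(0.5) for _ in range(10000)]   # synthetic wave heights
u, exc = peaks_over_threshold(waves)
scale_hat = statistics.mean(exc)   # exponential-tail scale estimate
```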
Inference in HIV dynamics models via hierarchical likelihood
Commenges, D; Putter, H; Thiebaut, R
2010-01-01
HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.
[A medical image semantic modeling based on hierarchical Bayesian networks].
Lin, Chunyi; Ma, Lihong; Yin, Junxun; Chen, Jianyu
2009-04-01
A semantic modeling approach for medical image retrieval based on hierarchical Bayesian networks is proposed, tailored to the characteristics of medical images. It uses Gaussian mixture models (GMMs) to map low-level image features into object semantics with probabilities, then captures high-level semantics by fusing these object semantics with a Bayesian network, thereby building a multi-layer medical image semantic model that enables automatic image annotation and semantic retrieval using keywords at different semantic levels. To validate the method, we built a multi-level semantic model from a small set of astrocytoma MRI (magnetic resonance imaging) samples, in order to extract semantics describing astrocytoma malignancy grade. Experimental results show the superiority of this approach.
Item Response Theory Using Hierarchical Generalized Linear Models
Directory of Open Access Journals (Sweden)
Hamdollah Ravand
2015-03-01
Full Text Available Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF), and capture both local item and local person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues, where applications have been add-ons to simulation studies with a methodological focus. Although the methodological direction was necessary as a first step to show how MLMs can be utilized and extended to model item response data, the emphasis needs to be shifted towards providing evidence on how applications of MLMs in educational testing can deliver the benefits that have been promised. The present study uses foreign language reading comprehension data to illustrate the application of hierarchical generalized linear models to estimate person and item parameters, differential item functioning (DIF), and local person dependence in a three-level model.
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Directory of Open Access Journals (Sweden)
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.
2009-01-01
The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.
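The eigenvalue predictor mentioned above can be illustrated for the two-state case: a 2x2 stochastic matrix has eigenvalues 1 and p00 + p11 - 1, so the second eigenvalue is a scalar summary of regime persistence. The authors' exact construction may differ; this is a sketch of the idea with invented names.

```python
def second_eigenvalue(p_stay_0, p_stay_1):
    """A 2x2 transition matrix [[p00, 1-p00], [1-p11, p11]] has
    eigenvalues 1 and p00 + p11 - 1 (trace minus the unit
    eigenvalue); the second eigenvalue measures how persistent the
    two behavioural regimes are."""
    return p_stay_0 + p_stay_1 - 1.0

lam = second_eigenvalue(0.95, 0.90)   # highly persistent regimes
```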
A hierarchical model of the evolution of human brain specializations.
Barrett, H Clark
2012-06-26
The study of information-processing adaptations in the brain is controversial, in part because of disputes about the form such adaptations might take. Many psychologists assume that adaptations come in two kinds, specialized and general-purpose. Specialized mechanisms are typically thought of as innate, domain-specific, and isolated from other brain systems, whereas generalized mechanisms are developmentally plastic, domain-general, and interactive. However, if brain mechanisms evolve through processes of descent with modification, they are likely to be heterogeneous, rather than coming in just two kinds. They are likely to be hierarchically organized, with some design features widely shared across brain systems and others specific to particular processes. Also, they are likely to be largely developmentally plastic and interactive with other brain systems, rather than canalized and isolated. This article presents a hierarchical model of brain specialization, reviewing evidence for the model from evolutionary developmental biology, genetics, brain mapping, and comparative studies. Implications for the search for uniquely human traits are discussed, along with ways in which conventional views of modularity in psychology may need to be revised.
Study of hierarchical federation architecture using multi-resolution modeling
Institute of Scientific and Technical Information of China (English)
HAO Yan-ling; SHEN Dong-hui; QIAN Hua-ming; DENG Ming-hui
2004-01-01
This paper aims at finding a solution to a problem arising in complex system simulation, where a specific functional federation is coupled with other simulation systems; in other words, the communication information within the system may be received by other federates that participate in this united simulation. For the purpose of ensuring the unitary character of the simulation system, a hierarchical federation architecture (HFA) is adopted. Also, considering that federates in a complicated simulation system can be simplified to an extent, a multi-resolution modeling (MRM) method is imported to implement the design of the hierarchical federation. By utilizing the multiple resolution entity (MRE) modeling approach, MREs for federates are designed. When different levels of training simulation are required, the appropriate MREs at the corresponding layers can be called. The design method realizes the reuse feature of the simulation system, reduces simulation complexity and improves the validity of the system simulation cost (SC). Taking a submarine voyage training simulator (SVTS) for instance, an HFA for submarines is constructed in this paper, which confirms the feasibility of the studied approach.
A stochastic model for detecting overlapping and hierarchical community structure.
Directory of Open Access Journals (Sweden)
Xiaochun Cao
Full Text Available Community detection is a fundamental problem in the analysis of complex networks. Recently, many researchers have concentrated on the detection of overlapping communities, where a vertex may belong to more than one community. However, most current methods require the number (or the size) of the communities as a priori information, which is usually unavailable in real-world networks. Thus, a practical algorithm should not only find the overlapping community structure, but also automatically determine the number of communities. Furthermore, it is preferable if this method is able to reveal the hierarchical structure of networks as well. In this work, we first propose a generative model that employs a nonnegative matrix factorization (NMF) formulation with an l_{2,1}-norm regularization term, balanced by a resolution parameter. The NMF has the nature that provides overlapping community structure by assigning soft membership variables to each vertex; the l_{2,1} regularization term is a group-sparsity technique which can automatically determine the number of communities by penalizing too many nonempty communities; and the resolution parameter enables us to explore the hierarchical structure of networks. Thereafter, we derive the multiplicative update rule to learn the model parameters, and offer a proof of its correctness. Finally, we test our approach on a variety of synthetic and real-world networks, and compare it with some state-of-the-art algorithms. The results validate the superior performance of our new method.
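The multiplicative update rule for plain NMF (Lee and Seung's Frobenius-loss version, without the paper's l_{2,1} penalty or resolution parameter) can be sketched in a few lines; `nmf_multiplicative` is an invented name, not the authors' code.

```python
import random

def nmf_multiplicative(V, k, iters=300, seed=0):
    """Lee-Seung multiplicative updates for V ~ W H under Frobenius
    loss.  The paper adds an l_{2,1} group-sparsity penalty and a
    resolution parameter, both omitted in this minimal sketch."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    eps = 1e-9
    for _ in range(iters):
        WH = matmul(W, H)
        num, den = matmul(transpose(W), V), matmul(transpose(W), WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(m)] for i in range(k)]
        WH = matmul(W, H)
        num, den = matmul(V, transpose(H)), matmul(WH, transpose(H))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(k)] for i in range(n)]
    return W, H

V = [[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # two blocks
W, H = nmf_multiplicative(V, k=2)
recon = [[sum(W[i][t] * H[t][j] for t in range(2)) for j in range(3)]
         for i in range(3)]
err = sum((V[i][j] - recon[i][j]) ** 2 for i in range(3) for j in range(3))
```

The updates keep W and H nonnegative by construction, which is what lets the rows of the membership matrix be read as soft community assignments.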
The Hierarchical Dirichlet Process Hidden Semi-Markov Model
Johnson, Matthew J
2012-01-01
There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the traditional HMM. However, in many settings the HDP-HMM's strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markovianity, which has been developed in the parametric setting to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration HDP-HSMM and develop posterior sampling algorithms for efficient inference in both the direct-assignment and weak-limit approximation settings. We demonstrate the utility of the model and our inference methods on synthetic data as well as experiments on a speaker diarization problem and an example of learning the patterns in Morse code.
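Generating from an explicit-duration semi-Markov chain makes the contrast with a plain HMM concrete: a state is held for an explicitly sampled duration instead of a geometric one. The sketch below uses fixed durations for clarity and invented names; the HDP-HSMM places nonparametric priors over the transitions and duration distributions.

```python
import random

def sample_hsmm(trans, durations, n_steps, seed=0):
    """Draw a state sequence from an explicit-duration semi-Markov
    chain: pick a state, hold it for an explicitly sampled dwell
    time, then transition.  A plain HMM with self-loops would give
    geometric dwell times instead."""
    rng = random.Random(seed)
    seq, state = [], 0
    while len(seq) < n_steps:
        seq.extend([state] * durations[state](rng))
        r, acc = rng.random(), 0.0
        for nxt, p in enumerate(trans[state]):
            acc += p
            if r <= acc:
                state = nxt
                break
    return seq[:n_steps]

trans = [[0.0, 1.0], [1.0, 0.0]]        # states strictly alternate
durations = [lambda r: 3, lambda r: 5]  # fixed dwell times for clarity
seq = sample_hsmm(trans, durations, 16)
```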
Learning Hierarchical User Interest Models from Web Pages
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
We propose an algorithm for learning hierarchical user interest models according to the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum: in some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so the representation more faithfully reflects the user's interests. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models, and prune a user interest tree when the number of its nodes exceeds a given value. Finally, we present experimental results from a Chinese Web site.
Multi-mode clustering model for hierarchical wireless sensor networks
Hu, Xiangdong; Li, Yongfu; Xu, Huifen
2017-03-01
The topology management, i.e., cluster maintenance, of wireless sensor networks (WSNs) is still a challenge due to their numerous nodes, diverse application scenarios and limited resources, as well as complex dynamics. To address this issue, a multi-mode clustering model (M2CM) is proposed in this study to maintain the clusters of hierarchical WSNs. In particular, unlike the traditional time-trigger model based on whole-network, periodic operation, the M2CM is based on local, event-trigger operations. In addition, an adaptive local maintenance algorithm is designed for broken clusters in the WSNs according to spatial-temporal demand changes. Numerical experiments are performed using the NS2 network simulation platform. Results validate the effectiveness of the proposed model with respect to network maintenance costs, node energy consumption and transmitted data, as well as network lifetime.
Exploitation of Parallelism in Climate Models
Energy Technology Data Exchange (ETDEWEB)
Baer, F.; Tribbia, J.J.; Williamson, D.L.
1999-03-01
The US Department of Energy (DOE), through its CHAMMP initiative, hopes to develop the capability to make meaningful regional climate forecasts on time scales exceeding a decade, such capability to be based on numerical prediction type models. We propose research to contribute to each of the specific items enumerated in the CHAMMP announcement (Notice 91-3); i.e., to consider theoretical limits to prediction of climate and climate change on appropriate time scales, to develop new mathematical techniques to utilize massively parallel processors (MPP), to actually utilize MPPs as a research tool, and to develop improved representations of some processes essential to climate prediction. In particular, our goals are to: (1) Reconfigure the prediction equations such that the time iteration process can be compressed by use of MPP architecture, and to develop appropriate algorithms. (2) Develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics. (3) Capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. By careful choice of initial states, many realizations of the climate system can be determined concurrently and more realistic assessments of the climate prediction can be made in a realistic time frame. To explore these initiatives, we will exploit all available computing technology, and in particular MPP machines. We anticipate that significant improvements in modeling of climate on the decadal and longer time scales for regional space scales will result from our efforts.
Shared Variable Oriented Parallel Precompiler for SPMD Model
Institute of Scientific and Technical Information of China (English)
无
1995-01-01
For the moment, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly ease parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming techniques.
Modeling evolutionary dynamics of epigenetic mutations in hierarchically organized tumors.
Directory of Open Access Journals (Sweden)
Andrea Sottoriva
2011-05-01
Full Text Available The cancer stem cell (CSC) concept is a highly debated topic in cancer research. While experimental evidence in favor of the cancer stem cell theory is apparently abundant, the results are often criticized as being difficult to interpret. An important reason for this is that most experimental data that support this model rely on transplantation studies. In this study we use a novel cellular Potts model to elucidate the dynamics of established malignancies that are driven by a small subset of CSCs. Our results demonstrate that epigenetic mutations that occur during mitosis display highly altered dynamics in CSC-driven malignancies compared to a classical, non-hierarchical model of growth. In particular, the heterogeneity observed in CSC-driven tumors is considerably higher. We speculate that this feature could be used in combination with epigenetic (methylation) sequencing studies of human malignancies to prove or refute the CSC hypothesis in established tumors without the need for transplantation. Moreover our tumor growth simulations indicate that CSC-driven tumors display evolutionary features that can be considered beneficial during tumor progression. Besides an increased heterogeneity they also exhibit properties that allow the escape of clones from local fitness peaks. This leads to more aggressive phenotypes in the long run and makes the neoplasm more adaptable to stringent selective forces such as cancer treatment. Indeed when therapy is applied the clone landscape of the regrown tumor is more aggressive with respect to the primary tumor, whereas the classical model demonstrated similar patterns before and after therapy. Understanding these often counter-intuitive fundamental properties of (non-hierarchically organized) malignancies is a crucial step in validating the CSC concept as well as providing insight into the therapeutical consequences of this model.
Research and application of hierarchical model for multiple fault diagnosis
Institute of Scientific and Technical Information of China (English)
An Ruoming; Jiang Xingwei; Song Zhengji
2005-01-01
The computational complexity of multiple fault diagnosis in complex systems has long been a difficult problem. Based on the well-known Mozetic's approach, a novel hierarchical model-based diagnosis methodology is put forward to improve the efficiency of multi-fault recognition and localization. Structural abstraction and weighted fault propagation graphs are combined to build the diagnosis model. The graphs have weighted arcs with fault propagation probabilities and propagation strengths. For solving the problem of coupled faults, two diagnosis strategies are used: one is Lagrangian relaxation with primal heuristic algorithms; the other is the propagation-strength method. Finally, an applied example shows the applicability of the approach, and experimental results are given to show the superiority of the presented technique.
Hierarchical population model with a carrying capacity distribution
Indekeu, J O
2002-01-01
A time- and space-discrete model for the growth of a rapidly saturating local biological population $N(x,t)$ is derived from a hierarchical random deposition process previously studied in statistical physics. Two biologically relevant parameters, the probabilities of birth, $B$, and of death, $D$, determine the carrying capacity $K$. Due to the randomness the population depends strongly on position, $x$, and there is a distribution of carrying capacities, $\Pi(K)$. This distribution has a self-similar character owing to the imposed hierarchy. The most probable carrying capacity and its probability are studied as a function of $B$ and $D$. The effective growth rate decreases with time, roughly as in a Verhulst process. The model is possibly applicable, for example, to bacteria forming a "towering pillar" biofilm. The bacteria divide on randomly distributed nutrient-rich regions and are exposed to a random local bactericidal agent (antibiotic spray). A gradual overall temperature change away from optimal growth co...
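The deterministic backbone of such a model, saturating Verhulst growth toward a carrying capacity K, can be sketched as follows; in the paper K is random, with a self-similar distribution Pi(K) induced by the hierarchical deposition, whereas here it is fixed for illustration and the function name is invented.

```python
def verhulst(n0, r, K, steps):
    """Discrete Verhulst (logistic) growth toward carrying capacity
    K.  In the paper K is random with a self-similar distribution
    Pi(K); a single fixed K is used here for illustration."""
    n = n0
    for _ in range(steps):
        n += r * n * (1.0 - n / K)
    return n

n_final = verhulst(1.0, 0.5, 100.0, 60)   # approaches K from below
```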
Hierarchical decision modeling essays in honor of Dundar F. Kocaoglu
2016-01-01
This volume, developed in honor of Dr. Dundar F. Kocaoglu, aims to demonstrate the applications of the Hierarchical Decision Model (HDM) in different sectors and its capacity in decision analysis. It is comprised of essays from noted scholars, academics and researchers of engineering and technology management around the world. This book is organized into four parts: Technology Assessment, Strategic Planning, National Technology Planning and Decision Making Tools. Dr. Dundar F. Kocaoglu is one of the pioneers of multiple decision models using hierarchies, and creator of the HDM in decision analysis. HDM is a mission-oriented method for evaluation and/or selection among alternatives. A wide range of alternatives can be considered, including but not limited to, different technologies, projects, markets, jobs, products, cities to live in, houses to buy, apartments to rent, and schools to attend. Dr. Kocaoglu’s approach has been adopted for decision problems in many industrial sectors, including electronics rese...
Bayesian hierarchical modelling of weak lensing - the golden goal
Heavens, Alan; Jaffe, Andrew; Hoffmann, Till; Kiessling, Alina; Wandelt, Benjamin
2016-01-01
To accomplish correct Bayesian inference from weak lensing shear data requires a complete statistical description of the data. The natural framework to do this is a Bayesian Hierarchical Model, which divides the chain of reasoning into component steps. Starting with a catalogue of shear estimates in tomographic bins, we build a model that allows us to sample simultaneously from the underlying tomographic shear fields and the relevant power spectra (E-mode, B-mode, and E-B, for auto- and cross-power spectra). The procedure deals easily with masked data and intrinsic alignments. Using Gibbs sampling and messenger fields, we show with simulated data that the large (over 67,000-dimensional) parameter space can be efficiently sampled and the full joint posterior probability density function for the parameters can feasibly be obtained. The method correctly recovers the underlying shear fields and all of the power spectra, including at levels well below the shot noise.
Directory of Open Access Journals (Sweden)
Guiyang Xin
2015-09-01
Full Text Available This paper presents a novel hexapod robot, hereafter named PH-Robot, with three degrees of freedom (3-DOF) parallel leg mechanisms based on the concept of an integrated limb mechanism (ILM) for the integration of legged locomotion and arm manipulation. The kinematic model plays an important role in the parametric optimal design and motion planning of robots. However, models of parallel mechanisms are often difficult to obtain because of the implicit relationship between the motions of actuated joints and the motion of a moving platform. In order to derive the kinematic equations of the proposed hexapod robot, an extended hierarchical kinematic modelling method is proposed. According to the kinematic model, the geometrical parameters of the leg are optimized utilizing a comprehensive objective function that considers both dexterity and payload. A comparison of performance indices shows that PH-Robot has distinct advantages in accuracy and load ability over a robot with serial leg mechanisms. The reachable workspace of the leg verifies its ability to walk and manipulate. The results of the trajectory tracking experiment demonstrate the correctness of the kinematic model of the hexapod robot.
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING
Directory of Open Access Journals (Sweden)
Javed Ali
2012-04-01
Full Text Available Parallel computing systems compose task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing units as computing resources, which leads to highly inter-process communication capabilities. The main part of the proposed algorithm is the resource management unit, which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network. We focus on real-time and non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of the task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling and communication. Some important models for task partitioning are also discussed in the paper. We target the algorithm for task partitioning which improves the inter-process communication between the tasks and uses the resources of the system in an efficient manner. The proposed algorithm contributes to minimizing the inter-process communication cost amongst the executing processes.
The Modeling of the ERP Systems within Parallel Calculus
Directory of Open Access Journals (Sweden)
Loredana MOCEAN
2011-01-01
Full Text Available As has been known for some years, the basic characteristics of ERP systems are: modular design, a central common database, integration of the modules, automatic data transfer between modules, system complexity and flexible configuration. Because of this, a parallel approach to their design and implementation, within parallel algorithms, parallel calculus and distributed databases, is an obvious fit. This paper aims to support these assertions and to provide, in summary, a model of what an ERP system based on parallel computing and algorithms could be.
Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.
Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T
2017-07-01
Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
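The decision rule described above, choose the action with the smallest posterior expected (Bayes) risk, can be sketched with Monte Carlo draws from the safety-signal posterior. The loss functions and numbers below are invented placeholders, not the regulator-elicited ones used in the paper.

```python
import random

def bayes_action(posterior_draws, losses):
    """Return the action with smallest posterior expected loss,
    estimated by Monte Carlo over draws of the safety signal.  The
    loss functions here are invented placeholders; the paper's were
    elicited from regulators."""
    def risk(action):
        return sum(losses[action](s) for s in posterior_draws) / len(posterior_draws)
    return min(losses, key=risk)

rng = random.Random(1)
signal = [rng.gauss(2.0, 0.5) for _ in range(5000)]   # excess-AMDE-rate draws
losses = {
    'no_action': lambda s: max(s, 0.0),        # harm of ignoring a real signal
    'recall':    lambda s: 1.5 - min(s, 1.5),  # cost of acting on a weak one
}
best = bayes_action(signal, losses)   # strong signal, so acting wins
```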
Note on the equivalence of hierarchical variational models and auxiliary deep generative models
Brümmer, Niko
2016-01-01
This note compares two recently published machine learning methods for constructing flexible, but tractable families of variational hidden-variable posteriors. The first method, called "hierarchical variational models" enriches the inference model with an extra variable, while the other, called "auxiliary deep generative models", enriches the generative model instead. We conclude that the two methods are mathematically equivalent.
Improve Query Performance On Hierarchical Data. Adjacency List Model Vs. Nested Set Model
Directory of Open Access Journals (Sweden)
Cornelia Gyorödi
2016-04-01
Full Text Available Hierarchical data are found in a variety of database applications, including content management categories, forums, business organization charts, and product categories. In this paper, we examine two models for dealing with hierarchical data in relational databases: the adjacency list model and the nested set model. We analysed these models by executing various operations and queries in a web application for the management of categories, highlighting the results obtained during performance comparison tests. The purpose of this paper is to present the advantages and disadvantages of using an adjacency list model compared to a nested set model in a relational database integrated into an application for the management of categories, which needs to manipulate a large amount of hierarchical data.
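The two models compared above can be sketched side by side. This is a minimal illustration with a hypothetical category table: in the adjacency list model a subtree query needs a recursive common table expression, while in the nested set model it reduces to a single range predicate.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Adjacency list: each row stores its parent's id.
cur.execute("CREATE TABLE adj (id INTEGER PRIMARY KEY, parent INTEGER, name TEXT)")
cur.executemany("INSERT INTO adj VALUES (?,?,?)", [
    (1, None, "Electronics"), (2, 1, "Phones"),
    (3, 1, "Laptops"), (4, 2, "Smartphones"),
])

# Nested set: each row stores a left/right interval; descendants nest inside it.
cur.execute("CREATE TABLE ns (lft INTEGER, rgt INTEGER, name TEXT)")
cur.executemany("INSERT INTO ns VALUES (?,?,?)", [
    (1, 8, "Electronics"), (2, 5, "Phones"),
    (3, 4, "Smartphones"), (6, 7, "Laptops"),
])

# Subtree of 'Electronics' via the adjacency list: needs a recursive CTE.
cur.execute("""
    WITH RECURSIVE sub(id) AS (
        SELECT id FROM adj WHERE name = 'Electronics'
        UNION ALL
        SELECT adj.id FROM adj JOIN sub ON adj.parent = sub.id)
    SELECT count(*) FROM sub""")
print(cur.fetchone()[0])  # 4

# Same subtree via the nested set: a single range predicate, no recursion.
cur.execute("""
    SELECT count(*) FROM ns
    WHERE lft BETWEEN (SELECT lft FROM ns WHERE name='Electronics')
                  AND (SELECT rgt FROM ns WHERE name='Electronics')""")
print(cur.fetchone()[0])  # 4
```

The trade-off the paper measures follows directly: nested set reads are cheap range scans, but inserting a node requires renumbering `lft`/`rgt` values, whereas the adjacency list inserts in O(1) and pays at query time.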
GSMNet: A Hierarchical Graph Model for Moving Objects in Networks
Directory of Open Access Journals (Sweden)
Hengcai Zhang
2017-03-01
Full Text Available Existing data models for moving objects in networks are often limited in how flexibly they control the granularity of the network representation and the cost of location updates, and they do not encompass semantic information such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap left by traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. A set of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic information. Then, we develop a prototype system with the native graph database system Neo4J to implement the proposed GSMNet model. In the experiment, we conduct the performance evaluation using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages with regard to representing semantic information and controlling the cost of location updates.
A Bayesian hierarchical model for wind gust prediction
Friederichs, Petra; Oesting, Marco; Schlather, Martin
2014-05-01
A postprocessing method for ensemble wind gust forecasts given by a mesoscale limited-area numerical weather prediction (NWP) model is presented, which is based on extreme value theory. A process layer for the parameters of a generalized extreme value distribution (GEV) is introduced using a Bayesian hierarchical model (BHM). Incorporating the information of the COSMO-DE forecasts, the process parameters model the spatial response surfaces of the GEV parameters as Gaussian random fields. The spatial BHM provides area-wide forecasts of wind gusts in terms of a conditional GEV. It models the marginal distribution of the spatial gust process and provides not only forecasts of the conditional GEV at locations without observations, but also uncertainty information about the estimates. A disadvantage of the BHM is that it assumes conditionally independent observations. In order to incorporate the dependence between gusts at neighboring locations as well as the spatial random fields of observed and forecasted maximal wind gusts, we propose to model them jointly by a bivariate Brown-Resnick process.
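Conditional GEV forecasts of the kind described above are typically reported as quantiles (return levels). A minimal sketch of the standard GEV quantile formula, with hypothetical parameter values standing in for posterior estimates at one location:

```python
import math

def gev_quantile(q, mu, sigma, xi):
    """Quantile of a GEV(mu, sigma, xi): the value not exceeded with probability q."""
    if abs(xi) < 1e-12:                      # Gumbel limit as xi -> 0
        return mu - sigma * math.log(-math.log(q))
    return mu + sigma * ((-math.log(q)) ** (-xi) - 1.0) / xi

# Hypothetical GEV parameters for one grid point (gusts in m/s),
# e.g. posterior means from a Bayesian hierarchical model.
mu, sigma, xi = 18.0, 4.0, 0.1
q95 = gev_quantile(0.95, mu, sigma, xi)
print(round(q95, 2))  # gust speed exceeded only 5% of the time
```

A positive shape parameter xi gives a heavy upper tail, which is why the GEV family is the natural choice for maximal wind gusts.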
A Network Model for Parallel Line Balancing Problem
Directory of Open Access Journals (Sweden)
Recep Benzer
2007-01-01
Full Text Available Gökçen et al. (2006) have proposed several procedures and a mathematical model for the single-model (product) assembly line balancing (ALB) problem with parallel lines. In the parallel ALB problem, the goal is to balance more than one assembly line together. In this paper, a network model for the parallel ALB problem is proposed and illustrated on a numerical example. This model is a new approach for parallel ALB and it provides a different point of view for interested researchers.
Hierarchical modeling and its numerical implementation for layered thin elastic structures
Energy Technology Data Exchange (ETDEWEB)
Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)
2017-05-15
Thin elastic structures such as beam- and plate-like structures and laminates are characterized by their small thickness, which leads to classical plate and laminate theories in which the displacement fields through the thickness are assumed to be linear or higher-order polynomials. These classical theories are either insufficient to represent the complex stress variation through the thickness or may encounter the accuracy-computational cost dilemma. In order to overcome the inherent problems of classical theories, the concept of hierarchical modeling has emerged. In hierarchical modeling, hierarchical models with different model levels are selected and combined within a structure domain, in order to distribute the modeling error as uniformly as possible throughout the problem domain. The purpose of the current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composites. To this end, the hierarchical models are constructed and the hierarchical modeling is implemented by selectively adjusting the level of the hierarchical models. In addition, the major characteristics of the hierarchical models are investigated through numerical experiments.
Evolutionary optimization of a hierarchical object recognition model.
Schneider, Georg; Wersing, Heiko; Sendhoff, Bernhard; Körner, Edgar
2005-06-01
A major problem in designing artificial neural networks is the proper choice of the network architecture. Especially for vision networks classifying three-dimensional (3-D) objects this problem is very challenging, as these networks are necessarily large and therefore the search space for defining the needed networks is of a very high dimensionality. This strongly increases the chances of obtaining only suboptimal structures from standard optimization algorithms. We tackle this problem in two ways. First, we use biologically inspired hierarchical vision models to narrow the space of possible architectures and to reduce the dimensionality of the search space. Second, we employ evolutionary optimization techniques to determine optimal features and nonlinearities of the visual hierarchy. Here, we especially focus on higher order complex features in higher hierarchical stages. We compare two different approaches to perform an evolutionary optimization of these features. In the first setting, we directly code the features into the genome. In the second setting, in analogy to an ontogenetical development process, we suggest the new method of an indirect coding of the features via an unsupervised learning process, which is embedded into the evolutionary optimization. In both cases the processing nonlinearities are encoded directly into the genome and are thus subject to optimization. The fitness of the individuals for the evolutionary selection process is computed by measuring the network classification performance on a benchmark image database. Here, we use a nearest-neighbor classification approach, based on the hierarchical feature output. We compare the found solutions with respect to their ability to generalize. We differentiate between a first- and a second-order generalization. The first-order generalization denotes how well the vision system, after evolutionary optimization of the features and nonlinearities using a database A, can classify previously unseen test
Harmony Theory: Problem Solving, Parallel Cognitive Models, and Thermal Physics.
Smolensky, Paul; Riley, Mary S.
This document consists of three papers. The first, "A Parallel Model of (Sequential) Problem Solving," describes a parallel model designed to solve a class of relatively simple problems from elementary physics and discusses implications for models of problem-solving in general. It is shown that one of the most salient features of problem…
On the unnecessary ubiquity of hierarchical linear modeling.
McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D
2017-03-01
In psychology and the behavioral sciences generally, the use of the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make a smaller number of assumptions and are interpreted identically to single-level methods with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
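One way to see why "estimates are adjusted to reflect clustering" matters is the classic design-effect (Moulton/Kish) variance inflation for equal-sized clusters. This is a textbook approximation illustrating the idea, not the article's method, and the numbers are hypothetical:

```python
import math

def design_effect(cluster_size, icc):
    """Moulton/Kish variance inflation factor for equal-sized clusters:
    1 + (m - 1) * rho, where m is cluster size and rho the intraclass correlation."""
    return 1.0 + (cluster_size - 1) * icc

# Hypothetical study: 30 pupils per classroom, intraclass correlation 0.10.
deff = design_effect(30, 0.10)
naive_se = 0.05                      # single-level (i.i.d.) standard error
adjusted_se = naive_se * math.sqrt(deff)
print(round(deff, 2), round(adjusted_se, 3))  # 3.9 0.099
```

Even a modest intraclass correlation roughly doubles the standard error here, which is why ignoring clustering understates uncertainty regardless of whether one then models it with random effects (HLM) or corrects for it (GEE, cluster-robust errors).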
Hierarchical Model Predictive Control for Plug-and-Play Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2012-01-01
This chapter deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonom...
A Bayesian hierarchical model for accident and injury surveillance.
MacNab, Ying C
2003-01-01
This article presents a recent study which applies Bayesian hierarchical methodology to model and analyse accident and injury surveillance data. A hierarchical Poisson random effects spatio-temporal model is introduced, and an analysis of inter-regional variations and regional trends in hospitalisations due to motor vehicle accident injuries to boys aged 0-24 in the province of British Columbia, Canada, is presented. The objective of this article is to illustrate how the modelling technique can be implemented as part of an accident and injury surveillance and prevention system where transportation and/or health authorities may routinely examine accidents, injuries, and hospitalisations to target high-risk regions for prevention programs, to evaluate prevention strategies, and to assist in health planning and resource allocation. The innovation of the methodology is its ability to uncover and highlight important underlying structure of the data. Between 1987 and 1996, the British Columbia hospital separation registry registered 10,599 motor vehicle traffic injury related hospitalisations among boys aged 0-24 who resided in British Columbia, of which the majority (89%) of the injuries occurred to boys aged 15-24. The injuries were aggregated by three age groups (0-4, 5-14, and 15-24), 20 health regions (based on place of residence), and 10 calendar years (1987 to 1996), and the corresponding mid-year population estimates were used as the 'at risk' population. An empirical Bayes inference technique using penalised quasi-likelihood estimation was implemented to model both rates and counts, with spline smoothing accommodating non-linear temporal effects. The results show that (a) crude rates and ratios at the health region level are unstable, (b) the models with spline smoothing enable us to explore possible shapes of injury trends at both the provincial level and the regional level, and (c) the fitted models provide a wealth of information about the patterns (both over space and time
PDDP: A data parallel programming model. Revision 1
Energy Technology Data Exchange (ETDEWEB)
Warren, K.H.
1995-06-01
PDDP, the Parallel Data Distribution Preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared-memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
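The split between distributed objects in a global name space and replicated local objects rests on a mapping from global indices to (processor, local index) pairs. A language-neutral sketch of a block distribution, analogous in spirit to HPF's BLOCK directive (the function name is ours):

```python
def block_owner(global_index, n, nprocs):
    """Map a global array index to (processor rank, local index) under a
    block distribution of an n-element array over nprocs processors."""
    base, extra = divmod(n, nprocs)          # first `extra` procs get one extra element
    for rank in range(nprocs):
        size = base + (1 if rank < extra else 0)
        if global_index < size:
            return rank, global_index
        global_index -= size
    raise IndexError("global index out of range")

# Distribute a 10-element array over 4 processors: block sizes 3, 3, 2, 2.
print([block_owner(i, 10, 4) for i in range(10)])
```

Under the owner-computes rule, each processor updates only the elements it owns, and any reference to a non-local element is turned into interprocessor communication by the preprocessor.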
Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model
Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai
2017-01-01
Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694
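The threshold-based part of a multiphase detector like the one described, free fall, then impact, then rest, can be sketched with a simple state machine. The thresholds and the synthetic traces below are illustrative assumptions, not the paper's tuned values:

```python
def detect_fall(acc_magnitude, free_fall_g=0.4, impact_g=2.5,
                rest_g=1.1, rest_window=25):
    """Threshold-based multiphase fall detector (free fall -> impact -> rest).

    acc_magnitude: accelerometer magnitude samples in units of g.
    Returns True if the three phases occur in order.
    """
    state = "normal"
    rest_count = 0
    for a in acc_magnitude:
        if state == "normal" and a < free_fall_g:
            state = "free_fall"                     # near weightlessness
        elif state == "free_fall" and a > impact_g:
            state = "impact"                        # hard deceleration spike
        elif state == "impact":
            rest_count = rest_count + 1 if a < rest_g else 0
            if rest_count >= rest_window:           # sustained lying still
                return True
    return False

# Synthetic trace: standing (1 g), free fall (~0 g), impact spike, lying still.
trace = [1.0] * 50 + [0.1] * 15 + [3.2, 2.8] + [1.0] * 50
print(detect_fall(trace))  # True

walk = [1.0 + 0.2 * (-1) ** i for i in range(200)]  # oscillating around 1 g
print(detect_fall(walk))   # False
```

In the paper's hierarchy, a knowledge-based stage then re-examines candidate events flagged by thresholds like these, which is what handles the variability and ambiguity the abstract mentions.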
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model, GR4J, by coherently assimilating the uncertainties from the...
A note on adding and deleting edges in hierarchical log-linear models
DEFF Research Database (Denmark)
Edwards, David
2012-01-01
The operations of edge addition and deletion for hierarchical log-linear models are defined, and polynomial-time algorithms for the operations are given.
Optimum Binary Search Trees on the Hierarchical Memory Model
Thite, Shripad
2008-01-01
The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a non-uniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by m(a), where m: N -> N is the memory cost function, and the h distinct values of m model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h=2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter m for the HMM increases the combinatorial comp...
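For contrast with the HMM setting, the well-studied RAM-model baseline mentioned above is the classic O(n^3) dynamic program for optimum BSTs. The sketch below is restricted to successful-search probabilities; the HMM version would additionally weight each access by the memory cost m(a):

```python
def optimal_bst_cost(p):
    """Expected search cost of an optimal BST over keys 1..n with access
    probabilities p (successful searches only); classic O(n^3) DP, RAM model."""
    n = len(p)
    # cost[i][j]: optimal cost for keys i..j (1-based); w[i][j]: probability mass.
    cost = [[0.0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        w[i][i] = cost[i][i] = p[i - 1]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j - 1]
            # Try every key r as root of the subtree over keys i..j.
            cost[i][j] = w[i][j] + min(
                cost[i][r - 1] + cost[r + 1][j] for r in range(i, j + 1))
    return cost[1][n]

# Three keys: the most probable (middle) key becomes the root, cost 1.5.
print(round(optimal_bst_cost([0.2, 0.5, 0.3]), 2))  # 1.5
```

On the HMM, the recurrence is the same in shape, but the cost of visiting a node depends on which memory level its address falls in, which couples tree construction with memory layout.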
A Biological Hierarchical Model Based Underwater Moving Object Detection
Directory of Open Access Journals (Sweden)
Jie Shen
2014-01-01
Full Text Available Underwater moving object detection is the key to many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the superior visual sensing abilities of animals in underwater habitats, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models that are better adapted to underwater environments. However, the low accuracy rate and the absence of prior knowledge learning limit their adaptation in underwater applications. Aiming to solve the problems originating from inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several subblocks. The intensity information is extracted to establish a background model which can roughly identify the object and background regions. The texture feature of each pixel in the rough object region is then further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance. Compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
Mathematical model partitioning and packing for parallel computer calculation
Arpasi, Dale J.; Milner, Edward J.
1986-01-01
This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
Higher-order models versus direct hierarchical models: g as superordinate or breadth factor?
Directory of Open Access Journals (Sweden)
GILLES E. GIGNAC
2008-03-01
Full Text Available Intelligence research appears to have overwhelmingly endorsed a superordinate (higher-order) model conceptualization of g, in comparison to the relatively less well-known breadth conceptualization of g, as represented by the direct hierarchical model. In this paper, several similarities and distinctions between the indirect and direct hierarchical models are delineated. Based on the re-analysis of five correlation matrices, it was demonstrated via CFA that the conventional conception of g as a higher-order superordinate factor was likely not as plausible as a first-order breadth factor. The results are discussed in light of the theoretical advantages of conceptualizing g as a first-order factor. Further, because the associations between group factors and g are constrained to zero within a direct hierarchical model, previous observations of isomorphic associations between a lower-order group factor and g are questioned.
National Research Council Canada - National Science Library
Royle, J. Andrew; Dorazio, Robert M
2008-01-01
"This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical modeling in which a strict focus on probability models and parametric inference is adopted...
A hierarchical network modeling method for railway tunnels safety assessment
Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin
2017-02-01
Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. Then an algorithm is presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper further develops accident causation network modeling methods and can provide guidance for specific maintenance measures.
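The frequent-pattern mining step can be illustrated with a bare-bones classic Apriori; the paper uses an improved variant, and the defect names below are hypothetical:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: frequent itemsets with support count >= min_support."""
    items = {frozenset([x]) for t in transactions for x in t}
    frequent = {}
    level = {s for s in items
             if sum(s <= t for t in transactions) >= min_support}
    k = 1
    while level:
        for s in level:
            frequent[s] = sum(s <= t for t in transactions)
        # Join k-itemsets, then prune via the downward-closure property:
        # every k-subset of a frequent (k+1)-itemset must itself be frequent.
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        candidates = {c for c in candidates
                      if all(frozenset(sub) in level for sub in combinations(c, k))}
        level = {c for c in candidates
                 if sum(c <= t for t in transactions) >= min_support}
        k += 1
    return frequent

# Hypothetical inspection records pairing defect types observed together.
records = [frozenset(t) for t in [
    {"lining_crack", "water_leakage"},
    {"lining_crack", "water_leakage", "spalling"},
    {"water_leakage"},
    {"lining_crack", "water_leakage"},
    {"spalling"},
]]
freq = apriori(records, min_support=3)
print(freq[frozenset({"lining_crack", "water_leakage"})])  # 3
```

Frequent pairs like this one are exactly the raw material from which the paper's risk-related regularities table (RRT) is then derived.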
Production optimisation in the petrochemical industry by hierarchical multivariate modelling
Energy Technology Data Exchange (ETDEWEB)
Andersson, Magnus; Furusjoe, Erik; Jansson, Aasa
2004-06-01
This project demonstrates the advantages of applying hierarchical multivariate modelling in the petrochemical industry in order to increase knowledge of the total process. The models indicate possible ways to optimise the process regarding the use of energy and raw material, which is directly linked to the environmental impact of the process. The refinery of Nynaes Refining AB (Goeteborg, Sweden) has acted as a demonstration site in this project. The models developed for the demonstration site resulted in: Detection of an unknown process disturbance and suggestions of possible causes; Indications on how to increase the yield in combination with energy savings; The possibility to predict product quality from on-line process measurements, making the results available at a higher frequency than customary laboratory analysis; Quantification of the gradually lowered efficiency of heat transfer in the furnace and increased fuel consumption as an effect of soot build-up on the furnace coils; Increased knowledge of the relation between production rate and the efficiency of the heat exchangers. This report is one of two reports from the project. It contains a technical discussion of the result with some degree of detail. A shorter and more easily accessible report is also available, see IVL report B1586-A.
Graph Partitioning Models for Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
Hendrickson, B.; Kolda, T.G.
1999-03-02
Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
Parallelism and optimization of numerical ocean forecasting model
Xu, Jianliang; Pang, Renbo; Teng, Junhua; Liang, Hongtao; Yang, Dandan
2016-10-01
According to the characteristics of the Chinese marginal seas, the Marginal Sea Model of China (MSMC) has been developed independently in China. Because the model requires a long simulation time, parallelism of MSMC, as a routine forecasting model, becomes necessary to improve its performance. However, some methods used in MSMC, such as the Successive Over-Relaxation (SOR) algorithm, are not directly suitable for parallelism. In this paper, methods are developed to solve the parallelization problem of the SOR algorithm in the following steps. First, based on a 3D computing grid system, an automatic data partition method is implemented to dynamically divide the computing grid according to the computing resources. Next, based on the characteristics of the numerical forecasting model, a parallel method is designed to solve the parallelization problem of the SOR algorithm. Lastly, a communication optimization method is provided to reduce the cost of communication. In the communication optimization method, the non-blocking communication of the Message Passing Interface (MPI) is used to implement the parallelism of MSMC with complex physical equations, and the communication is overlapped with the computations to improve the performance of the parallel MSMC. The experiments show that the parallel MSMC runs 97.2 times faster than the serial MSMC, and the root mean square error between the parallel MSMC and the serial MSMC is less than 0.01 for a 30-day simulation (172,800 time steps), which meets the requirements of timeliness and accuracy for numerical ocean forecasting products.
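A standard way to make SOR parallelizable (possibly along the lines of the paper's approach, though the abstract does not spell it out) is red-black (odd-even) ordering: all updates of one colour depend only on points of the other colour, so each half-sweep can run fully in parallel. A 1D Poisson sketch:

```python
def sor_red_black(f, n, omega=1.5, sweeps=200):
    """Red-black SOR for the 1D Poisson equation -u'' = f on n interior points
    (unit grid spacing, zero boundary values). Same-colour updates are
    independent, unlike the data-dependent sweeps of lexicographic SOR."""
    u = [0.0] * (n + 2)                     # u[0] and u[n+1] are boundary zeros
    for _ in range(sweeps):
        for colour in (1, 0):               # one colour per half-sweep
            for i in range(1 + colour, n + 1, 2):
                gs = 0.5 * (u[i - 1] + u[i + 1] + f[i - 1])  # Gauss-Seidel value
                u[i] += omega * (gs - u[i])                  # over-relaxation
    return u[1:-1]

# Constant load f = 2 has the exact discrete solution u(i) = i * (n + 1 - i).
n = 9
u = sor_red_black([2.0] * n, n)
exact = [i * (n + 1 - i) for i in range(1, n + 1)]
print(max(abs(a - b) for a, b in zip(u, exact)) < 1e-6)  # True
```

In an MPI setting, each half-sweep touches only the opposite colour's values, so halo exchanges can be posted as non-blocking sends/receives and overlapped with the interior updates, which is the communication-hiding pattern the paper describes.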
Loss Function Based Ranking in Two-Stage, Hierarchical Models
Lin, Rongheng; Louis, Thomas A.; Paddock, Susan M.; Ridgeway, Greg
2009-01-01
Performance evaluation of health services providers is burgeoning. Similarly, analyzing spatially related health information, ranking teachers and schools, and identifying differentially expressed genes are increasing in prevalence and importance. Goals include valid and efficient ranking of units for profiling and league tables, identification of excellent and poor performers, the most differentially expressed genes, and determining “exceedances” (how many and which unit-specific true parameters exceed a threshold). These data and inferential goals require a hierarchical, Bayesian model that accounts for nesting relations and identifies both population values and random effects for unit-specific parameters. Furthermore, the Bayesian approach coupled with optimizing a loss function provides a framework for computing non-standard inferences such as ranks and histograms. Estimated ranks that minimize Squared Error Loss (SEL) between the true and estimated ranks have been investigated. The posterior mean ranks minimize SEL and are “general purpose,” relevant to a broad spectrum of ranking goals. However, other loss functions and optimizing ranks that are tuned to application-specific goals require identification and evaluation. For example, when the goal is to identify the relatively good (e.g., in the upper 10%) or relatively poor performers, a loss function that penalizes classification errors produces estimates that minimize the error rate. We construct loss functions that address this and other goals, developing a unified framework that facilitates generating candidate estimates, comparing approaches and producing data analytic performance summaries. We compare performance for a fully parametric, hierarchical model with Gaussian sampling distribution under Gaussian and a mixture of Gaussians prior distributions. We illustrate approaches via analysis of standardized mortality ratio data from the United States Renal Data System. Results show that SEL
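The SEL-optimal ranks discussed above are simply each unit's rank averaged over posterior draws. A sketch with hypothetical posterior draws of standardized mortality ratios:

```python
def posterior_mean_ranks(samples):
    """SEL-optimal ranks: average each unit's rank over posterior draws.

    samples[k] is one posterior draw: a list with one value per unit.
    Assumes continuous draws (no ties within a draw).
    """
    n_units = len(samples[0])
    totals = [0.0] * n_units
    for draw in samples:
        order = sorted(range(n_units), key=lambda u: draw[u])
        for rank, u in enumerate(order, start=1):
            totals[u] += rank
    return [t / len(samples) for t in totals]

# Hypothetical posterior draws of standardized mortality ratios, 3 providers.
draws = [[0.8, 1.2, 1.0],
         [0.9, 1.1, 1.3],
         [0.7, 1.4, 1.2]]
ranks = posterior_mean_ranks(draws)
print([round(r, 2) for r in ranks])  # [1.0, 2.67, 2.33] -- unit 0 is clearly best
```

Note the posterior mean ranks need not be integers; rounding them to a permutation gives the usual SEL-optimal integer ranking.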
Directory of Open Access Journals (Sweden)
Kuate-Defo, Bathélémy
2001-01-01
Full Text Available English: This paper merges two parallel developments since the 1970s of new statistical tools for data analysis: statistical methods known as hazard models, used for analyzing event-duration data, and statistical methods for analyzing hierarchically clustered data, known as multilevel models. These developments have rarely been integrated in research practice, and the formalization and estimation of models for hierarchically clustered survival data remain largely uncharted. I attempt to fill some of this gap and demonstrate the merits of formulating and estimating multilevel hazard models with longitudinal data.
French (translated): This study integrates two state-of-the-art statistical approaches to quantitative data analysis developed since the 1970s: the statistical methods for analyzing event-history data, known as survival methods, and the statistical methods for analyzing hierarchical data, known as multilevel methods. These two approaches have very rarely been combined in research practice, and consequently the formulation and estimation of models suited to longitudinal and hierarchically nested data remain an essentially unexplored field of investigation. I attempt to fill this void and use real public health data to demonstrate the merits and contexts of formulating and estimating multilevel, multistate models of event-history and longitudinal data.
The Hierarchical Sparse Selection Model of Visual Crowding
Directory of Open Access Journals (Sweden)
Wesley Chaney
2014-09-01
Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable, destroyed due to over-integration in early-stage visual processing, recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the gist of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding: the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.
The hierarchical sparse selection model of visual crowding.
Chaney, Wesley; Fischer, Jason; Whitney, David
2014-01-01
Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable - destroyed due to over-integration in early stage visual processing - recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the "gist" of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding-the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.
Scheibehenne, Benjamin; Pachur, Thorsten
2015-04-01
To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than do nonhierarchical, independent approaches, because in the former the estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice: cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test-retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.
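The shrinkage behavior discussed above can be illustrated with the simplest normal-normal partial-pooling estimator, a sketch under my own simplifying assumptions (known within-unit variance, balanced data), not the models compared in the paper:

```python
def shrink(means, within_var, n_per_unit):
    """Shrink each unit's sample mean toward the grand mean (partial pooling).
    Uses the usual normal-normal shrinkage factor
    B = (within_var/n) / (within_var/n + between_var)."""
    grand = sum(means) / len(means)
    between = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    se2 = within_var / n_per_unit
    B = se2 / (se2 + between) if between > 0 else 1.0
    return [grand + (1 - B) * (m - grand) for m in means]

# Two noisy unit means are pulled toward their grand mean of 5.0.
print(shrink([0.0, 10.0], within_var=50.0, n_per_unit=2))
```

When within-unit noise is large relative to between-unit spread, B approaches 1 and estimates collapse toward the grand mean, which is the "high degree of shrinkage" the abstract refers to.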
Deterministic Consistency: A Programming Model for Shared Memory Parallelism
Aviram, Amittai; Ford, Bryan
2009-01-01
The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...
Scale of association: hierarchical linear models and the measurement of ecological systems
Sean M. McMahon; Jeffrey M. Diez
2007-01-01
A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
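As a minimal illustration of the kind of quantity HLM variance-covariance estimation targets, the sketch below computes the intraclass correlation (the share of variance lying between groups) for a balanced one-way layout. It is a toy stand-in using method-of-moments estimates, not the advanced estimation algorithms the abstract mentions:

```python
def icc(groups):
    """Intraclass correlation for a balanced one-way layout:
    fraction of total variance attributable to differences between groups."""
    k, n = len(groups), len(groups[0])
    group_means = [sum(g) / n for g in groups]
    grand = sum(group_means) / k
    # Mean squares between and within, as in one-way ANOVA
    msb = n * sum((m - grand) ** 2 for m in group_means) / (k - 1)
    msw = sum(sum((x - m) ** 2 for x in g)
              for g, m in zip(groups, group_means)) / (k * (n - 1))
    var_between = max((msb - msw) / n, 0.0)  # truncate at zero
    return var_between / (var_between + msw)

print(icc([[1, 1, 1], [5, 5, 5]]))  # all variance between groups -> 1.0
print(icc([[1, 2, 3], [1, 2, 3]]))  # all variance within groups -> 0.0
```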
Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology
Mohanty, B.; Kathuria, D.; Katzfuss, M.
2016-12-01
Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground based sensors on the other hand provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point scale Mesonet data and coarse scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM model handles change of support problems easily while performing accurate uncertainty quantification arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising due to the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM model can be considered as a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.
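The change-of-support machinery in the abstract is far richer than this, but the core Bayesian idea of fusing a coarse-scale product with a point-scale measurement by precision weighting can be sketched with a conjugate normal update (all numbers and names are illustrative):

```python
def fuse(prior_mean, prior_var, obs, obs_var):
    """Conjugate-normal update: combine a coarse-scale soil moisture prior
    with a point observation, weighting each by its precision."""
    w = prior_var / (prior_var + obs_var)       # weight given to the observation
    mean = prior_mean + w * (obs - prior_mean)  # posterior mean
    var = prior_var * obs_var / (prior_var + obs_var)  # posterior variance
    return mean, var

# Coarse RS pixel says 0.25 (uncertain); a Mesonet probe says 0.35 (precise).
m, v = fuse(prior_mean=0.25, prior_var=0.04, obs=0.35, obs_var=0.01)
print(m, v)  # posterior is pulled strongly toward the precise probe
```

The posterior variance is always smaller than either input variance, which is the uncertainty-quantification payoff of the fusion.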
Parallel Evolutionary Modeling for Nonlinear Ordinary Differential Equations
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
We introduce a new parallel evolutionary algorithm for modeling dynamic systems by nonlinear higher-order ordinary differential equations (NHODEs). NHODE models are much more universal than traditional linear models. To accelerate the modeling process, we propose and realize a parallel evolutionary algorithm using distributed CORBA objects on a heterogeneous network. Numerical experiments show that the new algorithm is feasible and efficient.
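As a toy illustration of evolutionary ODE modeling (serial here; the paper distributes the work across CORBA objects), the sketch below fits the rate constant of dy/dt = -k*y to synthetic Euler-integrated data with a simple elitist (1+lambda) strategy. All settings and names are my own illustration, not the paper's algorithm:

```python
import random

def euler(k, y0=1.0, dt=0.1, steps=20):
    """Forward-Euler trajectory of dy/dt = -k*y."""
    y, traj = y0, []
    for _ in range(steps):
        y += dt * (-k * y)
        traj.append(y)
    return traj

target = euler(0.5)  # synthetic "observations" generated with k = 0.5

def loss(k):
    return sum((a - b) ** 2 for a, b in zip(euler(k), target))

def evolve(seed=1, gens=100, lam=8, sigma=0.3):
    """Elitist (1+lambda) evolution strategy with decaying mutation width."""
    rng = random.Random(seed)
    best = rng.uniform(0.0, 2.0)
    for _ in range(gens):
        kids = [best + rng.gauss(0, sigma) for _ in range(lam)]
        best = min(kids + [best], key=loss)  # keep the parent if no kid improves
        sigma *= 0.97
    return best
```

The per-generation fitness evaluations of the kids are independent, which is exactly the part a parallel implementation would farm out to workers.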
DEFF Research Database (Denmark)
Huang, Qian; Huang, Yue-Cai; Ko, King-Tim;
2011-01-01
dimensioning and planning. This paper investigates the computationally efficient loss performance modeling for multiservice in hierarchical heterogeneous wireless networks. A speed-sensitive call admission control (CAC) scheme is considered in our model to assign overflowed calls to appropriate tiers...
A Multilevel Secure Relation-Hierarchical Data Model for a Secure DBMS
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
A multilevel secure relation-hierarchical data model for multilevel secure databases is extended in this paper from the relation-hierarchical data model for the single-level environment. Based on the model, an upper-lower-layer relational integrity is presented after we analyze and eliminate the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation-hierarchical data model is capable of integratively storing and manipulating complicated objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers and character strings) in a multilevel secure database.
Investigating follow-up outcome change using hierarchical linear modeling.
Ogrodniczuk, J S; Piper, W E; Joyce, A S
2001-03-01
Individual change in outcome during a one-year follow-up period for 98 patients who received either interpretive or supportive psychotherapy was examined using hierarchical linear modeling (HLM). This followed a previous study that had investigated average (treatment condition) change during follow-up using traditional methods of data analysis (repeated measures ANOVA, chi-square tests). We also investigated whether two patient personality characteristics-quality of object relations (QOR) and psychological mindedness (PM)-predicted individual change. HLM procedures yielded findings that were not detected using traditional methods of data analysis. New findings indicated that the rate of individual change in outcome during follow-up varied significantly among the patients. QOR was directly related to favorable individual change for supportive therapy patients, but not for patients who received interpretive therapy. The findings have implications for determining which patients will show long-term benefit following short-term supportive therapy and how to enhance it. The study also found significant associations between QOR and final outcome level.
Qian, Song S; Craig, J Kevin; Baustian, Melissa M; Rabalais, Nancy N
2009-12-01
We introduce the Bayesian hierarchical modeling approach for analyzing observational data from marine ecological studies using a data set intended for inference on the effects of bottom-water hypoxia on macrobenthic communities in the northern Gulf of Mexico off the coast of Louisiana, USA. We illustrate (1) the process of developing a model, (2) the use of the hierarchical model results for statistical inference through innovative graphical presentation, and (3) a comparison to the conventional linear modeling approach (ANOVA). Our results indicate that the Bayesian hierarchical approach is better able to detect a "treatment" effect than classical ANOVA while avoiding several arbitrary assumptions necessary for linear models, and is also more easily interpreted when presented graphically. These results suggest that the hierarchical modeling approach is a better alternative than conventional linear models and should be considered for the analysis of observational field data from marine systems.
Development of a Massively Parallel NOGAPS Forecast Model
2016-06-07
parallel computer architectures. These algorithms will be critical for inter-processor communication dependent and computationally intensive model...to exploit massively parallel processor (MPP), distributed-memory computer architectures. Future increases in computer power from MPPs will allow...passing (MPI) is the paradigm chosen for communication between distributed-memory processors. APPROACH Use integrations of the current operational
Hierarchical Clustering and Active Galaxies
Hatziminaoglou, E; Manrique, A
2000-01-01
The growth of supermassive black holes and the parallel development of activity in galactic nuclei are implemented in an analytic code of hierarchical clustering. The evolution of the luminosity function of quasars and AGN will be computed, with special attention paid to the connection between quasars and Seyfert galaxies. One of the major interests of the model is the parallel study of quasar formation and evolution and the history of star formation.
A Hierarchical Load Balancing Scheduling Model Based on Rules (基于规则的分层负载平衡调度模型)
Institute of Scientific and Technical Information of China (English)
李冬梅; 施海虎; 顾毓清
2003-01-01
On massively parallel and distributed systems and on networks of workstations, increasing the utilization efficiency of resources and the response speed of tasks through an effective load-balancing scheduling strategy is a critical problem. This paper analyzes dynamic and static load-balancing scheduling strategies, and then proposes a hierarchical load-balancing scheduling model based on rules. Finally, it makes some comparisons with other scheduling models.
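A hierarchical, rule-based placement of tasks can be sketched as two nested rules: first pick the least-loaded node, then the least-loaded worker inside it. This is my own minimal illustration of the two-level idea, not the paper's rule set:

```python
def schedule(tasks, cluster):
    """Two-level rule-based placement.
    tasks:   list of (task_name, cost) pairs
    cluster: dict node -> {worker: current_load}; mutated as tasks land.
    Returns dict task_name -> (node, worker)."""
    placement = {}
    for task, cost in tasks:
        # Rule 1: node with the smallest total load
        node = min(cluster, key=lambda n: sum(cluster[n].values()))
        # Rule 2: least-loaded worker within that node
        worker = min(cluster[node], key=cluster[node].get)
        cluster[node][worker] += cost
        placement[task] = (node, worker)
    return placement

cluster = {"A": {"a1": 0, "a2": 0}, "B": {"b1": 0}}
print(schedule([("t1", 1), ("t2", 1), ("t3", 1)], cluster))
```

Ties break deterministically toward the first node/worker in insertion order, so the example alternates between nodes as their totals even out.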
Hierarchical set of models to estimate soil thermal diffusivity
Arkhangelskaya, Tatiana; Lukyashchenko, Ksenia
2016-04-01
Soil thermal properties significantly affect the land-atmosphere heat exchange rates. Intra-soil heat fluxes depend both on temperature gradients and on soil thermal conductivity. Soil temperature changes due to energy fluxes are determined by soil specific heat. Thermal diffusivity is equal to thermal conductivity divided by volumetric specific heat, and reflects both the soil's ability to transfer heat and its ability to change temperature when heat is supplied or withdrawn. The higher the soil thermal diffusivity, the thicker the soil/ground layer in which diurnal and seasonal temperature fluctuations are registered, and the smaller the temperature fluctuations at the soil surface. Thermal diffusivity vs. moisture dependencies for loams, sands and clays of the East European Plain were obtained using the unsteady-state method. Thermal diffusivity of different soils differed greatly, and for a given soil it could vary by a factor of 2, 3 or even 5 depending on soil moisture. The shapes of the thermal diffusivity vs. moisture dependencies were different: peak curves were typical for sandy soils, and sigmoid curves were typical for loamy and especially for compacted soils. The lowest thermal diffusivities and the smallest range of their variability with soil moisture were obtained for clays with high humus content. A hierarchical set of models will be presented, allowing an estimate of soil thermal diffusivity from available data on soil texture, moisture, bulk density and organic carbon. When developing these models the first step was to parameterize the experimental thermal diffusivity vs. moisture dependencies with a 4-parameter function; the next step was to obtain regression formulas to estimate the function parameters from available data on basic soil properties; the last step was to evaluate the accuracy of suggested models using independent data on soil thermal diffusivity. The simplest models were based on soil bulk density and organic carbon data and provided different
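The "hierarchical set of models" idea, falling back to a simpler regression when fewer predictors are available, can be sketched as follows. The coefficients below are invented placeholders purely for illustration; the paper's actual models are regressions for the parameters of a 4-parameter diffusivity-moisture curve:

```python
# NOTE: all coefficients are hypothetical, not fitted values from the paper.
def estimate_diffusivity(bulk_density, org_carbon, clay=None, moisture=None):
    """Pick the most detailed model the available inputs allow,
    falling back through the hierarchy of simpler models."""
    if clay is not None and moisture is not None:
        # full model: texture and moisture known
        return (0.2 + 0.05 * bulk_density - 0.03 * org_carbon
                - 0.002 * clay + 0.1 * moisture)
    if clay is not None:
        # intermediate model: texture known, moisture unknown
        return 0.25 + 0.04 * bulk_density - 0.03 * org_carbon - 0.002 * clay
    # simplest model: only bulk density and organic carbon
    return 0.3 + 0.03 * bulk_density - 0.02 * org_carbon
```

The design point is that every soil sample gets some estimate, with accuracy degrading gracefully as predictors drop out.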
Damage modeling of small-scale experiments on dental enamel with hierarchical microstructure.
Scheider, I; Xiao, T; Yilmaz, E; Schneider, G A; Huber, N; Bargmann, S
2015-03-01
Dental enamel is a highly anisotropic and heterogeneous material, which exhibits optimal reliability with respect to the various loads occurring over years. In this work, enamel's microstructure of parallel aligned rods of mineral fibers is modeled, and the mechanical properties are evaluated in terms of strength and toughness with the help of a multiscale modeling method. The established model is validated by comparison with stress-strain curves identified by microcantilever beam experiments on specimens extracted from these rods. Moreover, in order to gain further insight into the damage-tolerant behavior of enamel, the size of crystallites below which the structure becomes insensitive to flaws is studied by a microstructural finite element model. The assumption regarding the fiber strength is verified by a numerical study leading to accordance of fiber size and flaw-tolerance size, and the debonding strength is estimated by optimizing the failure behavior of the microstructure on the hierarchical level above the individual fibers. Based on these well-grounded properties, the material behavior is predicted well by homogenization of a representative unit cell including damage, taking imperfections (such as microcracks in the present case) into account.
Modeling and Adaptive Control of a Planar Parallel Mechanism
Institute of Scientific and Technical Information of China (English)
敖银辉; 陈新
2004-01-01
Dynamic modeling and control of parallel mechanisms have always been problems in robotics research. In this paper, different dynamics formulation methods are discussed first. A model of a redundantly driven parallel mechanism is then constructed, with a planar parallel manipulator as an example. A nonlinear adaptive control method is introduced. Matrix pseudo-inversion is used to obtain the desired actuator torques from a desired end-effector coordinate, while the feedback torque is calculated directly in actuator space. This treatment avoids the forward kinematics computation, which is very difficult for a parallel mechanism. Experiments with PID control and with the described adaptive control strategy were carried out on a planar parallel mechanism. The results show that the proposed adaptive controller outperforms conventional PID methods in tracking a desired input at high speed.
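The pseudo-inversion step described above, mapping a desired end-effector force to minimum-norm actuator torques in a redundantly driven mechanism, can be sketched for a two-row Jacobian. The matrices below are toy values, not the paper's mechanism:

```python
def pinv_apply(J, f):
    """Minimum-norm torques tau = J^T (J J^T)^-1 f,
    written out for a Jacobian J with exactly two rows (m=2, n actuators)."""
    # Gram matrix G = J J^T (2x2)
    g11 = sum(a * a for a in J[0])
    g12 = sum(a * b for a, b in zip(J[0], J[1]))
    g22 = sum(b * b for b in J[1])
    det = g11 * g22 - g12 * g12
    # Solve G y = f by Cramer's rule
    y1 = (g22 * f[0] - g12 * f[1]) / det
    y2 = (-g12 * f[0] + g11 * f[1]) / det
    # tau = J^T y
    return [J[0][i] * y1 + J[1][i] * y2 for i in range(len(J[0]))]

# Redundant case: 3 actuators, 2 task-space directions.
print(pinv_apply([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], [2.0, 3.0]))  # -> [2.0, 3.0, 0.0]
```

Among all torque vectors that realize the task-space force f, this one has the smallest norm, which is the usual way to resolve actuation redundancy.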
Hierarchical Shrinkage Priors and Model Fitting for High-dimensional Generalized Linear Models
Yi, Nengjun; Ma, Shuangge
2013-01-01
Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same groups being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions: hierarchical Cauchy and double-exponential distributions, on coefficients in generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adopting different shrinkage for different coefficients and different groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the general statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:23192052
Intelligent multiagent coordination based on reinforcement hierarchical neuro-fuzzy models.
Mendoza, Leonardo Forero; Vellasco, Marley; Figueiredo, Karla
2014-12-01
This paper presents the research and development of two hybrid neuro-fuzzy models for the hierarchical coordination of multiple intelligent agents. The main objective of the models is to have multiple agents interact intelligently with each other in complex systems. We developed two new models of coordination for intelligent multiagent systems, which integrates the Reinforcement Learning Hierarchical Neuro-Fuzzy model with two proposed coordination mechanisms: the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy with a market-driven coordination mechanism (MA-RL-HNFP-MD) and the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy with graph coordination (MA-RL-HNFP-CG). In order to evaluate the proposed models and verify the contribution of the proposed coordination mechanisms, two multiagent benchmark applications were developed: the pursuit game and the robot soccer simulation. The results obtained demonstrated that the proposed coordination mechanisms greatly improve the performance of the multiagent system when compared with other strategies.
Parallel community climate model: Description and user's guide
Energy Technology Data Exchange (ETDEWEB)
Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]
1996-07-15
This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
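The geographic patch decomposition described above can be sketched in miniature: tile the lat-lon grid into patches and deal each processor a distinct subset. This is a serial stand-in for the assignment step only (names and the round-robin rule are illustrative, not PCCM2's actual mapping):

```python
def decompose(nlat, nlon, patch, nproc):
    """Split an nlat x nlon grid into patch x patch tiles and deal them
    round-robin to processors. Returns dict proc -> list of (lat0, lon0)
    patch origins owned by that processor."""
    patches = [(i, j) for i in range(0, nlat, patch)
                      for j in range(0, nlon, patch)]
    return {p: patches[p::nproc] for p in range(nproc)}

# 4x8 grid, 2x2 patches, 4 processors -> 8 patches, 2 per processor.
print(decompose(4, 8, 2, 4))
```

Each processor then runs the physics on only its own patches' grid points, which is why the physics needs no communication in this decomposition.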
Parallelization of the NASA Goddard Cumulus Ensemble Model for Massively Parallel Computing
Directory of Open Access Journals (Sweden)
Hann-Ming Henry Juang
2007-01-01
Massively parallel computing, using a message passing interface (MPI), has been implemented in a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The implementation uses the domain-resemble concept to design a code structure for both the whole domain and the sub-domains after decomposition. Instead of inserting groups of MPI-related statements into the model routines, these statements are packed into a single routine; only a single call statement to the model code is used in each place, so there is minimal impact on the original code. The model is therefore easily modified and/or managed by model developers and users who have little knowledge of massively parallel computing.
Dynamic Distribution Model with Prime Granularity for Parallel Computing
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
The dynamic distribution model is one of the best schemes for parallel volume rendering. However, in a homogeneous cluster system where the granularity is traditionally identical, all processors communicate almost simultaneously and the computational load may lose balance. To address these problems, a dynamic distribution model with prime granularity for parallel computing is presented. The granularities of the processors are pairwise relatively prime, and the related theory is introduced. High parallel performance can be achieved by minimizing network competition and using a load-balancing strategy that ensures all processors finish almost simultaneously. Based on the Master-Slave-Gleaner (MSG) scheme, a parallel splatting algorithm for volume rendering is used to test the model on an IBM Cluster 1350 system. The experimental results show that the model brings considerable improvements in performance, including computational efficiency, total execution time, speed, and load balancing.
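The key idea, granularities that are pairwise relatively prime so that processors do not request new work in lock-step, can be sketched as follows. The greedy selection rule is my own illustration, not the paper's algorithm:

```python
from math import gcd

def coprime_granularities(nproc, start=7):
    """Pick one chunk size per processor such that all chosen sizes are
    pairwise relatively prime, staggering the processors' request times."""
    grains, g = [], start
    while len(grains) < nproc:
        if all(gcd(g, h) == 1 for h in grains):
            grains.append(g)
        g += 1
    return grains

print(coprime_granularities(4))  # -> [7, 8, 9, 11]
```

Because no two chunk sizes share a factor, the moments at which two processors simultaneously finish a chunk (and contend for the master) recur only at the product of their sizes, which spreads communication out in time.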
National Research Council Canada - National Science Library
Allison A Vaughn; Matthew Bergman; Barry Fass-Holmes
2015-01-01
...) in the fall term of the five most recent academic years. Hierarchical linear modeling analyses showed that the predictors with the largest effect sizes were English writing programs and class level...
LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data
National Research Council Canada - National Science Library
Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A
2011-01-01
...). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data...
Higher Order Hierarchical Legendre Basis Functions for Electromagnetic Modeling
DEFF Research Database (Denmark)
Jørgensen, Erik; Volakis, John L.; Meincke, Peter
2004-01-01
This paper presents a new hierarchical basis of arbitrary order for integral equations solved with the Method of Moments (MoM). The basis is derived from orthogonal Legendre polynomials which are modified to impose continuity of vector quantities between neighboring elements while maintaining mos...
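For context on the basis construction above, a small sketch (not the paper's MoM basis itself, just the underlying orthogonal polynomials) evaluating Legendre polynomials by the standard Bonnet three-term recurrence:

```python
def legendre(n, x):
    """Values P_0(x)..P_n(x) via the Bonnet recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    vals = [1.0, x]
    for k in range(1, n):
        vals.append(((2 * k + 1) * x * vals[k] - k * vals[k - 1]) / (k + 1))
    return vals[:n + 1]

# P_2(0.5) = (3*0.25 - 1)/2 = -0.125
print(legendre(2, 0.5))
```

Orthogonality of the P_k on [-1, 1] is what makes hierarchical bases built from them well conditioned as the order grows; a crude quadrature check confirms, for instance, that the integral of P_1*P_2 vanishes.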
Heuristics for Hierarchical Partitioning with Application to Model Checking
DEFF Research Database (Denmark)
Möller, Michael Oliver; Alur, Rajeev
2001-01-01
Given a collection of connected components, it is often desired to cluster together parts of strong correspondence, yielding a hierarchical structure. We address the automation of this process and apply heuristics to battle the combinatorial and computational complexity. We define a cost function...
Directory of Open Access Journals (Sweden)
Sergio Briguglio
2003-01-01
A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle-in-cell (PIC) codes on clusters of symmetric multiprocessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model with respect to memory occupancy, parallelization efficiency and the required programming effort. These strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node stage). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.
Extending the Real-Time Maude Semantics of Ptolemy to Hierarchical DE Models
Bae, Kyungmin; 10.4204/EPTCS.36.3
2010-01-01
This paper extends our Real-Time Maude formalization of the semantics of flat Ptolemy II discrete-event (DE) models to hierarchical models, including modal models. This is a challenging task that requires combining synchronous fixed-point computations with hierarchical structure. The synthesis of a Real-Time Maude verification model from a Ptolemy II DE model, and the formal verification of the synthesized model in Real-Time Maude, have been integrated into Ptolemy II, enabling a model-engineering process that combines the convenience of Ptolemy II DE modeling and simulation with formal verification in Real-Time Maude.
Directory of Open Access Journals (Sweden)
Brian B. Mozaffari
2014-11-01
Based on the notion that the brain is equipped with a hierarchical organization, which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL), located deep in the hierarchy, serves as a bridge connecting supra- to infra-MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of quick complex visual phenomena. Bypassing intermediate hierarchy levels, information conveyed through the MTL "bridge" allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these "bridge" predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided along with a new theory, Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered a secondary process, occurring after a MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid processing of transient but structured perceptual events, by allowing communication between upper and lower levels without calling on intermediate levels of representation.
Mozaffari, Brian
2014-01-01
Based on the notion that the brain is equipped with a hierarchical organization, which embodies environmental contingencies across many time scales, this paper suggests that the medial temporal lobe (MTL)-located deep in the hierarchy-serves as a bridge connecting supra- to infra-MTL levels. Bridging the upper and lower regions of the hierarchy provides a parallel architecture that optimizes information flow between upper and lower regions to aid attention, encoding, and processing of quick, complex visual phenomena. Bypassing intermediate hierarchy levels, information conveyed through the MTL "bridge" allows upper levels to make educated predictions about the prevailing context and accordingly select lower representations to increase the efficiency of predictive coding throughout the hierarchy. This selection or activation/deactivation is associated with endogenous attention. In the event that these "bridge" predictions are inaccurate, this architecture enables the rapid encoding of novel contingencies. A review of hierarchical models in relation to memory is provided along with a new theory, Medial-temporal-lobe Conduit for Parallel Connectivity (MCPC). In this scheme, consolidation is considered a secondary process, occurring after an MTL-bridged connection, which eventually allows upper and lower levels to access each other directly. With repeated reactivations, as contingencies become consolidated, less MTL activity is predicted. Finally, MTL bridging may aid processing of transient but structured perceptual events by allowing communication between upper and lower levels without calling on intermediate levels of representation.
Bai, Hao; Zhang, Xi-wen
2017-06-01
When Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, together with their complex relations. Chinese characters written in digital ink by non-native writers are seriously deformed, so global recognition approaches perform poorly. A progressive bottom-up approach based on hierarchical models is therefore presented. The hierarchical information includes strokes and hierarchical components, and each Chinese character is modeled as a hierarchical tree. Strokes in a digital-ink Chinese character are classified with Hidden Markov Models and concatenated into a stroke symbol sequence. The structure of components in the ink character is then extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored, and the recognition candidates are finally listed in descending order of score. The method is validated on 19,815 samples of handwritten Chinese characters written by foreign students.
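The candidate traversal and scoring step described above can be sketched with a stroke-symbol edit distance. The stroke alphabet (H, V, P, N) and the character templates below are hypothetical illustrations, not the paper's actual dictionary:

```python
def edit_distance(a, b):
    # dynamic-programming edit distance between stroke-symbol sequences
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,            # missing stroke
                          d[i][j - 1] + 1,            # extra stroke
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substituted stroke
    return d[m][n]

# hypothetical stroke-symbol templates (H=horizontal, V=vertical, P=pie, N=na)
templates = {"木": "HVPN", "本": "HVPNH", "林": "HVPNHVPN"}
observed = "HVPNH"  # hypothetical classifier output for a written character
ranked = sorted(templates, key=lambda c: edit_distance(templates[c], observed))
```

Candidates whose stroke sequences best match the classifier output are ranked first; the paper's scoring additionally uses the extracted component structure.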
Energy Technology Data Exchange (ETDEWEB)
Sumida, S. [U-shin Ltd., Tokyo (Japan); Nagamatsu, M.; Maruyama, K. [Hokkaido Institute of Technology, Sapporo (Japan); Hiramatsu, S. [Mazda Motor Corp., Hiroshima (Japan)
1997-10-01
A new approach to modeling is put forward in order to compose the virtual prototype that is indispensable for fully computer-integrated concurrent development of automobile products. A basic concept of the hierarchical functional model is proposed as the concrete form of this new modeling technology. This model is used mainly for explaining and simulating the functions and efficiencies of both the parts and the total automobile product. All engineers engaged in the design and development of automobiles can collaborate with one another using this model. Some application examples are shown, and the usefulness of this model is demonstrated. 5 refs., 5 figs.
Parallel local approximation MCMC for expensive models
Conrad, Patrick; Davis, Andrew; Marzouk, Youssef; Pillai, Natesh; Smith, Aaron
2016-01-01
Performing Bayesian inference via Markov chain Monte Carlo (MCMC) can be exceedingly expensive when posterior evaluations invoke the evaluation of a computationally expensive model, such as a system of partial differential equations. In recent work [Conrad et al. JASA 2015, arXiv:1402.1694] we described a framework for constructing and refining local approximations of such models during an MCMC simulation. These posterior-adapted approximations harness regularity of the model to reduce the c...
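A minimal sketch of the idea, with a cheap stand-in for the expensive model and a crude nearest-neighbour average in place of the paper's local polynomial approximations (all names and settings are illustrative, not from the paper):

```python
import math
import random

random.seed(0)

def expensive_log_post(x):
    # stand-in for a costly model evaluation (e.g. a PDE solve)
    return -0.5 * x * x

cache = {}  # x -> exact log-posterior value

def surrogate(x, radius=0.5):
    # local approximation: average cached exact values near x;
    # when no cached point is close enough, refine by calling the model
    near = [v for xc, v in cache.items() if abs(xc - x) < radius]
    if not near:
        cache[x] = expensive_log_post(x)  # refinement: one expensive call
        return cache[x]
    return sum(near) / len(near)

def mcmc(n_steps, step=1.0):
    # Metropolis sampler that queries only the surrogate
    x, lp = 0.0, surrogate(0.0)
    chain = []
    for _ in range(n_steps):
        y = x + random.gauss(0.0, step)
        lq = surrogate(y)
        if random.random() < math.exp(min(0.0, lq - lp)):
            x, lp = y, lq
        chain.append(x)
    return chain

chain = mcmc(2000)
```

After a short refinement phase the cache covers the region the chain visits, so the number of expensive evaluations stays far below the number of MCMC steps.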
Hierarchical model-based predictive control of a power plant portfolio
DEFF Research Database (Denmark)
Edlund, Kristian; Bendtsen, Jan Dimon; Jørgensen, John Bagterp
2011-01-01
control” – becomes increasingly important as the ratio of renewable energy in a power system grows. As a consequence, tomorrow's “smart grids” require highly flexible and scalable control systems compared to conventional power systems. This paper proposes a hierarchical model-based predictive control...... design for power system portfolio control, which aims specifically at meeting these demands.The design involves a two-layer hierarchical structure with clearly defined interfaces that facilitate an object-oriented implementation approach. The same hierarchical structure is reflected in the underlying...
Parallel Dynamics of Continuous Hopfield Model Revisited
Mimura, Kazushi
2009-03-01
We have applied the generating functional analysis (GFA) to the continuous Hopfield model. We have also confirmed that the GFA predictions in some typical cases exhibit good consistency with computer simulation results. When a retarded self-interaction term is omitted, the GFA result becomes identical to that obtained using the statistical neurodynamics as well as the case of the sequential binary Hopfield model.
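Parallel (synchronous) dynamics of a continuous Hopfield network can be simulated directly. This stdlib-Python sketch uses Hebbian couplings and a tanh transfer function with illustrative sizes; it is a plain simulation, not the paper's generating functional analysis:

```python
import math
import random

random.seed(1)

N, P = 200, 5  # neurons, stored patterns (illustrative sizes)
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]

# Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
J = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / N
      for j in range(N)] for i in range(N)]

def parallel_step(s, gain=4.0):
    # all neurons are updated simultaneously (parallel dynamics)
    h = [sum(J[i][j] * s[j] for j in range(N)) for i in range(N)]
    return [math.tanh(gain * hi) for hi in h]

# start from a degraded version of pattern 0 and iterate
s = [0.6 * x for x in patterns[0]]
for _ in range(20):
    s = parallel_step(s)

# overlap with the stored pattern measures retrieval quality
overlap = sum(si * xi for si, xi in zip(s, patterns[0])) / N
```

Well below the storage capacity (here P/N = 0.025), the parallel dynamics drive the overlap close to 1, i.e. the pattern is retrieved.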
Towards a streaming model for nested data parallelism
DEFF Research Database (Denmark)
Madsen, Frederik Meisner; Filinski, Andrzej
2013-01-01
-flattening execution strategy, comes at the price of potentially prohibitive space usage in the common case of computations with an excess of available parallelism, such as dense-matrix multiplication. We present a simple nested data-parallel functional language and associated cost semantics that retains NESL......'s intuitive work--depth model for time complexity, but also allows highly parallel computations to be expressed in a space-efficient way, in the sense that memory usage on a single (or a few) processors is of the same order as for a sequential formulation of the algorithm, and in general scales smoothly......-processable in a streaming fashion. This semantics is directly compatible with previously proposed piecewise execution models for nested data parallelism, but allows the expected space usage to be reasoned about directly at the source-language level. The language definition and implementation are still very much work...
Optimisation of a parallel ocean general circulation model
Directory of Open Access Journals (Sweden)
M. I. Beare
Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
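The modular message-passing style described above boils down to halo (ghost-cell) exchange between subdomains. A toy 1-D diffusion example, with plain list copies standing in for the model's high-level message-passing routines, shows that the decomposed update reproduces the serial one exactly:

```python
# 1-D diffusion with fixed endpoints: serial reference update
def step(u):
    return ([u[0]]
            + [u[i] + 0.25 * (u[i - 1] - 2 * u[i] + u[i + 1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])

N = 16
u0 = [0.0] * N
u0[N // 2] = 1.0  # initial spike

serial = u0[:]
for _ in range(10):
    serial = step(serial)

# "parallel": two subdomains with one ghost cell each, exchanged per step
left, right = u0[:N // 2], u0[N // 2:]
for _ in range(10):
    gl, gr = right[0], left[-1]  # halo exchange (stand-in for send/recv)
    ext_l = left + [gl]
    ext_r = [gr] + right
    left = ([ext_l[0]]
            + [ext_l[i] + 0.25 * (ext_l[i - 1] - 2 * ext_l[i] + ext_l[i + 1])
               for i in range(1, len(ext_l) - 1)])
    right = ([ext_r[i] + 0.25 * (ext_r[i - 1] - 2 * ext_r[i] + ext_r[i + 1])
              for i in range(1, len(ext_r) - 1)]
             + [ext_r[-1]])
parallel = left + right
```

Hiding the exchange behind a routine like this is what lets the same science code run serially or across many processors.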
Hierarchical Modelling of Flood Risk for Engineering Decision Analysis
DEFF Research Database (Denmark)
Custer, Rocco
Societies around the world are faced with flood risk, prompting authorities and decision makers to manage risk to protect population and assets. With climate change, urbanisation and population growth, flood risk changes constantly, requiring flood risk management strategies that are flexible...... and robust. Traditional risk management solutions, e.g. dike construction, are not particularly flexible, as they are difficult to adapt to changing risk. Conversely, the recent concept of integrated flood risk management, entailing a combination of several structural and non-structural risk management...... measures, allows identifying flexible and robust flood risk management strategies. Based on it, this thesis investigates hierarchical flood protection systems, which encompass two, or more, hierarchically integrated flood protection structures on different spatial scales (e.g. dikes, local flood barriers...
Modeling and Control of Primary Parallel Isolated Boost Converter
DEFF Research Database (Denmark)
Mira Albert, Maria del Carmen; Hernandez Botella, Juan Carlos; Sen, Gökhan
2012-01-01
In this paper state space modeling and closed loop controlled operation have been presented for primary parallel isolated boost converter (PPIBC) topology as a battery charging unit. Parasitic resistances have been included to have an accurate dynamic model. The accuracy of the model has been tes...
Barnes, Brian; Leiter, Kenneth; Becker, Richard; Knap, Jaroslaw; Brennan, John
As part of a multiscale modeling effort, we present progress on a challenge in continuum-scale modeling: the direct incorporation of complex molecular-level processes in the constitutive evaluation. In this initial phase of the research we use a concurrent scale-bridging approach, with a hierarchical multiscale framework running in parallel to couple a particle-based model (the ''lower scale'') computing the equation of state (EOS) to the constitutive response in a finite-element multi-physics simulation (the ''upper scale''). The lower scale simulations of 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) use a force-matched coarse-grain model and dissipative particle dynamics methods, and the upper scale simulation is of a Taylor anvil impact experiment. Results emphasize use of adaptive sampling (via dynamic kriging) that accelerates time to solution, and its comparison to fully ''on the fly'' runs. Work towards inclusion of a fully reactive EOS is also discussed.
Modeling place field activity with hierarchical slow feature analysis
Directory of Open Access Journals (Sweden)
Fabian eSchoenfeld
2015-05-01
Full Text Available In this paper we present six experimental studies from the literature on hippocampal place cells and replicate their main results in a computational framework based on the principle of slowness. Each of the chosen studies first allows rodents to develop stable place field activity and then examines a distinct property of the established spatial encoding, namely adaptation to cue relocation and removal; directional firing activity in the linear track and open field; and the results of morphing and stretching the overall environment. To replicate these studies we employ a hierarchical Slow Feature Analysis (SFA) network. SFA is an unsupervised learning algorithm that extracts slowly varying information from a given stream of data, and hierarchical application of SFA allows high-dimensional input such as visual images to be processed efficiently and in a biologically plausible fashion. Training data for the network are produced in ratlab, a free basic graphics engine designed to quickly set up a wide range of 3D environments mimicking real-life experimental studies, to simulate a foraging rodent while recording its visual input, and to train and sample a hierarchical SFA network.
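The slowness principle at the heart of SFA can be illustrated with its objective alone: among candidate signals, prefer the one whose normalized mean squared temporal derivative is smallest. This is a toy stand-in for the full SFA generalized eigenproblem, with invented signals:

```python
import math

T = 1000
slow = [math.sin(2 * math.pi * t / T) for t in range(T)]       # slowly varying
fast = [math.sin(2 * math.pi * 50 * t / T) for t in range(T)]  # quickly varying

def slowness(y):
    # SFA objective: mean squared temporal derivative, normalized by variance
    mean = sum(y) / len(y)
    var = sum((v - mean) ** 2 for v in y) / len(y)
    diff = sum((y[t + 1] - y[t]) ** 2 for t in range(len(y) - 1)) / (len(y) - 1)
    return diff / var

features = {"slow": slow, "fast": fast}
best = min(features, key=lambda k: slowness(features[k]))  # slower feature wins
```

In the full algorithm, SFA searches over linear combinations of (expanded) inputs for the minimizer of this objective, and stacking such stages hierarchically lets raw visual input be reduced to slowly varying quantities such as position.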
New aerial survey and hierarchical model to estimate manatee abundance
Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.
2011-01-01
Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability
Chulkov Vitaliy Olegovich; Rakhmonov Emomali Karimovich; Kas'yanov Vitaliy Fedorovich; Gusakova Elena Aleksandrovna
2012-01-01
This article deals with the infographic modeling of hierarchical management systems exposed to innovative conflicts. The authors analyze the facts that serve as conflict drivers in the construction management environment. The reasons for innovative conflicts include changes in hierarchical structures of management systems, adjustment of workers to new management conditions, changes in the ideology, etc. Conflicts under consideration may involve contradictions between requests placed by custom...
Hierarchical hybrid testability modeling and evaluation method based on information fusion
Institute of Scientific and Technical Information of China (English)
Xishan Zhang; Kaoli Huang; Pengcheng Yan; Guangyao Lian
2015-01-01
In order to meet the demand of testability analysis and evaluation for complex equipment under a small sample test in the equipment life cycle, the hierarchical hybrid testability modeling and evaluation method (HHTME), which combines the testability structure model (TSM) with the testability Bayesian networks model (TBNM), is presented. Firstly, the testability network topology of complex equipment is built by using the hierarchical hybrid testability modeling method. Secondly, the prior conditional probability distribution between network nodes is determined through expert experience. Then the Bayesian method is used to update the conditional probability distribution, according to history test information, virtual simulation information and similar product information. Finally, the learned hierarchical hybrid testability model (HHTM) is used to estimate the testability of equipment. Compared with the results of other modeling methods, the relative deviation of the HHTM is only 0.52%, and the evaluation result is the most accurate.
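The Bayesian updating step can be illustrated with the simplest conjugate case: a Beta prior on a fault-detection probability, elicited from expert experience, updated with pass/fail test counts. All numbers here are hypothetical, and the paper's network-structured model is far richer than this single-parameter sketch:

```python
# Beta prior on fault-detection probability, elicited from expert experience
a, b = 8.0, 2.0              # prior mean a/(a+b) = 0.8 (hypothetical)
detected, missed = 18, 2     # hypothetical outcomes from history + simulation

# conjugate update: posterior is Beta(a + detected, b + missed)
a_post, b_post = a + detected, b + missed
post_mean = a_post / (a_post + b_post)
```

This is how small-sample test data and expert priors are blended: the posterior mean moves from the expert's 0.8 toward the observed detection rate as evidence accumulates.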
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model
Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna
2017-06-01
Evaluation of software quality is an important aspect of controlling and managing software. Through such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. It is also very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term 'usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on a detailed comparison of the proposed model with existing usability models.
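A hierarchical usability score of this kind reduces, in its crisp (non-fuzzy) form, to weighted aggregation of factor scores up the tree. The factor names, weights and scores below are invented for illustration, not taken from the paper's taxonomy:

```python
# hypothetical two-level usability hierarchy:
# top-level factor -> (weight, {sub-attribute -> (weight, score in [0, 1])})
hierarchy = {
    "effectiveness": (0.4, {"task_completion": (0.7, 0.9),
                            "error_tolerance": (0.3, 0.8)}),
    "satisfaction":  (0.6, {"comfort": (0.5, 0.7),
                            "trust":   (0.5, 0.6)}),
}

def usability(h):
    # aggregate leaf scores up the hierarchy by weighted averaging
    return sum(w * sum(cw * score for cw, score in children.values())
               for w, children in h.values())

u = usability(hierarchy)  # overall usability in [0, 1]
```

The fuzzy version replaces the crisp scores and weighted sums with membership functions and fuzzy inference, but the hierarchical roll-up structure is the same.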
von Davier, Matthias; Haberman, Shelby J
2014-04-01
This commentary addresses the modeling and final analytical path taken, as well as the terminology used, in the paper "Hierarchical diagnostic classification models: a family of models for estimating and testing attribute hierarchies" by Templin and Bradshaw (Psychometrika, doi: 10.1007/s11336-013-9362-0, 2013). It raises several issues concerning use of cognitive diagnostic models that either assume attribute hierarchies or assume a certain form of attribute interactions. The issues raised are illustrated with examples, and references are provided for further examination.
Modelling parallel programs and multiprocessor architectures with AXE
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.
Parallelizing the Cellular Potts Model on graphics processing units
Tapia, José Juan; D'Souza, Roshan M.
2011-04-01
The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ~80× faster than serial implementations, and ~5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
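The checkerboard idea generalizes beyond GPUs: sites of one parity have all their neighbours in the other parity, so all same-colour updates are mutually independent and could run concurrently. A serial stdlib-Python sketch of a two-type lattice with an adhesion-like boundary energy (a much-simplified stand-in for the full CPM Hamiltonian):

```python
import math
import random

random.seed(2)

L = 16  # lattice size (illustrative)
grid = [[random.choice([0, 1]) for _ in range(L)] for _ in range(L)]

def boundary_energy(g):
    # adhesion-like term: count unlike nearest-neighbour pairs (periodic)
    return sum((g[i][j] != g[i][(j + 1) % L]) + (g[i][j] != g[(i + 1) % L][j])
               for i in range(L) for j in range(L))

def sweep(g, color, T=0.5):
    # update only sites with (i + j) % 2 == color; their neighbours all lie
    # on the other sublattice, so these updates are mutually independent
    for i in range(L):
        for j in range(L):
            if (i + j) % 2 != color:
                continue
            old, new = g[i][j], 1 - g[i][j]
            nbrs = [g[(i - 1) % L][j], g[(i + 1) % L][j],
                    g[i][(j - 1) % L], g[i][(j + 1) % L]]
            dE = sum(new != n for n in nbrs) - sum(old != n for n in nbrs)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                g[i][j] = new

e0 = boundary_energy(grid)
for s in range(50):
    sweep(grid, s % 2)  # alternate checkerboard colours
e1 = boundary_energy(grid)
```

Alternating colours preserves detailed-balance-style consistency of each half-sweep while exposing L²/2 conflict-free updates at a time; the paper adds atomic locks because real CPM flips touch multi-site cell state.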
Sun, Kaiqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.
2014-03-01
This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.
Towards an Accurate Performance Modeling of Parallel Sparse Factorization
Energy Technology Data Exchange (ETDEWEB)
Grigori, Laura; Li, Xiaoye S.
2006-05-26
We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, the sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study performance of other types of sparse factorizations, such as Cholesky or QR.
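At its core, a model of this type predicts runtime as computation plus communication terms. A deliberately simplified sketch with invented machine parameters (the actual model is far more detailed, accounting for memory-system behaviour):

```python
def predicted_time(flops, msgs, volume, flop_rate, latency, bandwidth):
    # T = computation + message startups + data transfer
    return flops / flop_rate + msgs * latency + volume / bandwidth

# hypothetical factorization characteristics and machine parameters
t = predicted_time(flops=2e9,       # floating-point operations
                   msgs=1e4,        # number of messages
                   volume=4e8,      # bytes communicated
                   flop_rate=1e9,   # flop/s per processor
                   latency=1e-5,    # seconds per message
                   bandwidth=1e8)   # bytes/s
# 2.0 s compute + 0.1 s message startup + 4.0 s transfer
```

Decomposing predicted time this way makes it possible to attribute measured slowdowns to processor speed, latency, or bandwidth, which is how such models guide optimization.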
Advances in parallel computer technology for desktop atmospheric dispersion models
Energy Technology Data Exchange (ETDEWEB)
Bian, X.; Ionescu-Niscov, S.; Fast, J.D. [Pacific Northwest National Lab., Richland, WA (United States); Allwine, K.J. [Allwine Environmental Serv., Richland, WA (United States)
1996-12-31
Desktop models are those models used by analysts with varied backgrounds, for performing, for example, air quality assessment and emergency response activities. These models must be robust, well documented, have minimal and well controlled user inputs, and have clear outputs. Existing coarse-grained parallel computers can provide significant increases in computation speed in desktop atmospheric dispersion modeling without considerable increases in hardware cost. This increased speed will allow for significant improvements to be made in the scientific foundations of these applied models, in the form of more advanced diffusion schemes and better representation of the wind and turbulence fields. This is especially attractive for emergency response applications where speed and accuracy are of utmost importance. This paper describes one particular application of coarse-grained parallel computer technology to a desktop complex terrain atmospheric dispersion modeling system. By comparing performance characteristics of the coarse-grained parallel version of the model with the single-processor version, we will demonstrate that applying coarse-grained parallel computer technology to desktop atmospheric dispersion modeling systems will allow us to address critical issues facing future requirements of this class of dispersion models.
Royle, J. Andrew; Dorazio, Robert M.
2008-01-01
A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
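The occupancy-model idea listed above, separating the ecological state (a site occupied or not) from the observation process (imperfect detection), can be shown in a toy simulation. The correction below is a simplification of the likelihood-based estimators the book develops, with invented parameter values:

```python
import random

random.seed(3)

psi, p = 0.6, 0.4        # true occupancy and per-visit detection (illustrative)
sites, visits = 2000, 5

detections = 0
for _ in range(sites):
    occupied = random.random() < psi
    # a site is recorded only if occupied AND detected on at least one visit
    seen = occupied and any(random.random() < p for _ in range(visits))
    detections += seen

naive = detections / sites            # ignores imperfect detection
p_star = 1 - (1 - p) ** visits        # P(detected at least once | occupied)
corrected = naive / p_star            # simple detection-corrected estimate
```

The naive proportion systematically underestimates occupancy; modeling the observation process recovers an estimate near the true 0.6, which is the central point of hierarchical models combining state and observation components.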
Use of hierarchical models to analyze European trends in congenital anomaly prevalence.
Cavadino, Alana; Prieto-Merino, David; Addor, Marie-Claude; Arriola, Larraitz; Bianchi, Fabrizio; Draper, Elizabeth; Garne, Ester; Greenlees, Ruth; Haeusler, Martin; Khoshnood, Babak; Kurinczuk, Jenny; McDonnell, Bob; Nelen, Vera; O'Mahony, Mary; Randrianaivo, Hanitra; Rankin, Judith; Rissmann, Anke; Tucker, David; Verellen-Dumoulin, Christine; de Walle, Hermien; Wellesley, Diana; Morris, Joan K
2016-06-01
Surveillance of congenital anomalies is important to identify potential teratogens. Despite known associations between different anomalies, current surveillance methods examine trends within each subgroup separately. We aimed to evaluate whether hierarchical statistical methods that combine information from several subgroups simultaneously would enhance current surveillance methods, using data collected by EUROCAT, a European network of population-based congenital anomaly registries. Ten-year trends (2003 to 2012) in 18 EUROCAT registries over 11 countries were analyzed for the following groups of anomalies: neural tube defects, congenital heart defects, digestive system, and chromosomal anomalies. Hierarchical Poisson regression models that combined related subgroups together according to EUROCAT's hierarchy of subgroup coding were applied. Results from hierarchical models were compared with those from Poisson models that consider each congenital anomaly separately. Hierarchical models gave similar results to those obtained when considering each anomaly subgroup in a separate analysis. Hierarchical models that included only around three subgroups showed poor convergence and were generally found to be over-parameterized. Larger sets of anomaly subgroups were found to be too heterogeneous to group together in this way. There were no substantial differences between independent analyses of each subgroup and hierarchical models when using the EUROCAT anomaly subgroups. Considering each anomaly separately, therefore, remains an appropriate method for the detection of potential changes in prevalence by surveillance systems. Hierarchical models do, however, remain an interesting alternative method of analysis when considering the risks of specific exposures in relation to the prevalence of congenital anomalies, which could be investigated in other studies. Birth Defects Research (Part A) 106:480-10, 2016. © 2016 Wiley Periodicals, Inc.
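Per-subgroup trend surveillance of this kind amounts to estimating a log-linear slope in yearly counts. The counts below are invented for illustration, and the least-squares fit on log counts is a crude stand-in for the Poisson regression the study uses:

```python
import math

# hypothetical yearly counts of one anomaly subgroup over ten years
counts = [30, 32, 29, 35, 36, 38, 37, 41, 40, 44]
years = list(range(len(counts)))

# crude log-linear trend: least-squares slope through log counts
logs = [math.log(c) for c in counts]
xbar = sum(years) / len(years)
ybar = sum(logs) / len(logs)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, logs))
         / sum((x - xbar) ** 2 for x in years))
annual_change = math.exp(slope) - 1  # approximate proportional change per year
```

A hierarchical version would shrink such subgroup slopes toward a shared group-level trend; the study found this made little difference for the EUROCAT subgroups.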
A Bayesian hierarchical diffusion model decomposition of performance in Approach-Avoidance Tasks.
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations, such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach-Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental datasets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight into the latent psychological processes of interest.
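A single diffusion-model trial can be simulated by accumulating noisy evidence to a bound, with upper-bound hits playing the role of one response (e.g. approach) and lower-bound hits the other. The parameters are illustrative, and this is a forward-simulation sketch, not the Bayesian hierarchical estimation itself:

```python
import random

random.seed(4)

def diffusion_trial(drift, threshold=1.0, dt=0.001, noise=1.0):
    # accumulate noisy evidence until an upper or lower bound is crossed
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return x >= threshold, t  # (upper bound hit?, decision time)

trials = [diffusion_trial(drift=1.5) for _ in range(500)]
accuracy = sum(hit for hit, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

Drift rate, threshold and noise jointly shape the accuracy and RT distributions; hierarchical Bayesian fitting inverts this mapping, estimating these latent parameters per participant while sharing information across the group.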
Performance of Air Pollution Models on Massively Parallel Computers
DEFF Research Database (Denmark)
Brown, John; Hansen, Per Christian; Wasniewski, Jerzy
1996-01-01
To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...
Term Structure Models with Parallel and Proportional Shifts
DEFF Research Database (Denmark)
Armerin, Frederik; Björk, Tomas; Astrup Jensen, Bjarne
...this general framework we show that there does indeed exist a large variety of nontrivial parallel shift term structure models, and we also describe these in detail. We also show that there exists no nontrivial flat term structure model. The same analysis is repeated for the similar case, where the yield curve...
Modeling groundwater flow on massively parallel computers
Energy Technology Data Exchange (ETDEWEB)
Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.
1994-12-31
The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They will also demonstrate the code's scalability.
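The simplest of the preconditioners listed above, diagonal (Jacobi) scaling, can be sketched in a few lines; the test matrix below is a generic 1-D Laplacian stand-in for the flow system, not the ParFlow-style discretization itself.

```python
# Conjugate gradients with diagonal (Jacobi) preconditioning for a
# symmetric positive definite system, as in saturated-flow solves.
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, maxit=500):
    M_inv = 1.0 / np.diag(A)            # the "simple diagonal scaling" preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD tridiagonal test problem (1-D Laplacian), standing in for the flow matrix
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg_jacobi(A, b)
```

Diagonal scaling costs almost nothing per iteration and parallelizes trivially, which is why it appears alongside the heavier Chebyshev and multigrid options.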
Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees
Energy Technology Data Exchange (ETDEWEB)
Bremer, P; Pascucci, V; Hamann, B
2008-12-08
We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported. In particular, it is guaranteed to be of logarithmic height.
Energy Technology Data Exchange (ETDEWEB)
Korn, E L
1978-08-01
This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belongs to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.
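The preservation of independence under misclassification can be checked numerically: if the true I x J probability table is rank one (independent) and rows and columns are misclassified independently of each other, the observed table is still rank one. The marginals and misclassification matrices below are arbitrary illustrative examples, not taken from the thesis.

```python
# If P = r c^T (independence) and misclassification acts as Q = R^T P C
# with row-stochastic R (rows) and C (columns), then
# Q = (R^T r)(C^T c)^T, i.e. the observed table is still independent.
import numpy as np

row_p = np.array([0.2, 0.5, 0.3])        # true row marginals (illustrative)
col_p = np.array([0.6, 0.4])             # true column marginals (illustrative)
P = np.outer(row_p, col_p)               # independent true table

R = np.array([[0.90, 0.05, 0.05],        # R[i, i'] = P(observe row i' | true row i)
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
C = np.array([[0.95, 0.05],              # C[j, j'] = P(observe col j' | true col j)
              [0.20, 0.80]])

Q = R.T @ P @ C                           # observed (misclassified) table
```

The observed table still factors as the outer product of its own marginals, so a chi-square test of independence retains its nominal level for this model, matching the "preserved by misclassification" criterion described above.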
Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)
DEFF Research Database (Denmark)
Stahlhut, Carsten; Mørup, Morten; Winther, Ole;
2009-01-01
In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and electrode positions. We first present a hierarchical Bayesian framework for EEG source localization that jointly performs source and forward model reconstruction (SOFOMORE). Secondly, we evaluate the SOFOMORE model by comparison with source reconstruction methods that use fixed forward models. Simulated and real EEG data demonstrate that invoking a stochastic forward model leads to improved source estimates.
Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.
Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana
2016-01-01
The omnipresent need for optimisation requires constant improvement of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually done by simulating the newly developed BP under various initial conditions and "what-if" scenarios. An effectual business process simulation software (BPSS) tool is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics, employing DEX and the qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.
Hierarchical ensemble of background models for PTZ-based video surveillance.
Liu, Ning; Wu, Hefeng; Lin, Liang
2015-01-01
In this paper, we study a novel hierarchical background model for intelligent video surveillance with the pan-tilt-zoom (PTZ) camera, and give rise to an integrated system consisting of three key components: background modeling, observed frame registration, and object tracking. First, we build the hierarchical background model by separating the full range of continuous focal lengths of a PTZ camera into several discrete levels and then partitioning the wide scene at each level into many partial fixed scenes. In this way, the wide scenes captured by a PTZ camera through rotation and zoom are represented by a hierarchical collection of partial fixed scenes. A new robust feature is presented for background modeling of each partial scene. Second, we locate the partial scenes corresponding to the observed frame in the hierarchical background model. Frame registration is then achieved by feature descriptor matching via fast approximate nearest neighbor search. Afterwards, foreground objects can be detected using background subtraction. Last, we configure the hierarchical background model into a framework to facilitate existing object tracking algorithms under the PTZ camera. Foreground extraction is used to assist tracking an object of interest. The tracking outputs are fed back to the PTZ controller for adjusting the camera properly so as to maintain the tracked object in the image plane. We apply our system on several challenging scenarios and achieve promising results.
Vectorial Preisach-type model designed for parallel computing
Energy Technology Data Exchange (ETDEWEB)
Stancu, Alexandru [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania)]. E-mail: alstancu@uaic.ro; Stoleriu, Laurentiu [Department of Solid State and Theoretical Physics, Al. I. Cuza University, Blvd. Carol I, 11, 700506 Iasi (Romania); Andrei, Petru [Electrical and Computer Engineering, Florida State University, Tallahassee, FL (United States); Electrical and Computer Engineering, Florida A and M University, Tallahassee, FL (United States)
2007-09-15
Most of the hysteresis phenomenological models are scalar, while all the magnetization processes are vectorial. The vector models, phenomenological or micromagnetic (physical), are time consuming and sometimes difficult to implement. In this paper, we introduce a new vector Preisach-type model that uses micromagnetic results to simulate the magnetic response of a system of several tens of thousands of pseudo-particles. The model has a modular structure that allows easy implementation for parallel computing.
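The data-parallel structure of a Preisach-type model can be sketched with a scalar (not vectorial) version; the threshold distribution and field sweep below are illustrative, not the paper's micromagnetically derived ones. Each "hysteron" is a rectangular loop that switches up at field >= alpha and down at field <= beta, and the whole population updates in one vectorized step.

```python
# Scalar Preisach sketch: 10,000 hysterons updated in bulk, mirroring the
# per-pseudo-particle parallelism described above (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n = 10000
alpha = rng.uniform(0.0, 1.0, n)          # up-switching thresholds
beta = alpha - rng.uniform(0.0, 1.0, n)   # down-switching thresholds, beta <= alpha
state = -np.ones(n)                       # start from negative saturation

def apply_field(h):
    """One data-parallel update of all hysterons for applied field h."""
    global state
    state = np.where(h >= alpha, 1.0, np.where(h <= beta, -1.0, state))
    return state.mean()                   # normalized magnetization

m_up = [apply_field(h) for h in np.linspace(-2, 2, 21)]    # ascending branch
m_down = [apply_field(h) for h in np.linspace(2, -2, 21)]  # descending branch
```

Because every hysteron's update depends only on the applied field and its own state, the loop over hysterons maps directly onto SIMD or multi-node parallelism.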
A hybrid parallel framework for the cellular Potts model simulations
Energy Technology Data Exchange (ETDEWEB)
Jiang, Yi [Los Alamos National Laboratory; He, Kejing [SOUTH CHINA UNIV; Dong, Shoubin [SOUTH CHINA UNIV
2009-01-01
The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large-scale, complex 3D simulation. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are more and more common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
Badlands: A parallel basin and landscape dynamics model
Directory of Open Access Journals (Sweden)
T. Salles
2016-01-01
Over more than three decades, a number of numerical landscape evolution models (LEMs) have been developed to study the combined effects of climate, sea level, tectonics and sediments on Earth surface dynamics. Most of them are written in efficient programming languages, but often cannot be used on parallel architectures. Here, I present a LEM which ports a common core of accepted physical principles governing landscape evolution into a distributed-memory parallel environment. Badlands (an acronym for BAsin anD LANdscape DynamicS) is an open-source, flexible, TIN-based landscape evolution model, built to simulate topography development at various space and time scales.
Genetic Algorithm Modeling with GPU Parallel Computing Technology
Cavuoti, Stefano; Brescia, Massimo; Pescapé, Antonio; Longo, Giuseppe; Ventre, Giorgio
2012-01-01
We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from a multi-core CPU serial implementation, named GAME, already successfully tested and validated on massive astrophysical data classification problems through a web application resource (DAMEWARE) specialized in data mining based on Machine Learning paradigms. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm has enabled exploitation of the internal training features of the model, permitting a strong optimization in terms of processing performance and scalability.
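The "inherently parallel" character of a GA is easiest to see in the fitness step: evaluating the whole population is one bulk array operation, exactly the shape of work a GPU accelerates. The sketch below is a generic vectorized GA on a toy objective, not GAME's operators or parameters.

```python
# Toy elitist GA with fully vectorized fitness evaluation (GPU-friendly
# pattern; the objective, operators, and settings are illustrative).
import numpy as np

rng = np.random.default_rng(42)

def fitness(pop):
    return -np.sum(pop ** 2, axis=1)     # maximize -||x||^2, optimum at 0

def evolve(pop, generations=60, elite=10, sigma=0.1):
    for _ in range(generations):
        order = np.argsort(fitness(pop))[::-1]          # best first
        parents = pop[order[:elite]]                    # elites kept unmutated
        children = parents[rng.integers(0, elite, len(pop) - elite)]
        children = children + rng.normal(0.0, sigma, children.shape)
        pop = np.vstack([parents, children])
    return pop

pop0 = rng.uniform(-5, 5, size=(200, 3))
best0 = fitness(pop0).max()
best = fitness(evolve(pop0)).max()
```

On a GPU the same structure applies: selection and mutation are elementwise kernels, and the per-generation fitness call is the batched workload.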
Inverse kinematics model of parallel macro-micro manipulator system
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
An improved design, which employs the integration of optic, mechanical and electronic technologies for the next generation large radio telescope, is presented in this note. The authors propose the concept of a parallel macro-micro manipulator system for the feed support structure, with a rough tuning subsystem based on a cable structure and a fine tuning subsystem based on the Stewart platform. According to the requirements of astronomical observation, the inverse kinematics model of this parallel macro-micro manipulator system is deduced. This inverse kinematics model is necessary for the computer-controlled motion of the feed.
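The fine-tuning stage is a Stewart platform, whose inverse kinematics has a closed form: for a platform pose (translation p, rotation R), each leg length is the distance from its base anchor a_i to the moved platform anchor p + R b_i. The hexagonal anchor layout below is illustrative, not the telescope's actual geometry.

```python
# Stewart platform inverse kinematics: leg lengths from pose (p, R).
# Anchor coordinates are an illustrative symmetric layout.
import numpy as np

def leg_lengths(p, R, base_anchors, plat_anchors):
    """l_i = || p + R b_i - a_i || for each of the six legs."""
    moved = (R @ plat_anchors.T).T + p        # platform anchors in base frame
    return np.linalg.norm(moved - base_anchors, axis=1)

ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
ang_p = ang_b + np.deg2rad(30)                # platform anchors rotated 30 deg
base = np.c_[2 * np.cos(ang_b), 2 * np.sin(ang_b), np.zeros(6)]
plat = np.c_[np.cos(ang_p), np.sin(ang_p), np.zeros(6)]

p = np.array([0.0, 0.0, 1.5])                 # platform raised 1.5 units
R = np.eye(3)                                 # no rotation
lengths = leg_lengths(p, R, base, plat)
```

Because the mapping from pose to leg lengths is direct (no iteration), inverse kinematics is the easy direction for a Stewart platform, which is what makes it attractive for the computer-controlled fine tuning described above.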
DEFF Research Database (Denmark)
Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi;
2013-01-01
Power electronics based MicroGrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of parallel connected three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models of the VSIs are based on the stationary reference frame. A hierarchical control scheme for the paralleled VSI system is developed comprising two levels. The primary control includes the droop method and the virtual impedance loops, in order to share active and reactive power. The secondary control restores the frequency and amplitude deviations produced by the primary control. Also, a synchronization algorithm is presented in order to connect the MicroGrid to the grid. Experimental results are provided to validate the performance and robustness of the parallel VSI system control...
A Hierarchical Linear Model with Factor Analysis Structure at Level 2
Miyazaki, Yasuo; Frank, Kenneth A.
2006-01-01
In this article the authors develop a model that employs a factor analysis structure at Level 2 of a two-level hierarchical linear model (HLM). The model (HLM2F) imposes a structure on a deficient rank Level 2 covariance matrix [tau], and facilitates estimation of a relatively large [tau] matrix. Maximum likelihood estimators are derived via the…
DEFF Research Database (Denmark)
Mantzouni, Irene; Sørensen, Helle; O'Hara, Robert B.;
2010-01-01
...and Beverton and Holt stock–recruitment (SR) models were extended by applying hierarchical methods, mixed-effects models, and Bayesian inference to incorporate the influence of these ecosystem factors on model parameters representing cod maximum reproductive rate and carrying capacity. We identified...
Advanced parallel programming models research and development opportunities.
Energy Technology Data Exchange (ETDEWEB)
Wen, Zhaofang.; Brightwell, Ronald Brian
2004-07-01
There is currently a large research and development effort within the high-performance computing community on advanced parallel programming models. This research can potentially have an impact on parallel applications, system software, and computing architectures in the next several years. Given Sandia's expertise and unique perspective in these areas, particularly on very large-scale systems, there are many areas in which Sandia can contribute to this effort. This technical report provides a survey of past and present parallel programming model research projects and provides a detailed description of the Partitioned Global Address Space (PGAS) programming model. The PGAS model may offer several improvements over the traditional distributed memory message passing model, which is the dominant model currently being used at Sandia. This technical report discusses these potential benefits and outlines specific areas where Sandia's expertise could contribute to current research activities. In particular, we describe several projects in the areas of high-performance networking, operating systems and parallel runtime systems, compilers, application development, and performance evaluation.
Tuer, Adam E; Akens, Margarete K; Krouglov, Serguei; Sandkuijl, Daaf; Wilson, Brian C; Whyne, Cari M; Barzda, Virginijus
2012-11-21
The second-order nonlinear polarization properties of fibrillar collagen in various rat tissues (vertebrae, tibia, tail tendon, dermis, and cornea) are investigated with polarization-dependent second-harmonic generation (P-SHG) microscopy. Three parameters are extracted: the second-order susceptibility ratio, R; a measure of the fibril distribution asymmetry, |A|; and the weighted-average fibril orientation. A hierarchical organizational model of fibrillar collagen is developed to interpret the second-harmonic generation polarization properties. Highlights of the model include: collagen type (e.g., type-I, type-II), fibril internal structure (e.g., straight, constant-tilt), and fibril architecture (e.g., parallel fibers, intertwined, lamellae). Quantifiable differences in internal structure and architecture of the fibrils are observed. Occurrence histograms of R and |A| distinguished parallel from nonparallel fibril distributions. Parallel distributions possessed low parameter values and variability, whereas nonparallel distributions displayed an increase in values and variability. From the P-SHG parameters of vertebrae tissue, a three-dimensional reconstruction of lamellae of intervertebral disk is presented.
Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C
2015-04-01
Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data.
Robust Real-Time Music Transcription with a Compositional Hierarchical Model
Pesek, Matevž; Leonardis, Aleš; Marolt, Matija
2017-01-01
The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data; transparency, which enables insights into the learned representation; and robustness and speed, which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers, with statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance on multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks. PMID:28046074
Financial Data Modeling by Using Asynchronous Parallel Evolutionary Algorithms
Institute of Scientific and Technical Information of China (English)
Wang Chun; Li Qiao-yun
2003-01-01
In this paper, high-level knowledge of financial data, modeled by ordinary differential equations (ODEs), is discovered in dynamic data by using an asynchronous parallel hierarchical evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in dynamic data by computer can be used to predict financial trends.
Nimon, Kim
2012-01-01
Using state achievement data that are openly accessible, this paper demonstrates the application of hierarchical linear modeling within the context of career technical education research. Three prominent approaches to analyzing clustered data (i.e., modeling aggregated data, modeling disaggregated data, modeling hierarchical data) are discussed…
Vathsangam, Harshvardhan; Emken, B Adar; Schroeder, E Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S
2013-12-01
Walking is a commonly available activity for maintaining a healthy lifestyle. Accurately tracking and measuring calories expended during walking can improve user feedback and intervention measures. Inertial sensors are a promising measurement tool to achieve this purpose. An important aspect in mapping inertial sensor data to energy expenditure is the question of normalizing across physiological parameters. Common approaches such as weight scaling require validation for each new population. An alternative is to use a hierarchical approach to model subject-specific parameters at one level and cross-subject parameters connected by physiological variables at a higher level. In this paper, we evaluate an inertial sensor-based hierarchical model to measure energy expenditure across a target population. We first determine the optimal movement and physiological feature set to represent the data, finding periodicity-based features to be the more accurate. We then compare the hierarchical model with a subject-specific regression model and with weight-exponent-scaled models. Subject-specific models perform significantly better than the scaled models at all exponent scales, whereas the hierarchical model performed worse than both. However, using an informed prior from the hierarchical model produces errors similar to those of a subject-specific model trained on large amounts of data. Hierarchical modeling is therefore a promising technique for generalized energy expenditure prediction across a target population in a clinical setting.
User Demand Aware Grid Scheduling Model with Hierarchical Load Balancing
Directory of Open Access Journals (Sweden)
P. Suresh
2013-01-01
Grid computing is a collection of computational and data resources, providing the means to support both computation-intensive and data-intensive applications. In order to improve overall performance and efficient utilization of the resources, an efficient load-balanced scheduling algorithm has to be implemented. The scheduling approach also needs to consider user demand to improve user satisfaction. This paper proposes a dynamic hierarchical load-balancing approach which considers the load of each resource and performs load balancing. It minimizes the response time of the jobs and improves the utilization of the resources in a grid environment. By considering the user demand of the jobs, the scheduling algorithm also improves user satisfaction. The experimental results show the improvement of the proposed load-balancing method.
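The two-level idea can be sketched as follows (a hedged toy version, not the paper's algorithm: user demand is reduced to a job "size", and cluster shapes and job sizes are invented): a top-level balancer dispatches each job to the least-loaded cluster, and that cluster in turn dispatches to its least-loaded resource.

```python
# Two-level (hierarchical) load balancing sketch with illustrative inputs.
import heapq

def hierarchical_schedule(jobs, clusters):
    """jobs: job sizes; clusters: resource counts per cluster.
    Returns per-cluster lists of per-resource loads."""
    loads = [[0.0] * n for n in clusters]
    cluster_heap = [(0.0, c) for c in range(len(clusters))]   # (total load, id)
    heapq.heapify(cluster_heap)
    for size in sorted(jobs, reverse=True):                   # largest jobs first
        total, c = heapq.heappop(cluster_heap)                # level 1: cluster
        r = min(range(len(loads[c])), key=loads[c].__getitem__)  # level 2: resource
        loads[c][r] += size
        heapq.heappush(cluster_heap, (total + size, c))
    return loads

loads = hierarchical_schedule([5, 3, 8, 1, 7, 2, 4, 6], clusters=[2, 3])
```

Keeping only aggregate load per cluster at the top level is what makes the scheme hierarchical: the global balancer never needs per-resource state, so each level's bookkeeping stays small.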
Parallel finite element modeling of earthquake ground response and liquefaction
Institute of Scientific and Technical Information of China (English)
Jinchi Lu(陆金池); Jun Peng(彭军); Ahmed Elgamal; Zhaohui Yang(杨朝晖); Kincho H. Law
2004-01-01
Parallel computing is a promising approach to alleviate the computational demand in conducting large-scale finite element analyses. This paper presents a numerical modeling approach for earthquake ground response and liquefaction using the parallel nonlinear finite element program, ParCYCLIC, designed for distributed-memory message-passing parallel computer systems. In ParCYCLIC, finite elements are employed within an incremental plasticity, coupled solid-fluid formulation. A constitutive model calibrated by physical tests represents the salient characteristics of sand liquefaction and associated accumulation of shear deformations. Key elements of the computational strategy employed in ParCYCLIC include the development of a parallel sparse direct solver, the deployment of an automatic domain decomposer, and the use of the Multilevel Nested Dissection algorithm for ordering of the finite element nodes. Simulation results of centrifuge test models using ParCYCLIC are presented. Performance results from grid models and geotechnical simulations show that ParCYCLIC is efficiently scalable to a large number of processors.
The Extended Parallel Process Model: Illuminating the Gaps in Research
Popova, Lucy
2012-01-01
This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…
Postscript: Parallel Distributed Processing in Localist Models without Thresholds
Plaut, David C.; McClelland, James L.
2010-01-01
The current authors reply to a response by Bowers on a comment by the current authors on the original article. Bowers (2010) mischaracterizes the goals of parallel distributed processing (PDP) research: explaining performance on cognitive tasks is the primary motivation. More important, his claim that localist models, such as the interactive…
Methods and models for the construction of weakly parallel tests
Adema, Jos J.
1992-01-01
Several methods are proposed for the construction of weakly parallel tests [i.e., tests with the same test information function (TIF)]: a mathematical programming model that constructs tests containing a prespecified TIF and a heuristic that assigns items to tests with information functions that are...
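The greedy flavor of such a heuristic can be sketched as follows (an illustrative stand-in, not Adema's exact algorithm; the item information values are invented, taken at a single reference ability point rather than as full functions): each item goes to whichever form currently has the smallest accumulated information, yielding forms with near-equal TIFs.

```python
# Greedy assignment of items to forms so that total information balances
# (toy single-theta version of weakly parallel test assembly).
def assemble_parallel_forms(item_infos, n_forms=2):
    forms = [[] for _ in range(n_forms)]
    totals = [0.0] * n_forms
    for idx, info in sorted(enumerate(item_infos), key=lambda t: -t[1]):
        k = totals.index(min(totals))     # most information-hungry form
        forms[k].append(idx)
        totals[k] += info
    return forms, totals

infos = [0.9, 0.8, 0.75, 0.7, 0.6, 0.55, 0.5, 0.4]   # illustrative item informations
forms, totals = assemble_parallel_forms(infos)
```

A real implementation would balance the information functions at several ability points simultaneously, which is where the mathematical programming formulation comes in.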
Modeling and optimization of parallel and distributed embedded systems
Munir, Arslan; Ranka, Sanjay
2016-01-01
This book introduces the state-of-the-art in research in parallel and distributed embedded systems, which have been enabled by developments in silicon technology, micro-electro-mechanical systems (MEMS), wireless communications, computer networking, and digital electronics. These systems have diverse applications in domains including military and defense, medical, automotive, and unmanned autonomous vehicles. The emphasis of the book is on the modeling and optimization of emerging parallel and distributed embedded systems in relation to the three key design metrics of performance, power and dependability.
X: A Comprehensive Analytic Model for Parallel Machines
Energy Technology Data Exchange (ETDEWEB)
Li, Ang; Song, Shuaiwen; Brugel, Eric; Kumar, Akash; Chavarría-Miranda, Daniel; Corporaal, Henk
2016-05-23
To keep pace with Moore's Law, modern parallel machines become increasingly complex. Effectively tuning application performance for these machines therefore becomes a daunting task. Moreover, identifying performance bottlenecks at the application and architecture levels, as well as evaluating various optimization strategies, becomes extremely difficult when numerous correlated factors are entangled. To tackle these challenges, we present a visual analytical model named "X". It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.
Hierarchical modeling and analysis of container terminal operations
Özdemir, Hacı Murat; Ozdemir, Haci Murat
2003-01-01
After the breakdown of trade barriers among countries, the volume of international trade has grown significantly in the last decade. This explosive growth in international trade has increased the importance of marine transportation which constitutes the major part of the global logistics network. The utilization of containers and container ships in marine transportation has also increased after the eighties due to various advantages such as packaging, flexibility, and reliability. Parallel to...
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
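A toy stand-in for the decomposition (not the thesis's method, and with invented transition probabilities): each subsystem is a small discrete-time Markov chain whose absorbing state is "failed", its reliability at step t is the probability mass still in working states, and the top level then combines subsystem reliabilities, here with a simple series combination.

```python
# Hierarchical reliability sketch: evaluate each subsystem's Markov chain
# separately, then combine at the system level (illustrative numbers).
def step(dist, P):
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def subsystem_reliability(P, working, t):
    dist = [1.0] + [0.0] * (len(P) - 1)   # start in state 0 (all units good)
    for _ in range(t):
        dist = step(dist, P)
    return sum(dist[s] for s in working)

# 3-state subsystem: 0 = both units up, 1 = one up, 2 = failed (absorbing)
P = [[0.98, 0.02, 0.00],
     [0.00, 0.95, 0.05],
     [0.00, 0.00, 1.00]]
r_sub = subsystem_reliability(P, working={0, 1}, t=50)
r_system = r_sub ** 3                      # three identical subsystems in series
```

The payoff mirrors the thesis's motivation: each subsystem chain has 3 states instead of the 27-state product chain, and only aggregate reliabilities cross the hierarchy boundary.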
Osei, Frank B.; Duker, Alfred A.; Stein, Alfred
2011-01-01
This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint
Measuring Service Quality in Higher Education: Development of a Hierarchical Model (HESQUAL)
Teeroovengadum, Viraiyan; Kamalanabhan, T. J.; Seebaluck, Ashley Keshwar
2016-01-01
Purpose: This paper aims to develop and empirically test a hierarchical model for measuring service quality in higher education. Design/methodology/approach: The first phase of the study consisted of qualitative research methods and a comprehensive literature review, which allowed the development of a conceptual model comprising 53 service quality…
Augmenting Visual Analysis in Single-Case Research with Hierarchical Linear Modeling
Davis, Dawn H.; Gagne, Phill; Fredrick, Laura D.; Alberto, Paul A.; Waugh, Rebecca E.; Haardorfer, Regine
2013-01-01
The purpose of this article is to demonstrate how hierarchical linear modeling (HLM) can be used to enhance visual analysis of single-case research (SCR) designs. First, the authors demonstrated the use of growth modeling via HLM to augment visual analysis of a sophisticated single-case study. Data were used from a delayed multiple baseline…
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
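A hedged illustration of the variance decomposition that HLM estimates: the sketch below simulates balanced two-level data and recovers the between- and within-group variance components with simple ANOVA (method-of-moments) estimators rather than the maximum likelihood or restricted maximum likelihood machinery discussed above; all sample sizes and variances are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate balanced two-level data: J groups, n observations per group.
J, n = 200, 10
sigma_u, sigma_e = 1.0, 2.0              # true between/within SDs (made up)
u = rng.normal(0.0, sigma_u, size=(J, 1))  # group-level random intercepts
y = 5.0 + u + rng.normal(0.0, sigma_e, size=(J, n))

# ANOVA (method-of-moments) variance-component estimates.
group_means = y.mean(axis=1)
msw = ((y - group_means[:, None]) ** 2).sum() / (J * (n - 1))  # within
msb = n * ((group_means - y.mean()) ** 2).sum() / (J - 1)      # between
var_u_hat = max((msb - msw) / n, 0.0)    # between-group variance estimate
icc = var_u_hat / (var_u_hat + msw)      # intraclass correlation
```

ML and REML differ from these moment estimators mainly in their degrees-of-freedom corrections and in how they handle unbalanced data, which is why the choice of estimation technique matters.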
Missing Data Treatments at the Second Level of Hierarchical Linear Models
St. Clair, Suzanne W.
2011-01-01
The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included (a) number of Level-2 variables with missing…
The Hierarchical Trend Model for property valuation and local price indices
M.K. Francke; G.A. Vos
2002-01-01
This paper presents a hierarchical trend model (HTM) for selling prices of houses, addressing three main problems: the spatial and temporal dependence of selling prices and the dependency of price index changes on housing quality. In this model the general price trend, cluster-level price trends, an
Dynamic modeling of flexible-links planar parallel robots
Institute of Scientific and Technical Information of China (English)
2008-01-01
This paper presents a finite element-based method for dynamic modeling of parallel robots with flexible links and a rigid moving platform. The elastic displacements of flexible links are investigated while considering the coupling effects between links due to structural flexibility. The kinematic and dynamic constraint conditions for elastic displacements are presented. Considering the effects of distributed mass, lumped mass, shearing deformation, bending deformation, tensile deformation and lateral displacements, the Kineto-Elasto-Dynamics (KED) theory and the Lagrange formula are used to derive the dynamic equations of planar flexible-link parallel robots. The dynamic behavior of the flexible-link planar parallel robot is illustrated through numerical simulation of a planar 3-RRR parallel robot. The numerical simulation results agree well with those of the finite element software SAMCEF. The flexibility of links is shown to have a significant impact on the position and orientation errors of the flexible-link planar parallel robot.
Terhorst, Lauren; Beck, Kelly Battle; McKeon, Ashlee B; Graham, Kristin M; Ye, Feifei; Shiffman, Saul
2017-08-01
Ecological momentary assessment (EMA) methods collect real-time data in real-world environments, allowing physical medicine and rehabilitation researchers to examine objective outcome data and reducing bias from retrospective recall. The statistical analysis of EMA data is directly related to the research question and the temporal design of the study. Hierarchical linear modeling, which accounts for multiple observations from the same participant, is a particularly useful approach to analyzing EMA data. The objective of this paper was to introduce the process of conducting hierarchical linear modeling analyses with EMA data. This is accomplished using exemplars from recent physical medicine and rehabilitation literature.
Ogle, Kiona; Ryan, Edmund; Dijkstra, Feike A.; Pendall, Elise
2016-12-01
Nonsteady state chambers are often employed to measure soil CO2 fluxes. CO2 concentrations (C) in the headspace are sampled at different times (t), and fluxes (f) are calculated from regressions of C versus t based on a limited number of observations. Variability in the data can lead to poor fits and unreliable f estimates; groups with too few observations or poor fits are often discarded, resulting in "missing" f values. We solve these problems by fitting linear (steady state) and nonlinear (nonsteady state, diffusion based) models of C versus t, within a hierarchical Bayesian framework. Data are from the Prairie Heating and CO2 Enrichment study that manipulated atmospheric CO2, temperature, soil moisture, and vegetation. CO2 was collected from static chambers biweekly during five growing seasons, resulting in >12,000 samples and >3100 groups and associated fluxes. We compare f estimates based on nonhierarchical and hierarchical Bayesian (B versus HB) versions of the linear and diffusion-based (L versus D) models, resulting in four different models (BL, BD, HBL, and HBD). Three models fit the data exceptionally well (R2 ≥ 0.98), but the BD model was inferior (R2 = 0.87). The nonhierarchical models (BL and BD) produced highly uncertain f estimates (wide 95% credible intervals), whereas the hierarchical models (HBL and HBD) produced very precise estimates. Of the hierarchical versions, the linear model (HBL) underestimated f by 33% relative to the nonsteady state model (HBD). The hierarchical models offer improvements upon traditional nonhierarchical approaches to estimating f, and we provide example code for the models.
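The linear (steady state) flux calculation described above can be sketched as a simple regression of C versus t. This is not the authors' hierarchical Bayesian code (they provide their own example code); the chamber volume, footprint area, and noise level below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def chamber_flux(t, C, V=0.01, A=0.05):
    """Estimate a soil CO2 flux from chamber headspace samples by fitting
    the steady-state (linear) model C(t) = C0 + (f*A/V)*t by least squares.
    V (m^3) and A (m^2) are hypothetical chamber volume and footprint."""
    slope, _ = np.polyfit(t, C, 1)
    return slope * V / A

# Synthetic sampling times (s) and noisy concentrations for one chamber.
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
true_f = 2.0e-3
C = 400.0 + true_f * (0.05 / 0.01) * t + rng.normal(0.0, 0.2, t.size)
f_hat = chamber_flux(t, C)
```

With only five noisy points per chamber, fits like this are exactly where groups get discarded; pooling information across groups is the motivation for the hierarchical versions.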
Hosoda, Kazufumi; Tsuda, Soichiro; Kadowaki, Kohmei; Nakamura, Yutaka; Nakano, Tadashi; Ishii, Kojiro
2016-02-01
Understanding ecosystem dynamics is crucial as contemporary human societies face ecosystem degradation. One of the challenges that needs to be recognized is the complex hierarchical dynamics. Conventional dynamic models in ecology often represent only the population level and have yet to include the dynamics of the sub-organism level, which makes an ecosystem a complex adaptive system that shows characteristic behaviors such as resilience and regime shifts. The neglect of the sub-organism level in conventional dynamic models is likely because integrating multiple hierarchical levels makes the models unnecessarily complex unless supporting experimental data are present. Now that large amounts of molecular and ecological data are increasingly accessible in microbial experimental ecosystems, it is worthwhile to tackle the questions of their complex hierarchical dynamics. Here, we propose an approach that combines microbial experimental ecosystems and a hierarchical dynamic model named the population-reaction model. We present a simple microbial experimental ecosystem as an example and show how the system can be analyzed by a population-reaction model. We also show that population-reaction models can be applied to various ecological concepts, such as predator-prey interactions, climate change, evolution, and stability of diversity. Our approach will reveal a path to the general understanding of various ecosystems and organisms. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Fuzzy hierarchical model for risk assessment principles, concepts, and practical applications
Chan, Hing Kai
2013-01-01
Risk management is often complicated by situational uncertainties and the subjective preferences of decision makers. Fuzzy Hierarchical Model for Risk Assessment introduces a fuzzy-based hierarchical approach to solving risk management problems that considers both qualitative and quantitative criteria to tackle imprecise information. This approach is illustrated through a number of case studies using examples from the food, fashion and electronics sectors, covering a range of applications including supply chain management, green product design and green initiatives. These practical examples explore how this method can be adapted and fine-tuned to fit other industries as well. Supported by an extensive literature review, Fuzzy Hierarchical Model for Risk Assessment comprehensively introduces a new method for project managers across all industries as well as researchers in risk management.
Parallelization of a hydrological model using the message passing interface
Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji
2013-01-01
With the increasing knowledge about the natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further reduce rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programing technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a work station. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement becomes less due to the accompanied increase in demand for message passing procedures between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of a project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
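The master/slave task-distribution scheme tuned in the P-SWAT study can be caricatured in a few lines. This sketch uses Python's concurrent.futures as a stand-in for MPI; the subbasin computation and the 30% master share are placeholders, not values from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_subbasin(sub_id):
    # Placeholder for one subbasin's hydrologic computation.
    return sub_id * sub_id

def run_parallel(subbasins, n_workers, master_share=0.3):
    """Master keeps `master_share` of the work; workers split the rest,
    mirroring the master/slave scheme whose parameters P-SWAT tunes."""
    k = int(len(subbasins) * master_share)
    master_part, worker_part = subbasins[:k], subbasins[k:]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = pool.map(simulate_subbasin, worker_part)
        # Master does its own share while workers run.
        master_results = [simulate_subbasin(s) for s in master_part]
        worker_results = list(futures)
    return master_results + worker_results

results = run_parallel(list(range(100)), n_workers=4)
```

The paper's observation that speedup saturates beyond about five processes corresponds here to message-passing (task hand-off) overhead growing with the number of workers.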
Performance of Air Pollution Models on Massively Parallel Computers
DEFF Research Database (Denmark)
Brown, John; Hansen, Per Christian; Wasniewski, Jerzy
1996-01-01
To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.
Sensor Network Data Fault Detection using Hierarchical Bayesian Space-Time Modeling
Ni, Kevin; Pottie, G J
2009-01-01
We present a new application of hierarchical Bayesian space-time (HBST) modeling: data fault detection in sensor networks primarily used in environmental monitoring situations. To show the effectiveness of HBST modeling, we develop a rudimentary tagging system to mark data that does not fit with given models. Using this, we compare HBST modeling against first order linear autoregressive (AR) modeling, which is a commonly used alternative due to its simplicity. We show that while HBST is mo...
DEFF Research Database (Denmark)
Øjelund, Henrik; Sadegh, Payman
2000-01-01
Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation. In this work, constraints are introduced to ensure the conformity of the estimates to a given global structure. Hierarchical models are then utilized as a tool to accommodate global model uncertainties via parametric variabilities within the structure. The global parameters and their associated uncertainties are estimated simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under a prior linear time invariant structure, where local regression fails as a result of high dimensionality.
A Modified Parallel Wrapper Cell for Hierarchical SOCs Test
Institute of Scientific and Technical Information of China (English)
邓立宝; 乔立岩; 俞洋; 彭喜元
2012-01-01
The test wrapper is key to making the IP cores inside an SOC testable and controllable, and the scan cell is the test wrapper's most important component. However, traditional test wrapper scan cells have many shortcomings when applied to hierarchical SOCs testing: they cannot guarantee fully parallel testing of the internal IP cores, and they exhibit serious problems in test security and power consumption. This paper presents a modified test wrapper scan cell architecture for hierarchical SOCs that effectively resolves these problems. The main idea is to improve the existing scan cell: while enabling parallel testing, a CMOS transmission gate inserted at an appropriate position blocks disordered data from entering the IP core outside test periods, keeping the IP core dormant; this ensures test security and achieves low test power. The method is applied to an industrial hierarchical SOCs design, and experiments show that, at a small additional hardware cost and time delay, the modified test wrapper scan cell offers clear advantages over existing scan cells in parallel testing, low power, test security and test coverage.
Parallel Computation of the Regional Ocean Modeling System (ROMS)
Energy Technology Data Exchange (ETDEWEB)
Wang, P; Song, Y T; Chao, Y; Zhang, H
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
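As a rough illustration of the domain-decomposition idea behind such MPI parallelizations (not ROMS code, which tiles a 2-D domain and exchanges halos via MPI messages), the sketch below splits a 1-D grid into subdomains with one-cell halo regions and checks that the decomposed update reproduces the serial one:

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi smoothing step on a 1-D grid with fixed end values."""
    v = u.copy()
    v[1:-1] = 0.5 * (u[:-2] + u[2:])
    return v

def jacobi_step_decomposed(u, nparts):
    """The same step computed independently on `nparts` subdomains with
    one-cell halo (ghost) regions, a serial mock-up of MPI decomposition."""
    n = len(u)
    bounds = np.linspace(0, n, nparts + 1).astype(int)
    out = u.copy()
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        glo, ghi = max(lo - 1, 0), min(hi + 1, n)   # subdomain plus halo
        local = jacobi_step(u[glo:ghi])
        a, b = max(lo, 1), min(hi, n - 1)           # owned interior points
        out[a:b] = local[a - glo:b - glo]
    return out

u = np.sin(np.linspace(0.0, np.pi, 64))
serial = jacobi_step(u)
parallel = jacobi_step_decomposed(u, 4)
```

In a real MPI code the halo values would arrive via point-to-point messages each step; the agreement between the two results is the correctness property the decomposition must preserve.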
Directory of Open Access Journals (Sweden)
Chulkov Vitaliy Olegovich
2012-12-01
This article deals with the infographic modeling of hierarchical management systems exposed to innovative conflicts. The authors analyze the factors that serve as conflict drivers in the construction management environment. The reasons for innovative conflicts include changes in hierarchical structures of management systems, adjustment of workers to new management conditions, changes in the ideology, etc. Conflicts under consideration may involve contradictions between requests placed by customers and the legislation, any risks that may originate from the above contradiction, conflicts arising from any failure to comply with any accepted standards of conduct, etc. One of the main objectives of the theory of hierarchical structures is to develop a model capable of projecting potential innovative conflicts. Models described in the paper reflect dynamic changes in patterns of external impacts within the conflict area. The simplest model element is a monad, or an indivisible set of characteristics of participants at the pre-set level. Interaction between two monads forms a diad. Modeling of situations that involve a different number of monads, diads, resources and impacts can improve methods used to control and manage hierarchical structures in the construction industry. However, in the absence of any mathematical models employed to simulate conflict-related events, processes and situations, any research into, projection and management of interpersonal and group-to-group conflicts are to be performed in the legal environment.
Arkin, Ethem; Tekinerdogan, Bedir
2016-01-01
Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the sou
Parallelization of MATLAB for Euro50 integrated modeling
Browne, Michael; Andersen, Torben E.; Enmark, Anita; Moraru, Dan; Shearer, Andrew
2004-09-01
MATLAB and its companion product Simulink are commonly used tools in systems modelling and other scientific disciplines. A cross-disciplinary integrated MATLAB model is used to study the overall performance of the proposed 50 m optical and infrared telescope, Euro50. However, the computational requirements of this kind of end-to-end simulation of the telescope's behaviour exceed the capability of an individual contemporary personal computer. By parallelizing the model, primarily on a functional basis, it can be implemented across a Beowulf cluster of generic PCs. This requires MATLAB to distribute data and calculations to the cluster nodes in some way and to combine completed results. There have been a number of attempts to produce toolkits that allow MATLAB to be used in a parallel fashion, using a variety of techniques. Here we present findings from using some of these toolkits, and proposed advances.
HIERARCHICAL METHODOLOGY FOR MODELING HYDROGEN STORAGE SYSTEMS PART II: DETAILED MODELS
Energy Technology Data Exchange (ETDEWEB)
Hardy, B; Donald L. Anton, D
2008-12-22
There is significant interest in hydrogen storage systems that employ a medium which either adsorbs, absorbs or reacts with hydrogen in a nearly reversible manner. In any media-based storage system, the rate of hydrogen uptake and the system capacity are governed by a number of complex, coupled physical processes. To design and evaluate such storage systems, a comprehensive methodology was developed, consisting of a hierarchical sequence of models that range from scoping calculations to numerical models that couple reaction kinetics with heat and mass transfer for both the hydrogen charging and discharging phases. The scoping models were presented in Part I [1] of this two-part series of papers. This paper describes a detailed numerical model that integrates the phenomena occurring when hydrogen is charged and discharged. A specific application of the methodology is made to a system using NaAlH4 as the storage medium.
Numerical modeling of parallel-plate based AMR
DEFF Research Database (Denmark)
In this work we present an improved 2-dimensional numerical model of a parallel-plate based AMR. The model includes heat transfer in fluid and magnetocaloric domains respectively. The domains are coupled via inner thermal boundaries. The MCE is modeled either as an instantaneous change between high and low field or as a magnetic field profile including the actual physical movement of the regenerator block in and out of field, i.e. as a source term in the thermal equation for the magnetocaloric material (MCM). The model is further developed to include parasitic thermal losses throughout the bed…
Directory of Open Access Journals (Sweden)
Brodjol Sutijo Supri Ulama
2012-01-01
Problem statement: Household expenditure analysis is in high demand by governments for formulating policy. Since household data is viewed as a hierarchical structure, with households nested in their regional residence, which varies across regions, a contextual welfare analysis is needed. This study proposed to develop a hierarchical model for estimating household expenditure in an attempt to measure the effect of regional diversity by taking into account district characteristics and household attributes using a Bayesian approach. Approach: Because the variation of household expenditure data is captured by the three-parameter Log-Normal (LN3) distribution, the model was developed based on the LN3 distribution. Data used in this study were household expenditure data from Central Java, Indonesia. Since the data were unbalanced, and hierarchical models using a classical approach work well only for balanced data, the estimation was done using a Bayesian method with MCMC and Gibbs sampling. Results: The hierarchical Bayesian model based on the LN3 distribution could be implemented to explain the variation of household expenditure using district characteristics and household attributes. Conclusion: The model shows that district characteristics, which include the demographic and economic conditions of districts and the availability of public facilities strongly associated with dimensions of the human development index (economic, education and health), do affect household expenditure through household attributes.
Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study.
Rijsdijk, Frühling V; Vernon, P A; Boomsma, Dorret I
2002-05-01
Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard Progressive Matrices test score, collected in 194 18-year-old Dutch twin pairs. We investigated whether first-order group factors possess genetic and environmental variance independent of the higher-order general factor and whether the hierarchical structure is significant for all sources of variance. A hierarchical model with the 3 Cohen group factors (verbal comprehension, perceptual organisation and freedom-from-distractibility) and a higher-order g factor showed the best fit to the phenotypic data and to additive genetic influences (A), whereas the unique environmental source of variance (E) could be modeled by a single general factor and specifics. There was no evidence for common environmental influences. The covariation among the WAIS group factors and the covariation between the group factors and the Raven are predominantly influenced by a second-order genetic factor, strongly supporting the notion of a biological basis of g.
A Hierarchical Bayesian Model to Predict Self-Thinning Line for Chinese Fir in Southern China.
Directory of Open Access Journals (Sweden)
Xiongqing Zhang
Self-thinning is a dynamic equilibrium between forest growth and mortality at full site occupancy. Parameters of self-thinning lines are often confounded by differences across various stand and site conditions. To overcome the problem of hierarchical and repeated measures, we used a hierarchical Bayesian method to estimate the self-thinning line. The results showed that the self-thinning line for Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantations was not sensitive to the initial planting density. The uncertainty of model predictions was mostly due to within-subject variability. The simulation precision of the hierarchical Bayesian method was better than that of the stochastic frontier function (SFF). The hierarchical Bayesian method provided a reasonable explanation of the impact of other variables (site quality, soil type, aspect, etc.) on the self-thinning line, giving the posterior distribution of its parameters. Research on the self-thinning relationship can benefit from the use of the hierarchical Bayesian method.
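For orientation only: a self-thinning line is conventionally a log-log relation between stand density N and mean tree size D. The sketch below fits such a line by ordinary least squares on synthetic data, i.e. the simple non-hierarchical baseline that the hierarchical Bayesian model improves upon; the coefficients and data are invented, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand data: quadratic mean diameter D (cm) and log stems/ha,
# generated around a hypothetical self-thinning line ln N = a + b ln D.
a_true, b_true = 11.5, -1.6
D = rng.uniform(8.0, 40.0, 150)
lnN = a_true + b_true * np.log(D) + rng.normal(0.0, 0.1, D.size)

# Ordinary least squares on the log-log scale: pooled fit that ignores
# the plot-level grouping a hierarchical model would account for.
b_hat, a_hat = np.polyfit(np.log(D), lnN, 1)
```

A hierarchical treatment would let the intercept (and possibly slope) vary by plot or site, with repeated measures on the same plot sharing parameters, which is the confounding problem described above.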
Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model
Hamam, Alwaleed A.
2017-03-13
Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is the deep learning algorithm used in this project, whose runtime performance is improved through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing, where directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using different models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to RBM.
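A serial NumPy sketch of the CD-1 training step mentioned above; the project's point is to offload exactly these matrix products to the GPU via OpenACC directives. Sizes, learning rate, and data here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, a, b, v0, lr=0.1):
    """One Contrastive Divergence (CD-1) update for a binary RBM.
    v0: batch of visible vectors (batch x n_visible)."""
    h_prob0 = sigmoid(v0 @ W + b)                 # positive phase
    h0 = (rng.random(h_prob0.shape) < h_prob0) * 1.0
    v_prob = sigmoid(h0 @ W.T + a)                # reconstruction
    h_prob1 = sigmoid(v_prob @ W + b)             # negative phase
    batch = v0.shape[0]
    W += lr * (v0.T @ h_prob0 - v_prob.T @ h_prob1) / batch
    a += lr * (v0 - v_prob).mean(axis=0)
    b += lr * (h_prob0 - h_prob1).mean(axis=0)
    return W, a, b

n_vis, n_hid = 6, 4
W = rng.normal(0.0, 0.01, (n_vis, n_hid))
a, b = np.zeros(n_vis), np.zeros(n_hid)
data = (rng.random((32, n_vis)) < 0.5) * 1.0      # toy binary data
for _ in range(20):
    W, a, b = cd1_update(W, a, b, data)
```

The dense products (`v0 @ W`, `h0 @ W.T`, and the outer-product gradient terms) dominate the cost, which is why directive-based GPU offload pays off for RBM training.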
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling
Denson, Nida; Seltzer, Michael H.
2011-01-01
The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
An accessible method for implementing hierarchical models with spatio-temporal abundance data
Ross, Beth E.; Hooten, Melvin B.; Koons, David N.
2012-01-01
A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many assumptions must nevertheless be overcome to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the necessary quantitative background needed to implement them, or because of the computational demands of using Markov Chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.
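As a small illustration of the "excess zeros" component such hierarchical models must handle (this is not the authors' INLA model, just a standard observation-model building block), a zero-inflated Poisson can be written as:

```python
from math import exp, factorial

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson pmf: with probability `pi` the count is a
    structural zero; otherwise it is drawn from Poisson(lam)."""
    pois = exp(-lam) * lam**y / factorial(y)
    return pi * (y == 0) + (1.0 - pi) * pois

# Excess zeros: the ZIP zero probability exceeds the plain Poisson one.
lam, pi = 2.0, 0.3
p0_zip = zip_pmf(0, lam, pi)
p0_pois = exp(-lam)
```

In the full hierarchical model, lam would itself carry spatially and temporally correlated random effects plus observation error, which is what INLA estimates without MCMC.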
The Hierarchical Factor Model of ADHD: Invariant across Age and National Groupings?
Toplak, Maggie E.; Sorge, Geoff B.; Flora, David B.; Chen, Wai; Banaschewski, Tobias; Buitelaar, Jan; Ebstein, Richard; Eisenberg, Jacques; Franke, Barbara; Gill, Michael; Miranda, Ana; Oades, Robert D.; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph; Sonuga-Barke, Edmund; Steinhausen, Hans-Christoph; Thompson, Margaret; Tannock, Rosemary; Asherson, Philip; Faraone, Stephen V.
2012-01-01
Objective: To examine the factor structure of attention-deficit/hyperactivity disorder (ADHD) in a clinical sample of 1,373 children and adolescents with ADHD and their 1,772 unselected siblings recruited from different countries across a large age range. Hierarchical and correlated factor analytic models were compared separately in the ADHD and…
Raykov, Tenko
2011-01-01
Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…
Putwain, Dave; Deveney, Carolyn
2009-01-01
The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…
2010-01-01
…can also refer to hierarchical parameterization transcending any scale, such as mesoscopic to continuum levels. Such a multiscale modeling paradigm …particularly suited for systems defined by long-chain polymers with relatively short persistence lengths, or systems that are entropically driven …mechanics. Thus, we introduce a universal framework through a finer-trains-coarser multiscale paradigm, which effectively defines coarse-grain…
Michou, Aikaterini; Vansteenkiste, Maarten; Mouratidis, Athanasios; Lens, Willy
2014-01-01
Background: The hierarchical model of achievement motivation presumes that achievement goals channel the achievement motives of need for achievement and fear of failure towards motivational outcomes. Yet, less is known whether autonomous and controlling reasons underlying the pursuit of achievement goals can serve as additional pathways between…
Lam, Terence Yuk Ping; Lau, Kwok Chi
2014-01-01
This study uses hierarchical linear modeling to examine the influence of a range of factors on the science performances of Hong Kong students in PISA 2006. Hong Kong has been consistently ranked highly in international science assessments, such as Programme for International Student Assessment and Trends in International Mathematics and Science…
Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling
Denson, Nida; Seltzer, Michael H.
2011-01-01
The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
Rocconi, Louis M.
2013-01-01
This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…
Rademaker, A.R.; Minnen, A. van; Ebberink, F.; Zuiden, M. van; Geuze, E.
2012-01-01
Background: As of yet, no collective agreement has been reached regarding the precise factor structure of posttraumatic stress disorder (PTSD). Several alternative factor models have been proposed in the last decades. Objective: The current study examined the fit of a hierarchical adaptation of the…
Multi-Organ Contribution to the Metabolic Plasma Profile Using Hierarchical Modelling.
Directory of Open Access Journals (Sweden)
Frida Torell
Full Text Available Hierarchical modelling was applied in order to identify the organs that contribute to the levels of metabolites in plasma. Plasma and organ samples from gut, kidney, liver, muscle and pancreas were obtained from mice. The samples were analysed using gas chromatography time-of-flight mass spectrometry (GC TOF-MS) at the Swedish Metabolomics centre, Umeå University, Sweden. The multivariate analysis was performed by means of principal component analysis (PCA) and orthogonal projections to latent structures (OPLS). The main goal of this study was to investigate how each organ contributes to the metabolic plasma profile. This was performed using hierarchical modelling. Each organ was found to have a unique metabolic profile. The hierarchical modelling showed that the gut, kidney and liver demonstrated the greatest contribution to the metabolic pattern of plasma. For example, we found that metabolites were absorbed in the gut and transported to the plasma, the kidneys excrete branched chain amino acids (BCAAs), and fatty acids are transported in the plasma to the muscles and liver. Lactic acid was also found to be transported from the pancreas to plasma. The results indicated that hierarchical modelling can be utilized to identify the organ contribution of unknown metabolites to the metabolic profile of plasma.
Hierarchical linear modeling of longitudinal pedigree data for genetic association analysis
DEFF Research Database (Denmark)
Tan, Qihua; B Hjelmborg, Jacob V; Thomassen, Mads;
2014-01-01
…on the mean level of a phenotype, they are not sufficiently straightforward to handle the kinship correlation on the time-dependent trajectories of a phenotype. We introduce a 2-level hierarchical linear model to separately assess the genetic associations with the mean level and the rate of change…
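The separation of "mean level" from "rate of change" in a 2-level model can be sketched with a simple two-stage stand-in: level 1 fits each subject's trajectory, level 2 tests a covariate effect on the fitted intercepts (level) and slopes (rate of change). The simulated genotype, sample sizes, and effect sizes below are all hypothetical:

```python
import numpy as np

# Two-stage approximation to the 2-level hierarchical linear model described
# above: per-subject OLS at level 1, then group comparison at level 2.
rng = np.random.default_rng(0)
n_subj, n_visits = 200, 5
ages = np.arange(n_visits)

genotype = rng.integers(0, 2, n_subj)          # toy 0/1 carrier status
true_level = 10.0 + 1.5 * genotype             # genotype shifts the mean level
true_slope = 0.5 + 0.3 * genotype              # ...and the rate of change

intercepts, slopes = np.empty(n_subj), np.empty(n_subj)
X = np.column_stack([np.ones(n_visits), ages])
for i in range(n_subj):
    y = true_level[i] + true_slope[i] * ages + rng.normal(0, 1.0, n_visits)
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # level-1 OLS per subject
    intercepts[i], slopes[i] = b

# Level 2: association of genotype with mean level and with rate of change.
level_effect = intercepts[genotype == 1].mean() - intercepts[genotype == 0].mean()
slope_effect = slopes[genotype == 1].mean() - slopes[genotype == 0].mean()
print(f"level effect={level_effect:.2f}  slope effect={slope_effect:.2f}")
```

A full hierarchical fit would estimate both levels jointly and handle the kinship correlation the abstract mentions; the two-stage version only conveys the structure.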
A developmental model of hierarchical stage structure in objective moral judgements
J. Boom; P.C.M. Molenaar
1989-01-01
A hierarchical structural model of moral judgment is proposed in which an S is characterized as occupying a particular moral stage. During development, the S's characteristic stage progresses along a latent, ordered dimension in an age-dependent way. Evaluation of prototypic statements representative…
Schermelleh-Engel, Karin; Keith, Nina; Moosbrugger, Helfried; Hodapp, Volker
2004-01-01
An extension of latent state-trait (LST) theory to hierarchical LST models is presented. In hierarchical LST models, the covariances between 2 or more latent traits are explained by a general 3rd-order factor, and the covariances between latent state residuals pertaining to different traits measured on the same measurement occasion are explained…
Accuracy Improvement for Stiffness Modeling of Parallel Manipulators
Pashkevich, Anatoly; Chablat, Damien; Wenger, Philippe
2009-01-01
The paper focuses on the accuracy improvement of stiffness models for parallel manipulators, which are employed in high-speed precision machining. It is based on an integrated methodology that combines analytical and numerical techniques and deals with multidimensional lumped-parameter models of the links. The latter replace the link flexibility by localized 6-dof virtual springs describing both translational/rotational compliance and the coupling between them. A detailed accuracy analysis is presented of the stiffness identification procedures employed in commercial CAD systems, including a statistical analysis of round-off errors and evaluation of the confidence intervals for the stiffness matrices. The efficiency of the developed technique is confirmed by application examples dealing with the stiffness analysis of translational parallel manipulators.
Center for Programming Models for Scalable Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
John Mellor-Crummey
2008-02-29
Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.
Final Report: Center for Programming Models for Scalable Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
Mellor-Crummey, John [William Marsh Rice University
2011-09-13
As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.
Load-balancing algorithms for the parallel community climate model
Energy Technology Data Exchange (ETDEWEB)
Foster, I.T.; Toonen, B.R.
1995-01-01
Implementations of climate models on scalable parallel computer systems can suffer from load imbalances resulting from temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers. The load-balancing library developed in this work is available for use in other climate models.
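The core idea above — a programmable mapping of computation to processors that corrects physics-driven load imbalance — can be sketched with a greedy balancing heuristic. The per-column costs, processor count, and the longest-processing-time strategy below are illustrative assumptions, not the PCCM2 library's actual machinery:

```python
import heapq

# Sketch of the "programmable map" idea: given per-column physics costs,
# build a column-to-processor map that evens out the load using the greedy
# longest-processing-time (LPT) heuristic.
def balanced_map(costs, n_proc):
    """Return a processor assignment per column, largest costs placed first."""
    heap = [(0.0, p) for p in range(n_proc)]   # (current load, processor)
    heapq.heapify(heap)
    assign = [0] * len(costs)
    for col in sorted(range(len(costs)), key=lambda c: -costs[c]):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assign[col] = p
        heapq.heappush(heap, (load + costs[col], p))
    return assign

# Toy "solar radiation" imbalance: daytime columns cost 3x night columns.
costs = [3.0] * 32 + [1.0] * 32
naive = [c * 4 // len(costs) for c in range(len(costs))]   # block mapping
smart = balanced_map(costs, 4)

def max_load(assign):
    loads = [0.0] * 4
    for c, p in enumerate(assign):
        loads[p] += costs[c]
    return max(loads)

print("naive max load:", max_load(naive), " balanced:", max_load(smart))
```

The library described in the abstract additionally performs the data movement between successive maps automatically; the sketch covers only the map construction.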
Exploitation of parallelism in climate models. Final report
Energy Technology Data Exchange (ETDEWEB)
Baer, Ferdinand; Tribbia, Joseph J.; Williamson, David L.
2001-02-05
This final report includes details on the research accomplished by the grant entitled 'Exploitation of Parallelism in Climate Models' to the University of Maryland. The purpose of the grant was to shed light on (a) how to reconfigure the atmospheric prediction equations such that the time iteration process could be compressed by use of MPP architecture; (b) how to develop local subgrid scale models which can provide time and space dependent parameterization for a state-of-the-art climate model to minimize the scale resolution necessary for a climate model, and to utilize MPP capability to simultaneously integrate those subgrid models and their statistics; and (c) how to capitalize on the MPP architecture to study the inherent ensemble nature of the climate problem. In the process of addressing these issues, we created parallel algorithms with spectral accuracy; we developed a process for concurrent climate simulations; we established suitable model reconstructions to speed up computation; we identified and tested optimum realization statistics; we undertook a number of parameterization studies to better understand model physics; and we studied the impact of subgrid scale motions and their parameterization in atmospheric models.
An Exactly Soluble Hierarchical Clustering Model: Inverse Cascades, Self-Similarity, and Scaling
Gabrielov, A; Turcotte, D L
1999-01-01
We show how clustering as a general hierarchical dynamical process proceeds via a sequence of inverse cascades to produce self-similar scaling, as an intermediate asymptotic, which then truncates at the largest spatial scales. We show how this model can provide a general explanation for the behavior of several models that have been described as "self-organized critical," including forest-fire, sandpile, and slider-block models.
Lee Chun Chang; Hui-Yu Lin
2012-01-01
Housing data are of a nested nature as houses are nested in a village, a town, or a county. This study thus applies HLM (hierarchical linear modelling) in an empirical study by adding neighborhood characteristic variables into the model for consideration. Using the housing data of 31 neighborhoods in the Taipei area as analysis samples and three HLM sub-models, this study discusses the impact of neighborhood characteristics on house prices. The empirical results indicate that the impact of va...
A first-order dynamical model of hierarchical triple stars and its application
Xu, Xingbo; Fu, Yanning
2015-01-01
For most hierarchical triple stars, the classical double two-body model of zeroth-order cannot describe the motions of the components under the current observational accuracy. In this paper, Marchal's first-order analytical solution is implemented and a more efficient simplified version is applied to real triple stars. The results show that, for most triple stars, the proposed first-order model is preferable to the zeroth-order model either in fitting observational data or in predicting component positions.
Hierarchical Web Page Classification Based on a Topic Model and Neighboring Pages Integration
Sriurai, Wongkot; Meesad, Phayung; Haruechaiyasak, Choochart
2010-01-01
Most Web page classification models typically apply the bag of words (BOW) model to represent the feature space. The original BOW representation, however, is unable to recognize semantic relationships between terms. One possible solution is to apply the topic model approach based on the Latent Dirichlet Allocation algorithm to cluster the term features into a set of latent topics. Terms assigned into the same topic are semantically related. In this paper, we propose a novel hierarchical class...
Hierarchical multi-scale modeling of texture induced plastic anisotropy in sheet forming
Gawad, J.; van Bael, Albert; Eyckens, P.; Samaey, G.; Van Houtte, P.; Roose, D.
2013-01-01
In this paper we present a Hierarchical Multi-Scale (HMS) model of coupled evolutions of crystallographic texture and plastic anisotropy in plastic forming of polycrystalline metallic alloys. The model exploits the Finite Element formulation to describe the macroscopic deformation of the material. Anisotropy of the plastic properties is derived from a physics-based polycrystalline plasticity micro-scale model by means of virtual experiments. The homogenized micro-scale stress response given b...
Directory of Open Access Journals (Sweden)
J. P. Werner
2015-03-01
Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
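The re-weighting of age-model ensemble members by their fit to well-dated records can be conveyed with a toy example: each candidate age model is a time shift of a proxy series, and its probability is updated by a Gaussian likelihood against an accurately dated neighbour. All signals, shifts, and noise levels below are synthetic assumptions:

```python
import numpy as np

# Toy illustration of updating age-model probabilities within a Bayesian
# framework: weight each candidate time shift by its likelihood against a
# well-dated reference series, instead of weighting all shifts equally.
rng = np.random.default_rng(1)
t = np.arange(200)
climate = np.sin(2 * np.pi * t / 20)                 # shared regional signal

reference = climate + 0.2 * rng.normal(size=t.size)  # accurately dated proxy
true_shift = 3
proxy = np.roll(climate, true_shift) + 0.2 * rng.normal(size=t.size)

shifts = np.arange(-10, 11)                          # candidate age models
# Gaussian log-likelihood; 0.08 = combined noise variance (0.2**2 + 0.2**2).
log_like = np.array([-0.5 * np.sum((np.roll(proxy, -s) - reference) ** 2) / 0.08
                     for s in shifts])
w = np.exp(log_like - log_like.max())
w /= w.sum()                                         # updated age-model weights

best = shifts[np.argmax(w)]
print("posterior-best shift:", best)
```

With equal a priori weights the ensemble spans all 21 shifts; after the update the posterior concentrates on the correct alignment, which is exactly the uncertainty reduction the abstract describes.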
Directory of Open Access Journals (Sweden)
J. P. Werner
2014-12-01
Full Text Available Reconstructions of late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurement on tree rings, ice cores, and varved lake sediments. Considerable advances may be achievable if time uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches to accounting for time uncertainty are generally limited to repeating the reconstruction using each of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here we demonstrate how Bayesian Hierarchical climate reconstruction models can be augmented to account for time uncertain proxies. Critically, while a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age-model probabilities decreases uncertainty in the climate reconstruction, as compared with the current de-facto standard of sampling over all age models, provided there is sufficient information from other data sources in the region of the time-uncertain proxy. This approach can readily be generalized to non-layer counted proxies, such as those derived from marine sediments.
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Directory of Open Access Journals (Sweden)
Yong Xia
2015-01-01
Full Text Available Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge to traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to expensive costs. GPU as a parallel computing environment therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: one is the single-cell model (ordinary differential equations) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
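The ODE/PDE decoupling that makes the GPU parallelization possible can be sketched in 1D on the CPU: each time step advances the local cell ODEs (one thread per cell on a GPU), then applies the diffusion operator of the monodomain equation. The FitzHugh-Nagumo-style surrogate and all parameter values below are assumptions, not the sheep atrial cell model:

```python
import numpy as np

# Operator splitting on a 1D cable: reaction (cell ODE) step, then diffusion
# (monodomain PDE) step, per time step.
N, dx, dt, D = 200, 0.5, 0.02, 1.0
v = np.zeros(N)          # membrane-like variable
w = np.zeros(N)          # recovery variable
v[:5] = 1.0              # stimulate the left end

def cell_ode_step(v, w):
    """Local reaction step (would map one thread per cell on a GPU)."""
    dv = v * (v - 0.1) * (1.0 - v) - w
    dw = 0.01 * (0.5 * v - w)
    return v + dt * dv, w + dt * dw

def diffusion_step(v):
    """Explicit finite-difference Laplacian with no-flux boundaries."""
    lap = np.empty_like(v)
    lap[1:-1] = v[2:] - 2 * v[1:-1] + v[:-2]
    lap[0] = v[1] - v[0]
    lap[-1] = v[-2] - v[-1]
    return v + dt * D / dx**2 * lap

activation = np.full(N, -1.0)            # first crossing of v > 0.5 per cell
for step in range(20000):
    v, w = cell_ode_step(v, w)
    v = diffusion_step(v)
    newly = (v > 0.5) & (activation < 0)
    activation[newly] = step * dt

print("activation time at mid-cable:", activation[N // 2])
```

The split keeps the reaction step embarrassingly parallel while the diffusion step only touches nearest neighbours, which is what makes the GPU mapping efficient.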
A Hierarchical Latent Stochastic Differential Equation Model for Affective Dynamics
Oravecz, Zita; Tuerlinckx, Francis; Vandekerckhove, Joachim
2011-01-01
In this article a continuous-time stochastic model (the Ornstein-Uhlenbeck process) is presented to model the perpetually altering states of the core affect, which is a 2-dimensional concept underlying all our affective experiences. The process model that we propose can account for the temporal changes in core affect on the latent level. The key…
Xu, Lei; Johnson, Timothy D.; Nichols, Thomas E.; Nee, Derek E.
2010-01-01
Summary The aim of this work is to develop a spatial model for multi-subject fMRI data. There has been extensive work on univariate modeling of each voxel for single- and multi-subject data, some work on spatial modeling of single-subject data, and some recent work on spatial modeling of multi-subject data. However, there has been no work on spatial models that explicitly account for inter-subject variability in activation locations. In this work, we use the idea of activation centers and model the inter-subject variability in activation locations directly. Our model is specified in a Bayesian hierarchical framework which allows us to draw inferences at all levels: the population level, the individual level and the voxel level. We use Gaussian mixtures for the probability that an individual has a particular activation. This helps answer an important question which is not addressed by any of the previous methods: what proportion of subjects had significant activity in a given region? Our approach incorporates the unknown number of mixture components into the model as a parameter whose posterior distribution is estimated by reversible jump Markov Chain Monte Carlo. We demonstrate our method with an fMRI study of resolving proactive interference and show dramatically better precision of localization with our method relative to the standard mass-univariate method. Although we are motivated by fMRI data, this model could easily be modified to handle other types of imaging data. PMID:19210732
Error Modeling and Design Optimization of Parallel Manipulators
DEFF Research Database (Denmark)
Wu, Guanglei
challenges due to their highly nonlinear behaviors, thus, the parameter and performance analysis, especially the accuracy and stiffness, are particularly important. Toward the requirements of robotic technology such as light weight, compactness, high accuracy and low energy consumption, utilizing optimization… technique in the design procedure is a suitable approach to handle these complex tasks. As there is no unified design guideline for the parallel manipulators, the study described in this thesis aims to provide a systematic analysis for this type of mechanisms in the early design stage, focusing on accuracy… analysis and design optimization. The proposed approach is illustrated with the planar and spherical parallel manipulators. The geometric design, kinematic and dynamic analysis, kinetostatic modeling and stiffness analysis are also presented. Firstly, the study on the geometric architecture and kinematic…
Calibration of parallel kinematics machine using generalized distance error model
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
This paper focuses on the accuracy enhancement of parallel kinematics machines through kinematic calibration. In the calibration process, construction of a well-structured identification Jacobian matrix and measurement of the end-effector position and orientation are two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation utilizing the unit virtual velocity method. A generalized distance error model is presented to avoid measuring the position and orientation directly, which is difficult. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.
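The two ideas above — a numerically constructed identification Jacobian and distance-only measurements — can be sketched on a toy mechanism. The 2-link planar arm, its link lengths, and the use of central differences (standing in for the unit virtual velocity method) are all illustrative assumptions, not the parallel machine of the paper:

```python
import numpy as np

# Toy calibration loop: identify link-length errors from inter-pose
# distances only, with a numerically built identification Jacobian.
def fk(q, params):
    """Forward kinematics of a toy 2-link planar arm."""
    l1, l2 = params
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def distances(params, q_list):
    """Distances between consecutive poses (measurable with a ballbar)."""
    pts = [fk(q, params) for q in q_list]
    return np.array([np.linalg.norm(pts[i + 1] - pts[i])
                     for i in range(len(pts) - 1)])

rng = np.random.default_rng(7)
q_list = [rng.uniform(-1.5, 1.5, 2) for _ in range(30)]

nominal = np.array([1.00, 0.80])         # link lengths in the CAD model
actual = np.array([1.01, 0.79])          # "true" lengths to be identified
measured = distances(actual, q_list)     # noise-free simulated measurements

est = nominal.copy()
eps = 1e-6
for _ in range(3):                       # a few Gauss-Newton iterations
    J = np.empty((len(measured), 2))     # identification Jacobian
    for j in range(2):
        dp = np.zeros(2); dp[j] = eps
        J[:, j] = (distances(est + dp, q_list)
                   - distances(est - dp, q_list)) / (2 * eps)
    est += np.linalg.lstsq(J, measured - distances(est, q_list), rcond=None)[0]

print("identified link lengths:", est)
```

Because only distances enter the residual, no direct pose measurement is needed, which is exactly the motivation for the generalized distance error model.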
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing" (ELS) hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
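The baseline ELS hypothesis the abstract builds on can be reproduced in a few lines: wire strengths are Weibull-distributed and, once a wire breaks, its load is shared uniformly by the survivors. The unit Weibull scale and modulus below are assumptions, and the helical geometry, tension/torsion coupling, and hierarchical levels of the paper are deliberately left out:

```python
import numpy as np

# Minimal Equal Load Sharing (ELS) fibre-bundle sketch with Weibull
# strengths; reproduces the nonlinear, post-elastic bundle response.
rng = np.random.default_rng(3)
m, n = 5.0, 200_000                     # Weibull modulus and wire count
strengths = np.sort(rng.weibull(m, n))  # unit Weibull scale (assumed)

strain = np.linspace(0.0, 2.0, 401)
# Equal-stiffness wires under ELS: bundle stress = strain * survival fraction.
surviving = 1.0 - np.searchsorted(strengths, strain) / n
stress = strain * surviving

peak_strain = strain[np.argmax(stress)]
# Analytic ELS result for Weibull strengths: peak at (1/m)**(1/m).
print(f"peak at strain {peak_strain:.3f}, analytic {(1/m)**(1/m):.3f}")
```

The "Hierarchical Load Sharing" criterion of the paper replaces the uniform redistribution step with one governed by the strand's cross sections and helical angles; only the ELS baseline is sketched here.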
The Evolution of Galaxy Clustering in Hierarchical Models
1999-01-01
The main ingredients of recent semi-analytic models of galaxy formation are summarised. We present predictions for the galaxy clustering properties of a well-specified LCDM model whose parameters are constrained by observed local galaxy properties. We present preliminary predictions for the evolution of clustering that can be probed with deep pencil-beam surveys.
A Hierarchical Multiobjective Routing Model for MPLS Networks with Two Service Classes
Craveirinha, José; Girão-Silva, Rita; Clímaco, João; Martins, Lúcia
This work presents a model for multiobjective routing in MPLS networks formulated within a hierarchical network-wide optimization framework, with two classes of services, namely QoS and Best Effort (BE) services. The routing model uses alternative routing and hierarchical optimization with two optimization levels, including fairness objectives. Another feature of the model is the use of an approximate stochastic representation of the traffic flows in the network, based on the concept of effective bandwidth. The theoretical foundations of a heuristic strategy for finding “good” compromise solutions to the very complex bi-level routing optimization problem, based on a conjecture concerning the definition of marginal implied costs for QoS flows and BE flows, will be described. The main features of a first version of this heuristic based on a bi-objective shortest path model and some preliminary results for a benchmark network will also be revealed.
Leung, K M; Elashoff, R M; Rees, K S; Hasan, M M; Legorreta, A P
1998-03-01
The purpose of this study was to identify factors related to pregnancy and childbirth that might be predictive of a patient's length of stay after delivery and to model variations in length of stay. California hospital discharge data on maternity patients (n = 499,912) were analyzed. Hierarchical linear modeling was used to adjust for patient case mix and hospital characteristics and to account for the dependence of outcome variables within hospitals. Substantial variation in length of stay among patients was observed. The variation was mainly attributed to delivery type (vaginal or cesarean section), the patient's clinical risk factors, and severity of complications (if any). Furthermore, hospitals differed significantly in maternity lengths of stay even after adjustment for patient case mix. Developing risk-adjusted models for length of stay is a complex process but is essential for understanding variation. The hierarchical linear model approach described here represents a more efficient and appropriate way of studying interhospital variations than the traditional regression approach.
Directory of Open Access Journals (Sweden)
Nasim Nickbakhsh
2017-03-01
Full Text Available The distributed Grid system brings together heterogeneous resources on a vast scale in a dynamic manner. The resource discovery mechanism strongly influences the efficiency and quality of the system's functionality. The "Bitmap" model is based on a hierarchical and conscious search model that generates less traffic and a lower number of messages than other methods in this respect. The proposed method, also based on the hierarchical and conscious search model, enhances the Bitmap method with the objectives of reducing traffic, reducing the load of resource management processing, reducing the number of messages generated by resource discovery, and increasing the speed of resource access. The proposed method and the Bitmap method are simulated with the Arena tool. The proposed model is abbreviated as RNTL.
cellGPU: Massively parallel simulations of dynamic vertex models
Sussman, Daniel M.
2017-10-01
Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations. Program Files doi:http://dx.doi.org/10.17632/6j2cj29t3r.1 Licensing provisions: MIT Programming language: CUDA/C++ Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate. Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU. Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme of different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model allowing for modeling most hierarchical networks. We use methods based on linear programming to design the hierarchical networks; thus, a brief introduction to the various linear programming based methods is included. The thesis is therefore suitable as a foundation for the study of hierarchical network design. The major contribution of the thesis consists of seven papers which are included in the appendix. The papers address hierarchical network design and/or ring network design.
Bayesian Hierarchical Random Intercept Model Based on Three Parameter Gamma Distribution
Wirawati, Ika; Iriawan, Nur; Irhamah
2017-06-01
Hierarchical data structures are common throughout many areas of research. Previously, the presence of this type of data was often overlooked in analyses. The appropriate statistical analysis for such data is the hierarchical linear model (HLM). This article focuses only on the random intercept model (RIM), a subclass of HLM. This model assumes that the intercepts of the lowest-level models vary among those models while their slopes are fixed. The differences among intercepts are suspected to be affected by variables at the upper level; these intercepts are therefore regressed against those upper-level variables as predictors. This paper demonstrates the proposed two-level RIM by modeling per capita household expenditure in Maluku Utara, with five characteristics at the first level and three characteristics of districts/cities at the second level. The per capita household expenditure data at the first level are captured by the three-parameter Gamma distribution. The model is therefore more complex, owing to the interaction of the many parameters representing the hierarchical structure and the distribution pattern of the data. To simplify parameter estimation, a computational Bayesian method coupled with a Markov chain Monte Carlo (MCMC) algorithm and its Gibbs sampling is employed.
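As a minimal illustration of the two-level random intercept structure described above (group intercepts shifted by an upper-level covariate, a common slope), the following sketch simulates RIM data. It uses Gaussian noise for simplicity rather than the paper's three-parameter Gamma likelihood, and all parameter names and values are illustrative:

```python
import random

def simulate_rim(groups, n_per_group, gamma0=1.0, gamma1=0.5, slope=2.0,
                 tau=0.3, sigma=0.1, seed=42):
    """Two-level random intercept model: each group j gets its own intercept
    b0_j = gamma0 + gamma1 * w_j + u_j, with u_j ~ N(0, tau); the slope on
    the lower-level predictor x is common to all groups."""
    rng = random.Random(seed)
    rows = []
    for j in range(groups):
        w_j = rng.uniform(0.0, 1.0)                       # upper-level covariate
        b0_j = gamma0 + gamma1 * w_j + rng.gauss(0.0, tau)
        for _ in range(n_per_group):
            x = rng.uniform(0.0, 1.0)                     # lower-level covariate
            y = b0_j + slope * x + rng.gauss(0.0, sigma)  # Gaussian stand-in
            rows.append((j, x, y))
    return rows

sample = simulate_rim(groups=3, n_per_group=4)
```

In the paper's setting, the Bayesian machinery (MCMC with Gibbs sampling) is then used to recover the upper-level coefficients and variance components from data of this shape.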
Efficient Parallel Statistical Model Checking of Biochemical Networks
Directory of Open Access Journals (Sweden)
Paolo Ballarini
2009-12-01
Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm to generate execution samples; however, three key aspects improve its efficiency. First, sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
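The Wilson interval mentioned as the second ingredient has a closed form that can be sketched directly; this is the textbook Wilson score interval, not the paper's specific variant:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion,
    here the fraction of sampled executions satisfying property P."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1.0 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (centre - half, centre + half)

# 8 of 10 sampled executions satisfied P.
low, high = wilson_interval(8, 10)
```

Unlike the naive Wald interval, the Wilson interval always stays inside [0, 1] and behaves well for proportions near 0 or 1, which matters when a property is almost always (or almost never) satisfied.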
A systemic approach for modeling biological evolution using Parallel DEVS.
Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo
2015-08-01
A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The high redshift galaxy population in hierarchical galaxy formation models
Kitzbichler, M G; Kitzbichler, Manfred G.; White, Simon D. M.
2006-01-01
We compare observations of the high redshift galaxy population to the predictions of the galaxy formation model of Croton et al. (2006). This model, implemented on the Millennium Simulation of the concordance LCDM cosmogony, introduces "radio mode" feedback from the central galaxies of groups and clusters in order to obtain quantitative agreement with the luminosity, colour, morphology and clustering properties of the low redshift galaxy population. Here we compare the predictions of this same model to the observed counts and redshift distributions of faint galaxies, as well as to their inferred luminosity and mass functions out to redshift 5. With the exception of the mass functions, all these properties are sensitive to modelling of dust obscuration. A simple but plausible treatment gives moderately good agreement with most of the data, although the predicted abundance of relatively massive (~M*) galaxies appears systematically high at high redshift, suggesting that such galaxies assemble earlier in this model.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
This research uses the most popular kernel function, the radial basis function; the choice of kernel depends on the application and the model user. The network under study plays an important role in the nation's economy, but its reliability is declining due to the aging of its components. Gaussian Bayesian kernel models have recently become very popular and have been extended and applied to a number of classification problems.
Parallel algorithms for interactive manipulation of digital terrain models
Davis, E. W.; Mcallister, D. F.; Nagaraj, V.
1988-01-01
Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This extends earlier work that demonstrated real-time performance of graphics functions on grids equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing-array size, the data are packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.
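The window-selection step described above, choosing a view of a data grid larger than the processor array, can be sketched as follows; this is an illustrative serial version, not the MPP Pascal routines:

```python
def extract_window(grid, top, left, height, width):
    """Select a processing window from a large terrain grid, clamping the
    requested position to the grid bounds as an interactive pan would."""
    rows, cols = len(grid), len(grid[0])
    top = max(0, min(top, rows - height))
    left = max(0, min(left, cols - width))
    return [row[left:left + width] for row in grid[top:top + height]]

# A hypothetical 10x10 terrain grid; a 4x4 window panned past the edge
# is clamped so it stays inside the grid.
terrain = [[r * 10 + c for c in range(10)] for r in range(10)]
window = extract_window(terrain, top=8, left=8, height=4, width=4)
```

On the MPP, the extra step is redistributing the selected (packed) window across the processor array; as the abstract notes, that data movement dominated the arithmetic cost.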
Ski Control Model for Parallel Turn Using Multibody System
Kawai, Shigehiro; Yamaguchi, Keishi; Sakata, Toshiyuki
Now, it is possible to discuss qualitatively the effects of skis, skier’s ski control and slope on a ski turn by simulation. The reliability of a simulation depends on the accuracy of the models used in the simulation. In the present study, we attempt to develop a new ski control model for a “parallel turn” using a computer graphics technique. The “ski control” necessary for the simulation is the relative motion of the skier’s center of gravity to the ski and the force acting on the ski from the skier. The developed procedure is as follows. First, the skier is modeled using a multibody system consisting of body parts. Second, various postures of the skier during the “parallel turn” are drawn using a 3D-CAD (three dimensional computer aided design) system referring to the pictures videotaped on a slope. The position of the skier’s center of gravity is estimated from the produced posture. Third, the skier’s ski control is obtained by arranging these postures in a time schedule. One can watch the ski control on a TV. Last, the three types of forces acting on the ski from the skier are estimated from the gravity force and the three relative types of inertia forces acting on the skier. Consequently, one can obtain accurate ski control for the simulation of the “parallel turn”, that is, the relative motion of the skier’s center of gravity to the ski and the force acting on the ski from the skier. Furthermore, it follows that one can numerically estimate the edging angle from the ski control model.
Building hierarchical models of avian distributions for the State of Georgia
Howell, J.E.; Peterson, J.T.; Conroy, M.J.
2008-01-01
To predict the distributions of breeding birds in the state of Georgia, USA, we built hierarchical models consisting of 4 levels of nested mapping units of decreasing area: 90,000 ha, 3,600 ha, 144 ha, and 5.76 ha. We used the Partners in Flight database of point counts to generate presence and absence data at locations across the state of Georgia for 9 avian species: Acadian flycatcher (Empidonax virescens), brown-headed nuthatch (Sitta pusilla), Carolina wren (Thryothorus ludovicianus), indigo bunting (Passerina cyanea), northern cardinal (Cardinalis cardinalis), prairie warbler (Dendroica discolor), yellow-billed cuckoo (Coccyzus americanus), white-eyed vireo (Vireo griseus), and wood thrush (Hylocichla mustelina). At each location, we estimated hierarchical-level-specific habitat measurements using the Georgia GAP Analysis 18-class land cover and other Geographic Information System sources. We created candidate, species-specific occupancy models based on previously reported relationships, and fit these using Markov chain Monte Carlo procedures implemented in OpenBUGS. We then created a confidence model set for each species based on Akaike's Information Criterion. We found hierarchical habitat relationships for all species. Three-fold cross-validation estimates of model accuracy indicated an average overall correct classification rate of 60.5%. Comparisons with existing Georgia GAP Analysis models indicated that our models were more accurate overall. Our results provide guidance to wildlife scientists and managers seeking to predict avian occurrence as a function of local and landscape-level habitat attributes.
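The AIC-based confidence-set ranking mentioned above is commonly expressed through Akaike weights, the relative support for each candidate model; the AIC values below are illustrative, not the paper's results:

```python
import math

def aic_weights(aic_values):
    """Akaike weights from a list of AIC scores: exp(-delta_i / 2)
    normalized over the candidate set, where delta_i is each model's
    AIC difference from the best (lowest-AIC) model."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Three hypothetical candidate occupancy models for one species.
weights = aic_weights([100.0, 102.0, 110.0])
```

Models whose cumulative weight reaches a chosen threshold (e.g. 0.95) form the confidence set.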
Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K
2009-04-01
Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
Chen, Yongsheng; Persaud, Bhagwant
2014-09-01
Crash modification factors (CMFs) for road safety treatments are developed as multiplicative factors that are used to reflect the expected changes in safety performance associated with changes in highway design and/or the traffic control features. However, current CMFs have methodological drawbacks. For example, variability with application circumstance is not well understood, and, as important, correlation is not addressed when several CMFs are applied multiplicatively. These issues can be addressed by developing safety performance functions (SPFs) with components of crash modification functions (CM-Functions), an approach that includes all CMF related variables, along with others, while capturing quantitative and other effects of factors and accounting for cross-factor correlations. CM-Functions can capture the safety impact of factors through a continuous and quantitative approach, avoiding the problematic categorical analysis that is often used to capture CMF variability. There are two formulations to develop such SPFs with CM-Function components - fully specified models and hierarchical models. Based on sample datasets from two Canadian cities, both approaches are investigated in this paper. While both model formulations yielded promising results and reasonable CM-Functions, the hierarchical model was found to be more suitable in retaining homogeneity of first-level SPFs, while addressing CM-Functions in sub-level modeling. In addition, hierarchical models better capture the correlations between different impact factors.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
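The (inverted) S-shaped probability weighting functions discussed above have standard closed forms. The sketch below uses the one-parameter Tversky-Kahneman (1992) family as a representative example; the paper compares several such models, and the value of gamma here is illustrative:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    For g < 1 it is inverted-S-shaped: small probabilities are
    overweighted and large probabilities are underweighted."""
    num = p ** gamma
    return num / ((p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma))
```

In a hierarchical Bayesian treatment, each participant's gamma is modeled as a draw from a weakly informative population-level prior, which is exactly the individual-differences structure the abstract argues for.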
Directory of Open Access Journals (Sweden)
Fidel Ernesto Castro Morales
2016-03-01
Full Text Available Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within up to 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, even allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
Directory of Open Access Journals (Sweden)
Dan WU
2009-06-01
Full Text Available The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.
Institute of Scientific and Technical Information of China (English)
Dan WU; Feng-ping WU; Yan-ping CHEN
2009-01-01
The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.
Jeong, Sungmoon; Lee, Minho
2012-01-01
This paper presents an adaptive object recognition model based on incremental feature representation and a hierarchical feature classifier that offers plasticity to accommodate additional input data and reduces the problem of forgetting previously learned information. The incremental feature representation method applies adaptive prototype generation with a cortex-like mechanism to conventional feature representation to enable an incremental reflection of various object characteristics, such as feature dimensions in the learning process. A feature classifier based on using a hierarchical generative model recognizes various objects with variant feature dimensions during the learning process. Experimental results show that the adaptive object recognition model successfully recognizes single and multiple-object classes with enhanced stability and flexibility.
Design of Experiments for Factor Hierarchization in Complex Structure Modelling
Directory of Open Access Journals (Sweden)
C. Kasmi
2013-07-01
Full Text Available Modelling the power-grid network is of fundamental interest to analyse the conducted propagation of unintentional and intentional electromagnetic interferences. The propagation is indeed highly influenced by the channel behaviour. In this paper, we investigate the effects of appliances and the position of cables in a low voltage network. First, the power-grid architecture is described. Then, the principle of Experimental Design is recalled. Next, the methodology is applied to power-grid modelling. Finally, we propose an analysis of the statistical moments of the experimental design results. Several outcomes are provided to describe the effects induced by parameter variability on the conducted propagation of spurious compromising emanations.
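The Experimental Design principle recalled above can be illustrated with main-effect estimation from a two-level full factorial design; the response function below is a hypothetical stand-in for the power-grid propagation channel, not the paper's model:

```python
from itertools import product

def main_effects(response, factors):
    """Main-effect estimates from a two-level (+1/-1) full factorial design:
    for each factor, the mean response at level +1 minus the mean at -1."""
    runs = list(product([-1, 1], repeat=factors))
    ys = [response(r) for r in runs]
    effects = []
    for f in range(factors):
        hi = [y for r, y in zip(runs, ys) if r[f] == 1]
        lo = [y for r, y in zip(runs, ys) if r[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical channel response: factor 0 (e.g. cable position) dominates,
# factor 1 (e.g. an appliance state) has a minor effect.
channel = lambda r: 3.0 * r[0] + 0.5 * r[1]
effects = main_effects(channel, 2)
```

Ranking factors by the magnitude of these effects is the hierarchization step: it tells which parameters' variability most influences the conducted propagation.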
fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data.
Hung, Ling-Hong; Samudrala, Ram
2014-06-15
fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the M.I.T. license. (http://software.compbio.washington.edu/fast_protein_cluster) © The Author 2014. Published by Oxford University Press.
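The RMSD-after-optimal-superposition metric supported by the package can be sketched with the classic Kabsch algorithm in plain NumPy; this illustrates the metric only, not the package's SIMD or GPU implementations:

```python
import numpy as np

def kabsch_rmsd(p, q):
    """RMSD between two conformations (lists of 3D points, row per atom)
    after optimal rigid superposition via the Kabsch algorithm."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p - p.mean(axis=0)          # remove translation
    q = q - q.mean(axis=0)
    h = p.T @ q                     # covariance of the two point sets
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # proper rotation only
    diff = (rot @ p.T).T - q
    return float(np.sqrt((diff ** 2).sum() / len(p)))

# A toy 4-atom "model" and a rotated, translated copy of it.
p_ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
q_rot = [(-y + 5.0, x + 5.0, z + 5.0) for x, y, z in p_ref]
```

Clustering then reduces to running k-means or hierarchical linkage over the pairwise matrix of such distances, which is the expensive step the package parallelizes.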
A hierarchical Bayes error correction model to explain dynamic effects
D. Fok (Dennis); C. Horváth (Csilla); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)
2004-01-01
For promotional planning and market segmentation it is important to understand the short-run and long-run effects of the marketing mix on category and brand sales. In this paper we put forward a sales response model to explain the differences in short-run and long-run effects of promotions.
Models to relate species to environment: a hierarchical statistical approach
Jamil, T.
2012-01-01
In the last two decades, the interest of community ecologists in trait-based approaches has grown dramatically and these approaches have been increasingly applied to explain and predict the response of species to environmental conditions. A variety of modelling techniques are available.
Directory of Open Access Journals (Sweden)
Moritz Boos
2016-05-01
Full Text Available Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
Energy Technology Data Exchange (ETDEWEB)
Makeechev, V.A. [Industrial Power Company, Krasnopresnenskaya Naberejnaya 12, 123610 Moscow (Russian Federation); Soukhanov, O.A. [Energy Systems Institute, 1 st Yamskogo Polya Street 15, 125040 Moscow (Russian Federation); Sharov, Y.V. [Moscow Power Engineering Institute, Krasnokazarmennaya Street 14, 111250 Moscow (Russian Federation)
2008-07-15
This paper presents the foundations of an optimization method intended for the solution of power system operation problems and based on the principles of functional modeling (FM). It also presents several types of hierarchical FM algorithms for economic dispatch in these systems derived from this method. According to the FM method, a power system is represented by a hierarchical model consisting of systems of equations at the lower (subsystem) levels and a higher-level system of connection equations (SCE), in which only the boundary variables of subsystems are present. Solution of the optimization problem in accordance with the FM method consists of the following operations: (1) solution of the optimization problem for each subsystem (values of boundary variables for subsystems are determined on the higher level of the model); (2) calculation of the functional characteristic (FC) of each subsystem, pertaining to the state of the subsystem on the current iteration (these two steps are carried out on the lower level of the model); (3) formation and solution of the higher-level system of equations (SCE), which gives the values of boundary and supplementary boundary variables on the current iteration. The key elements in the general structure of the FM method are the FCs of subsystems, which represent them on the higher level of the model as "black boxes". An important advantage of hierarchical FM algorithms is that the results obtained with them on each iteration are identical to those of the corresponding basic one-level algorithms. (author)
Hybrid fluid/kinetic model for parallel heat conduction
Energy Technology Data Exchange (ETDEWEB)
Callen, J.D.; Hegna, C.C.; Held, E.D. [Univ. of Wisconsin, Madison, WI (United States)
1998-12-31
It is argued that in order to use fluid-like equations to model low frequency (ω < ν) phenomena such as neoclassical tearing modes in low collisionality (ν < ω_b) tokamak plasmas, a Chapman-Enskog-like approach is most appropriate for developing an equation for the kinetic distortion (F) of the distribution function whose velocity-space moments lead to the needed fluid moment closure relations. Further, parallel heat conduction in a long collision mean free path regime can be described through a combination of a reduced phase space Chapman-Enskog-like approach for the kinetics and a multiple-time-scale analysis for the fluid and kinetic equations.
Phase dynamics modeling of parallel stacks of Josephson junctions
Rahmonov, I. R.; Shukrinov, Yu. M.
2014-11-01
The phase dynamics of two parallel connected stacks of intrinsic Josephson junctions (JJs) in high temperature superconductors is numerically investigated. The calculations are based on the system of nonlinear differential equations obtained within the CCJJ + DC model, which allows one to determine the general current-voltage characteristic of the system, as well as each individual stack. The processes with increasing and decreasing base currents are studied. The features in the behavior of the current in each stack of the system due to the switching between the states with rotating and oscillating phases are analyzed.
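As a much-reduced illustration of this kind of phase-dynamics calculation, the following sketch integrates a single shunted Josephson junction in normalized units. The CCJJ+DC model of the paper couples many such equations across stacked junctions, so this is only a toy stand-in with illustrative parameters:

```python
import math

def simulate_rcsj(i_bias, beta_c=0.2, dt=0.01, steps=20000):
    """Forward-Euler integration of a single resistively-and-capacitively-
    shunted Josephson junction in normalized units:
        beta_c * phi'' + phi' + sin(phi) = i_bias.
    Returns the time-averaged voltage <dphi/dt>."""
    phi, v, v_sum = 0.0, 0.0, 0.0
    for _ in range(steps):
        dv = (i_bias - v - math.sin(phi)) / beta_c
        v += dv * dt
        phi += v * dt
        v_sum += v
    return v_sum / steps
```

Sweeping i_bias up and down and recording the average voltage traces out a current-voltage characteristic: below the critical current the phase locks (near-zero average voltage), while above it the junction switches to the running, finite-voltage state, the same rotating-versus-oscillating distinction analyzed in the abstract.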
PKind: A parallel k-induction based model checker
Kahsai, Temesghen; 10.4204/EPTCS.72.6
2011-01-01
PKind is a novel parallel k-induction-based model checker of invariant properties for finite- or infinite-state Lustre programs. Its architecture, which is strictly message-based, is designed to minimize synchronization delays and easily accommodate the incorporation of incremental invariant generators to enhance basic k-induction. We describe PKind's functionality and main features, and present experimental evidence that PKind significantly speeds up the verification of safety properties and, due to incremental invariant generation, also considerably increases the number of provable ones.
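The k-induction principle underlying PKind can be illustrated on an explicit finite-state system. PKind itself works symbolically on Lustre programs, so this brute-force enumeration only conveys the base-case/inductive-step structure:

```python
def k_induction(init, trans, prop, states, k):
    """Toy explicit-state k-induction.
    Base case: prop holds on all states reachable within k steps of init.
    Inductive step: any k consecutive prop-states are followed only by
    prop-states. If both hold, prop is invariant."""
    # Base case: breadth-first search to depth k from the initial states.
    frontier = set(init)
    for _ in range(k + 1):
        if any(not prop(s) for s in frontier):
            return False
        frontier = {t for s in frontier for t in trans(s)}
    # Inductive step: enumerate length-k paths of prop-states anywhere in
    # the state space, then require all their successors to satisfy prop.
    paths = [[s] for s in states if prop(s)]
    for _ in range(k - 1):
        paths = [p + [t] for p in paths for t in trans(p[-1]) if prop(t)]
    return all(prop(t) for p in paths for t in trans(p[-1]))

# Hypothetical 4-state counter with two unreachable junk states (4 and 5).
counter_states = list(range(6))
step = lambda s: [(s + 1) % 4]
```

A failed inductive step does not refute the property; it only means a larger k (or an auxiliary invariant, which is what PKind's incremental invariant generators supply) is needed.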
Experiments in Error Propagation within Hierarchal Combat Models
2015-09-01
The means and variances of Blue MTTK, Red MTTK, and P[Blue Wins] by experimental design are statistically different (Wackerly, Mendenhall III and Schaeffer 2008). Although the data are not normally distributed, the t-test is robust to non-normality (Wackerly, Mendenhall III and Schaeffer 2008). Non-normality in the predicted values is handled by transforming them with a natural logarithm (Wackerly, Mendenhall III and Schaeffer 2008).
Hierarchical Models for Batteries: Overview with Some Case Studies
Energy Technology Data Exchange (ETDEWEB)
Pannala, Sreekanth [ORNL; Mukherjee, Partha P [ORNL; Allu, Srikanth [ORNL; Nanda, Jagjit [ORNL; Martha, Surendra K [ORNL; Dudney, Nancy J [ORNL; Turner, John A [ORNL
2012-01-01
Batteries are complex multiscale systems and a hierarchy of models has been employed to study different aspects of batteries at different resolutions. For the electrochemistry and charge transport, the models span from electric circuits, single-particle, pseudo 2D, detailed 3D, and microstructure resolved at the continuum scales and various techniques such as molecular dynamics and density functional theory to resolve the atomistic structure. Similar analogies exist for the thermal, mechanical, and electrical aspects of the batteries. We have been recently working on the development of a unified formulation for the continuum scales across the electrode-electrolyte-electrode system - using a rigorous volume averaging approach typical of multiphase formulation. This formulation accounts for any spatio-temporal variation of the different properties such as electrode/void volume fractions and anisotropic conductivities. In this talk the following will be presented: The background and the hierarchy of models that need to be integrated into a battery modeling framework to carry out predictive simulations, Our recent work on the unified 3D formulation addressing the missing links in the multiscale description of the batteries, Our work on microstructure resolved simulations for diffusion processes, Upscaling of quantities of interest to construct closures for the 3D continuum description, Sample results for a standard Carbon/Spinel cell will be presented and compared to experimental data, Finally, the infrastructure we are building to bring together components with different physics operating at different resolution will be presented. The presentation will also include details about how this generalized approach can be applied to other electrochemical storage systems such as supercapacitors, Li-Air batteries, and Lithium batteries with 3D architectures.
Bello, Nora M; Steibel, Juan P; Tempelman, Robert J
2010-06-01
Bivariate mixed effects models are often used to jointly infer upon covariance matrices for both random effects (u) and residuals (e) between two different phenotypes in order to investigate the architecture of their relationship. However, these (co)variances themselves may additionally depend upon covariates as well as additional sets of exchangeable random effects that facilitate borrowing of strength across a large number of clusters. We propose a hierarchical Bayesian extension of the classical bivariate mixed effects model by embedding additional levels of mixed effects modeling of reparameterizations of u-level and e-level (co)variances between two traits. These parameters are based upon a recently popularized square-root-free Cholesky decomposition and are readily interpretable, each conveniently facilitating a generalized linear model characterization. Using Markov Chain Monte Carlo methods, we validate our model based on a simulation study and apply it to a joint analysis of milk yield and calving interval phenotypes in Michigan dairy cows. This analysis indicates that the e-level relationship between the two traits is highly heterogeneous across herds and depends upon systematic herd management factors.
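The square-root-free Cholesky reparameterization the abstract refers to can be illustrated for the bivariate case. This is a generic sketch of the decomposition, not the authors' code, and the symbol names (sigma, phi, d1, d2) are ours:

```python
import numpy as np

def ldl_2x2(sigma):
    """Square-root-free (LDL') Cholesky: Sigma = L @ D @ L.T with
    unit lower-triangular L and diagonal D."""
    d1 = sigma[0, 0]                 # variance of trait 1
    phi = sigma[1, 0] / sigma[0, 0]  # regression of trait 2 on trait 1
    d2 = sigma[1, 1] - phi**2 * d1   # innovation (residual) variance of trait 2
    L = np.array([[1.0, 0.0], [phi, 1.0]])
    D = np.diag([d1, d2])
    return L, D

sigma = np.array([[4.0, 1.2],
                  [1.2, 2.0]])       # illustrative bivariate covariance
L, D = ldl_2x2(sigma)
assert np.allclose(L @ D @ L.T, sigma)
```

The off-diagonal element of L acts like a regression coefficient of the second trait on the first, which is what makes a generalized linear model characterization of the (co)variance parameters natural.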
Hierarchical Model Predictive Control for Sustainable Building Automation
Directory of Open Access Journals (Sweden)
Barbara Mayer
2017-02-01
A hierarchical model predictive controller (HMPC) is proposed for flexible and sustainable building automation. The implications of a building automation system for sustainability are defined, and model predictive control is introduced as an ideal tool to cover all requirements. The HMPC is presented as a development suitable for the optimization of modern buildings as well as for retrofitting. The performance and flexibility of the HMPC are demonstrated by simulation studies of a modern office building, and its interaction with future smart grids is shown.
MacCann, Carolyn; Joseph, Dana L; Newman, Daniel A; Roberts, Richard D
2014-04-01
This article examines the status of emotional intelligence (EI) within the structure of human cognitive abilities. To evaluate whether EI is a 2nd-stratum factor of intelligence, data were fit to a series of structural models involving 3 indicators each for fluid intelligence, crystallized intelligence, quantitative reasoning, visual processing, and broad retrieval ability, as well as 2 indicators each for emotion perception, emotion understanding, and emotion management. Unidimensional, multidimensional, hierarchical, and bifactor solutions were estimated in a sample of 688 college and community college students. Results suggest adequate fit for 2 models: (a) an oblique 8-factor model (with 5 traditional cognitive ability factors and 3 EI factors) and (b) a hierarchical solution (with cognitive g at the highest level and EI representing a 2nd-stratum factor that loads onto g at λ = .80). The acceptable relative fit of the hierarchical model confirms the notion that EI is a group factor of cognitive ability, marking the expression of intelligence in the emotion domain. The discussion proposes a possible expansion of Cattell-Horn-Carroll theory to include EI as a 2nd-stratum factor of similar standing to factors such as fluid intelligence and visual processing.
Aging through hierarchical coalescence in the East model
Faggionato, A; Roberto, C; Toninelli, C
2010-01-01
We rigorously analyze the low temperature non-equilibrium dynamics of the East model, a special example of a one dimensional oriented kinetically constrained particle model, when the initial distribution is different from the reversible one and for times much smaller than the global relaxation time. This setting has been intensively studied in the physics literature to analyze the slow dynamics which follows a sudden quench from the liquid to the glass phase. In the limit of zero temperature (i.e. a vanishing density of vacancies) and for initial distributions such that the vacancies form a renewal process we prove that the density of vacancies, the persistence function and the two-time autocorrelation function behave as staircase functions with several plateaux. Furthermore the two-time autocorrelation function displays an aging behavior. We also provide a sharp description of the statistics of the domain length as a function of time, a domain being the interval between two consecutive vacancies. When the in...
Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.
Directory of Open Access Journals (Sweden)
Gregor Moenke
Important biological processes like cell signalling and gene expression have noisy components and are at the same time very complex. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011) which is formulated in terms of observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics as functions of cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.
Parallel tempering and 3D spin glass models
Papakonstantinou, T.; Malakis, A.
2014-03-01
We review parallel tempering schemes and examine their main ingredients for accuracy and efficiency. We discuss two selection methods of temperatures and some alternatives for the exchange of replicas, including all-pair exchange methods. We measure specific heat errors and round-trip efficiency using the two-dimensional (2D) Ising model, and also test the efficiency for the ground state production in 3D spin glass models. We find that the optimization of the GS problem is highly influenced by the choice of the temperature range of the PT process. Finally, we present numerical evidence concerning the universality aspects of an anisotropic case of the 3D spin-glass model.
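As a point of reference for the schemes reviewed above, a minimal parallel tempering (replica exchange) sketch on a small 2D Ising lattice looks like the following; the lattice size, temperature grid, and sweep counts are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(s):
    # Nearest-neighbour Ising energy with periodic boundaries, J = 1.
    return -(np.sum(s * np.roll(s, 1, axis=0)) + np.sum(s * np.roll(s, 1, axis=1)))

def sweep(s, beta):
    # One Metropolis sweep: L*L single-spin-flip attempts.
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        dE = 2 * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j]
                            + s[i, (j+1) % L] + s[i, (j-1) % L])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

L = 8
betas = np.linspace(0.2, 0.6, 6)                  # inverse temperatures
reps = [rng.choice([-1, 1], size=(L, L)) for _ in betas]

for step in range(200):
    for s, b in zip(reps, betas):
        sweep(s, b)
    k = step % (len(betas) - 1)                   # alternate adjacent pairs
    dB = betas[k] - betas[k + 1]
    dE = energy(reps[k]) - energy(reps[k + 1])
    if rng.random() < min(1.0, np.exp(dB * dE)):  # replica-exchange acceptance
        reps[k], reps[k + 1] = reps[k + 1], reps[k]

print([energy(s) for s in reps])
```

The exchange step here swaps only adjacent temperatures; the all-pair exchange variants the review discusses generalize exactly this acceptance rule to arbitrary replica pairs.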
The parallel network dynamic DEA model with interval data
Directory of Open Access Journals (Sweden)
S. Keikha-Javan
2014-09-01
In original DEA models, precise data are used to measure relative efficiency, whereas in reality we do not always deal with precise data; when the data are imprecise, the resulting efficiency measures are expected to be imprecise as well. In this article, we apply the parallel network dynamic DEA model to imprecise data, in which the carry-overs among periods are classified as desirable and undesirable. Upper and lower efficiency bounds are then obtained for the overall, periodical, and divisional efficiencies, computed with respect to the subunits of the DMU under evaluation. Finally, applying this model to a data set of branches of several banks in Iran, we compute the efficiency intervals.
Methods to model-check parallel systems software.
Energy Technology Data Exchange (ETDEWEB)
Matlin, O. S.; McCune, W.; Lusk, E.
2003-12-15
We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD.
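The traditional state space exploration that both SPIN and the OTTER-based approach perform can be sketched as explicit-state breadth-first search over a transition system; the toy two-process lock protocol below is an illustrative stand-in, not MPD itself:

```python
from collections import deque

# Each process cycles idle -> wait -> crit and may enter 'crit'
# only when the shared lock is free (None).
def step(state):
    procs, lock = state
    for i in (0, 1):
        p = list(procs)
        if p[i] == 'idle':
            p[i] = 'wait'
            yield (tuple(p), lock)
        elif p[i] == 'wait' and lock is None:
            p[i] = 'crit'
            yield (tuple(p), i)          # process i takes the lock
        elif p[i] == 'crit':
            p[i] = 'idle'
            yield (tuple(p), None)       # release the lock

init = (('idle', 'idle'), None)
seen, queue = {init}, deque([init])
while queue:
    s = queue.popleft()
    # Safety property checked on every reachable state:
    assert s[0] != ('crit', 'crit'), 'mutual exclusion violated'
    for t in step(s):
        if t not in seen:
            seen.add(t)
            queue.append(t)

print(len(seen))                         # number of reachable states explored
```

A model checker such as SPIN performs essentially this search (with far better state compression) and reports any state where the asserted invariant fails.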
[Determinants of malnutrition in a low-income population: hierarchical analytical model].
Olinto, M T; Victora, C G; Barros, F C; Tomasi, E
1993-01-01
To investigate the determinants of malnutrition among low-income children, the effects of socioeconomic, environmental, reproductive, morbidity, child care, birthweight and breastfeeding variables on stunting and wasting were studied. All 354 children below two years of age living in two urban slum areas of Pelotas, southern Brazil, were included. The multivariate analyses took into account the hierarchical structure of the risk factors for each type of deficit. Variables selected as significant on a given level of the model were considered as risk factors, even if their statistical significance was subsequently lost when hierarchically inferior variables were included. The final model for stunting included the variables education and presence of the father, maternal education and employment, birthweight and age. For wasting, the variables selected were the number of household appliances, birth interval, housing conditions, borough, birthweight, age, gender and previous hospitalizations.
Cerrolaza, Juan J; Villanueva, Arantxa; Cabeza, Rafael
2012-03-01
The accurate segmentation of subcortical brain structures in magnetic resonance (MR) images is of crucial importance in the interdisciplinary field of medical imaging. Although statistical approaches such as active shape models (ASMs) have proven to be particularly useful in the modeling of multiobject shapes, they are inefficient when facing challenging problems. Based on the wavelet transform, the fully generic multiresolution framework presented in this paper allows us to decompose the interobject relationships into different levels of detail. The aim of this hierarchical decomposition is twofold: to efficiently characterize the relationships between objects and their particular localities. Experiments performed on an eight-object structure defined in axial cross-sectional MR brain images show that the new hierarchical segmentation significantly improves the accuracy of the segmentation, while exhibiting remarkable robustness with respect to the size of the training set.
Noma, Hisashi; Matsui, Shigeyuki
2013-05-20
The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
On hierarchical models for visual recognition and learning of objects, scenes, and activities
Spehr, Jens
2015-01-01
In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore inference approaches for fast and robust detection are presented. These new approaches combine the idea of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition the book shows the use for detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...
Heuristics for Hierarchical Partitioning with Application to Model Checking
DEFF Research Database (Denmark)
Möller, Michael Oliver; Alur, Rajeev
2001-01-01
We define a cost measure that captures the quality of a structure relative to the connections and favors shallow structures with a low degree of branching. Finding a structure with minimal cost is NP-complete. We present a greedy polynomial-time algorithm that approximates good solutions incrementally by local evaluation of a heuristic function. We argue for a heuristic function based on four criteria: the number of enclosed connections, the number of components, the number of touched connections and the depth of the structure. We report on an application in the context of formal verification, where our algorithm serves as a preprocessor for a temporal scaling technique, called the "Next" heuristic [2]. The latter is applicable in reachability analysis and is included in a recent version of the Mocha model checking tool. We demonstrate performance and benefits of our method and use an asynchronous parity computer and an opinion poll protocol as case studies.
A hierarchical lattice spring model to simulate the mechanics of 2-D materials-based composites
Directory of Open Access Journals (Sweden)
Lucas Brely
2015-07-01
In the field of engineering materials, strength and toughness are typically two mutually exclusive properties. Structural biological materials such as bone, tendon or dentin have resolved this conflict and show unprecedented damage tolerance, toughness and strength levels. The common feature of these materials is their hierarchical heterogeneous structure, which contributes to increased energy dissipation, occurring at different scale levels, before failure. These structural properties are the key to exceptional bioinspired material mechanical properties, in particular for nanocomposites. Here, we develop a numerical model in order to simulate the mechanisms involved in damage progression and energy dissipation at different size scales in nano- and macro-composites, which depend both on the heterogeneity of the material and on the type of hierarchical structure. Both these aspects have been incorporated into a 2-dimensional model based on a Lattice Spring Model, accounting for geometrical nonlinearities and including statistically based fracture phenomena. The model has been validated by comparing numerical results to continuum and fracture mechanics results as well as finite element simulations, and then employed to study how structural aspects impact hierarchical composite material properties. Results obtained with the numerical code highlight the dependence of stress distributions on matrix properties and on reinforcement dispersion, geometry and properties, and show how failure of sacrificial elements is directly involved in the damage tolerance of the material. Thanks to the rapidly developing field of nanocomposite manufacture, it is already possible to artificially create materials with multi-scale hierarchical reinforcements. The developed code could be a valuable support in the design and optimization of these advanced materials, drawing inspiration from and going beyond biological materials with exceptional mechanical properties.
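The statistically based fracture ingredient can be illustrated with a much simpler cousin of the lattice spring model, an equal-load-sharing fiber bundle with random strength thresholds; this is a hedged toy sketch (distribution and sample size are ours), not the authors' 2-D code:

```python
import numpy as np

rng = np.random.default_rng(1)
thresholds = rng.uniform(0.5, 1.5, size=1000)   # random element strengths

def peak_load(thresholds):
    """Maximum load per fiber the bundle sustains before collapse."""
    t = np.sort(thresholds)
    n = len(t)
    # At common stretch t[i], the i weakest (sacrificial) fibers have
    # already failed, so (n - i) survivors each carry a load of t[i].
    loads = t * (n - np.arange(n))
    return loads.max() / n

print(peak_load(thresholds))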
Joint hierarchical models for sparsely sampled high-dimensional LiDAR and forest variables
Finley, Andrew O.; Banerjee, Sudipto; Zhou, Yuzhen; Cook, Bruce D; Babcock, Chad
2016-01-01
Recent advancements in remote sensing technology, specifically Light Detection and Ranging (LiDAR) sensors, provide the data needed to quantify forest characteristics at a fine spatial resolution over large geographic domains. From an inferential standpoint, there is interest in prediction and interpolation of the often sparsely sampled and spatially misaligned LiDAR signals and forest variables. We propose a fully process-based Bayesian hierarchical model for above ground biomass (AGB) and L...
A Hierarchical Slicing Tool Model
Institute of Scientific and Technical Information of China (English)
谭毅; 朱平; 李必信; 郑国梁
2001-01-01
Most traditional slicing methods are based on dependence graphs, but constructing a dependence graph directly for object-oriented programs is very complicated. This paper describes the design and implementation of a hierarchical slicing tool model. By constructing the package-level, class-level, method-level, and statement-level dependence graphs, the package-level slice, class-level slice, method-level slice, and program slice are obtained step by step.
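At each level, the slicing described above ultimately reduces to reachability over a dependence graph; a minimal backward-slice sketch, with a hypothetical dependence graph for illustration, is:

```python
# statement -> statements it (data/control) depends on; edges are illustrative
deps = {
    2: [1],
    3: [1],
    4: [2, 3],
    5: [4],
}

def backward_slice(criterion):
    """All statements the slicing criterion transitively depends on."""
    seen, stack = set(), [criterion]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(deps.get(n, []))
    return sorted(seen)

print(backward_slice(4))   # -> [1, 2, 3, 4]
```

The hierarchical approach applies this same reachability computation first on coarse (package- and class-level) graphs, then refines it on method- and statement-level graphs.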
Jansen, P.G.W.
2003-01-01
Using hierarchical linear modeling the author investigated temporal trends in the predictive validity of an assessment center for career advancement (measured as salary growth) over a 13-year period, for a sample of 456 academic graduates. Using year of entry and tenure as controls, linear and quadratic properties of individual salary curves could be predicted by the assessment center dimensions. The validity of the (clinical) overall assessment rating for persons with tenure of at least 12 y...
Julia sets and complex singularities in diamond-like hierarchical Potts models
Institute of Scientific and Technical Information of China (English)
QIAO; Jianyong
2005-01-01
We study the phase transition of the Potts model on diamond-like hierarchical lattices. It is shown that the set of complex singularities is the Julia set of a rational mapping. An interesting problem is how these singularities are continued to the complex plane. In this paper, by the method of complex dynamics, we give a complete description of the connectivity of the set of complex singularities.
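The connection between phase-transition singularities and Julia sets can be made concrete by iterating a renormalization map. The rational map below is the commonly cited diamond-lattice Potts form, used here as an illustrative assumption (consult the paper for the exact mapping); the Julia set is where iterates fail to settle into a basin:

```python
# Assumed illustrative renormalization map for the q-state Potts model on a
# diamond hierarchical lattice: z -> ((z^2 + q - 1) / (2z + q - 2))^2.
def renorm(z, q=3.0):
    return ((z * z + q - 1.0) / (2.0 * z + q - 2.0)) ** 2

def iterate(z, q=3.0, n=50):
    for _ in range(n):
        if abs(z) > 1e6:          # escaped toward the attracting point at infinity
            return float('inf')
        z = renorm(z, q)
    return z

# z = 1 (the trivial, infinite-temperature point) is a superattracting fixed
# point, since renorm(1) = (q / q)^2 = 1; large z escapes to infinity.
# The Julia set of the map separates these basins.
print(iterate(1.1), iterate(5.0))
```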
Chung-Chang Lee
2009-01-01
This paper uses hierarchical linear modeling (HLM) to explore the influence of satisfaction with public facilities on both individual residential and overall (or regional) levels on housing prices. The empirical results indicate that the average housing prices between local cities and counties exhibit significant variance. At the macro level, the explanatory power of the variable "convenience of life" on the average housing prices of all counties and cities reaches the 5% significance level...
Guo,Qiang; Rajewski, Daniel; Takle, Eugene; Ganapathysubramanian, Baskar
2016-01-01
Current wind turbine simulations successfully use turbulence-generating tools for modeling behavior. However, they lack the ability to reproduce variabilities in wind dynamics and inherent stochastic structures (like temporal and spatial coherences, sporadic bursts, and high-shear regions). This necessitates a more realistic parameterization of the wind that encodes location, topography, diurnal, seasonal, and stochastic effects. In this work, we develop a hierarchical temporal and spatial deco...
Cluster based hierarchical resource searching model in P2P network
Institute of Scientific and Technical Information of China (English)
Yang Ruijuan; Liu Jian; Tian Jingwen
2007-01-01
For the problem of the large network load generated by the Gnutella resource-searching model in Peer-to-Peer (P2P) networks, an improved model to decrease the network expense is proposed, which establishes clusters in the P2P network, auto-organizes logical layers, and applies a hybrid mechanism of directional searching and flooding. The performance analysis and simulation results show that the proposed hierarchical searching model effectively reduces the generated message load and that its searching-response time is about as good as that of the Gnutella model.
Hsieh, W. C.; Saravanan, R.; Chang, P.; Mahajan, S.
2014-12-01
In this study, we use a hierarchical modeling approach to investigate the influence of tropical air-sea feedbacks on climate impacts of aerosols in the Community Earth System Model (CESM). We construct four different models by coupling the atmospheric component of CESM, the Community Atmospheric Model (CAM), to four different ocean models: (i) the Data Ocean Model (DOM; prescribed SST), (ii) the Slab Ocean Model (SOM; thermodynamic coupling), (iii) the Reduced Gravity Ocean Model (RGOM; dynamic coupling), and (iv) the Parallel Ocean Program (POP; full ocean model). These four models represent a progressively increasing degree of coupling between the atmosphere and the ocean. The RGOM model, in particular, is tuned to produce a good simulation of ENSO and the associated tropical air-sea interaction, without being impacted by the climate drifts exhibited by fully coupled GCMs. For each method of coupling, a pair of numerical experiments, with present-day (year 2000) and preindustrial (year 1850) sulfate aerosol loading, was carried out. Our results indicate that the inclusion of air-sea interaction has large impacts on the spatial structure of the climate response induced by aerosols. In response to sulfate aerosol forcing, the ITCZ shifts southwards as a result of the anomalous clockwise MMC change, which transports moisture southwards across the Equator. We present analyses of the regional response to sulfate aerosol forcing in the equatorial Pacific as well as the zonally averaged response. The decomposition of the change in the net surface energy flux shows that the most dominant terms are the net shortwave radiative flux at the surface and the latent heat flux. Further analyses show that all ocean model simulations produce a positive change of northward atmospheric energy transport across the Equator in response to the perturbed radiative sulfate forcing. This positive northward atmospheric energy transport change plays a role in partially compensating the cooling caused by sulfate aerosols.
Energy consumption model over parallel programs implemented on multicore architectures
Directory of Open Access Journals (Sweden)
Ricardo Isidro-Ramirez
2015-06-01
In High Performance Computing, energy consumption is becoming an important aspect to consider. Because of the high costs of energy production in all countries, there is a strong incentive to find ways to save energy, reflected in efforts to reduce the energy requirements of hardware components and applications. Several options have appeared for scaling down energy use and, consequently, scaling up energy efficiency. One of these strategies is the multithreaded programming paradigm, whose purpose is to produce parallel programs able to use the full amount of computing resources available in a microprocessor. That energy-saving strategy focuses on the efficient use of the multicore processors found in various computing devices, such as mobile devices. Indeed, as a growing trend, multicore processors have been part of various special-purpose computers since 2003, from High Performance Computing servers to mobile devices. However, it is not clear how multiprogramming affects energy efficiency. This paper presents an analysis of different types of multicore-based architectures used in computing and then proposes a model. Based on Amdahl's Law, the model considers different scenarios of energy use in multicore architectures. Interesting results were found in experiments with the developed algorithm, which was executed in both parallel and sequential ways: a lower limit of energy consumption was found for one type of multicore architecture, and this behavior was observed experimentally.
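An Amdahl's-law-based energy model of the kind the abstract describes can be sketched as follows; the power figures and the serial/parallel split are illustrative assumptions, not the paper's measurements:

```python
def speedup(p, n):
    """Amdahl's law: p = parallelizable fraction, n = cores."""
    return 1.0 / ((1.0 - p) + p / n)

def energy(p, n, t1=1.0, p_active=2.0, p_idle=0.5):
    """Energy (joules) for a job taking t1 seconds on one core.
    Assumption: during the serial phase 1 core is active and n-1 idle;
    during the parallel phase all n cores are active. Power values in
    watts are illustrative."""
    t_serial = (1.0 - p) * t1
    t_parallel = (p * t1) / n
    e_serial = t_serial * (p_active + (n - 1) * p_idle)
    e_parallel = t_parallel * n * p_active
    return e_serial + e_parallel

for n in (1, 2, 4, 8):
    print(n, round(speedup(0.9, n), 2), round(energy(0.9, n), 2))
```

In this toy model the parallel-phase energy is independent of the core count while idle cores inflate the serial-phase energy, illustrating qualitatively how a lower bound on energy consumption can arise for a given architecture.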
The Case for A Hierarchal System Model for Linux Clusters
Energy Technology Data Exchange (ETDEWEB)
Seager, M; Gorda, B
2009-06-05
The computer industry today is no longer driven, as it was in the 40s, 50s and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership-class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (on the order of tens of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which are approximately doubling with every generation, are funded by investments from multiple markets: enterprise, desktops, games, embedded and specialized devices. Over and above these base technology investments are investments for critical technology elements such as microprocessor, chipset and memory ASIC components. Investments for these components are spread across the same markets as the base semiconductor process investments. These second-tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier 3 technology elements include networking (SAN, WAN and LAN), interconnects and large scalable SMP designs. Above these, in tier 4, are the relatively small investments necessary to build very large, scalable, high-end or Leadership-class systems. Primary among these are the specialized network designs of vertically integrated systems.
Institute of Scientific and Technical Information of China (English)
WU; Jianhua; WANG; Zhaohui
2009-01-01
Digital libraries are complex systems, and this brings difficulties for their evaluation. This paper proposes a hierarchical model to solve this problem and puts the entangled matters into a clearly layered structure. Firstly, digital libraries (hereafter DLs) are classified into 5 groups in ascending gradations by their scope of operation, i.e. mini DLs, small DLs, medium DLs, large DLs, and huge DLs. Then, according to the characteristics of DLs at different operational scopes and levels of sophistication, they are further grouped into unitary DLs, union DLs and hybrid DLs. Based on this structure, a hierarchical model for digital library evaluation is introduced, which evaluates DLs differentially within a hierarchical scheme by using varying criteria based on their specific level of operational complexity, i.e. at the micro level, medium level, and/or macro level. Based on our careful examination and analysis of the current literature on DL evaluation systems, an experiment is conducted using the DL evaluation model along with its criteria for unitary DLs at the micro level. The main findings of this evaluation experiment, as well as the evaluation indicators and relevant issues of major concern for DLs at the medium and macro levels, are presented at some length.
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Energy Technology Data Exchange (ETDEWEB)
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.
Parallel multiscale modeling of biopolymer dynamics with hydrodynamic correlations
Fyta, Maria; Kaxiras, Efthimios; Melchionna, Simone; Bernaschi, Massimo; Succi, Sauro
2007-01-01
We employ a multiscale approach to model the translocation of biopolymers through nanometer size pores. Our computational scheme combines microscopic Molecular Dynamics (MD) with a mesoscopic Lattice Boltzmann (LB) method for the solvent dynamics, explicitly taking into account the interactions of the molecule with the surrounding fluid. We describe an efficient parallel implementation of the method which exhibits excellent scalability on the Blue Gene platform. We investigate both dynamical and statistical aspects of the translocation process by simulating polymers of various initial configurations and lengths. For a representative molecule size, we explore the effects of important parameters that enter in the simulation, paying particular attention to the strength of the molecule-solvent coupling and of the external electric field which drives the translocation process. Finally, we explore the connection between the generic polymers modeled in the simulation and DNA, for which interesting recent experimenta...
Applying the Extended Parallel Process Model to workplace safety messages.
Basil, Michael; Basil, Debra; Deshpande, Sameer; Lavack, Anne M
2013-01-01
The extended parallel process model (EPPM) proposes fear appeals are most effective when they combine threat and efficacy. Three studies conducted in the workplace safety context examine the use of various EPPM factors and their effects, especially multiplicative effects. Study 1 was a content analysis examining the use of EPPM factors in actual workplace safety messages. Study 2 experimentally tested these messages with 212 construction trainees. Study 3 replicated this experiment with 1,802 men across four English-speaking countries-Australia, Canada, the United Kingdom, and the United States. The results of these three studies (1) demonstrate the inconsistent use of EPPM components in real-world work safety communications, (2) support the necessity of self-efficacy for the effective use of threat, (3) show a multiplicative effect where communication effectiveness is maximized when all model components are present (severity, susceptibility, and efficacy), and (4) validate these findings with gory appeals across four English-speaking countries.
Parallel Application Development Using Architecture View Driven Model Transformations
Arkin, E.; Tekinerdogan, B.
2015-01-01
To realize the increased need for computing performance, the current trend is towards applying parallel computing, in which tasks are run in parallel on multiple nodes. In turn, we can observe a rapid increase in the scale of parallel computing platforms. This situation has led to a complexity...
Anderson, Daniel
2012-01-01
This manuscript provides an overview of hierarchical linear modeling (HLM), as part of a series of papers covering topics relevant to consumers of educational research. HLM is tremendously flexible, allowing researchers to specify relations across multiple "levels" of the educational system (e.g., students, classrooms, schools, etc.).
Hou, Fujun
2016-01-01
This paper provides a description of how market competitiveness evaluations concerning mechanical equipment can be made in the context of multi-criteria decision environments. It is assumed that, when we are evaluating market competitiveness, there is a limited number of candidates with some required qualifications, and that the alternatives will be pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. Illustration and comparison show that the PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper when applying PCbHDM.
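The abstract does not spell out PCbHDM's aggregation rule. As a hedged sketch of the general ingredient it relies on (ratio-scale pairwise comparisons arranged in a hierarchy), the following derives priority weights from a comparison matrix via row geometric means, one common technique for this step; the function name and example matrix are illustrative assumptions, not the paper's method.

```python
import math

def priorities(matrix):
    # Row geometric means, normalized to sum to 1 -- one common way
    # to turn ratio-scale pairwise comparisons into priority weights.
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    s = sum(gm)
    return [g / s for g in gm]

# Hypothetical example: three candidate machines compared pairwise on
# one competitiveness criterion (A[i][j] = "i is A[i][j] times better
# than j"); this particular matrix is perfectly consistent.
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = priorities(A)   # -> [4/7, 2/7, 1/7]
```

In a full hierarchy, criterion weights obtained the same way would multiply these candidate scores and be summed across criteria.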
DEFF Research Database (Denmark)
Mishnaevsky, Leon; Dai, Gaoming
2014-01-01
Hybrid and hierarchical polymer composites represent a promising group of materials for engineering applications. In this paper, computational studies of the strength and damage resistance of hybrid and hierarchical composites are reviewed. The reserves of the composite improvement are explored...... by using computational micromechanical models. It is shown that while glass/carbon fibers hybrid composites clearly demonstrate higher stiffness and lower weight with increasing the carbon content, they can have lower strength as compared with usual glass fiber polymer composites. Secondary...... nanoreinforcement can drastically increase the fatigue lifetime of composites. Especially, composites with the nanoplatelets localized in the fiber/matrix interface layer (fiber sizing) ensure much higher fatigue lifetime than those with the nanoplatelets in the matrix....
Xu, Lizhen; Paterson, Andrew D; Xu, Wei
2017-04-01
Motivated by the multivariate nature of microbiome data with hierarchical taxonomic clusters, counts that are often skewed and zero inflated, and repeated measures, we propose a Bayesian latent variable methodology to jointly model multiple operational taxonomic units within a single taxonomic cluster. This novel method can incorporate both negative binomial and zero-inflated negative binomial responses, and can account for serial and familial correlations. We develop a Markov chain Monte Carlo algorithm that is built on a data augmentation scheme using Pólya-Gamma random variables. Hierarchical centering and parameter expansion techniques are also used to improve the convergence of the Markov chain. We evaluate the performance of our proposed method through extensive simulations. We also apply our method to a human microbiome study.
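To make the response distribution above concrete, here is a minimal stdlib-only sampler for zero-inflated negative binomial counts via the standard Gamma-Poisson mixture. This illustrates only the sampling distribution the paper models, not its Pólya-Gamma MCMC machinery, and all parameter values are assumptions for the example.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small means here.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def zinb(mu, r, pi, rng):
    """Zero-inflated negative binomial draw: a structural zero with
    probability pi, otherwise a Gamma-Poisson (negative binomial)
    count with mean mu and dispersion r."""
    if rng.random() < pi:
        return 0
    lam = rng.gammavariate(r, mu / r)   # E[lam] = mu, shape r
    return poisson(lam, rng)

rng = random.Random(1)
counts = [zinb(mu=5.0, r=2.0, pi=0.3, rng=rng) for _ in range(5000)]
# Overall mean is (1 - pi) * mu = 3.5, and the zero fraction exceeds
# what the negative binomial alone would produce.
```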
Li, Ben; Li, Yunxiao; Qin, Zhaohui S
2017-06-01
Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature: the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent characteristic of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available and should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrate the superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
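The shrinkage effect and its possible over-correction can be seen in a toy empirical-Bayes calculation (a generic illustration, not the paper's adaptive method): with only three observations per feature, every feature mean is pulled toward the grand mean, including a genuinely extreme feature.

```python
import random
import statistics

# Toy empirical-Bayes shrinkage (generic illustration, not adaptiveHM).
# 20 features, 3 observations each: the "large p, small n" regime.
# The last feature is a genuine outlier, which shrinkage over-corrects.
rng = random.Random(0)
true_means = [0.0] * 19 + [5.0]
obs = [[m + rng.gauss(0.0, 1.0) for _ in range(3)] for m in true_means]

raw = [statistics.fmean(o) for o in obs]      # per-feature means
grand = statistics.fmean(raw)
var_between = statistics.pvariance(raw)
var_noise = 1.0 / 3                           # sigma^2 / n, sigma known
b = min(1.0, var_noise / var_between)         # shrinkage factor
shrunk = [grand + (1.0 - b) * (r - grand) for r in raw]
# Every estimate, including the true outlier, moves toward `grand`.
```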
Gas turbine engine prognostics using Bayesian hierarchical models: A variational approach
Zaidan, Martha A.; Mills, Andrew R.; Harrison, Robert F.; Fleming, Peter J.
2016-03-01
Prognostics is an emerging requirement of modern health monitoring that aims to increase the fidelity of failure-time predictions by the appropriate use of sensory and reliability information. In the aerospace industry it is a key technology to reduce life-cycle costs, improve reliability and asset availability for a diverse fleet of gas turbine engines. In this work, a Bayesian hierarchical model is selected to utilise fleet data from multiple assets to perform probabilistic estimation of remaining useful life (RUL) for civil aerospace gas turbine engines. The hierarchical formulation allows Bayesian updates of an individual predictive model to be made, based upon data received asynchronously from a fleet of assets with different in-service lives and for the entry of new assets into the fleet. In this paper, variational inference is applied to the hierarchical formulation to overcome the computational and convergence concerns that are raised by the numerical sampling techniques needed for inference in the original formulation. The algorithm is tested on synthetic data, where the quality of approximation is shown to be satisfactory with respect to prediction performance, computational speed, and ease of use. A case study of in-service gas turbine engine data demonstrates the value of integrating fleet data for accurately predicting degradation trajectories of assets.
Critical behavior of Gaussian model on diamond-type hierarchical lattices
Institute of Scientific and Technical Information of China (English)
KONG Xiangmu; LI Song
1999-01-01
It is proposed that the Gaussian-type distribution constant bqi in the Gaussian model depends on the coordination number qi of site i, and that the relation bqi/bqj = qi/qj holds among the bqi's. The Gaussian model is then studied on a family of diamond-type hierarchical (or DH) lattices by the decimation real-space renormalization-group method combined with spin rescaling. It is found that the magnetic property of the Gaussian model belongs to the same universality class, and that the critical point K* and the critical exponent ν are given by K* = bqi/qi and ν = 1/2, respectively.
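The stated relations can be written compactly; note that the ratio condition makes $b_{q_i}/q_i$ the same for every site, which is what allows a single well-defined critical point:

```latex
\frac{b_{q_i}}{b_{q_j}} = \frac{q_i}{q_j}
\;\Longrightarrow\;
\frac{b_{q_i}}{q_i} = \text{const} \equiv K^{*},
\qquad \nu = \tfrac{1}{2}.
```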
Hierarchical Colored Timed Petri Nets for Maintenance Process Modeling of Civil Aircraft
Institute of Scientific and Technical Information of China (English)
FU Cheng-cheng; SUN You-chao; LU Zhong
2008-01-01
A simulation model of the civil aircraft maintenance process is an effective tool for analyzing the maintainability of a civil aircraft. First, we present hierarchical colored timed Petri nets for maintenance process modeling of civil aircraft. Then we describe a general method for decomposing civil aircraft maintenance activities, determine the maintenance levels for decomposition, and propose Petri-net-based methods for describing the logical relations between maintenance activities. Finally, multi-level timed colored Petri net modeling and simulation procedures and steps are given for a maintenance example: a burst landing-gear tire on a certain type of aircraft. The feasibility of the method is demonstrated by the example.
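As a minimal illustration of the token-firing core that colored timed Petri nets build on, the following untimed, uncolored sketch steps a toy maintenance sequence through three transitions. Place and transition names are illustrative assumptions, not taken from the paper.

```python
# Minimal (untimed, uncolored) Petri-net sketch of a burst-tire
# maintenance sequence; the paper's model adds colors and timing on
# top of this token-firing core. Names are illustrative only.
marking = {"tire_burst": 1, "wheel_removed": 0, "tire_replaced": 0, "done": 0}

transitions = {
    "remove_wheel": ({"tire_burst": 1},    {"wheel_removed": 1}),
    "replace_tire": ({"wheel_removed": 1}, {"tire_replaced": 1}),
    "reinstall":    ({"tire_replaced": 1}, {"done": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    # Fire a transition: consume input tokens, produce output tokens.
    assert enabled(name), f"{name} not enabled"
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

for t in ["remove_wheel", "replace_tire", "reinstall"]:
    fire(t)
```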
Tarmo: A Framework for Parallelized Bounded Model Checking
Wieringa, Siert; Heljanko, Keijo; 10.4204/EPTCS.14.5
2009-01-01
This paper investigates approaches to parallelizing Bounded Model Checking (BMC) for shared memory environments as well as for clusters of workstations. We present a generic framework for parallelized BMC named Tarmo. Our framework can be used with any incremental SAT encoding for BMC but for the results in this paper we use only the current state-of-the-art encoding for full PLTL. Using this encoding allows us to check both safety and liveness properties, contrary to an earlier work on distributing BMC that is limited to safety properties only. Despite our focus on BMC after it has been translated to SAT, existing distributed SAT solvers are not well suited for our application. This is because solving a BMC problem is not solving a set of independent SAT instances but rather involves solving multiple related SAT instances, encoded incrementally, where the satisfiability of each instance corresponds to the existence of a counterexample of a specific length. Our framework includes a generic architecture for a ...
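The incremental-bound structure described above can be illustrated with an explicit-state toy: for k = 0, 1, 2, ... ask whether a bad state is reachable within k steps, so the first bound that succeeds is the counterexample length. Real BMC does this symbolically via incrementally encoded SAT instances; this enumeration is only a sketch of the bound-by-bound idea, and the example system is hypothetical.

```python
def bmc(init, step, actions, bad, max_bound):
    """Check bounds k = 0, 1, ..., max_bound in turn, mirroring how
    BMC solves a sequence of related, incrementally encoded instances.
    Returns the shortest counterexample length, or None."""
    frontier = {init}
    for k in range(max_bound + 1):
        if any(bad(s) for s in frontier):
            return k
        frontier = {step(s, a) for s in frontier for a in actions}
    return None

# Hypothetical system: a 3-bit counter that may stay put or increment;
# the "bad" property is reaching the value 5.
length = bmc(init=0,
             step=lambda s, a: (s + a) % 8,
             actions=(0, 1),
             bad=lambda s: s == 5,
             max_bound=10)   # shortest counterexample has 5 steps
```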
Analysis of household data on influenza epidemic with Bayesian hierarchical model.
Hsu, C Y; Yen, A M F; Chen, L S; Chen, H H
2015-03-01
Data used for modelling the household transmission of infectious diseases, such as influenza, have inherent multilevel structures and correlated properties, which make the widely used conventional infectious disease transmission models (including the Greenwood model and the Reed-Frost model) not directly applicable within the context of a household (due to the crowded domestic conditions or socioeconomic status of the household). Thus, at the household level, the effects resulting from individual-level factors, such as vaccination, may be confounded or modified in some way. We proposed a Bayesian hierarchical random-effects (random intercepts and random slopes) model in the context of the generalised linear model to capture heterogeneity and variation at the individual, generation, and household levels. It was applied to empirical surveillance data on an influenza epidemic in Taiwan. The parameters of interest were estimated by using the Markov chain Monte Carlo method in conjunction with Bayesian directed acyclic graphical models. Comparisons between models were made using the deviance information criterion. Based on the result of the random-slope Bayesian hierarchical method in the context of the Reed-Frost transmission model, the regression coefficient for the protective effect of vaccination varied statistically significantly from household to household. This heterogeneity was robust to the use of different prior distributions (including non-informative, sceptical, and enthusiastic ones). By integrating out the uncertainty of the parameters of the posterior distribution, the predictive distribution was computed to forecast the number of influenza cases, allowing for the random household effect.
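The Reed-Frost chain-binomial mechanism referenced above is easy to state in code. The following stdlib-only sketch simulates one household outbreak; the parameter values are illustrative, and no hierarchical structure is included.

```python
import random

def reed_frost(s0, i0, p, rng):
    """Chain-binomial Reed-Frost model: a susceptible escapes a
    generation with i infectives with probability (1 - p) ** i."""
    s, i = s0, i0
    history = [(s, i)]
    while i > 0:
        q = (1.0 - p) ** i                 # per-susceptible escape prob.
        new_i = sum(1 for _ in range(s) if rng.random() > q)
        s, i = s - new_i, new_i
        history.append((s, i))
    return history

# Hypothetical household of 5 (one initial case), per-contact
# infection probability 0.2; values are illustrative only.
rng = random.Random(42)
hist = reed_frost(s0=4, i0=1, p=0.2, rng=rng)
```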
Hierarchical graphs for better annotations of rule-based models of biochemical systems
Energy Technology Data Exchange (ETDEWEB)
Hu, Bin [Los Alamos National Laboratory; Hlavacek, William [Los Alamos National Laboratory
2009-01-01
In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.
DEFF Research Database (Denmark)
Kristensen, Anders Ringgaard; Søllested, Thomas Algot
2004-01-01
Several replacement models have been presented in literature. In other applicational areas like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for
Directory of Open Access Journals (Sweden)
Yufeng Zhuang
2015-01-01
This paper presents a unified singularity modeling and reconfiguration analysis of variable topologies of a class of metamorphic parallel mechanisms with parallel constraint screws. The new parallel mechanisms consist of three reconfigurable rTPS limbs that have two working phases stemming from the reconfigurable Hooke (rT) joint. While one phase has full mobility, the other supplies a constraint force to the platform. Based on these, the platform constraint screw systems show that the new metamorphic parallel mechanisms have four topologies obtained by altering the limb phases, with mobility changing among 1R2T (one rotation with two translations), 2R2T, and 3R2T, and mobility 6. Geometric conditions of the mechanism design are investigated, with some special topologies illustrated considering the limb arrangement. Following this and the actuation scheme analysis, a unified Jacobian matrix is formed using screw theory to include the change between geometric constraints and actuation constraints in the topology reconfiguration. Various singular configurations are identified by analyzing screw dependency in the Jacobian matrix. The work in this paper provides a basis for singularity-free workspace analysis and optimal design of this class of metamorphic parallel mechanisms with parallel constraint screws, which shows simple geometric constraints with potentially simple kinematics and dynamics properties.
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Energy Technology Data Exchange (ETDEWEB)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David; Thompson, Sandra E.
2016-09-17
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
2013-01-01
This paper proposes a hierarchical Bayesian framework for modeling the life cycle of marine exploited fish with a spatial perspective. The application was developed for a nursery-dependent fish species, the common sole (Solea solea), on the Eastern Channel population (Western Europe). The approach combined processes of different natures and various sources of observations within an integrated framework for life-cycle modeling: (1) outputs of an individual-based model for larval drift and surv...
Parallel programming practical aspects, models and current limitations
Tarkov, Mikhail S
2014-01-01
Parallel programming is designed for the use of parallel computer systems for solving time-consuming problems that cannot be solved on a sequential computer in a reasonable time. These problems can be divided into two classes: (1) processing large data arrays (including processing images and signals in real time), and (2) simulation of complex physical processes and chemical reactions. For each of these classes, prospective methods are designed for solving problems. For data processing, one of the most promising technologies is the use of artificial neural networks. The particle-in-cell method and cellular automata are very useful for simulation. Problems of the scalability of parallel algorithms and the transfer of existing parallel programs to future parallel computers are very acute now. An important task is to optimize the use of the equipment (including the CPU cache) of parallel computers. Along with parallelizing information processing, it is essential to ensure the processing reliability by the relevant organization ...
Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES
Directory of Open Access Journals (Sweden)
Peng Han
2014-01-01
The large-scale adoption of electric vehicles (EVs), hybrid renewable energy systems (HRESs), and increasing loads will bring significant challenges to the microgrid. A methodology to model microgrids with high EV and HRES penetrations is the key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, no previous single modelling approach is sufficient. Therefore, in this paper, a methodology named the Hierarchical Agent-based Integrated Modelling Approach (HAIMA) is proposed. With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the assessment of the proposed model and the impact of EV adoption are achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment, and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.
Requirements and Problems in Parallel Model Development at DWD
Directory of Open Access Journals (Sweden)
Ulrich Schättler
2000-01-01
Nearly 30 years after introducing its first computer model for weather forecasting, the Deutscher Wetterdienst (DWD) is developing the 4th generation of its numerical weather prediction (NWP) system. It consists of a global grid-point model (GME) based on a triangular grid and a non-hydrostatic Lokal Modell (LM). The operational demand for running this new system is immense and can only be met by parallel computers. From the experience gained in developing earlier NWP models, several new problems had to be taken into account during the design phase of the system. Most important were portability (including efficiency of the programs on several computer architectures) and ease of code maintainability. Also, the organization and administration of the work done by developers from different teams and institutions is more complex than it used to be. This paper describes the models and gives some performance results. The modular approach used for the design of the LM is explained and the effects on the development are discussed.
Dynamic modeling of Tampa Bay urban development using parallel computing
Xian, G.; Crane, M.; Steinwand, D.
2005-01-01
Urban land use and land cover has changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillshade), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model is recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires; the SLEUTH calibration process for the Tampa Bay urban-growth simulation requires only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of a decline of 58% and 80% in agriculture and forested lands, respectively.
"Let's Move" campaign: applying the extended parallel process model.
Batchelder, Alicia; Matusitz, Jonathan
2014-01-01
This article examines Michelle Obama's health campaign, "Let's Move," through the lens of the extended parallel process model (EPPM). "Let's Move" aims to reduce the childhood obesity epidemic in the United States. Developed by Kim Witte, EPPM rests on the premise that people's attitudes can be changed when fear is exploited as a factor of persuasion. Fear appeals work best (a) when a person feels a concern about the issue or situation, and (b) when he or she believes to have the capability of dealing with that issue or situation. Overall, the analysis found that "Let's Move" is based on past health campaigns that have been successful. An important element of the campaign is the use of fear appeals (as it is postulated by EPPM). For example, part of the campaign's strategies is to explain the severity of the diseases associated with obesity. By looking at the steps of EPPM, readers can also understand the strengths and weaknesses of "Let's Move."
Parallel imaging enhanced MR colonography using a phantom model.
LENUS (Irish Health Repository)
Morrin, Martina M
2008-09-01
To compare various Array Spatial and Sensitivity Encoding Technique (ASSET)-enhanced T2W SSFSE (single shot fast spin echo) and T1-weighted (T1W) 3D SPGR (spoiled gradient recalled echo) sequences for polyp detection and image quality at MR colonography (MRC) in a phantom model. Limitations of MRC using standard 3D SPGR T1W imaging include the long breath-hold required to cover the entire colon within one acquisition and the relatively low spatial resolution due to the long acquisition time. Parallel imaging using ASSET-enhanced T2W SSFSE and 3D T1W SPGR imaging results in much shorter imaging times, which allows for increased spatial resolution.
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
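Underlying HDDM is the drift-diffusion process itself. As a hedged, self-contained illustration (not HDDM's API), this simulates single trials by Euler-Maruyama steps of noisy evidence accumulation between two bounds; all parameter values are assumptions for the example.

```python
import random

def ddm_trial(v, a, z, dt=0.001, sigma=1.0, rng=random):
    """One drift-diffusion trial: evidence starts at z * a, drifts at
    rate v under Gaussian noise, and is absorbed at a (upper bound)
    or 0 (lower bound). Returns (choice, response_time in seconds)."""
    x, t = z * a, 0.0
    noise_sd = sigma * dt ** 0.5
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, noise_sd)
        t += dt
    return (1 if x >= a else 0), t

# Illustrative parameters: a positive drift favors the upper bound.
rng = random.Random(7)
trials = [ddm_trial(v=1.0, a=2.0, z=0.5, rng=rng) for _ in range(200)]
upper_rate = sum(choice for choice, _ in trials) / len(trials)
```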
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python
Directory of Open Access Journals (Sweden)
Thomas V Wiecki
2013-08-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs
Branco, N S; de Sousa, J Ricardo; Ghosh, Angsula
2008-03-01
Using a real-space renormalization-group approximation, we study the anisotropic quantum Heisenberg model on hierarchical lattices, with interactions following aperiodic sequences. Three different sequences are considered, with relevant and irrelevant fluctuations, according to the Harris-Luck criterion. The phase diagram is discussed as a function of the anisotropy parameter Δ (such that Δ=0 and 1 correspond to the isotropic Heisenberg and Ising models, respectively). We find three different types of phase diagrams, with general characteristics: the isotropic Heisenberg plane is always an invariant one (as expected by symmetry arguments) and the critical behavior of the anisotropic Heisenberg model is governed by fixed points on the Ising-model plane. Our results for the isotropic Heisenberg model show that the relevance or irrelevance of aperiodic models, when compared to their uniform counterpart, is as predicted by the Harris-Luck criterion. A low-temperature renormalization-group procedure was applied to the classical isotropic Heisenberg model in two-dimensional hierarchical lattices: the relevance criterion is obtained, again in accordance with the Harris-Luck criterion.
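The hierarchical-lattice renormalization on the Ising plane (Δ = 1) can be made concrete for the uniform b = 2 diamond lattice, where one decimation composes two bonds in series and two branches in parallel. The recursion below and its nontrivial fixed point are a standard textbook computation, included only as an illustration; it is not claimed to be the paper's exact aperiodic transformation.

```python
import math

def rg_step(K):
    """One decimation on the b = 2 diamond hierarchical lattice for
    the Ising case: two bonds in series compose via tanh K -> tanh^2 K,
    and the two parallel branches add their effective couplings."""
    return 2.0 * math.atanh(math.tanh(K) ** 2)

# The nontrivial fixed point K* separates flows to K = 0 (disorder)
# from flows to K = infinity (order); locate it by bisection.
lo, hi = 0.1, 1.5
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if rg_step(mid) < mid:
        lo = mid          # coupling flows down: below the fixed point
    else:
        hi = mid
K_star = 0.5 * (lo + hi)  # approximately 0.609
```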
Parallel Semi-Implicit Spectral Element Atmospheric Model
Fournier, A.; Thomas, S.; Loft, R.
2001-05-01
The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model (SEAM; Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000) to solve the SWE. SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves a weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. An appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicolson schemes for the nonlinear and linear terms, respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor architectures: derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) at T42 resolution. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power-3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time. An efficient SI
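The cache-blocking point can be illustrated with a small sketch: per-element derivatives are independent 8x8 matrix-vector products, so they batch naturally into one call. The differentiation matrix below is a random stand-in, not the actual Legendre collocation operator.

```python
import numpy as np

# Per-element derivative evaluation as small, cache-friendly mat-vec products.
# D stands in for an 8x8 spectral differentiation matrix on collocation
# points; u holds nodal values for many elements at once.
rng = np.random.default_rng(1)
p = 8                              # nodes per element (as in the 8x8 blocks cited)
n_elem = 1000
D = rng.standard_normal((p, p))    # placeholder for the Legendre derivative matrix
u = rng.standard_normal((n_elem, p))

# One batched call = n_elem independent 8x8 mat-vec products.
du = u @ D.T

# Same result computed element by element:
du_ref = np.stack([D @ u[e] for e in range(n_elem)])
print(np.allclose(du, du_ref))  # True
```

Because each block fits in cache, the batched form keeps the working set small regardless of the total number of elements.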
Multiphysics & Parallel Kinematics Modeling of a 3DOF MEMS Mirror
Directory of Open Access Journals (Sweden)
Mamat N.
2015-01-01
This paper presents a model for a 3-DoF electrothermally actuated micro-electro-mechanical systems (MEMS) mirror used to perform scanning for optical coherence tomography (OCT) imaging. Because the device is integrated into an OCT endoscopic probe, the optical scanner must have a small footprint for minimum invasiveness, a large and flat optical aperture for a large scanning range, and low driving voltage and power consumption for safety reasons. With a footprint of 2 mm x 2 mm, the MEMS scanner, also called a tip-tilt-piston micro-mirror, can perform two rotations around the x- and y-axes and a vertical translation along the z-axis. This work develops a complete model and experimental characterization. The modeling is divided into two parts: multiphysics characterization of the actuators and a parallel-kinematics study of the overall system. With proper experimental procedures, we are able to validate the model via the Visual Servoing Platform (ViSP). The results give a detailed overview of the performance of the mirror platform while varying the applied voltage at a stable working frequency. The paper also presents a discussion of the MEMS control system based on several scanning trajectories.
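A minimal small-angle parallel-kinematics sketch for a generic tip-tilt-piston platform: three actuator displacements determine piston and the two tilts through a 3x3 linear system. The actuator layout (three points at 120 degrees on a radius r) is a hypothetical illustration, not the paper's actual geometry or its multiphysics actuator model.

```python
import numpy as np

# Hypothetical actuator positions: three points at 120 degrees on radius r.
r = 1.0e-3  # 1 mm
angles = np.deg2rad([90.0, 210.0, 330.0])
pts = np.stack([r * np.cos(angles), r * np.sin(angles)], axis=1)

# Small-angle rigid-plate model: z_i = piston + alpha*y_i - beta*x_i,
# where alpha, beta are rotations about the x- and y-axes.
A = np.column_stack([np.ones(3), pts[:, 1], -pts[:, 0]])

def forward(piston, alpha, beta):
    """Actuator displacements from the platform pose."""
    return A @ np.array([piston, alpha, beta])

def inverse(z):
    """Platform pose (piston, alpha, beta) from three actuator displacements."""
    return np.linalg.solve(A, z)

z = forward(2e-6, 1e-3, -5e-4)   # 2 um piston plus two small tilts
print(inverse(z))                # recovers (2e-6, 1e-3, -5e-4)
```

The same linear map, inverted, is the basis of open-loop pose control for such platforms in the small-angle regime.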
A model for dealing with parallel processes in supervision
Directory of Open Access Journals (Sweden)
Lilja Cajvert
2011-03-01
Supervision in social work is essential for successful outcomes when working with clients. In social work, unconscious difficulties may arise, and similar difficulties may occur in supervision as parallel processes. In this article, the development of a practice-based model of supervision to deal with parallel processes in supervision is described. The model has six phases. In the first phase, the focus is on the supervisor's inner world, his/her own reflections and observations. In the second phase, the supervision situation is "frozen", and the supervisees are invited to join the supervisor in taking a meta-perspective on the current situation of supervision. The focus in the third phase is on the inner world of all the group members as well as the visualization and identification of reflections and feelings that arose during the supervision process. Phase four focuses on the supervisee who presented a case, and in phase five the focus shifts to the common understanding and theorization of the supervision process as well as the definition and identification of possible parallel processes. In the final phase, the supervisee, with the assistance of the supervisor and other members of the group, develops a solution and determines how to proceed with the client in treatment. This article uses phenomenological concepts to provide a theoretical framework for the supervision model. Phenomenological reduction is an important approach for examining, externalizing and visualizing the inner worlds of the supervisor and supervisees.
A hierarchical model for probabilistic independent component analysis of multi-subject fMRI studies.
Guo, Ying; Tang, Li
2013-12-01
An important goal in fMRI studies is to decompose the observed series of brain images to identify and characterize underlying brain functional networks. Independent component analysis (ICA) has been shown to be a powerful computational tool for this purpose. Classic ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix. Existing group ICA methods generally concatenate observed fMRI data across subjects on the temporal domain and then decompose multi-subject data in a similar manner to single-subject ICA. The major limitation of existing methods is that they ignore between-subject variability in spatial distributions of brain functional networks in group ICA. In this article, we propose a new hierarchical probabilistic group ICA method to formally model subject-specific effects in both temporal and spatial domains when decomposing multi-subject fMRI data. The proposed method provides model-based estimation of brain functional networks at both the population and subject level. An important advantage of the hierarchical model is that it provides a formal statistical framework to investigate similarities and differences in brain functional networks across subjects, for example, subjects with mental disorders or neurodegenerative diseases such as Parkinson's as compared to normal subjects. We develop an EM algorithm for model estimation where both the E-step and M-step have explicit forms. We compare the performance of the proposed hierarchical model with that of two popular group ICA methods via simulation studies. We illustrate our method with application to an fMRI study of Zen meditation.
Directory of Open Access Journals (Sweden)
Andrew Cron
Flow cytometry is the prototypical assay for multi-parameter single-cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low-frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet process Gaussian mixture model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low-frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open-source software that can take advantage of both multiple processors and GPU acceleration to perform the numerically demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a
Directory of Open Access Journals (Sweden)
X. Chen
2013-09-01
A hierarchical Bayesian model for season-ahead forecasting of regional summer rainfall and streamflow in East Central China using exogenous climate variables is presented. The model provides estimates of the posterior forecast probability distribution for 12 rainfall and 2 streamflow stations, accounting for parameter uncertainty and cross-site correlation. The model has a multilevel structure in which regression coefficients are modeled from a common multivariate normal distribution, resulting in partial pooling of information across multiple stations and better representation of parameter and posterior distribution uncertainty. The covariance structure of the residuals across stations is explicitly modeled. Model performance is tested under leave-10-out cross-validation. Frequentist and Bayesian performance metrics used include the Receiver Operating Characteristic, Reduction of Error, Coefficient of Efficiency, Rank Probability Skill Score, and coverage by posterior credible intervals. The ability of the model to reliably forecast regional summer rainfall and streamflow a season ahead offers potential for developing adaptive water risk management strategies.
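The partial-pooling idea behind such multilevel regressions can be sketched with a simple empirical-Bayes shrinkage estimator on synthetic station data. This is a one-parameter caricature of the full hierarchical Bayesian model: each station mean is pulled toward the grand mean by an amount set by the between- versus within-station variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: seasonal rainfall at 12 stations, 10 years each.
n_stations, n_years = 12, 10
true_means = rng.normal(100.0, 15.0, n_stations)
y = true_means[:, None] + rng.normal(0.0, 30.0, (n_stations, n_years))

# Empirical-Bayes shrinkage: weight set by between- vs within-station variance.
station_means = y.mean(axis=1)
grand_mean = station_means.mean()
within_var = y.var(axis=1, ddof=1).mean() / n_years   # variance of a station mean
between_var = max(station_means.var(ddof=1) - within_var, 1e-9)
w = between_var / (between_var + within_var)          # shrinkage weight in (0, 1)
pooled = grand_mean + w * (station_means - grand_mean)

print(w)   # closer to 1 when stations are well separated relative to noise
```

In the full model the same pooling emerges from placing a common multivariate normal prior on the station-level regression coefficients, with the weight inferred jointly with everything else.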
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model, are learned at the same time as the value function and the policy, and are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of using both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm by fully utilizing local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two reinforcement learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
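The LLR component of the local model can be sketched as a k-nearest-neighbour affine fit used for one-step state prediction. The toy dynamics and all parameters below are illustrative, not the paper's benchmark tasks.

```python
import numpy as np

def llr_predict(X, Y, x_query, k=10):
    """Local linear regression: fit an affine model to the k nearest stored
    transitions and predict the successor state for x_query.
    X: (n, d) visited states, Y: (n, d) observed successor states."""
    d2 = ((X - x_query) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:k]                       # k nearest neighbours
    A = np.column_stack([X[idx], np.ones(k)])      # affine design matrix
    coef, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef

# Toy deterministic dynamics x' = 0.9*x + 0.1 learned from random transitions.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (200, 1))
Y = 0.9 * X + 0.1
pred = llr_predict(X, Y, np.array([0.5]))
print(pred)   # close to 0.55
```

In the planning loop, such predictions supply simulated transitions whenever the local model's prediction error stays under the trust threshold.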
Multi-scale hierarchical approach for parametric mapping: assessment on multi-compartmental models.
Rizzo, G; Turkheimer, F E; Bertoldo, A
2013-02-15
This paper investigates a new hierarchical method to apply basis functions to mono- and multi-compartmental models (Hierarchical Basis Function Method, H-BFM) at the voxel level. The method identifies the parameters of the compartmental model in its non-linearized version, integrating information derived at the region-of-interest (ROI) level by segmenting the cerebral volume based on anatomical definition or functional clustering. We present results obtained with a two-tissue, four-rate-constant model and two different tracers ([(11)C]FLB457 and [carbonyl-(11)C]WAY100635), one of the most complex models used in receptor studies, especially at the voxel level. H-BFM is robust, and its application to both [(11)C]FLB457 and [carbonyl-(11)C]WAY100635 allows accurate and precise parameter estimates, good-quality parametric maps, and a low percentage of voxels outside physiological bounds. The method is a new approach for PET quantification using compartmental modeling at the voxel level. In particular, different from other proposed approaches, it can also be used when linearization of the model is not appropriate. We expect that applying it to clinical data will generate reliable parametric maps. Copyright © 2012 Elsevier Inc. All rights reserved.
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the
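The role of the autoregressive data-error model can be illustrated with a minimal AR(1) sketch: estimate the serial-correlation coefficient from the residuals, then whiten them. This is a stand-in for the paper's hierarchical autoregressive formulation, which handles a broader class of error processes.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate serially correlated data errors: r_t = phi * r_{t-1} + e_t.
phi_true, n = 0.7, 5000
e = rng.standard_normal(n)
r = np.empty(n)
r[0] = e[0]
for t in range(1, n):
    r[t] = phi_true * r[t - 1] + e[t]

# AR(1) coefficient from the lag-1 autocorrelation (order-1 Yule-Walker),
# then whiten: w_t = r_t - phi * r_{t-1} should be nearly uncorrelated.
phi_hat = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
w = r[1:] - phi_hat * r[:-1]
lag1 = np.corrcoef(w[1:], w[:-1])[0, 1]
print(phi_hat, lag1)   # phi_hat near 0.7, residual lag-1 correlation near 0
```

Treating phi as an unknown alongside the earth-model parameters, as the paper does, propagates data-error uncertainty into the velocity-profile uncertainties instead of fixing it from a point estimate.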
DEFF Research Database (Denmark)
Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi
2011-01-01
Power electronics based microgrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models … the frequency and amplitude deviations produced by the primary control. The tertiary control regulates the power flow between the grid and the microgrid. Also, a synchronization algorithm is presented in order to connect the microgrid to the grid. The evaluation of the hierarchical control is presented … and discussed. Experimental results are provided to validate the performance and robustness of the VSI functionality during islanded and grid-connected operation, allowing a seamless transition between these modes through control hierarchies by regulating frequency and voltage and main-grid interactivity …
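The primary and secondary layers of such a hierarchical VSI control are commonly implemented with droop laws plus an integral restoration term; a minimal sketch follows. The coefficients and setpoints are illustrative, not taken from the paper.

```python
# Sketch of the standard P-f / Q-V droop laws used in the primary control
# layer of parallel VSIs, with one integrator step of secondary restoration.
f_nom, V_nom = 50.0, 311.0       # nominal frequency (Hz) and voltage amplitude (V)
kp, kq = 1e-4, 1e-3              # illustrative droop slopes

def primary_droop(P, Q):
    """P-f and Q-V droop: frequency and voltage sag as the inverter loads up,
    which lets parallel units share power without communication."""
    return f_nom - kp * P, V_nom - kq * Q

def secondary_restore(f_meas, ki=0.5, err_int=0.0):
    """Secondary control integrates the frequency error to remove the
    steady-state deviation left by the droop (a single step shown)."""
    err_int += ki * (f_nom - f_meas)
    return err_int

f, V = primary_droop(P=10_000.0, Q=2_000.0)
print(f, V)                      # 49.0 309.0
correction = secondary_restore(f)
print(f + correction)            # pushed back toward 50.0
```

The tertiary layer then adjusts the restored setpoints to steer power exchange with the main grid.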
Parallel processing for efficient 3D slope stability modelling
Marchesini, Ivan; Mergili, Martin; Alvioli, Massimiliano; Metz, Markus; Schneider-Muntau, Barbara; Rossi, Mauro; Guzzetti, Fausto
2014-05-01
We test the performance of the GIS-based, three-dimensional slope stability model r.slope.stability. The model was developed as a C- and python-based raster module of the GRASS GIS software. It considers the three-dimensional geometry of the sliding surface, adopting a modification of the model proposed by Hovland (1977), and revised and extended by Xie and co-workers (2006). Given a terrain elevation map and a set of relevant thematic layers, the model evaluates the stability of slopes for a large number of randomly selected potential slip surfaces, ellipsoidal or truncated in shape. Any single raster cell may be intersected by multiple sliding surfaces, each associated with a value of the factor of safety, FS. For each pixel, the minimum value of FS and the depth of the associated slip surface are stored. This information is used to obtain a spatial overview of the potentially unstable slopes in the study area. We test the model in the Collazzone area, Umbria, central Italy, an area known to be susceptible to landslides of different type and size. Availability of a comprehensive and detailed landslide inventory map allowed for a critical evaluation of the model results. The r.slope.stability code automatically splits the study area into a defined number of tiles, with proper overlap in order to provide the same statistical significance for the entire study area. The tiles are then processed in parallel by a given number of processors, exploiting a multi-purpose computing environment at CNR IRPI, Perugia. The map of the FS is obtained collecting the individual results, taking the minimum values on the overlapping cells. This procedure significantly reduces the processing time. We show how the gain in terms of processing time depends on the tile dimensions and on the number of cores.
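The tile-split-and-merge strategy can be sketched as follows: the raster is split into overlapping tiles, a factor-of-safety map is computed per tile, and overlaps are merged by taking the per-cell minimum. Tile size, overlap, and the FS function are illustrative; the real code distributes the tiles across processors.

```python
import numpy as np

def process_tiles(fs_func, shape, tile=256, overlap=32):
    """Evaluate a per-cell factor-of-safety function tile by tile and merge
    overlapping results with a per-cell minimum (illustrative re-creation of
    the tiling scheme described for r.slope.stability)."""
    rows, cols = shape
    fs = np.full(shape, np.inf)
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            r1 = min(r0 + tile + overlap, rows)
            c1 = min(c0 + tile + overlap, cols)
            block = fs_func(slice(r0, r1), slice(c0, c1))
            # keep the minimum FS where tiles overlap
            fs[r0:r1, c0:c1] = np.minimum(fs[r0:r1, c0:c1], block)
    return fs

# Toy FS field; each fs_func call is independent, so tiles can run in parallel.
field = np.random.default_rng(5).uniform(0.8, 2.0, (600, 600))
fs = process_tiles(lambda rs, cs: field[rs, cs], field.shape)
print(np.array_equal(fs, field))  # True: merging reproduces the full map
```

Because each tile call is independent, mapping them over a process pool parallelizes the computation with no change to the merge step.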
Tarmo: A Framework for Parallelized Bounded Model Checking
Directory of Open Access Journals (Sweden)
Siert Wieringa
2009-12-01
This paper investigates approaches to parallelizing Bounded Model Checking (BMC) for shared-memory environments as well as for clusters of workstations. We present a generic framework for parallelized BMC named Tarmo. Our framework can be used with any incremental SAT encoding for BMC, but for the results in this paper we use only the current state-of-the-art encoding for full PLTL. This encoding allows us to check both safety and liveness properties, in contrast to earlier work on distributed BMC that is limited to safety properties only. Despite our focus on BMC after it has been translated to SAT, existing distributed SAT solvers are not well suited to our application, because solving a BMC problem is not solving a set of independent SAT instances but rather involves solving multiple related SAT instances, encoded incrementally, where the satisfiability of each instance corresponds to the existence of a counterexample of a specific length. Our framework includes a generic architecture for a shared clause database that allows easy clause sharing between SAT solver threads solving various such instances. We present extensive experimental results obtained with multiple variants of our Tarmo implementation. Our shared-memory variants perform significantly better than conventional single-threaded approaches, a result that many users can benefit from as multi-core and multi-processor technology is widely available. Furthermore, we demonstrate that our framework can be deployed in a typical cluster of workstations, where several multi-core machines are connected by a network.
Contextual Hierarchical Part-Driven Conditional Random Field Model for Object Category Detection
Directory of Open Access Journals (Sweden)
Lizhen Wu
2012-01-01
Even though several promising approaches have been proposed in the literature, generic category-level object detection is still challenging due to high intra-class variability and ambiguity in appearance among different object instances. From the view of constructing object models, the balance between flexibility and discrimination must be taken into consideration. Motivated by these demands, we propose a novel contextual hierarchical part-driven conditional random field (CRF) model, which is based not only on the appearance of individual object parts but also models the contextual interactions of the parts simultaneously. By using a latent two-layer hierarchical formulation of labels and a weighted neighborhood structure, the model can effectively encode the dependencies among object parts. Meanwhile, beta-stable local features are introduced as observed data to ensure the discriminativeness and robustness of the part description. The object category detection problem is solved in a probabilistic framework using a supervised learning method based on maximum a posteriori (MAP) estimation. The benefits of the proposed model are demonstrated on a standard dataset and satellite images.
A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.
Directory of Open Access Journals (Sweden)
Guillaume Bal
Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding the ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts, as it does not disentangle seasonality and long-term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time-series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time-varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) a simulated example and (ii) applications to three French coastal streams with contrasting bio-geographical conditions and sizes. The time-series modelling approach fits the data better and, contrary to the linear regression, does not exhibit forecasting bias in long-term trends. The new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around forecasts. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate-change impacts on freshwater wildlife.
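The seasonal sinusoidal component can be sketched with an ordinary least-squares fit of a mean, a linear trend, and a sin/cos pair on synthetic daily data. The actual model is hierarchical Bayesian with time-varying means and amplitudes; this sketch only shows why a sinusoid plus trend separates seasonality from the long-term signal.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy daily water temperature: annual sinusoid + slow warming trend + noise.
t = np.arange(3 * 365, dtype=float)
T = (12.0 + 0.001 * t
     + 6.0 * np.sin(2 * np.pi * t / 365.0 - 1.0)
     + rng.normal(0.0, 0.5, t.size))

# Least-squares fit of mean, linear trend, and one seasonal sinusoid
# (a sin/cos pair = one sinusoid with free amplitude and phase).
w = 2 * np.pi / 365.0
X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
amp = np.hypot(coef[2], coef[3])
print(coef[1], amp)   # trend near 0.001, seasonal amplitude near 6.0
```

A plain linear regression on air temperature would fold this seasonal cycle into its slope, which is the bias the hierarchical model avoids.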
A hierarchical statistical model for estimating population properties of quantitative genes
Directory of Open Access Journals (Sweden)
Wu Rongling
2002-06-01
Background: Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follows Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available, and genes also display population properties. Results: In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates of the genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite-sample properties of the model. Conclusions: A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
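The EM machinery can be illustrated with the simplest related case: a two-component normal mixture, where a major gene splits progeny into two genotype classes. The data are synthetic; the paper's model additionally tracks segregation and transmission across generations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy progeny trait values: two genotype classes from a major additive gene.
y = np.concatenate([rng.normal(10.0, 1.0, 300), rng.normal(14.0, 1.0, 300)])

# EM for a two-component normal mixture with a shared variance.
pi, mu1, mu2, var = 0.5, y.min(), y.max(), y.var()
for _ in range(200):
    # E-step: posterior probability that each observation is in class 2
    d1 = np.exp(-0.5 * (y - mu1) ** 2 / var)
    d2 = np.exp(-0.5 * (y - mu2) ** 2 / var)
    g = pi * d2 / ((1 - pi) * d1 + pi * d2)
    # M-step: update mixing proportion, class means, and shared variance
    pi = g.mean()
    mu1 = np.sum((1 - g) * y) / np.sum(1 - g)
    mu2 = np.sum(g * y) / np.sum(g)
    var = np.sum((1 - g) * (y - mu1) ** 2 + g * (y - mu2) ** 2) / y.size
print(pi, mu1, mu2)   # near 0.5, 10.0, 14.0
```

In the hierarchical model, the mixing proportion is not free but constrained by the segregation ratios implied by the population genetic parameters, which is what makes those parameters estimable.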
Energy Technology Data Exchange (ETDEWEB)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao; Sun, Xin; Storlie, Curtis; Marcy, Peter; Dietiker, Jean-François; Li, Tingwen; Spenik, James
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems of increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within this hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit-problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit-problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results demonstrate that the multiphase reactive flow models within MFIX can capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well suited for coping … the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks. … The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme …
Directory of Open Access Journals (Sweden)
Roland Y.H. Silitonga
2013-01-01
The Indonesian palm oil industry has the largest market share in the world but still faces problems in strengthening its competitiveness. These problems lie in the industry chains, in government regulation and policy as the meso environment, and in macroeconomic conditions. Therefore, these three elements should be considered when analyzing the improvement of competitiveness; the governmental element is expected to create a conducive environment. This paper presents a conceptual model of industry competitiveness using a hierarchical multilevel system approach. The hierarchical multilevel system approach is used to accommodate the complexity of the industrial relations and the government's position as the meso environment. The first step in developing the model is to define the relevant system. The second is to formulate the model output, competitiveness, in the form of indicators. Then the relevant system, with competitiveness as the output, is built into a conceptual model using a hierarchical multilevel system. The conceptual model is then discussed to see whether it can explain the relevant system, and its potential to be developed into a mathematical model.
Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units
This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
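Each ADI sweep reduces to many independent tridiagonal solves, which is what makes the scheme a good GPU target. A serial Thomas-algorithm sketch (the sequential counterpart of the Parallel Cyclic Reduction used on the GPU) shows the kind of system being solved; the matrix values are illustrative.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side."""
    n = b.size
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a diagonally dominant system.
n = 50
rng = np.random.default_rng(8)
a = rng.uniform(0.1, 1.0, n); a[0] = 0.0
c = rng.uniform(0.1, 1.0, n); c[-1] = 0.0
b = 4.0 + rng.uniform(0.0, 1.0, n)
d = rng.standard_normal(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))  # True
```

PCR replaces the two sequential sweeps with log2(n) reduction steps in which all rows are updated simultaneously, trading extra arithmetic for parallelism.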
A model of shape memory materials with hierarchical twinning: Statics and dynamics
Energy Technology Data Exchange (ETDEWEB)
Saxena, A.; Bishop, A.R. [Los Alamos National Lab., NM (United States); Shenoy, S.R. [International Center for Theoretical Physics, Trieste (Italy); Wu, Y.; Lookman, T. [Western Ontario Univ., London, Ontario (Canada). Dept. of Applied Mathematics
1995-07-01
We consider a model of shape memory materials in which hierarchical twinning near the habit plane (austenite-martensite interface) is a new and crucial ingredient. The model includes (1) a triple-well potential (φ model) in the local shear strain, (2) strain-gradient terms up to second order in strain and fourth order in gradient, and (3) all symmetry-allowed strain-gradient terms induced by compositional fluctuations. The last term favors hierarchy, which enables communication between macroscopic (cm) and microscopic (Å) regions essential for shape memory. Hierarchy also stabilizes tweed formation (critical pattern of twins). External stress or pressure (pattern) modulates the spacing of domain walls. Therefore the "pattern" is encoded in the modulated hierarchical variation of the depth and width of the twins. This hierarchy of length scales provides a hierarchy of time scales and thus the possibility of non-exponential decay. The four processes of the complete shape memory cycle (write, record, erase and recall) are explained within this model. Preliminary results based on 2D Langevin dynamics are shown for tweed and hierarchy formation.
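Ingredient (1), the triple-well potential in the local shear strain, can be written down directly. The coefficients below are illustrative, chosen so that the austenite well (e = 0) and the two martensite wells (e = ±1) are degenerate, as at the transition; they are not the paper's parameter values.

```python
# Triple-well Landau potential in the local shear strain e:
#   F(e) = a*e**2 - b*e**4 + c*e**6
# With a=1, b=2, c=1 the derivative factors as
#   dF/de = 2*e*(3*e**2 - 1)*(e**2 - 1),
# so F has minima at e = 0 (austenite) and e = +/-1 (martensite twins).
a, b, c = 1.0, 2.0, 1.0
F = lambda e: a * e**2 - b * e**4 + c * e**6

print(F(0.0), F(1.0), F(-1.0))   # 0.0 0.0 0.0 -> three degenerate wells
print(F(3 ** -0.5))              # 4/27, the barrier between the wells
```

Shifting the e² coefficient with temperature tilts the well depths, which is what drives the austenite-to-martensite transformation in such models.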
Clustering dynamic textures with the hierarchical EM algorithm for modeling video.
Mumtaz, Adeel; Coviello, Emanuele; Lanckriet, Gert R G; Chan, Antoni B
2013-07-01
Dynamic texture (DT) is a probabilistic generative model, defined over space and time, that represents a video as the output of a linear dynamical system (LDS). The DT model has been applied to a wide variety of computer vision problems, such as motion segmentation, motion classification, and video registration. In this paper, we derive a new algorithm for clustering DT models that is based on the hierarchical EM algorithm. The proposed clustering algorithm is capable of both clustering DTs and learning novel DT cluster centers that are representative of the cluster members in a manner that is consistent with the underlying generative probabilistic model of the DT. We also derive an efficient recursive algorithm for sensitivity analysis of the discrete-time Kalman smoothing filter, which is used as the basis for computing expectations in the E-step of the HEM algorithm. Finally, we demonstrate the efficacy of the clustering algorithm on several applications in motion analysis, including hierarchical motion clustering, semantic motion annotation, and learning bag-of-systems (BoS) codebooks for dynamic texture recognition.
Ghanbari, J; Naghdabadi, R
2009-07-22
We have used a hierarchical multiscale modeling scheme for the analysis of cortical bone, considering it as a nanocomposite. This scheme consists of the definition of two boundary value problems, one for the macroscale and another for the microscale. The coupling between these scales is done using the homogenization technique. At every material point at which the constitutive model is needed, a microscale boundary value problem is defined using a macroscopic kinematical quantity and solved. Using the described scheme, we have studied the elastic properties of cortical bone, considering its nanoscale microstructural constituents with various mineral volume fractions. Since the microstructure of bone consists of mineral platelets of nanometer size embedded in a protein matrix, it is similar to the microstructure of soft-matrix nanocomposites reinforced with hard nanostructures. Considering a representative volume element (RVE) of the microstructure of bone as the microscale problem in our hierarchical multiscale modeling scheme, the global behavior of bone is obtained under various macroscopic loading conditions. This scheme may be suitable for modeling arbitrary bone geometries subjected to a variety of loading conditions. Using the presented method, mechanical properties of cortical bone, including the elastic moduli and Poisson's ratios in the two major directions and the shear modulus, are obtained for different mineral volume fractions.
Parallel family trees for transfer matrices in the Potts model
Navarro, Cristobal A; Kahler, Nancy Hitschfeld; Navarro, Gonzalo
2013-01-01
The computational cost of transfer matrix methods for the Potts model is directly related to the question of in how many ways two adjacent blocks of a lattice can be connected. Answering this question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix method can be. The configuration space of generic transfer matrix methods for strip lattices in the Potts model is on the order of the Catalan numbers, leading to an asymptotic cost of $O(4^m)$, with $m$ being the width of the strip. Transfer matrix methods with a smaller configuration space do exist, but they make assumptions on the temperature or the number of spin states, or restrict the topology of the lattice, in order to work. In this paper we propose a general and parallel transfer matrix method, based on family trees, that uses a sub-Catalan configuration space of size $O(3^m)$. The improvement is achieved by...
Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance
Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.
2010-01-01
Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation is rarely included, thereby limiting statistical inference from the resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated the flexibility of our modelling framework by depicting probabilistic distribution predictions under different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
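The observation-bias problem motivating the hierarchical model above can be sketched with a toy two-level simulation, loosely mirroring the burrow/pellet hierarchy: a site is occupied with probability psi, and occupied sites are detected imperfectly. This is a simplified occupancy simulation, not the paper's spatially explicit Bayesian model, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: psi = true occupancy probability per burrow,
# p = per-survey detection probability of sign, J = surveys per burrow.
n_burrows, J = 200, 4
psi, p = 0.3, 0.6

z = rng.binomial(1, psi, size=n_burrows)   # latent occupancy state (unobserved)
y = rng.binomial(J, z * p)                 # detections; always 0 at unoccupied sites

naive = np.mean(y > 0)                     # apparent occupancy, ignoring detection
# On average this is psi * (1 - (1 - p)**J) < psi, i.e. biased low,
# which is why the paper models detection explicitly.
print(f"true psi = {psi}, apparent occupancy = {naive:.2f}")
```

Hierarchical models recover psi and p jointly from the repeated-survey structure instead of treating non-detection as absence.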
A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs
DEFF Research Database (Denmark)
Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard
2016-01-01
Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs...... for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on...
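The feed-or-market trade-off described above can be illustrated with a deliberately tiny flat MDP solved by value iteration. This is not the paper's three-level hierarchical model; the states (coarse weight classes), transition probabilities, costs, and revenues below are all invented for illustration.

```python
import numpy as np

# Toy MDP: three weight classes (light, medium, heavy); in each state the
# action is either "feed" (pay feed cost, pigs may grow) or "sell" (terminal
# revenue). All numbers are hypothetical.
P_keep = np.array([[0.6, 0.4, 0.0],
                   [0.0, 0.7, 0.3],
                   [0.0, 0.0, 1.0]])     # growth transitions under feeding
r_keep = np.array([-1.0, -1.0, -1.0])    # per-stage feed cost
r_sell = np.array([2.0, 6.0, 10.0])      # revenue when marketing from each class
gamma = 0.95                              # discount factor

V = np.zeros(3)
for _ in range(500):                      # value iteration to near-convergence
    V = np.maximum(r_sell, r_keep + gamma * (P_keep @ V))

policy = np.where(r_sell >= r_keep + gamma * (P_keep @ V), "sell", "feed")
print(V, policy)
```

In this toy instance the optimal policy feeds light pigs and markets medium and heavy ones; the hierarchical MDP in the paper makes the analogous decision using updated state information about the pen at each stage.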