WorldWideScience

Sample records for hierarchical holographic modeling

  1. Holographic twin Higgs model.

    Science.gov (United States)

    Geller, Michael; Telem, Ofri

    2015-05-15

    We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.

  2. Holographic Twin Higgs Model

    Science.gov (United States)

    Geller, Michael; Telem, Ofri

    2015-05-01

    We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.

  3. Adventures in holographic dimer models

    International Nuclear Information System (INIS)

    Kachru, Shamit; Karch, Andreas; Yaida, Sho

    2011-01-01

    We abstract the essential features of holographic dimer models, and develop several new applications of these models. Firstly, semi-holographically coupling free band fermions to holographic dimers, we uncover novel phase transitions between conventional Fermi liquids and non-Fermi liquids, accompanied by a change in the structure of the Fermi surface. Secondly, we make dimer vibrations propagate through the whole crystal by way of double trace deformations, obtaining nontrivial band structure. In a simple toy model, the topology of the band structure experiences an interesting reorganization as we vary the strength of the double trace deformations. Finally, we develop tools that would allow one to build, in a bottom-up fashion, a holographic avatar of the Hubbard model.

  4. Holography and holographic dark energy model

    International Nuclear Information System (INIS)

    Gong Yungui; Zhang Yuanzhong

    2005-01-01

    The holographic principle is used to discuss the holographic dark energy model. We find that the Bekenstein-Hawking entropy bound is far from saturation under certain conditions. A more general constraint on the parameter of the holographic dark energy model is also derived.
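For context, the holographic dark energy density discussed in records like this one is conventionally parametrized as follows (standard forms from the literature, supplied for the reader; the record itself does not spell them out):

```latex
% HDE density: L is the IR cutoff, c a dimensionless parameter,
% M_p the Planck mass
\rho_{\Lambda} = 3 c^{2} M_{p}^{2} L^{-2},
% Bekenstein-Hawking entropy bound for a region of size L (Planck units)
S \le S_{BH} = \pi M_{p}^{2} L^{2}.
```

The "far from saturation" statement then amounts to comparing the entropy of the dark-energy system with this S_BH.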

  5. Holographic models with anisotropic scaling

    Science.gov (United States)

    Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.

    2013-12-01

    We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.
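For orientation, the anisotropic ("Lifshitz") scaling referred to here, and a standard metric ansatz realizing it as a gravity dual (notation supplied for the reader, not taken from the paper), are

```latex
t \to \lambda^{z} t, \qquad x^{i} \to \lambda x^{i} \quad (z \neq 1),
\qquad
ds^{2} = L^{2}\!\left( -r^{2z}\,dt^{2} + \frac{dr^{2}}{r^{2}} + r^{2}\,dx^{i}dx^{i} \right).
```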

  6. Exploring holographic Composite Higgs models

    Energy Technology Data Exchange (ETDEWEB)

    Croon, Djuna [Department of Physics and Astronomy, University of Sussex,BN1 9QH Brighton (United Kingdom); Perimeter Institute for Theoretical Physics,Waterloo, ON (Canada); Dillon, Barry M.; Huber, Stephan J.; Sanz, Veronica [Department of Physics and Astronomy, University of Sussex,BN1 9QH Brighton (United Kingdom)

    2016-07-13

    Simple Composite Higgs models predict new vector-like fermions not too far from the electroweak scale, yet LHC limits are now sensitive to the TeV scale. Motivated by this tension, we explore the holographic dual of the minimal model, MCHM_5, to try to alleviate it without increasing the fine-tuning in the Higgs potential. Interestingly, we find that lowering the UV cutoff in the 5D picture allows for heavier top partners and less fine-tuning. In the 4D dual this corresponds to increasing the number of "colours" N, thus increasing the decay constant of the Goldstone Higgs. This is essentially a 'Little Randall-Sundrum' model; such models are known to reduce some flavour and electroweak constraints. Furthermore, in anticipation of the ongoing efforts at the LHC to put bounds on the top Yukawa, we demonstrate that deviations from the SM can be suppressed or enhanced with respect to what is expected from mere symmetry arguments in 4D. We conclude that the 5D holographic realisation of the MCHM_5 with a small UV cutoff is not in tension with current experimental data.

  7. Origin of holographic dark energy models

    International Nuclear Information System (INIS)

    Myung, Yun Soo; Seo, Min-Gyun

    2009-01-01

    We investigate the origin of holographic dark energy models, which were recently proposed to explain the dark-energy-dominated universe. For this purpose, we introduce the spacetime foam uncertainty δl ≥ l_p^α l^(1-α). It was argued that the case of α=2/3 could describe the dark energy with infinite statistics, while the case of α=1/2 describes ordinary matter with Bose-Fermi statistics. However, both cases may lead to the holographic energy density if the latter is obtained from the geometric mean of the UV and IR scales. Hence the dark energy with infinite statistics based on the entropy bound is not an essential ingredient for deriving the holographic dark energy model. Furthermore, it is shown that the agegraphic dark energy models are holographic dark energy models with different IR length scales.
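The uncertainty relation in this record is garbled by extraction; the standard spacetime-foam form it refers to, and the resulting energy density, read (reconstruction per the usual literature conventions, with l_p the Planck length):

```latex
% Spacetime-foam uncertainty in measuring a length l
\delta l \gtrsim l_{p}^{\alpha}\, l^{1-\alpha},
% For \alpha = 2/3 the associated energy density is set by the
% geometric mean of the UV (l_p) and IR (L) scales:
\rho \sim \frac{1}{l_{p}^{2} L^{2}} \sim \frac{M_{p}^{2}}{L^{2}}.
```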

  8. Comparing holographic dark energy models with statefinder

    International Nuclear Information System (INIS)

    Cui, Jing-Lei; Zhang, Jing-Fei

    2014-01-01

    We apply the statefinder diagnostic to the holographic dark energy models, including the original holographic dark energy (HDE) model, the new holographic dark energy model, the new agegraphic dark energy (NADE) model, and the Ricci dark energy model. In the low-redshift region the holographic dark energy models are degenerate with each other and with the ΛCDM model in the H(z) and q(z) evolutions. In particular, the HDE model is highly degenerate with the ΛCDM model, and within the HDE model the cases with different parameter values are also strongly degenerate. Since the observational data lie mainly within the low-redshift region, it is very important to break this low-redshift degeneracy in the H(z) and q(z) diagnostics by using quantities involving higher-order derivatives of the scale factor. It is shown that the statefinder diagnostic r(z) is very useful in breaking the low-redshift degeneracies. By employing the statefinder diagnostic, the holographic dark energy models can be differentiated efficiently in the low-redshift region. The degeneracy between the holographic dark energy models and the ΛCDM model can also be broken by this method. Especially for the HDE model, all the previous strong degeneracies appearing in the H(z) and q(z) diagnostics are broken effectively. But for the NADE model, the degeneracy between the cases with different parameter values cannot be broken, even when the statefinder diagnostic is used. A direct comparison of the holographic dark energy models in the r-s plane is also made, in which the separations between the models (including the ΛCDM model) can be directly measured in terms of the current values {r_0, s_0} of the models. (orig.)
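As a concrete illustration of the statefinder diagnostic (a sketch, not the paper's code; the flat-ΛCDM expansion history and the parameter value Ω_m = 0.3 are assumptions for the check), one can verify numerically that ΛCDM sits at the fixed point {r, s} = {1, 0}:

```python
import math

def E(z, Om=0.3):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LambdaCDM."""
    return math.sqrt(Om * (1 + z) ** 3 + 1 - Om)

def statefinder(z, Om=0.3, h=1e-4):
    """Deceleration q(z) and statefinder pair {r, s}, with the
    derivatives of E(z) taken by central finite differences."""
    Ez = E(z, Om)
    dE = (E(z + h, Om) - E(z - h, Om)) / (2 * h)           # E'(z)
    d2E = (E(z + h, Om) - 2 * Ez + E(z - h, Om)) / h ** 2  # E''(z)
    q = -1 + (1 + z) * dE / Ez
    r = 1 - 2 * (1 + z) * dE / Ez \
        + (1 + z) ** 2 * (dE ** 2 + Ez * d2E) / Ez ** 2
    s = (r - 1) / (3 * (q - 0.5))
    return q, r, s

for z in (0.0, 0.5, 1.0):
    q, r, s = statefinder(z)
    print(f"z={z}: q={q:+.3f}, r={r:.5f}, s={s:+.5f}")  # r -> 1, s -> 0
```

The constancy of {r, s} along the whole ΛCDM history is exactly why deviations of r(z) from 1 break the low-redshift degeneracies discussed above.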

  9. Holographic dark energy in the DGP model

    International Nuclear Information System (INIS)

    Cruz, Norman; Lepe, Samuel; Pena, Francisco; Avelino, Arturo

    2012-01-01

    The braneworld model proposed by Dvali, Gabadadze, and Porrati leads to an accelerated universe without a cosmological constant or any other form of dark energy. Nevertheless, we have investigated the consequences of this model when a holographic dark energy is included, taking the Hubble scale as IR cutoff. We have found that the holographic dark energy leads to an accelerated flat universe (de Sitter-like expansion) for the two branches, ε=±1, of the DGP model. In universes with nonzero curvature, however, the dark energy presents an EoS corresponding to a phantom fluid during the present era, evolving to a de Sitter-like phase at future cosmic times. In the special case in which the holographic parameter c equals one, we have found a sudden singularity in closed universes; in this case the expansion is decelerating. (orig.)

  10. Holographic dark energy in the DGP model

    Energy Technology Data Exchange (ETDEWEB)

    Cruz, Norman [Universidad de Santiago, Departamento de Fisica, Facultad de Ciencia, Santiago (Chile); Lepe, Samuel [Pontificia Universidad Catolica de Valparaiso, Instituto de Fisica, Facultad de Ciencias, Valparaiso (Chile); Pena, Francisco [Universidad de La Frontera, Departamento de Ciencias Fisicas, Facultad de Ingenieria, Ciencias y Administracion, Avda. Francisco Salazar 01145, Casilla 54-D, Temuco (Chile); Avelino, Arturo [Universidad de Guanajuato, Departamento de Fisica, DCI, Codigo Postal 37150, Leon, Guanajuato (Mexico)

    2012-09-15

    The braneworld model proposed by Dvali, Gabadadze, and Porrati leads to an accelerated universe without a cosmological constant or any other form of dark energy. Nevertheless, we have investigated the consequences of this model when a holographic dark energy is included, taking the Hubble scale as IR cutoff. We have found that the holographic dark energy leads to an accelerated flat universe (de Sitter-like expansion) for the two branches, ε=±1, of the DGP model. In universes with nonzero curvature, however, the dark energy presents an EoS corresponding to a phantom fluid during the present era, evolving to a de Sitter-like phase at future cosmic times. In the special case in which the holographic parameter c equals one, we have found a sudden singularity in closed universes; in this case the expansion is decelerating. (orig.)

  11. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
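The point-process view mentioned above can be sketched in a few lines: simulate an inhomogeneous Poisson process by Lewis-Shedler thinning, then read count, presence-absence, and presence-only data off the same realization. This is a toy illustration under an assumed one-dimensional intensity function, not the authors' hierarchical model:

```python
import math
import random

def poisson_draw(mu, rng):
    """Poisson(mu) sample by CDF inversion (fine for moderate mu)."""
    u, k = rng.random(), 0
    p = c = math.exp(-mu)
    while u > c:
        k += 1
        p *= mu / k
        c += p
    return k

def simulate_ipp(intensity, lam_max, region=(0.0, 1.0), rng=random):
    """Inhomogeneous Poisson point process on an interval via thinning:
    propose points from a homogeneous process with rate lam_max, then
    keep each point with probability intensity(x) / lam_max."""
    a, b = region
    points = []
    for _ in range(poisson_draw(lam_max * (b - a), rng)):
        x = rng.uniform(a, b)
        if rng.random() < intensity(x) / lam_max:
            points.append(x)
    return points

rng = random.Random(42)
intensity = lambda x: 50.0 * math.exp(-(x - 0.5) ** 2 / 0.02)  # assumed form
pts = simulate_ipp(intensity, lam_max=50.0, rng=rng)

# One latent point-process realization yields all three data types:
count = len(pts)           # count datum for the region
presence = int(count > 0)  # presence-absence datum
locations = pts            # presence-only data
print(count, presence)
```

Conditioning these derived quantities on the latent intensity is what lets the hierarchical framework treat all three data types in one model.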

  12. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
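A minimal sketch of the Dirichlet process idea the review motivates (illustrative only; the concentration parameter and seed are arbitrary choices): the Chinese restaurant process seats observations at a data-driven, unbounded number of tables, avoiding a fixed finite number of classes:

```python
import random

def chinese_restaurant_process(n, alpha, rng):
    """Sample a random partition of n customers: table k is chosen with
    probability proportional to its current size, and a new table with
    probability proportional to the concentration parameter alpha."""
    tables = []       # tables[k] = number of customers at table k
    assignments = []  # assignments[i] = table index of customer i
    for _ in range(n):
        weights = tables + [alpha]  # existing tables, then a new one
        u = rng.random() * sum(weights)
        acc, k = 0.0, 0
        for k, w in enumerate(weights):
            acc += w
            if u < acc:
                break
        if k == len(tables):
            tables.append(1)        # open a new mixture component
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

rng = random.Random(0)
assignments, tables = chinese_restaurant_process(50, alpha=1.0, rng=rng)
print(len(tables), "clusters for 50 customers")
```

The number of occupied tables grows roughly like alpha * log(n), which is the sense in which the number of latent classes adapts to the data rather than being fixed in advance.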

  13. A holographic model for black hole complementarity

    Energy Technology Data Exchange (ETDEWEB)

    Lowe, David A. [Physics Department, Brown University,Providence, RI 02912 (United States); Thorlacius, Larus [University of Iceland, Science Institute,Dunhaga 3, IS-107, Reykjavik (Iceland); The Oskar Klein Centre for Cosmoparticle Physics,Department of Physics, Stockholm University,AlbaNova University Centre, 10691 Stockholm (Sweden)

    2016-12-07

    We explore a version of black hole complementarity, where an approximate semiclassical effective field theory for interior infalling degrees of freedom emerges holographically from an exact evolution of exterior degrees of freedom. The infalling degrees of freedom have a complementary description in terms of outgoing Hawking radiation and must eventually decohere with respect to the exterior Hamiltonian, leading to a breakdown of the semiclassical description for an infaller. Trace distance is used to quantify the difference between the complementary time evolutions, and to define a decoherence time. We propose a dictionary where the evolution with respect to the bulk effective Hamiltonian corresponds to mean field evolution in the holographic theory. In a particular model for the holographic theory, which exhibits fast scrambling, the decoherence time coincides with the scrambling time. The results support the hypothesis that decoherence of the infalling holographic state and disruptive bulk effects near the curvature singularity are complementary descriptions of the same physics, which is an important step toward resolving the black hole information paradox.
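The trace distance invoked in this abstract is the standard quantum-information quantity; its definition (supplied here for the reader) is

```latex
T(\rho,\sigma) = \tfrac{1}{2}\,\lVert \rho - \sigma \rVert_{1}
              = \tfrac{1}{2}\,\mathrm{Tr}\,\sqrt{(\rho-\sigma)^{\dagger}(\rho-\sigma)},
\qquad 0 \le T \le 1 .
```

A decoherence time can then be read off as the time at which T between the two complementary evolutions first becomes of order one.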

  14. Holographic models and the QCD trace anomaly

    International Nuclear Information System (INIS)

    Goity, Jose L.; Trinchero, Roberto C.

    2012-01-01

    Five dimensional dilaton models are considered as possible holographic duals of the pure gauge QCD vacuum. In the framework of these models, the QCD trace anomaly equation is considered. Each quantity appearing in that equation is computed by holographic means. Two exact solutions for different dilaton potentials corresponding to perturbative and non-perturbative β-functions are studied. It is shown that in the perturbative case, where the β-function is the QCD one at leading order, the resulting space is not asymptotically AdS. In the non-perturbative case, the model considered presents confinement of static quarks and leads to a non-vanishing gluon condensate, although it does not correspond to an asymptotically free theory. In both cases analyses based on the trace anomaly and on Wilson loops are carried out.
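For reference, the pure-gauge QCD trace anomaly equation that each holographic quantity is matched against takes the standard form

```latex
\left\langle T^{\mu}{}_{\mu} \right\rangle
  = \frac{\beta(g)}{2g}\,\left\langle G^{a}_{\mu\nu} G^{a\,\mu\nu} \right\rangle ,
```

with β(g) the gauge-coupling beta function and G^a_{μν} the gluon field strength.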

  15. Inflation via logarithmic entropy-corrected holographic dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Darabi, F.; Felegary, F. [Azarbaijan Shahid Madani University, Department of Physics, Tabriz (Iran, Islamic Republic of); Setare, M.R. [University of Kurdistan, Department of Science, Bijar (Iran, Islamic Republic of)

    2016-12-15

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation in the holographic dark energy (HDE) model. In this comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and Hubble parameter in terms of one parameter of holographic dark energy and two new parameters of the logarithmic corrections. (orig.)

  16. Inflation via logarithmic entropy-corrected holographic dark energy model

    International Nuclear Information System (INIS)

    Darabi, F.; Felegary, F.; Setare, M.R.

    2016-01-01

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation in the holographic dark energy (HDE) model. In this comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and Hubble parameter in terms of one parameter of holographic dark energy and two new parameters of the logarithmic corrections. (orig.)
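The LECHDE density referred to in both records is usually written in the form below (the standard parametrization in this literature, quoted for context; α and β are the "two new parameters of logarithmic corrections" and L the chosen cut-off):

```latex
\rho_{\Lambda} = 3 c^{2} M_{p}^{2} L^{-2}
  + L^{-4}\left[ \alpha \ln\!\left(M_{p}^{2} L^{2}\right) + \beta \right].
```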

  17. Holographic kinetic k-essence model

    Energy Technology Data Exchange (ETDEWEB)

    Cruz, Norman [Departamento de Fisica, Facultad de Ciencia, Universidad de Santiago de Chile, Casilla 307, Santiago (Chile)], E-mail: ncruz@lauca.usach.cl; Gonzalez-Diaz, Pedro F.; Rozas-Fernandez, Alberto [Colina de los Chopos, Instituto de Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)], E-mail: a.rozas@cfmac.csic.es; Sanchez, Guillermo [Departamento de Matematica y Ciencia de la Computacion, Facultad de Ciencia, Universidad de Santiago de Chile, Casilla 307, Santiago (Chile)], E-mail: gsanchez@usach.cl

    2009-08-31

    We consider a connection between the holographic dark energy density and the kinetic k-essence energy density in a flat FRW universe. With the choice c ≥ 1, the holographic dark energy can be described by a kinetic k-essence scalar field in a certain way. In this Letter we show this kinetic k-essence description of the holographic dark energy with c ≥ 1 and reconstruct the kinetic k-essence function F(X).
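A sketch of the dictionary the Letter relies on (standard purely kinetic k-essence relations, supplied for the reader; the specific reconstruction is in the paper): for a purely kinetic field,

```latex
X = \tfrac{1}{2}\dot{\phi}^{2}, \qquad p = F(X), \qquad \rho = 2 X F_{X} - F ,
```

so equating this ρ with the holographic dark energy density determines F(X).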

  18. Note on the butterfly effect in holographic superconductor models

    Directory of Open Access Journals (Sweden)

    Yi Ling

    2017-05-01

    In this note we remark that the butterfly effect can be used to diagnose the phase transition of superconductivity in a holographic framework. Specifically, we compute the butterfly velocity in a charged black hole background as well as in anisotropic backgrounds with Q-lattice structure. In both cases we find that its derivative with respect to temperature is discontinuous at critical points. We also propose that the butterfly velocity can signal the occurrence of thermal phase transitions in general holographic models.

  19. Note on the butterfly effect in holographic superconductor models

    International Nuclear Information System (INIS)

    Ling, Yi; Liu, Peng; Wu, Jian-Pin

    2017-01-01

    In this note we remark that the butterfly effect can be used to diagnose the phase transition of superconductivity in a holographic framework. Specifically, we compute the butterfly velocity in a charged black hole background as well as in anisotropic backgrounds with Q-lattice structure. In both cases we find that its derivative with respect to temperature is discontinuous at critical points. We also propose that the butterfly velocity can signal the occurrence of thermal phase transitions in general holographic models.

  20. Note on the butterfly effect in holographic superconductor models

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Yi, E-mail: lingy@ihep.ac.cn [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Shanghai Key Laboratory of High Temperature Superconductors, Shanghai 200444 (China); School of Physics, University of Chinese Academy of Sciences, Beijing 100049 (China); Liu, Peng, E-mail: liup51@ihep.ac.cn [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Wu, Jian-Pin, E-mail: jianpinwu@mail.bnu.edu.cn [Institute of Gravitation and Cosmology, Department of Physics, School of Mathematics and Physics, Bohai University, Jinzhou 121013 (China); Shanghai Key Laboratory of High Temperature Superconductors, Shanghai 200444 (China)

    2017-05-10

    In this note we remark that the butterfly effect can be used to diagnose the phase transition of superconductivity in a holographic framework. Specifically, we compute the butterfly velocity in a charged black hole background as well as in anisotropic backgrounds with Q-lattice structure. In both cases we find that its derivative with respect to temperature is discontinuous at critical points. We also propose that the butterfly velocity can signal the occurrence of thermal phase transitions in general holographic models.
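For orientation (a standard benchmark value, not computed in these records): in the simplest planar AdS-Schwarzschild background dual to a d-dimensional boundary theory, the butterfly velocity is the temperature-independent constant

```latex
v_{B}^{2} = \frac{d}{2(d-1)} ,
```

and it is a discontinuity in dv_B/dT, rather than such smooth behaviour, that these papers use as the diagnostic of the transition.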

  1. Entanglement in holographic dark energy models

    International Nuclear Information System (INIS)

    Horvat, R.

    2010-01-01

    We study a process of equilibration of holographic dark energy (HDE) with the cosmic horizon around the dark-energy dominated epoch. This process is characterized by a huge amount of information conveyed across the horizon, thereby filling a large gap in entropy between a system on the brink of sudden collapse to a black hole and the black hole itself. At the same time, even in the absence of interaction between dark matter and dark energy, such a process marks a strong jump in the entanglement entropy, which measures the quantum-mechanical correlations between the horizon and its interior. Although the effective quantum field theory (QFT) with a peculiar relationship between the UV and IR cutoffs, the framework underlying all HDE models, may formally account for such a huge shift in the number of distinct quantum states, we show that the scope of such a framework becomes tremendously restricted, devoid of virtually any application in other cosmological epochs or particle-physics phenomena. The problem of negative entropies for non-phantom matter is also discussed.

  2. Entanglement in holographic dark energy models

    Energy Technology Data Exchange (ETDEWEB)

    Horvat, R., E-mail: horvat@lei3.irb.h [Rudjer Boskovic Institute, P.O. Box 180, 10002 Zagreb (Croatia)

    2010-10-18

    We study a process of equilibration of holographic dark energy (HDE) with the cosmic horizon around the dark-energy dominated epoch. This process is characterized by a huge amount of information conveyed across the horizon, thereby filling a large gap in entropy between a system on the brink of sudden collapse to a black hole and the black hole itself. At the same time, even in the absence of interaction between dark matter and dark energy, such a process marks a strong jump in the entanglement entropy, which measures the quantum-mechanical correlations between the horizon and its interior. Although the effective quantum field theory (QFT) with a peculiar relationship between the UV and IR cutoffs, the framework underlying all HDE models, may formally account for such a huge shift in the number of distinct quantum states, we show that the scope of such a framework becomes tremendously restricted, devoid of virtually any application in other cosmological epochs or particle-physics phenomena. The problem of negative entropies for non-phantom matter is also discussed.

  3. Interacting holographic dark energy models: a general approach

    Science.gov (United States)

    Som, S.; Sil, A.

    2014-08-01

    Dark energy models inspired by the cosmological holographic principle are studied in a homogeneous isotropic spacetime with a general choice for the dark energy density. Special choices of the parameters enable us to obtain three different holographic models, including the holographic Ricci dark energy (RDE) model. The effect of interaction between dark matter and dark energy on the dynamics of those models is investigated for different popular forms of the interaction. It is found that crossing of the phantom divide can be avoided in RDE models for β > 0.5, irrespective of the presence of interaction. The choice α=1 and β=2/3 leads to a varying-Λ-like model introducing an IR cutoff length Λ^(-1/2). It is concluded that, among the popular choices, an interaction of the form Q ∝ Hρ_m suits best in avoiding the coincidence problem in this model.
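The interaction terms Q compared in this record enter the background dynamics through coupled continuity equations of the usual form (standard convention, supplied for clarity):

```latex
\dot{\rho}_{m} + 3H\rho_{m} = Q, \qquad
\dot{\rho}_{d} + 3H\left(1 + w_{d}\right)\rho_{d} = -Q ,
```

with popular choices Q ∝ Hρ_m, Q ∝ Hρ_d, and Q ∝ H(ρ_m + ρ_d); for Q > 0 energy flows from dark energy to dark matter.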

  4. Quantum quenches in a holographic Kondo model

    Science.gov (United States)

    Erdmenger, Johanna; Flory, Mario; Newrzella, Max-Niklas; Strydom, Migael; Wu, Jackson M. S.

    2017-04-01

    We study non-equilibrium dynamics and quantum quenches in a recent gauge/gravity duality model for a strongly coupled system interacting with a magnetic impurity with SU( N ) spin. At large N , it is convenient to write the impurity spin as a bilinear in Abrikosov fermions. The model describes an RG flow triggered by the marginally relevant Kondo operator. There is a phase transition at a critical temperature, below which an operator condenses which involves both an electron and an Abrikosov fermion field. This corresponds to a holographic superconductor in AdS2 and models the impurity screening. We quench the Kondo coupling either by a Gaussian pulse or by a hyperbolic tangent, the latter taking the system from the condensed to the uncondensed phase or vice-versa. We study the time dependence of the condensate induced by this quench. The timescale for equilibration is generically given by the leading quasinormal mode of the dual gravity model. This mode also governs the formation of the screening cloud, which is obtained as the decrease of impurity degrees of freedom with time. In the condensed phase, the leading quasinormal mode is imaginary and the relaxation of the condensate is over-damped. For quenches whose final state is close to the critical point of the large N phase transition, we study the critical slowing down and obtain the combination of critical exponents zν = 1. When the final state is exactly at the phase transition, we find that the exponential ringing of the quasinormal modes is replaced by a power-law behaviour of the form ˜ t - a sin( b log t). This indicates the emergence of a discrete scale invariance.

  5. Quantum quenches in a holographic Kondo model

    Energy Technology Data Exchange (ETDEWEB)

    Erdmenger, Johanna [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, 80805, Munich (Germany); Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, 97074 Würzburg (Germany); Flory, Mario [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, 80805, Munich (Germany); Institute of Physics, Jagiellonian University,Łojasiewicza 11, 30-348 Kraków (Poland); Newrzella, Max-Niklas; Strydom, Migael [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, 80805, Munich (Germany); Wu, Jackson M. S. [Department of Physics and Astronomy, University of Alabama,Tuscaloosa, AL 35487 (United States)

    2017-04-10

    We study non-equilibrium dynamics and quantum quenches in a recent gauge/gravity duality model for a strongly coupled system interacting with a magnetic impurity with SU(N) spin. At large N, it is convenient to write the impurity spin as a bilinear in Abrikosov fermions. The model describes an RG flow triggered by the marginally relevant Kondo operator. There is a phase transition at a critical temperature, below which an operator condenses which involves both an electron and an Abrikosov fermion field. This corresponds to a holographic superconductor in AdS_2 and models the impurity screening. We quench the Kondo coupling either by a Gaussian pulse or by a hyperbolic tangent, the latter taking the system from the condensed to the uncondensed phase or vice-versa. We study the time dependence of the condensate induced by this quench. The timescale for equilibration is generically given by the leading quasinormal mode of the dual gravity model. This mode also governs the formation of the screening cloud, which is obtained as the decrease of impurity degrees of freedom with time. In the condensed phase, the leading quasinormal mode is imaginary and the relaxation of the condensate is over-damped. For quenches whose final state is close to the critical point of the large N phase transition, we study the critical slowing down and obtain the combination of critical exponents zν = 1. When the final state is exactly at the phase transition, we find that the exponential ringing of the quasinormal modes is replaced by a power-law behaviour of the form ∼ t^(−a) sin(b log t). This indicates the emergence of a discrete scale invariance.
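The log-periodic form quoted at the end is the hallmark of discrete scale invariance: a response ∼ t^(−a) sin(b log t) is invariant (up to an overall rescaling of amplitude) only under the discrete family of time rescalings

```latex
t \;\to\; \lambda^{n} t, \qquad \lambda = e^{2\pi/b}, \qquad n \in \mathbb{Z},
```

since sin(b log(λ^n t)) = sin(b log t + 2πn) = sin(b log t).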

  6. Hierarchical Bass model

    International Nuclear Information System (INIS)

    Tashiro, Tohru

    2014-01-01

    We propose a new model for the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is absent from the Bass model. As an application, we use the model to fit iPod sales data, obtaining better agreement than the Bass model does.

  7. Hierarchical Bass model

    Science.gov (United States)

    Tashiro, Tohru

    2014-03-01

    We propose a new model for the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is absent from the Bass model. As an application, we use the model to fit iPod sales data, obtaining better agreement than the Bass model does.
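The baseline Bass model both records compare against can be sketched as the classic ODE dN/dt = (p + qN/m)(m − N), with p the innovation coefficient, q the imitation coefficient, and m the market size. The sketch below uses illustrative parameter values of typical literature magnitude, not fitted ones, and does not reproduce the paper's hierarchical memory extension:

```python
def bass_adoption(p, q, m, t_max, dt=0.01):
    """Euler-integrate the Bass diffusion ODE:
    dN/dt = (p + q * N / m) * (m - N)."""
    N, path = 0.0, [0.0]
    for _ in range(round(t_max / dt)):
        dN = (p + q * N / m) * (m - N)
        N += dN * dt
        path.append(N)
    return path

# Illustrative coefficients of typical magnitude (not fitted values)
path = bass_adoption(p=0.03, q=0.38, m=100.0, t_max=30.0)
print(f"adopters after 30 periods: {path[-1]:.1f} of 100")
```

The imitation term qN/m is what produces the familiar S-curve; the hierarchical model replaces this instantaneous term with a memory of past adopter contacts.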

  8. Hierarchical Semantic Model of Geovideo

    Directory of Open Access Journals (Sweden)

    XIE Xiao

    2015-05-01

    Public security incidents are becoming increasingly challenging owing to new features such as multi-scale mobility, multistage dynamic evolution, and spatiotemporal concurrency and uncertainty in complex urban environments. Existing video models, designed for independent archiving or local analysis of surveillance video, cannot meet the urgent requirements of emergency response. Aiming at an explicit representation of the change mechanism in video, this paper proposes a novel hierarchical geovideo semantic model using UML. The model is characterized by a hierarchical representation of both data structure and semantics based on three change-oriented domains (the feature, process, and event domains), rather than an overall semantic description of the video stream; it combines geographical semantics with video content semantics in support of global semantic association across multiple geovideo datasets. Public security incidents captured by video surveillance are examined as an example to illustrate the validity of the model.

  9. Holographic cosmological models on the braneworld

    Energy Technology Data Exchange (ETDEWEB)

    Lepe, Samuel [Instituto de Fisica, Pontificia Universidad Catolica de Valparaiso, Casilla 4950, Valparaiso (Chile); Saavedra, Joel [Instituto de Fisica, Pontificia Universidad Catolica de Valparaiso, Casilla 4950, Valparaiso (Chile)], E-mail: joel.saavedra@ucv.cl; Pena, Francisco [Departamento de Ciencias Fisicas, Facultad de Ingenieria, Ciencias y Administracion, Universidad de la Frontera, Avda. Francisco Salazar 01145, Casilla 54-D, Temuco (Chile)

    2009-01-26

    In this Letter we study a closed universe with a holographic energy on the brane whose energy density is described by ρ(H) = 3c²H², and we obtain an equation for the Hubble parameter. This equation yields different physical behavior depending on whether c² > 1 or c² < 1, and on the sign of the brane tension.

  10. Holographic shell model: Stack data structure inside black holes?

    Science.gov (United States)

    Davidson, Aharon

    2014-03-01

    Rather than tiling the black hole horizon by Planck area patches, we suggest that bits of information inhabit, universally and holographically, the entire black core interior, a bit per a light sheet unit interval of order Planck area difference. The number of distinguishable (tagged by a binary code) configurations, counted within the context of a discrete holographic shell model, is given by the Catalan series. The area entropy formula is recovered, including Cardy's universal logarithmic correction, and the equipartition of mass per degree of freedom is proven. The black hole information storage resembles, in the count procedure, the so-called stack data structure.
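The count by the Catalan series and the stack analogy in this record have a standard combinatorial reading: the n-th Catalan number counts the valid push/pop histories of a stack. The sketch below (illustrative only, not the paper's shell model) checks this correspondence by brute force for small n.

```python
# Hedged sketch, not the paper's construction: the n-th Catalan number
# C_n = binom(2n, n) / (n + 1) counts length-2n push/pop histories of a
# stack that never pop an empty stack and end empty -- the combinatorial
# reading of the "stack data structure" count in the abstract.
from math import comb
from itertools import product

def catalan(n):
    """n-th Catalan number."""
    return comb(2 * n, n) // (n + 1)

def stack_histories(n):
    """Brute-force count of valid push/pop sequences of length 2n."""
    count = 0
    for seq in product((+1, -1), repeat=2 * n):  # +1 = push, -1 = pop
        depth, ok = 0, True
        for step in seq:
            depth += step
            if depth < 0:          # popped an empty stack
                ok = False
                break
        if ok and depth == 0:      # ended with an empty stack
            count += 1
    return count
```

The first few Catalan numbers are 1, 1, 2, 5, 14, 42, and the brute-force stack count agrees with the closed form for each of them.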

  11. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.

  12. Hierarchical modeling of active materials

    International Nuclear Information System (INIS)

    Taya, Minoru

    2003-01-01

    Intelligent (or smart) materials are increasingly becoming key materials for actuators and sensors. Used as a sensor, an intelligent material can be embedded in a variety of structures, functioning as a health monitoring system that extends their life with high reliability. Used as the active material in an actuator, it plays the key role of producing the actuator's dynamic movement under a set of stimuli. This talk covers two active materials for actuators: (1) a piezoelectric laminate with an FGM microstructure, and (2) a ferromagnetic shape memory alloy (FSMA). The advantage of the FGM piezo laminate is enhanced fatigue life while maintaining large bending displacement; that of the FSMA is fast actuation with large force and stroke capability. Hierarchical modeling of these active materials is a key design step in optimizing their microstructure to enhance performance. I briefly discuss hierarchical modeling of both: for the FGM piezo laminate we use a micromechanical model together with laminate theory, while for the FSMA the modeling interfacing nano-structure, microstructure and macro-behavior is discussed. (author)

  13. Generalized entropy formalism and a new holographic dark energy model

    Science.gov (United States)

    Sayahian Jahromi, A.; Moosavi, S. A.; Moradpour, H.; Morais Graça, J. P.; Lobo, I. P.; Salako, I. G.; Jawad, A.

    2018-05-01

    Recently, the Rényi and Tsallis generalized entropies have been used extensively to study various cosmological and gravitational setups. Here, using a special type of generalized entropy that generalizes both the Rényi and Tsallis entropies, together with the holographic principle, we build a new model of holographic dark energy. Thereafter, considering a flat FRW universe filled by a pressureless component and the newly obtained dark energy model, the evolution of the cosmos is investigated, showing satisfactory results and behavior. In our model, the Hubble horizon plays the role of the IR cutoff, and there is no mutual interaction between the cosmic components. Our results indicate that the generalized entropy formalism may open a new window to becoming more familiar with the nature of spacetime and its properties.
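For reference, the standard discrete forms of the two entropies named in this record are shown below, together with the two-parameter Sharma–Mittal entropy, one well-known form that reduces to both in appropriate limits; whether this is the exact generalized entropy used in the record is not stated here.

```latex
% Tsallis and Rényi entropies for probabilities p_i and parameter q \neq 1:
S_{T} = \frac{1}{q-1}\Bigl(1 - \sum_i p_i^{\,q}\Bigr),
\qquad
S_{R} = \frac{1}{1-q}\ln \sum_i p_i^{\,q}.
% Sharma--Mittal entropy with parameters q and r:
S_{SM} = \frac{1}{1-r}\Biggl[\Bigl(\sum_i p_i^{\,q}\Bigr)^{\frac{1-r}{1-q}} - 1\Biggr],
% which recovers S_R in the limit r \to 1 and S_T at r = q.
```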

  14. Holographic p-wave superconductor models with Weyl corrections

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2015-04-01

    Full Text Available We study the effect of the Weyl corrections on the holographic p-wave dual models in the backgrounds of AdS soliton and AdS black hole via a Maxwell complex vector field model by using the numerical and analytical methods. We find that, in the soliton background, the Weyl corrections do not influence the properties of the holographic p-wave insulator/superconductor phase transition, which is different from that of the Yang–Mills theory. However, in the black hole background, we observe that similarly to the Weyl correction effects in the Yang–Mills theory, the higher Weyl corrections make it easier for the p-wave metal/superconductor phase transition to be triggered, which shows that these two p-wave models with Weyl corrections share some similar features for the condensation of the vector operator.

  15. Classification using Hierarchical Naive Bayes models

    DEFF Research Database (Denmark)

    Langseth, Helge; Dyhre Nielsen, Thomas

    2006-01-01

    Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing, sets of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class. To relax this assumption we consider a class of models, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...

  16. Cosmology of a holographic induced gravity model with curvature effects

    International Nuclear Information System (INIS)

    Bouhmadi-Lopez, Mariam; Errahmani, Ahmed; Ouali, Taoufiq

    2011-01-01

    We present a holographic model of the Dvali-Gabadadze-Porrati scenario with a Gauss-Bonnet term in the bulk. We concentrate on the solution that generalizes the normal Dvali-Gabadadze-Porrati branch. It is well known that this branch cannot describe the late-time acceleration of the universe even with the inclusion of a Gauss-Bonnet term. Here, we show that this branch in the presence of a Gauss-Bonnet curvature effect and a holographic dark energy with the Hubble scale as the infrared cutoff can describe the late-time acceleration of the universe. It is worthwhile to stress that such an energy density component cannot do the same job on the normal Dvali-Gabadadze-Porrati branch (without Gauss-Bonnet modifications) nor in a standard four-dimensional relativistic model. The acceleration on the brane is also presented as being induced through an effective dark energy which corresponds to a balance between the holographic one and geometrical effects encoded through the Hubble parameter.

  17. Two-point functions in a holographic Kondo model

    Science.gov (United States)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  18. Two-point functions in a holographic Kondo model

    Energy Technology Data Exchange (ETDEWEB)

    Erdmenger, Johanna [Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, D-97074 Würzburg (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 Munich (Germany); Hoyos, Carlos [Department of Physics, Universidad de Oviedo, Avda. Calvo Sotelo 18, 33007, Oviedo (Spain); O’Bannon, Andy [STAG Research Centre, Physics and Astronomy, University of Southampton,Highfield, Southampton SO17 1BJ (United Kingdom); Papadimitriou, Ioannis [SISSA and INFN - Sezione di Trieste, Via Bonomea 265, I 34136 Trieste (Italy); Probst, Jonas [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Wu, Jackson M.S. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)

    2017-03-07

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green’s function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  19. Quantisation of the holographic Ricci dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Albarran, Imanol; Bouhmadi-López, Mariam, E-mail: imanol@ubi.pt, E-mail: mbl@ubi.pt [Departamento de Física, Universidade da Beira Interior, 6200 Covilhã (Portugal)

    2015-08-01

    While general relativity is an extremely robust theory of the gravitational interaction in our Universe, it is expected to fail close to singularities such as cosmological ones. On the other hand, it is well known that some dark energy models can induce future singularities; this can be the case, for example, within the setup of the Holographic Ricci Dark Energy (HRDE) model. In this work, we perform a cosmological quantisation of the HRDE model and obtain the conditions under which a cosmic doomsday can be avoided in the quantum realm. We show as well that this quantum model avoids not only future singularities but also the past Big Bang singularity.

  20. Modelling of a holographic interferometry based calorimeter for radiation dosimetry

    Science.gov (United States)

    Beigzadeh, A. M.; Vaziri, M. R. Rashidian; Ziaie, F.

    2017-08-01

    In this research work, a model for predicting the behaviour of holographic interferometry based calorimeters for radiation dosimetry is introduced. Using this technique for radiation dosimetry, via measuring the variations of refractive index due to the energy deposited by radiation, has several considerable advantages, such as extreme sensitivity and the ability to work without the usual temperature sensors that disturb the radiation field. We show that the results of our model are in good agreement with experiments performed by other researchers under the same conditions. The model also reveals that these calorimeters have the additional merit of transforming the dose distribution into a set of discernible interference fringes.

  1. Hierarchical modeling and analysis for spatial data

    CERN Document Server

    Banerjee, Sudipto; Gelfand, Alan E

    2003-01-01

    Among the many uses of hierarchical modeling, its application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and data analysis.

  2. Dynamics of holographic vacuum energy in the DGP model

    International Nuclear Information System (INIS)

    Wu Xing; Zhu Zonghong; Cai Ronggen

    2008-01-01

    We consider the evolution of the vacuum energy in the Dvali-Gabadadze-Porrati (DGP) model according to the holographic principle, under the assumption that the relation linking the IR and UV cutoffs still holds in this scenario. The model is studied when the IR cutoff is chosen to be the Hubble scale H⁻¹, the particle horizon R_ph, and the future event horizon R_eh, respectively. The two branches of the DGP model are also taken into account. Through numerical analysis, we find that in the cases of H⁻¹ in the (+) branch and R_eh in both branches, the vacuum energy can play the role of dark energy. Moreover, when considering the combination of the vacuum energy and the 5D gravity effect in both branches, the equation of state of the effective dark energy may cross −1, which may lead to the big rip singularity. Besides, we constrain the model with the Type Ia supernovae and baryon oscillation data and find that our model is consistent with current data within 1σ, and that the observations prefer either a pure holographic dark energy or a pure DGP model.

  3. Method of computer generation and projection recording of microholograms for holographic memory systems: mathematical modelling and experimental implementation

    International Nuclear Information System (INIS)

    Betin, A Yu; Bobrinev, V I; Evtikhiev, N N; Zherdev, A Yu; Zlokazov, E Yu; Lushnikov, D S; Markin, V V; Odinokov, S B; Starikov, S N; Starikov, R S

    2013-01-01

    A method of computer generation and projection recording of microholograms for holographic memory systems is presented; the results of mathematical modelling and experimental implementation of the method are demonstrated. (holographic memory)

  4. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete-event networks for robotic systems. In the hierarchical approach, a Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete-event model as a real-time system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models can be simplified to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets; discrete-event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to view the robot's timing: by measuring the transport and transmission times on the spot, graphs are obtained showing the average time for the transport activity for each parameter set of finished products.
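The place/transition mechanics underlying the Petri-net discussion above can be made concrete in a few lines. This is a minimal, hedged sketch of the standard firing rule; the class name and the toy robot-cell net are invented for illustration and are not taken from the paper or from Visual Object Net++.

```python
# Minimal place/transition Petri net: a transition is enabled when every
# input place holds at least as many tokens as its arc weight; firing
# consumes input tokens and produces output tokens.  Illustrative only.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# A toy robot cell: a part moves from a buffer through a robot to done.
net = PetriNet({"buffer": 2, "robot_free": 1, "done": 0})
net.add_transition("load", {"buffer": 1, "robot_free": 1}, {"robot_busy": 1})
net.add_transition("unload", {"robot_busy": 1}, {"robot_free": 1, "done": 1})
net.fire("load")
net.fire("unload")
```

A hierarchical model in the paper's sense would compose nets like this one, with a higher-level net coordinating the subsystem nets.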

  5. Quantum chaos and holographic tensor models

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Chethan [Center for High Energy Physics, Indian Institute of Science,Bangalore 560012 (India); Sanyal, Sambuddha [International Center for Theoretical Sciences, Tata Institute of Fundamental Research,Bangalore 560089 (India); Subramanian, P.N. Bala [Center for High Energy Physics, Indian Institute of Science,Bangalore 560012 (India)

    2017-03-10

    A class of tensor models was recently outlined as potentially calculable examples of holography: their perturbative large-N behavior is similar to the Sachdev-Ye-Kitaev (SYK) model, but they are fully quantum mechanical (in the sense that there is no quenched disorder averaging). These facts make them intriguing tentative models for quantum black holes. In this note, we explicitly diagonalize the simplest non-trivial Gurau-Witten tensor model and study its spectral and late-time properties. We find parallels to (a single sample of) SYK, where some of these features were recently attributed to random matrix behavior and quantum chaos. In particular, the spectral form factor exhibits a dip-ramp-plateau structure after a running time average, in qualitative agreement with SYK. But we also observe that even though the spectrum has a unique ground state, it has a huge (quasi-?)degeneracy of intermediate energy states, not seen in SYK. If one ignores the delta function due to the degeneracies, however, there is level repulsion in the unfolded spacing distribution, hinting at chaos. Furthermore, there are gaps in the spectrum. The system also has a spectral mirror symmetry, which we trace back to the presence of a unitary operator with which the Hamiltonian anticommutes. We use it to argue that to the extent that the model exhibits random matrix behavior, it is controlled not by the Dyson ensembles, but by the BDI (chiral orthogonal) class in the Altland-Zirnbauer classification.
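The dip-ramp-plateau diagnostic mentioned in this record is the spectral form factor, |Σₙ e^(−iEₙt)|²/N². As a hedged illustration (a generic random-matrix computation, not the paper's Gurau-Witten spectrum), one can evaluate it on the eigenvalues of a single GOE sample:

```python
# Hedged sketch: the spectral form factor |sum_n exp(-i E_n t)|^2 / N^2
# evaluated on eigenvalues of a random GOE matrix, not on the paper's
# Gurau-Witten Hamiltonian.  Averaging over many samples (or a running
# time average, as in the abstract) smooths the dip-ramp-plateau shape.
import numpy as np

def spectral_form_factor(energies, times):
    phases = np.exp(-1j * np.outer(times, energies))   # shape (T, N)
    return np.abs(phases.sum(axis=1)) ** 2 / len(energies) ** 2

rng = np.random.default_rng(0)
n = 200
h = rng.standard_normal((n, n))
h = (h + h.T) / 2                       # real symmetric (GOE) sample
energies = np.linalg.eigvalsh(h)
sff = spectral_form_factor(energies, np.linspace(0.0, 50.0, 501))
```

The value is 1 at t = 0 by construction and decays toward a small late-time plateau of order 1/N; a single sample fluctuates, which is why the record applies a running time average.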

  6. Quantum chaos and holographic tensor models

    International Nuclear Information System (INIS)

    Krishnan, Chethan; Sanyal, Sambuddha; Subramanian, P.N. Bala

    2017-01-01

    A class of tensor models was recently outlined as potentially calculable examples of holography: their perturbative large-N behavior is similar to the Sachdev-Ye-Kitaev (SYK) model, but they are fully quantum mechanical (in the sense that there is no quenched disorder averaging). These facts make them intriguing tentative models for quantum black holes. In this note, we explicitly diagonalize the simplest non-trivial Gurau-Witten tensor model and study its spectral and late-time properties. We find parallels to (a single sample of) SYK, where some of these features were recently attributed to random matrix behavior and quantum chaos. In particular, the spectral form factor exhibits a dip-ramp-plateau structure after a running time average, in qualitative agreement with SYK. But we also observe that even though the spectrum has a unique ground state, it has a huge (quasi-?)degeneracy of intermediate energy states, not seen in SYK. If one ignores the delta function due to the degeneracies, however, there is level repulsion in the unfolded spacing distribution, hinting at chaos. Furthermore, there are gaps in the spectrum. The system also has a spectral mirror symmetry, which we trace back to the presence of a unitary operator with which the Hamiltonian anticommutes. We use it to argue that to the extent that the model exhibits random matrix behavior, it is controlled not by the Dyson ensembles, but by the BDI (chiral orthogonal) class in the Altland-Zirnbauer classification.

  7. Entanglement entropy in a holographic p-wave superconductor model

    Directory of Open Access Journals (Sweden)

    Li-Fang Li

    2015-05-01

    Full Text Available In a recent paper, arXiv:1309.4877, a holographic p-wave model has been proposed in an Einstein–Maxwell-complex vector field theory with a negative cosmological constant. The model exhibits a rich phase structure depending on the mass and the charge of the vector field. We investigate the behavior of the entanglement entropy of the dual field theory in this model. When the above two model parameters change, we observe second-order, first-order and zeroth-order phase transitions from the behavior of the entanglement entropy at some intermediate temperatures. This implies that the entanglement entropy can indicate not only the occurrence of a phase transition, but also its order. The entanglement entropy is indeed a good probe of phase transitions. Furthermore, the “retrograde condensation”, which is a sub-dominant phase, is also reflected in the entanglement entropy.

  8. Entanglement entropy in a holographic p-wave superconductor model

    Energy Technology Data Exchange (ETDEWEB)

    Li, Li-Fang, E-mail: lilf@itp.ac.cn [State Key Laboratory of Space Weather, Center for Space Science and Applied Research, Chinese Academy of Sciences, Beijing 100190 (China); Cai, Rong-Gen, E-mail: cairg@itp.ac.cn [State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Li, Li, E-mail: liliphy@itp.ac.cn [State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Shen, Chao, E-mail: sc@nssc.ac.cn [State Key Laboratory of Space Weather, Center for Space Science and Applied Research, Chinese Academy of Sciences, Beijing 100190 (China)

    2015-05-15

    In a recent paper (arXiv:1309.4877), a holographic p-wave model has been proposed in an Einstein–Maxwell-complex vector field theory with a negative cosmological constant. The model exhibits a rich phase structure depending on the mass and the charge of the vector field. We investigate the behavior of the entanglement entropy of the dual field theory in this model. When the above two model parameters change, we observe second-order, first-order and zeroth-order phase transitions from the behavior of the entanglement entropy at some intermediate temperatures. This implies that the entanglement entropy can indicate not only the occurrence of a phase transition, but also its order. The entanglement entropy is indeed a good probe of phase transitions. Furthermore, the “retrograde condensation”, which is a sub-dominant phase, is also reflected in the entanglement entropy.

  9. Learning with hierarchical-deep models.

    Science.gov (United States)

    Salakhutdinov, Ruslan; Tenenbaum, Joshua B; Torralba, Antonio

    2013-08-01

    We introduce HD (or “Hierarchical-Deep”) models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian (HB) models. Specifically, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a deep Boltzmann machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.

  10. Correlation Functions in Holographic Minimal Models

    CERN Document Server

    Papadodimas, Kyriakos

    2012-01-01

    We compute exact three and four point functions in the W_N minimal models that were recently conjectured to be dual to a higher spin theory in AdS_3. The boundary theory has a large number of light operators that are not only invisible in the bulk but grow exponentially with N even at small conformal dimensions. Nevertheless, we provide evidence that this theory can be understood in a 1/N expansion since our correlators look like free-field correlators corrected by a power series in 1/N . However, on examining these corrections we find that the four point function of the two bulk scalar fields is corrected at leading order in 1/N through the contribution of one of the additional light operators in an OPE channel. This suggests that, to correctly reproduce even tree-level correlators on the boundary, the bulk theory needs to be modified by the inclusion of additional fields. As a technical by-product of our analysis, we describe two separate methods -- including a Coulomb gas type free-field formalism -- that ...

  11. A holographic view on matrix model of black hole

    International Nuclear Information System (INIS)

    Suyama, Takao; Yi Piljin

    2004-01-01

    We investigate a deformed matrix model proposed by Kazakov et al. in relation to Witten's two-dimensional black hole. The existing conjectures assert the equivalence of the two by mapping each to a deformed c=1 theory called the sine-Liouville theory. We point out that the matrix theory in question may be naturally interpreted as a gauged quantum mechanics deformed by the insertion of an exponentiated Wilson loop operator, which gives a more direct and holographic map between the two sides. The matrix model in the usual scaling limit must correspond to the bosonic SL(2,R)/U(1) theory in genus expansion but exact in α'. We successfully test this by computing the Wilson loop expectation value and comparing it against the bulk computation. For the latter, we employ the α'-exact geometry proposed by Dijkgraaf, Verlinde, and Verlinde, which was further advocated by Tseytlin. We close with comments on open problems. (author)

  12. Entropic information of dynamical AdS/QCD holographic models

    Energy Technology Data Exchange (ETDEWEB)

    Bernardini, Alex E., E-mail: alexeb@ufscar.br [Departamento de Física, Universidade Federal de São Carlos, PO Box 676, 13565-905, São Carlos, SP (Brazil); Rocha, Roldão da, E-mail: roldao.rocha@ufabc.edu.br [Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, UFABC, 09210-580, Santo André (Brazil)

    2016-11-10

    The Shannon based conditional entropy that underlies five-dimensional Einstein–Hilbert gravity coupled to a dilaton field is investigated in the context of dynamical holographic AdS/QCD models. Considering the UV and IR dominance limits of such AdS/QCD models, the conditional entropy is shown to shed some light onto the meson classification schemes, which corroborate with the existence of light-flavor mesons of lower spins in Nature. Our analysis is supported by a correspondence between statistical mechanics and information entropy which establishes the physical grounds to the Shannon information entropy, also in the context of statistical mechanics, and provides some specificities for accurately extending the entropic discussion to continuous modes of physical systems. From entropic informational grounds, the conditional entropy allows one to identify the lower experimental/phenomenological occurrence of higher spin mesons in Nature. Moreover, it introduces a quantitative theoretical apparatus for studying the instability of high spin light-flavor mesons.
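The Shannon conditional entropy underlying this record has a simple discrete form, H(Y|X) = H(X,Y) − H(X). The following is a generic, hedged sketch of that identity on empirical samples (not the continuous-mode extension discussed in the paper):

```python
# Hedged sketch of the discrete identity H(Y|X) = H(X,Y) - H(X); the
# record extends Shannon's entropy to continuous modes, which is not
# attempted here.
from math import log2
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy, in bits."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def conditional_entropy(xs, ys):
    """H(Y | X) = H(X, Y) - H(X) from paired samples."""
    return entropy(list(zip(xs, ys))) - entropy(xs)
```

For independent fair bits H(Y|X) equals one full bit, and it drops to zero when Y is fully determined by X, which is the sense in which conditional entropy measures residual uncertainty.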

  13. Thermodynamical Aspects of Modified Holographic Dark Energy Model

    International Nuclear Information System (INIS)

    Li Hui; Zhang Yi

    2014-01-01

    We investigate the unified first law and the generalized second law in a modified holographic dark energy model. The thermodynamical analysis on the apparent horizon works, and the corresponding entropy formula is extracted from the systematic algorithm. The entropy correction term depends on the number of extra dimensions of the brane, as expected, but the interplay between the correction term and the extra dimensions is more complicated. With the unified first law of thermodynamics well-founded, the generalized second law of thermodynamics is discussed, and it is found that the second law can be violated in certain circumstances. In particular, if the number of extra dimensions is larger than one, the generalized second law of thermodynamics is always satisfied; otherwise, the validity of the second law can only be guaranteed when the Hubble radius is much smaller than the crossover scale r_c of the 5-dimensional DGP model. (geophysics, astronomy, and astrophysics)

  14. A more general interacting model of holographic dark energy

    International Nuclear Information System (INIS)

    Yu Fei; Zhang Jingfei; Lu Jianbo; Wang Wei; Gui Yuanxing

    2010-01-01

    So far, no theories or observational data deny the presence of interaction between dark energy and dark matter. We naturally extend the holographic dark energy (HDE) model proposed by Granda and Oliveros, in which the dark energy density includes not only the square of the Hubble scale but also its time derivative, to the case with interaction, and analytic forms for the cosmic parameters are obtained under specific boundary conditions. The various behaviors of the cosmic expansion depend on the introduced numerical parameters, which are also constrained. The more general interacting model inherits the features of previous HDE models, keeping the consistency of the theory.

  15. A hierarchical model for ordinal matrix factorization

    DEFF Research Database (Denmark)

    Paquet, Ulrich; Thomson, Blaise; Winther, Ole

    2012-01-01

    This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based...

  16. Holographic superconductor in a deformed four-dimensional STU model

    Energy Technology Data Exchange (ETDEWEB)

    Pourhassan, B.; Bagheri-Mohagheghi, M.M. [Damghan University, School of Physics, Damghan (Iran, Islamic Republic of)

    2017-11-15

    In this paper, we consider a deformed STU model in four dimensions including both electric and magnetic charges. Using the AdS/CFT correspondence, we study holographic superconductors and obtain transport properties such as the electrical and thermal conductivities. We obtain the transport properties in terms of the magnetic charge of the black hole and interpret it as the magnetic monopole of the dual field theory. We find that the presence of the magnetic charge is necessary to reach maximum conductivity, and that a magnetic monopole with a critical charge (137 e) is required to reach maximum superconductivity. We also show that the thermal conductivity increases with the magnetic charge. It may be concluded that the origin of superconductivity is the magnetic monopole. (orig.)

  17. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

    Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, the appearance context features and the interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts, scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level improves event recognition performance, and jointly integrating the three levels of contexts through our hierarchical model achieves the best performance.

  18. Hierarchical Bayesian Models of Subtask Learning

    Science.gov (United States)

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  19. Hierarchical models in the brain.

    Directory of Open Access Journals (Sweden)

    Karl Friston

    2008-11-01

    Full Text Available This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
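The stacking described above, with the output of one dynamic model serving as input to the next, can be sketched generatively; the linear dynamics, gains, and noise levels here are arbitrary choices for illustration, not the paper's specification.

```python
import random

random.seed(0)

def simulate(T=100):
    # Two stacked state-space layers: the slow upper layer's output
    # becomes the causal input to the faster lower layer.
    x2, x1, y = 0.0, 0.0, []
    for _ in range(T):
        x2 = 0.99 * x2 + random.gauss(0, 0.01)       # slow hidden state (layer 2)
        v1 = x2                                       # layer-2 output = layer-1 input
        x1 = 0.90 * x1 + v1 + random.gauss(0, 0.05)  # fast hidden state (layer 1)
        y.append(x1 + random.gauss(0, 0.1))          # observation with sensor noise
    return y

data = simulate()
print(len(data))  # -> 100
```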

  20. Topic Modeling of Hierarchical Corpora /

    OpenAIRE

    Kim, Do-kyum

    2014-01-01

    The sizes of modern digital libraries have grown beyond our capacity to comprehend manually. Thus we need new tools to help us in organizing and browsing large corpora of text that do not require manually examining each document. To this end, machine learning researchers have developed topic models, statistical learning algorithms for automatic comprehension of large collections of text. Topic models provide both global and local views of a corpus; they discover topics that run through the co...

  1. AN INTEGER PROGRAMMING MODEL FOR HIERARCHICAL WORKFORCE

    Directory of Open Access Journals (Sweden)

    BANU SUNGUR

    2013-06-01

    Full Text Available The model presented in this paper is based on the model developed by Billionnet for the hierarchical workforce problem. In Billionnet's model, the weekly working hours of workers are not taken into consideration when determining the workers' weekly costs. In our model, the weekly costs per worker are reduced in proportion to the working hours per week. Our model is illustrated on Billionnet's example. The models in question are compared and evaluated on the basis of the results obtained from the example problem. The proposed model achieves a reduction in the total cost.
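The hierarchical substitution constraint in this class of workforce models (a more qualified worker may fill in for any less qualified type) can be illustrated with a tiny brute-force search; the costs and requirements below are hypothetical numbers, not Billionnet's example data.

```python
from itertools import product

# Hypothetical daily requirements and weekly costs per worker type.
# Type 1 is the most qualified and may substitute for types 2 and 3.
demand = {1: 2, 2: 3, 3: 4}      # workers required at each level
cost = {1: 50, 2: 35, 3: 20}     # weekly cost per worker of each type

def feasible(x1, x2, x3):
    # Hierarchical coverage: the cumulative supply of the top k types
    # must meet the cumulative demand of the top k levels.
    return (x1 >= demand[1]
            and x1 + x2 >= demand[1] + demand[2]
            and x1 + x2 + x3 >= demand[1] + demand[2] + demand[3])

best = min(
    (cost[1] * x1 + cost[2] * x2 + cost[3] * x3, (x1, x2, x3))
    for x1, x2, x3 in product(range(10), repeat=3)
    if feasible(x1, x2, x3)
)
print(best)  # -> (285, (2, 3, 4))
```

Since substitution by a more expensive type never pays off here, the optimum simply staffs each level at its own demand; an integer programming solver becomes necessary once working-hour reductions and shift constraints enter.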

  2. Model-based magnetization retrieval from holographic phase images

    Energy Technology Data Exchange (ETDEWEB)

    Röder, Falk, E-mail: f.roeder@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Vogel, Karin [Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Wolf, Daniel [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); Triebenberg Labor, Institut für Strukturphysik, Technische Universität Dresden, D-01062 Dresden (Germany); Hellwig, Olav [Helmholtz-Zentrum Dresden-Rossendorf, Institut für Ionenstrahlphysik und Materialforschung, Bautzner Landstr. 400, D-01328 Dresden (Germany); AG Magnetische Funktionsmaterialien, Institut für Physik, Technische Universität Chemnitz, D-09126 Chemnitz (Germany); HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wee, Sung Hun [HGST, A Western Digital Company, 3403 Yerba Buena Rd., San Jose, CA 95135 (United States); Wicht, Sebastian; Rellinghaus, Bernd [IFW Dresden, Institute for Metallic Materials, P.O. Box 270116, D-01171 Dresden (Germany)

    2017-05-15

    The phase shift of the electron wave is a useful measure for the projected magnetic flux density of magnetic objects at the nanometer scale. More important for materials science, however, is the knowledge about the magnetization in a magnetic nano-structure. As demonstrated here, a dominating presence of stray fields prohibits a direct interpretation of the phase in terms of magnetization modulus and direction. We therefore present a model-based approach for retrieving the magnetization by considering the projected shape of the nano-structure and assuming a homogeneous magnetization therein. We apply this method to FePt nano-islands epitaxially grown on a SrTiO{sub 3} substrate, which indicates an inclination of their magnetization direction relative to the structural easy magnetic [001] axis. By means of this real-world example, we discuss prospects and limits of this approach. - Highlights: • Retrieval of the magnetization from holographic phase images. • Magnetostatic model constructed for a magnetic nano-structure. • Decomposition into homogeneously magnetized components. • Discretization of each component by elementary cuboids. • Analytic solution for the phase of a magnetized cuboid considered. • Fitting a set of magnetization vectors to experimental phase images.

  3. A holographic model for the fractional quantum Hall effect

    Energy Technology Data Exchange (ETDEWEB)

    Lippert, Matthew [Institute for Theoretical Physics, University of Amsterdam,Science Park 904, 1090GL Amsterdam (Netherlands); Meyer, René [Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo,Kashiwa, Chiba 277-8568 (Japan); Taliotis, Anastasios [Theoretische Natuurkunde, Vrije Universiteit Brussel andThe International Solvay Institutes,Pleinlaan 2, B-1050 Brussels (Belgium)

    2015-01-08

    Experimental data for fractional quantum Hall systems can to a large extent be explained by assuming the existence of a Γ{sub 0}(2) modular symmetry group commuting with the renormalization group flow and hence mapping different phases of two-dimensional electron gases into each other. Based on this insight, we construct a phenomenological holographic model which captures many features of the fractional quantum Hall effect. Using an SL(2,ℤ)-invariant Einstein-Maxwell-axio-dilaton theory capturing the important modular transformation properties of quantum Hall physics, we find dyonic dilatonic black hole solutions which are gapped and have a Hall conductivity equal to the filling fraction, as expected for quantum Hall states. We also provide several technical results on the general behavior of the gauge field fluctuations around these dyonic dilatonic black hole solutions: we specify a sufficient criterion for IR normalizability of the fluctuations, demonstrate the preservation of the gap under the SL(2,ℤ) action, and prove that the singularity of the fluctuation problem in the presence of a magnetic field is an accessory singularity. We finish with a preliminary investigation of the possible IR scaling solutions of our model and some speculations on how they could be important for the observed universality of quantum Hall transitions.

  4. A holographic model for the fractional quantum Hall effect

    Science.gov (United States)

    Lippert, Matthew; Meyer, René; Taliotis, Anastasios

    2015-01-01

    Experimental data for fractional quantum Hall systems can to a large extent be explained by assuming the existence of a Γ0(2) modular symmetry group commuting with the renormalization group flow and hence mapping different phases of two-dimensional electron gases into each other. Based on this insight, we construct a phenomenological holographic model which captures many features of the fractional quantum Hall effect. Using an SL(2,ℤ)-invariant Einstein-Maxwell-axio-dilaton theory capturing the important modular transformation properties of quantum Hall physics, we find dyonic dilatonic black hole solutions which are gapped and have a Hall conductivity equal to the filling fraction, as expected for quantum Hall states. We also provide several technical results on the general behavior of the gauge field fluctuations around these dyonic dilatonic black hole solutions: we specify a sufficient criterion for IR normalizability of the fluctuations, demonstrate the preservation of the gap under the SL(2,ℤ) action, and prove that the singularity of the fluctuation problem in the presence of a magnetic field is an accessory singularity. We finish with a preliminary investigation of the possible IR scaling solutions of our model and some speculations on how they could be important for the observed universality of quantum Hall transitions.

  5. Internet advertising effectiveness by using hierarchical model

    OpenAIRE

    RAHMANI, Samaneh

    2015-01-01

    Abstract. The present paper has been developed under the title of internet advertising effectiveness by using a hierarchical model. Presenting the question: today, the Internet is an important channel in marketing and advertising. The reason for this could be the ability of the Internet to reduce costs and give people access to online services [1]. Advertisers can also easily reach a multitude of users and communicate with them at low cost [9]. On the other hand, compared to traditional advertising, interne...

  6. A Hierarchical Agency Model of Deposit Insurance

    OpenAIRE

    Jonathan Carroll; Shino Takayama

    2010-01-01

    This paper develops a hierarchical agency model of deposit insurance. The main purpose is to undertake a game-theoretic analysis of the consequences of deposit insurance schemes and their effects on monitoring incentives for banks. Using this simple framework, we analyze both risk-independent and risk-dependent premium schemes along with reserve requirement constraints. The results provide policymakers with not only a better understanding of the effects of deposit insurance on welfare and th...

  7. On the internal consistency of holographic dark energy models

    International Nuclear Information System (INIS)

    Horvat, R

    2008-01-01

    Holographic dark energy (HDE) models, underpinned by an effective quantum field theory (QFT) with a manifest UV/IR connection, have become convincing candidates for providing an explanation of the dark energy in the universe. On the other hand, the maximum number of quantum states that a conventional QFT for a box of size L is capable of describing relates to those boxes which are on the brink of experiencing a sudden collapse to a black hole. Another restriction on the underlying QFT is that the UV cut-off, which cannot be chosen independently of the IR cut-off and therefore becomes a function of time in a cosmological setting, should remain the largest energy scale even in the standard cosmological epochs preceding a dark energy dominated one. We show that, irrespective of whether one deals with the saturated form of HDE or takes a certain degree of non-saturation in the past, the above restrictions cannot be met in a radiation dominated universe, an epoch in the history of the universe which is expected to be perfectly describable within conventional QFT.

  8. Probing interaction and spatial curvature in the holographic dark energy model

    International Nuclear Information System (INIS)

    Li, Miao; Li, Xiao-Dong; Wang, Shuang; Wang, Yi; Zhang, Xin

    2009-01-01

    In this paper we place observational constraints on the interaction and spatial curvature in the holographic dark energy model. We consider three kinds of phenomenological interactions between holographic dark energy and matter, i.e., the interaction term Q is proportional to the energy densities of dark energy (ρ_Λ), matter (ρ_m), and matter plus dark energy (ρ_m + ρ_Λ). For probing the interaction and spatial curvature in the holographic dark energy model, we use the latest observational data including the type Ia supernovae (SNIa) Constitution data, the shift parameter of the cosmic microwave background (CMB) given by the five-year Wilkinson Microwave Anisotropy Probe (WMAP5) observations, and the baryon acoustic oscillation (BAO) measurement from the Sloan Digital Sky Survey (SDSS). Our results show that the interaction and spatial curvature in the holographic dark energy model are both rather small. Besides, it is interesting to find that there exists significant degeneracy between the phenomenological interaction and the spatial curvature in the holographic dark energy model.
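The three phenomenological interaction terms described above are conventionally parametrized in this literature with a dimensionless coupling b:

```latex
Q_1 = 3bH\rho_\Lambda, \qquad Q_2 = 3bH\rho_m, \qquad Q_3 = 3bH\left(\rho_m + \rho_\Lambda\right).
```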

  9. The effect of anisotropy on the thermodynamics of the interacting holographic dark energy model

    Science.gov (United States)

    Hossienkhani, H.; Jafari, A.; Fayaz, V.; Ramezani, A. H.

    2018-02-01

    By considering a holographic model for the dark energy in an anisotropic universe, the thermodynamics of a scheme of dark matter and dark energy interaction has been investigated. The results suggest that when holographic dark energy and dark matter evolve separately, each of them remains in thermodynamic equilibrium, therefore the interaction between them may be viewed as a stable thermal fluctuation that brings a logarithmic correction to the equilibrium entropy. Also the relation between the interaction term of the dark components and this thermal fluctuation has been obtained. Additionally, for a cosmological interaction as a free function, the anisotropy effects on the generalized second law of thermodynamics have been studied. By using the latest observational data on the holographic dark energy models as the unification of dark matter and dark energy, the observational constraints have been probed. To do this, we focus on observational determinations of the Hubble expansion rate H(z). Finally, we evaluate the anisotropy effects (although low) on various topics, such as the evolution of the statefinder diagnostic, the distance modulus and the spherical collapse from the holographic dark energy model and compare them with the results of the holographic dark energy of the Friedmann-Robertson-Walker and ΛCDM models.

  10. Hierarchic modeling of heat exchanger thermal hydraulics

    International Nuclear Information System (INIS)

    Horvat, A.; Koncar, B.

    2002-01-01

    Volume Averaging Technique (VAT) is employed in order to model the heat exchanger cross-flow as a porous media flow. As the averaging of the transport equations leads to a closure problem, separate relations are introduced to model interphase momentum and heat transfer between the fluid flow and the solid structure. Hierarchic modeling is used to calculate the local drag coefficient C_d as a function of Reynolds number Re_h. For that purpose a separate model of the REV is built and DNS of flow through the REV is performed. The local values of the heat transfer coefficient h are obtained from the available literature. The geometry of the simulation domain and the boundary conditions follow the geometry of the experimental test section used at U.C.L.A. The calculated temperature fields reveal that the geometry with the denser pin-fin arrangement (HX1) heats the fluid flow faster. The temperature field in the HX2 exhibits the formation of a thermal boundary layer between pin-fins, which has a significant role in the overall thermal performance of the heat exchanger. Although the presented discrepancies of the whole-section drag coefficient C_d are large, we believe that hierarchic modeling is an appropriate strategy for calculation of complex transport phenomena in heat exchanger geometries. (author)

  11. Holographic dark energy models: a comparison from the latest observational data

    International Nuclear Information System (INIS)

    Li, Miao; Li, Xiao-Dong; Wang, Shuang; Zhang, Xin

    2009-01-01

    The holographic principle of quantum gravity theory has been applied to the dark energy (DE) problem, and so far three holographic DE models have been proposed: the original holographic dark energy (HDE) model, the agegraphic dark energy (ADE) model, and the holographic Ricci dark energy (RDE) model. In this work, we perform the best-fit analysis on these three models, by using the latest observational data including the Union+CFA3 sample of 397 Type Ia supernovae (SNIa), the shift parameter of the cosmic microwave background (CMB) given by the five-year Wilkinson Microwave Anisotropy Probe (WMAP5) observations, and the baryon acoustic oscillation (BAO) measurement from the Sloan Digital Sky Survey (SDSS). The analysis shows that for HDE, χ²_min = 465.912; for RDE, χ²_min = 483.130; for ADE, χ²_min = 481.694. Among these models, the HDE model gives the smallest χ²_min. Besides, we also use the Bayesian evidence (BE) as a model selection criterion to make a comparison. It is found that for HDE, ADE, and RDE, Δln BE = −0.86, −5.17, and −8.14, respectively. So it seems that the HDE model is more favored by the observational data.

  12. Reconstructing an interacting holographic polytropic gas model in a non-flat FRW universe

    International Nuclear Information System (INIS)

    Karami, K; Abdolmaleki, A

    2010-01-01

    We study the correspondence between the interacting holographic dark energy and the polytropic gas model of dark energy in a non-flat FRW universe. This correspondence allows one to reconstruct the potential and the dynamics for the scalar field of the polytropic model, which describe accelerated expansion of the universe.

  13. Reconstructing an interacting holographic polytropic gas model in a non-flat FRW universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K; Abdolmaleki, A, E-mail: KKarami@uok.ac.i [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of)

    2010-05-01

    We study the correspondence between the interacting holographic dark energy and the polytropic gas model of dark energy in a non-flat FRW universe. This correspondence allows one to reconstruct the potential and the dynamics for the scalar field of the polytropic model, which describe accelerated expansion of the universe.

  14. Galactic chemical evolution in hierarchical formation models

    Science.gov (United States)

    Arrigoni, Matias

    2010-10-01

    The chemical properties and abundance ratios of galaxies provide important information about their formation histories. Galactic chemical evolution has been modelled in detail within the monolithic collapse scenario. These models have successfully described the abundance distributions in our Galaxy and other spiral discs, as well as the trends of metallicity and abundance ratios observed in early-type galaxies. In the last three decades, however, the paradigm of hierarchical assembly in a Cold Dark Matter (CDM) cosmology has revised the picture of how structure in the Universe forms and evolves. In this scenario, galaxies form when gas radiatively cools and condenses inside dark matter haloes, which themselves follow dissipationless gravitational collapse. The CDM picture has been successful at predicting many observed properties of galaxies (for example, the luminosity and stellar mass function of galaxies, color-magnitude or star formation rate vs. stellar mass distributions, relative numbers of early and late-type galaxies, gas fractions and size distributions of spiral galaxies, and the global star formation history), though many potential problems and open questions remain. It is therefore interesting to see whether chemical evolution models, when implemented within this modern cosmological context, are able to correctly predict the observed chemical properties of galaxies. With the advent of more powerful telescopes and detectors, precise observations of chemical abundances and abundance ratios in various phases (stellar, ISM, ICM) offer the opportunity to obtain strong constraints on galaxy formation histories and the physics that shapes them. However, in order to take advantage of these observations, it is necessary to implement detailed modeling of chemical evolution into a modern cosmological model of hierarchical assembly.

  15. A general holographic insulator/superconductor model with dark matter sector away from the probe limit

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Yan, E-mail: yanpengphy@163.com [School of Mathematical Sciences, Qufu Normal University, Qufu, Shandong 273165 (China); School of Mathematics and Computer Science, Shaanxi Sci-Tech University, Hanzhong, Shaanxi 723000 (China); Pan, Qiyuan, E-mail: panqiyuan@126.com [Department of Physics, Key Laboratory of Low Dimensional Quantum Structures and Quantum Control of Ministry of Education, Hunan Normal University, Changsha, Hunan 410081 (China); Liu, Yunqi, E-mail: liuyunqi@hust.edu.cn [School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)

    2017-02-15

    We investigate holographic phase transitions with a dark matter sector in the AdS soliton background away from the probe limit. In cases of weak backreaction, we find that a larger coupling parameter α makes the gap of the condensation shallower while the critical chemical potential remains constant. In contrast, for very heavy backreaction, the dark matter sector can affect the critical chemical potential and the order of the phase transitions. We also find that the jump of the holographic topological entanglement entropy corresponds to a first-order transition between superconducting states in this model with a dark matter sector. More importantly, for certain sets of parameters, we observe a novel phenomenon of retrograde condensation. In a word, the dark matter sector provides richer physics in the phase structure, and the holographic superconductor properties are helpful in understanding dark matter.

  16. A general holographic insulator/superconductor model with dark matter sector away from the probe limit

    International Nuclear Information System (INIS)

    Peng, Yan; Pan, Qiyuan; Liu, Yunqi

    2017-01-01

    We investigate holographic phase transitions with a dark matter sector in the AdS soliton background away from the probe limit. In cases of weak backreaction, we find that a larger coupling parameter α makes the gap of the condensation shallower while the critical chemical potential remains constant. In contrast, for very heavy backreaction, the dark matter sector can affect the critical chemical potential and the order of the phase transitions. We also find that the jump of the holographic topological entanglement entropy corresponds to a first-order transition between superconducting states in this model with a dark matter sector. More importantly, for certain sets of parameters, we observe a novel phenomenon of retrograde condensation. In a word, the dark matter sector provides richer physics in the phase structure, and the holographic superconductor properties are helpful in understanding dark matter.

  17. A general holographic insulator/superconductor model with dark matter sector away from the probe limit

    Directory of Open Access Journals (Sweden)

    Yan Peng

    2017-02-01

    Full Text Available We investigate holographic phase transitions with a dark matter sector in the AdS soliton background away from the probe limit. In cases of weak backreaction, we find that a larger coupling parameter α makes the gap of the condensation shallower while the critical chemical potential remains constant. In contrast, for very heavy backreaction, the dark matter sector can affect the critical chemical potential and the order of the phase transitions. We also find that the jump of the holographic topological entanglement entropy corresponds to a first-order transition between superconducting states in this model with a dark matter sector. More importantly, for certain sets of parameters, we observe a novel phenomenon of retrograde condensation. In a word, the dark matter sector provides richer physics in the phase structure, and the holographic superconductor properties are helpful in understanding dark matter.

  18. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

    Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
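Order-of-entry effects of the kind discussed above can be seen in a minimal two-step hierarchical regression; the data are synthetic and the solver is a hand-rolled least squares, purely for illustration.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for small linear systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def r_squared(X, y):
    # Ordinary least squares with an intercept; X is a list of predictor columns.
    cols = [[1.0] * len(y)] + X
    XtX = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    Xty = [sum(a * b for a, b in zip(ci, y)) for ci in cols]
    beta = solve(XtX, Xty)
    yhat = [sum(b * c[i] for b, c in zip(beta, cols)) for i in range(len(y))]
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# Synthetic data: y is an exact linear function of both predictors.
x1 = [0, 1, 2, 3, 4, 5]
x2 = [1, 0, 2, 1, 3, 2]
y = [1 + 2 * a + 3 * b for a, b in zip(x1, x2)]

r2_step1 = r_squared([x1], y)          # block 1: x1 only
r2_step2 = r_squared([x1, x2], y)      # block 2: x1 and x2
print(round(r2_step2 - r2_step1, 4))   # incremental variance explained by x2
```

The quantity printed is the ΔR² reported at each step of a hierarchical regression; swapping the entry order of x1 and x2 changes how the shared variance is credited, which is exactly the order-of-entry issue the abstract raises.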

  19. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    Science.gov (United States)

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  20. Holographic non-Gaussianity

    International Nuclear Information System (INIS)

    McFadden, Paul; Skenderis, Kostas

    2011-01-01

    We investigate the non-Gaussianity of primordial cosmological perturbations within our recently proposed holographic description of inflationary universes. We derive a holographic formula that determines the bispectrum of cosmological curvature perturbations in terms of correlation functions of a holographically dual three-dimensional non-gravitational quantum field theory (QFT). This allows us to compute the primordial bispectrum for a universe which started in a non-geometric holographic phase, using perturbative QFT calculations. Strikingly, for a class of models specified by a three-dimensional super-renormalisable QFT, the primordial bispectrum is of exactly the factorisable equilateral form with f_NL^equil = 5/36, irrespective of the details of the dual QFT. A by-product of this investigation is a holographic formula for the three-point function of the trace of the stress-energy tensor along general holographic RG flows, which should have applications outside the remit of this work.

  1. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    Science.gov (United States)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    The meridian system is not only the basis of traditional Chinese medicine (TCM) methods (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper introduces a new informational perspective to understand the reality and the holographic field of the meridian. Based on the maximum information entropy principle (MIEP), a dynamic equation for the holographic field has been deduced, which reflects the evolutionary characteristics of the meridian. By using a self-organizing artificial neural network as the algorithm, the evolutionary dynamic equation of the holographic field can be resolved to assess properties of meridians and clinically diagnose the health characteristics of patients. Finally, through some cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. It is proved that this model not only has significant implications in revealing the essence of the meridian in TCM, but may also play a guiding role in the clinical assessment of patients based on the holographic field of meridians.
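The maximum-entropy step can be illustrated in isolation (this is a generic MIEP sketch, not the paper's dynamic equation): given discrete states and a mean-value constraint, the maximum-entropy distribution is a Boltzmann distribution whose Lagrange multiplier can be found by bisection. The states and target below are made-up values.

```python
import math

states = [0.0, 1.0, 2.0, 3.0]   # hypothetical discrete levels
target_mean = 1.2               # constraint: required expected level

def boltzmann(beta):
    # Maximum-entropy distribution under a mean constraint.
    w = [math.exp(-beta * s) for s in states]
    z = sum(w)
    return [x / z for x in w]

def mean(p):
    return sum(s * q for s, q in zip(states, p))

# Bisect on the Lagrange multiplier beta until the constraint is met;
# mean(boltzmann(beta)) decreases monotonically in beta.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean(boltzmann(mid)) > target_mean:
        lo = mid
    else:
        hi = mid

p = boltzmann(0.5 * (lo + hi))
entropy = -sum(q * math.log(q) for q in p)
print(round(mean(p), 3))  # -> 1.2
```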

  2. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

    Full Text Available Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM, which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group

  3. A hierarchical stochastic model for bistable perception.

    Science.gov (United States)

    Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby

    2017-11-01

    Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to
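The Brownian mechanism described above can be sketched in a few lines. This is a toy reading of the idea (an activity difference with drift, and a percept switch at each threshold crossing), not the authors' fitted HBM; the function name and all parameter values are illustrative:

```python
import random

def simulate_percept_switches(drift=0.1, sigma=1.0, threshold=5.0,
                              dt=0.01, steps=200_000, seed=0):
    """Toy Brownian-competition model of bistable perception: x is the
    activity difference between two competing neuronal populations; it
    drifts and diffuses, and whenever |x| crosses the threshold the
    percept flips, the drift reverses, and the difference resets."""
    rng = random.Random(seed)
    x, sign = 0.0, 1.0
    switch_times = []
    for i in range(steps):
        x += sign * drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        if abs(x) >= threshold:
            sign = -1.0 if x > 0 else 1.0   # drift now favors the other percept
            switch_times.append(i * dt)
            x = 0.0                          # reset the activity difference
    return switch_times

times = simulate_percept_switches()
print(len(times), "percept switches")
```

Varying the drift-to-noise ratio changes the dominance-duration distribution, which is the kind of response-pattern variability the abstract refers to.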

  4. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
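The within-model/between-model split that the BMA tree propagates through its levels is, at a single level, the law of total variance. A generic single-level sketch (the function name and inputs are ours for illustration, not the HBMA software):

```python
def bma_moments(preds, variances, probs):
    """Bayesian model averaging of one scalar prediction.
    preds[i], variances[i]: mean and within-model variance of model i;
    probs[i]: posterior model probability.
    Returns (mean, within, between, total) with
    total = within + between (law of total variance)."""
    assert abs(sum(probs) - 1.0) < 1e-9
    mean = sum(p * m for p, m in zip(probs, preds))
    within = sum(p * v for p, v in zip(probs, variances))
    between = sum(p * (m - mean) ** 2 for p, m in zip(probs, preds))
    return mean, within, between, within + between

# two candidate models, equally probable a posteriori
mean, within, between, total = bma_moments([1.0, 2.0], [0.5, 0.5], [0.5, 0.5])
# mean = 1.5, within = 0.5, between = 0.25, total = 0.75
```

In the hierarchical method, this decomposition is applied recursively: the "between" term at one level of the tree becomes part of the variance budget at the level above.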

  5. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K; Khaledian, M S [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of); Jamil, Mubasher, E-mail: KKarami@uok.ac.ir, E-mail: MS.Khaledian@uok.ac.ir, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), Islamabad (Pakistan)

    2011-02-15

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.

  6. Bayesian hierarchical modelling of North Atlantic windiness

    Science.gov (United States)

    Vanem, E.; Breivik, O. N.

    2013-03-01

Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause or a contributing factor to maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea, and there is a need for stochastic models that can describe the variability in both space and time at various scales of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness, and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  7. Bayesian hierarchical modelling of North Atlantic windiness

    Directory of Open Access Journals (Sweden)

    E. Vanem

    2013-03-01

Full Text Available Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause or a contributing factor to maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea, and there is a need for stochastic models that can describe the variability in both space and time at various scales of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness, and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  8. Polarization and switching properties of holographic polymer-dispersed liquid-crystal gratings. I. Theoretical model

    Science.gov (United States)

    Sutherland, Richard L.

    2002-12-01

Polarization properties and electro-optical switching behavior of holographic polymer-dispersed liquid-crystal (HPDLC) reflection and transmission gratings are studied. A theoretical model is developed that combines anisotropic coupled-wave theory with an elongated liquid-crystal-droplet switching model and includes the effects of a statistical orientational distribution of droplet-symmetry axes. Angle- and polarization-dependent switching behaviors of HPDLC gratings are elucidated, and the effects on dynamic range are described. A new type of electro-optical switching not seen in ordinary polymer-dispersed liquid crystals, to the best of the author's knowledge, is presented and given a physical interpretation. The model provides valuable insight into the physics of these gratings and can be applied to the design of HPDLC holographic optical elements.

  9. What are hierarchical models and how do we analyze them?

    Science.gov (United States)

    Royle, Andy

    2016-01-01

In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
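The site-occupancy model named above has a compact likelihood that a short sketch can make concrete: a site is occupied with probability psi and, if occupied, detected on each visit with probability p, so an all-zero detection history can arise either from non-detection or from non-occupancy. A minimal version with constant psi and p (the function name and numbers are illustrative; the chapter's models add covariates via logistic regression):

```python
import math

def occupancy_loglik(histories, psi, p):
    """Log-likelihood of detection histories (tuples of 0/1 per visit)
    under the basic site-occupancy model: an occupied site (prob. psi)
    yields Bernoulli(p) detections; an unoccupied site yields all zeros."""
    ll = 0.0
    for y in histories:
        det = 1.0
        for obs in y:                 # P(history | occupied)
            det *= p if obs else 1.0 - p
        contrib = psi * det
        if not any(y):                # all-zero history: add the unoccupied path
            contrib += 1.0 - psi
        ll += math.log(contrib)
    return ll

# one site detected on visit 1 only, one site never detected
ll = occupancy_loglik([(1, 0), (0, 0)], psi=0.5, p=0.5)
```

Maximizing this function over (psi, p) is the likelihood mechanics the chapter walks through; the Bayesian route instead samples the posterior by MCMC.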

  10. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques have been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data into two groups, churners and nonchurners, and also to filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly outperforms the two other hierarchical models.

  11. The Revised Hierarchical Model: A critical review and assessment

    OpenAIRE

    Kroll, Judith F.; van Hell, Janet G.; Tokowicz, Natasha; Green, David W.

    2010-01-01

    Brysbaert and Duyck (2009) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on nonselective access in bilingual word recognition. In this brief response, we first review the history of the Revised Hierarchical Model (RHM), consider the set of issues that it was proposed to address, and then evaluate the evidence that supp...

  12. Salty popcorn in a homogeneous low-dimensional toy model of holographic QCD

    International Nuclear Information System (INIS)

    Elliot-Ripley, Matthew

    2017-01-01

    Recently, a homogeneous ansatz has been used to study cold dense nuclear matter in the Sakai–Sugimoto model of holographic QCD. To justify this homogeneous approximation we here investigate a homogeneous ansatz within a low-dimensional toy version of Sakai–Sugimoto to study finite baryon density configurations and compare it to full numerical solutions. We find the ansatz corresponds to enforcing a dyon salt arrangement in which the soliton solutions are split into half-soliton layers. Within this ansatz we find analogues of the proposed baryonic popcorn transitions, in which solutions split into multiple layers in the holographic direction. The homogeneous results are found to qualitatively match the full numerical solutions, lending confidence to the homogeneous approximations of the full Sakai–Sugimoto model. In addition, we find exact compact solutions in the high density, flat space limit which demonstrate the existence of further popcorn transitions to three layers and beyond. (paper)

  13. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  14. Holographic Phonons

    Science.gov (United States)

    Alberte, Lasma; Ammon, Martin; Jiménez-Alba, Amadeo; Baggioli, Matteo; Pujolàs, Oriol

    2018-04-01

    We present a class of holographic massive gravity models that realize a spontaneous breaking of translational symmetry—they exhibit transverse phonon modes whose speed relates to the elastic shear modulus according to elasticity theory. Massive gravity theories thus emerge as versatile and convenient theories to model generic types of translational symmetry breaking: explicit, spontaneous, and a mixture of both. The nature of the breaking is encoded in the radial dependence of the graviton mass. As an application of the model, we compute the temperature dependence of the shear modulus and find that it features a glasslike melting transition.

  15. P-T phase diagram of a holographic s+p model from Gauss-Bonnet gravity

    International Nuclear Information System (INIS)

    Nie, Zhang-Yu; Zeng, Hui

    2015-01-01

    In this paper, we study the holographic s+p model in 5-dimensional bulk gravity with the Gauss-Bonnet term. We work in the probe limit and give the Δ-T phase diagrams at three different values of the Gauss-Bonnet coefficient to show the effect of the Gauss-Bonnet term. We also construct the P-T phase diagrams for the holographic system using two different definitions of the pressure and compare the results.

  16. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  17. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    .... Exploring these new developments, Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology, Second Edition provides an up-to-date, cohesive account of the full range of Bayesian disease mapping methods and applications...

  18. Nonlinear evolution dynamics of holographic superconductor model with scalar self-interaction

    Science.gov (United States)

    Li, Ran; Zi, Tieguang; Zhang, Hongbao

    2018-04-01

We investigate the holographic superconductor model that is described by the Einstein-Maxwell theory with the self-interaction term λ|Ψ|^4 of a complex scalar field in asymptotically anti-de Sitter (AdS) spacetime. Below the critical temperature Tc, the planar Reissner-Nordström-AdS black hole is unstable due to the near-horizon scalar condensation instability. We study the full nonlinear development of this instability by numerically solving the gravitational dynamics in the asymptotically AdS spacetime, and observe a dynamical process from the perturbed Reissner-Nordström-AdS black hole to a hairy black hole when the initial black hole temperature T < Tc. This process is then holographically dual to the dynamical superconducting phase transition in the boundary theory. Furthermore, we also study the effect of the scalar self-interaction on the time evolution of the superconducting condensate operator and on the event and apparent horizon areas of the final hairy black hole.

  19. Quantum Ising model on hierarchical structures

    International Nuclear Information System (INIS)

    Lin Zhifang; Tao Ruibao.

    1989-11-01

A quantum Ising chain with both the exchange couplings and the transverse fields arranged in a hierarchical way is considered. Exact analytical results for the critical line and energy gap are obtained. It is shown that when R1 ≠ R2, where R1 and R2 are the hierarchical parameters for the exchange couplings and the transverse fields, respectively, the system undergoes a phase transition in a different universality class from the pure quantum Ising chain with R1 = R2 = 1. On the other hand, when R1 = R2 = R, there exists a critical value Rc dependent on the furcating number of the hierarchy. For R > Rc, the system is shown to exhibit an Ising-like critical point with the same critical behaviour as in the pure case, while for R < Rc the system belongs to another universality class. (author). 19 refs, 2 figs

  20. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well.
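The value of site-level random effects in crash-count models comes from overdispersion: when each site's Poisson mean varies across sites, the marginal counts have variance exceeding the mean, which a plain Poisson model cannot capture. A toy demonstration of this mechanism (illustrative numbers, not the study's data; gamma-distributed means make the marginal counts negative binomial):

```python
import math
import random

def simulate_counts(n_sites=20000, shape=2.0, scale=1.0, seed=1):
    """Crash counts with site-level heterogeneity: each site's Poisson
    mean is gamma(shape, scale)-distributed, so the marginal counts are
    negative binomial and overdispersed (variance > mean)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        lam = rng.gammavariate(shape, scale)
        # Knuth's Poisson sampler: count uniforms until the product drops below e^-lam
        threshold, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= threshold:
                break
            k += 1
        counts.append(k)
    return counts

counts = simulate_counts()
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# marginal mean ~= shape*scale = 2; variance ~= mean*(1 + scale) = 4
```

A hierarchical layer (random effects) models exactly this extra between-site variance, which is why it outperforms the single-level baseline in goodness-of-fit.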

  1. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
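The "hierarchical (regional) variations in model coefficients" can be illustrated with the textbook partial-pooling formula: each regional estimate is shrunk toward a global mean by precision weighting, so noisy regions borrow strength from the rest. A toy sketch (not SPARROW's actual estimator; names and numbers are illustrative):

```python
def partial_pool(estimates, se, tau):
    """Shrink per-region coefficient estimates toward their global mean,
    weighting each region's data precision 1/se_i^2 against the
    between-region precision 1/tau^2 (hierarchical-model shrinkage)."""
    mu = sum(estimates) / len(estimates)       # crude global mean
    pooled = []
    for est, s in zip(estimates, se):
        w = (1.0 / s ** 2) / (1.0 / s ** 2 + 1.0 / tau ** 2)
        pooled.append(w * est + (1.0 - w) * mu)
    return pooled

# a precisely estimated region keeps its value; a noisy one is pulled to the mean
pooled = partial_pool([0.0, 4.0], se=[0.1, 2.0], tau=1.0)
print(pooled)
```

This is the mechanism behind the finding that accuracy gains concentrate where regional data are informative relative to the between-region spread.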

  2. Holographik, the k-essential approach to interactive models with modified holographic Ricci dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Forte, Monica [Universidad de Buenos Aires, Departamento de Fisica, Facultad de ciencias Exactas y Naturales, Buenos Aires (Argentina)

    2016-12-15

We make a scalar representation of interactive models with cold dark matter and modified holographic Ricci dark energy through unified models driven by scalar fields with a non-canonical kinetic term. These models are applications of the formalism of exotic k-essences generated by the global description of cosmological models with two interactive fluids in the dark sector, and in these cases they correspond to the usual k-essences. The formalism is applied to the cases of constant potential in Friedmann-Robertson-Walker geometries. (orig.)

  3. Viscous cosmology in new holographic dark energy model and the cosmic acceleration

    International Nuclear Information System (INIS)

    Singh, C.P.; Srivastava, Milan

    2018-01-01

In this work, we study a flat Friedmann-Robertson-Walker universe filled with dark matter and viscous new holographic dark energy. We present four possible solutions of the model depending on the choice of the viscous term. We obtain the evolution of cosmological quantities such as the scale factor, the deceleration parameter and the transition redshift to observe the effect of viscosity on the evolution. We also consider two independent geometrical diagnostics for our model, namely the statefinder and the Om diagnostics. In the first case we study the new holographic dark energy model without viscosity and obtain a power-law expansion of the universe, which gives constant deceleration and statefinder parameters. In the limit of the parameter, the model approaches the ΛCDM model. In the new holographic dark energy model with viscosity, the bulk viscous coefficient is assumed as ζ = ζ0 + ζ1H, where ζ0 and ζ1 are constants, and H is the Hubble parameter. In this model, we obtain all possible solutions with the viscous term and analyze the expansion history of the universe. We draw the evolution graphs of the scale factor and the deceleration parameter. It is observed that the universe transits from deceleration to acceleration at late times for small values of ζ. However, it accelerates very fast from the beginning for large values of ζ. By illustrating the evolutionary trajectories in the r-s and r-q planes, we find that our model behaves like quintessence for small values of the viscous coefficient and like a Chaplygin gas for large values of the bulk viscous coefficient at the early stage. However, the model closely resembles ΛCDM cosmology at late times. The Om diagnostic has positive and negative curvatures for the phantom and quintessence models, respectively, depending on ζ. Our study shows that bulk viscosity plays a very important role in the expansion history of the universe. (orig.)
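For reference, in Eckart-type viscous cosmology the bulk viscosity enters the field equations as an effective pressure; combined with the ansatz for ζ quoted above, this reads (a standard relation sketched from textbook viscous cosmology, not copied from the paper's equations):

```latex
p_{\mathrm{eff}} = p - 3\zeta H, \qquad \zeta = \zeta_0 + \zeta_1 H,
\qquad \dot{\rho} + 3H\left(\rho + p_{\mathrm{eff}}\right) = 0 .
```

The ζ1 H term feeds the expansion rate back into the effective pressure, which is why large ζ can drive early acceleration while small ζ delays the deceleration-to-acceleration transition.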

  4. Viscous cosmology in new holographic dark energy model and the cosmic acceleration

    Science.gov (United States)

    Singh, C. P.; Srivastava, Milan

    2018-03-01

In this work, we study a flat Friedmann-Robertson-Walker universe filled with dark matter and viscous new holographic dark energy. We present four possible solutions of the model depending on the choice of the viscous term. We obtain the evolution of cosmological quantities such as the scale factor, the deceleration parameter and the transition redshift to observe the effect of viscosity on the evolution. We also consider two independent geometrical diagnostics for our model, namely the statefinder and the Om diagnostics. In the first case we study the new holographic dark energy model without viscosity and obtain a power-law expansion of the universe, which gives constant deceleration and statefinder parameters. In the limit of the parameter, the model approaches the ΛCDM model. In the new holographic dark energy model with viscosity, the bulk viscous coefficient is assumed as ζ = ζ0 + ζ1H, where ζ0 and ζ1 are constants, and H is the Hubble parameter. In this model, we obtain all possible solutions with the viscous term and analyze the expansion history of the universe. We draw the evolution graphs of the scale factor and the deceleration parameter. It is observed that the universe transits from deceleration to acceleration at late times for small values of ζ. However, it accelerates very fast from the beginning for large values of ζ. By illustrating the evolutionary trajectories in the r-s and r-q planes, we find that our model behaves like quintessence for small values of the viscous coefficient and like a Chaplygin gas for large values of the bulk viscous coefficient at the early stage. However, the model closely resembles ΛCDM cosmology at late times. The Om diagnostic has positive and negative curvatures for the phantom and quintessence models, respectively, depending on ζ. Our study shows that bulk viscosity plays a very important role in the expansion history of the universe.

  5. New holographic scalar field models of dark energy in non-flat universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K., E-mail: KKarami@uok.ac.i [Department of Physics, University of Kurdistan, Pasdaran St., Sanandaj (Iran, Islamic Republic of); Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), Maragha (Iran, Islamic Republic of); Fehri, J. [Department of Physics, University of Kurdistan, Pasdaran St., Sanandaj (Iran, Islamic Republic of)

    2010-02-08

    Motivated by the work of Granda and Oliveros [L.N. Granda, A. Oliveros, Phys. Lett. B 671 (2009) 199], we generalize their work to the non-flat case. We study the correspondence between the quintessence, tachyon, K-essence and dilaton scalar field models with the new holographic dark energy model in the non-flat FRW universe. We reconstruct the potentials and the dynamics for these scalar field models, which describe accelerated expansion of the universe. In the limiting case of a flat universe, i.e. k=0, all results given in [L.N. Granda, A. Oliveros, Phys. Lett. B 671 (2009) 199] are obtained.

  6. New holographic scalar field models of dark energy in non-flat universe

    International Nuclear Information System (INIS)

    Karami, K.; Fehri, J.

    2010-01-01

    Motivated by the work of Granda and Oliveros [L.N. Granda, A. Oliveros, Phys. Lett. B 671 (2009) 199], we generalize their work to the non-flat case. We study the correspondence between the quintessence, tachyon, K-essence and dilaton scalar field models with the new holographic dark energy model in the non-flat FRW universe. We reconstruct the potentials and the dynamics for these scalar field models, which describe accelerated expansion of the universe. In the limiting case of a flat universe, i.e. k=0, all results given in [L.N. Granda, A. Oliveros, Phys. Lett. B 671 (2009) 199] are obtained.

  7. Reconstructing an f(R) model from holographic dark energy: using the observational evidence

    International Nuclear Information System (INIS)

    Saaidi, Kh; Aghamohammadi, A

    2012-01-01

    We investigate the correspondence relation between f(R) gravity and an interacting holographic dark energy (HDE). By obtaining the conditions needed for some observational evidence such as positive acceleration expansion of the Universe, crossing the phantom divide line and validity of the thermodynamics second law in an interacting HDE model and corresponding it with the f(R) model of gravity, we find a viable f(R) model that can explain the present Universe. We also obtain the explicit evolutionary forms of the corresponding scalar field, potential and scale factor of the Universe. (paper)

  8. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    Science.gov (United States)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We tested the Basel 2006 fluid-induced seismic case study to prove that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
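The framework's core object, a nonhomogeneous Poisson process whose intensity is proportional to the injection rate, can be simulated by Lewis-Shedler thinning. A generic sketch (the step-function injection profile and all numbers are illustrative stand-ins, not the Basel 2006 data or the paper's calibrated rate model):

```python
import random

def thin_nhpp(rate, t_end, rate_max, seed=3):
    """Simulate event times of a nonhomogeneous Poisson process with
    intensity rate(t) <= rate_max on [0, t_end] by Lewis-Shedler
    thinning: draw candidates from a homogeneous envelope process and
    accept each with probability rate(t)/rate_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)        # next candidate event
        if t > t_end:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)

def injection_rate(t):
    # hypothetical profile: 10 events/day while injecting, zero after shut-in at t = 5
    return 10.0 if t < 5.0 else 0.0

events = thin_nhpp(injection_rate, t_end=10.0, rate_max=10.0)
```

In the hierarchical Bayesian setting, the parameters tying the seismicity rate to the injected volume would themselves carry priors and be updated as events arrive, giving the forecasting model described above.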

  9. Communicative Modelling of Cultural Transmission and Evolution Through a Holographic Cognition Model

    Directory of Open Access Journals (Sweden)

    Ambjörn Naeve

    2012-12-01

    Full Text Available This article presents communicative ways to model the transmission and evolution of the processes and artefacts of a culture as the result of ongoing interactions between its members - both at the tacit and the explicit level. The purpose is not to model the entire cultural process, but to provide semantically rich “conceptual placeholders” for modelling any cultural activity that is considered important enough within a certain context. The general purpose of communicative modelling is to create models that improve the quality of communication between people. In order to capture the subjective aspects of Gregory Bateson’s definition of information as “a difference that makes a difference,” the article introduces a Holographic Cognition Model that uses optical holography as an analogy for human cognition, with the object beam of holography corresponding to the first difference (the situation that the cognitive agent encounters), and the reference beam corresponding to the subjective experiences and biases that the agent brings to the situation, which make the second difference (the interference/interpretation pattern) unique for each agent. By combining the HCM with a semantically rich and recursive form of process modelling, based on the SECI theory of knowledge creation, we arrive at a way to model the cultural transmission and evolution process that is consistent with the Unified Theory of Information (the Triple-C model), with its emphasis on intra-, inter- and supra-actions.

  10. Structure function of holographic quark-gluon plasma: Sakai-Sugimoto model versus its noncritical version

    International Nuclear Information System (INIS)

    Bu Yanyan; Yang Jinmin

    2011-01-01

    Motivated by recent studies of deep inelastic scattering off the N=4 super-Yang-Mills (SYM) plasma, holographically dual to an AdS_5 × S^5 black hole, we use the spacelike flavor current to probe the internal structure of a holographic quark-gluon plasma described by the Sakai-Sugimoto model in the high-temperature (i.e., chiral-symmetric) phase. The plasma structure function is extracted from the retarded flavor current-current correlator. Our main aim in this paper is to explore the effect of nonconformality on these physical quantities. As usual, our study is carried out in the supergravity approximation and the limit of large color number. Although the Sakai-Sugimoto model is nonconformal, which makes the calculations more involved than in the well-studied N=4 SYM case, the result indicates that the nonconformality has little essential effect on the physical picture of the internal structure of the holographic plasma, consistent with the intuition from the asymptotic freedom of QCD at high energy. While the physical picture underlying our investigation is the same as for deep inelastic scattering off the N=4 SYM plasma with(out) flavor, the plasma structure functions are quantitatively different, especially in their scaling dependence on the temperature, which can be recognized as model dependent. As a comparison, we carry out the same analysis for the noncritical version of the Sakai-Sugimoto model, which is conformal in the sense that it has a constant dilaton vacuum. The result for this noncritical model is quite similar to that for the conformal N=4 SYM plasma. We therefore attribute the above difference to the nonconformality of the Sakai-Sugimoto model.

  11. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
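
    To make the idea concrete at the single-level regression case the abstract starts from, here is a small sketch of general dominance weights: each predictor's incremental R² averaged over all subset models, within and then across subset sizes (equivalently its Shapley value of R²). The HLM extension the article proposes replaces R² with a multilevel measure of model adequacy; the toy data below are invented:

```python
import itertools

def r_squared(X_cols, y):
    """OLS R^2 with intercept; normal equations solved by Gaussian elimination."""
    n = len(y)
    cols = [[1.0] * n] + [list(map(float, c)) for c in X_cols]
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):                      # forward elimination, partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(beta[j] * cols[j][t] for j in range(k)) for t in range(n)]
    ybar = sum(y) / n
    return 1.0 - sum((yi - yh) ** 2 for yi, yh in zip(y, yhat)) / \
                 sum((yi - ybar) ** 2 for yi in y)

def general_dominance(X, y):
    """Average each predictor's incremental R^2 within every subset size,
    then across sizes."""
    p = len(X)
    weights = []
    for i in range(p):
        others = [j for j in range(p) if j != i]
        by_size = []
        for r in range(p):
            incs = [r_squared([X[j] for j in S] + [X[i]], y)
                    - (r_squared([X[j] for j in S], y) if S else 0.0)
                    for S in itertools.combinations(others, r)]
            by_size.append(sum(incs) / len(incs))
        weights.append(sum(by_size) / p)
    return weights

# toy data: x1 dominates, x2 adds little
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0, 8.0, 7.0]
y = [2.0 * a + 0.3 * b for a, b in zip(x1, x2)]
gd = general_dominance([x1, x2], y)
```

    A useful property of these weights is that they sum to the full-model R², so they decompose the explained variance among the predictors.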

  12. Hierarchical modeling of molecular energies using a deep neural network

    Science.gov (United States)

    Lubbers, Nicholas; Smith, Justin S.; Barros, Kipton

    2018-06-01

    We introduce the Hierarchically Interacting Particle Neural Network (HIP-NN) to model molecular properties from datasets of quantum calculations. Inspired by a many-body expansion, HIP-NN decomposes properties, such as energy, as a sum over hierarchical terms. These terms are generated from a neural network—a composition of many nonlinear transformations—acting on a representation of the molecule. HIP-NN achieves state-of-the-art performance on a dataset of 131k ground-state organic molecules and predicts energies with 0.26 kcal/mol mean absolute error. With minimal tuning, our model is also competitive on a dataset of molecular dynamics trajectories. In addition to enabling accurate energy predictions, the hierarchical structure of HIP-NN helps to identify regions of model uncertainty.

  13. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Science.gov (United States)

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  14. A HIERARCHICAL SET OF MODELS FOR SPECIES RESPONSE ANALYSIS

    NARCIS (Netherlands)

    HUISMAN, J; OLFF, H; FRESCO, LFM

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  15. A hierarchical set of models for species response analysis

    NARCIS (Netherlands)

    Huisman, J.; Olff, H.; Fresco, L.F.M.

    1993-01-01

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  16. The Revised Hierarchical Model: A critical review and assessment

    NARCIS (Netherlands)

    Kroll, J.F.; Hell, J.G. van; Tokowicz, N.; Green, D.W.

    2010-01-01

    Brysbaert and Duyck (this issue) suggest that it is time to abandon the Revised Hierarchical Model (Kroll and Stewart, 1994) in favor of connectionist models such as BIA+ (Dijkstra and Van Heuven, 2002) that more accurately account for the recent evidence on non-selective access in bilingual word

  17. A hierarchical model exhibiting the Kosterlitz-Thouless fixed point

    International Nuclear Information System (INIS)

    Marchetti, D.H.U.; Perez, J.F.

    1985-01-01

    A hierarchical model for 2-d Coulomb gases displaying a stable line of fixed points describing the Kosterlitz-Thouless phase transition is constructed. For Coulomb gases corresponding to Z_N models, these fixed points are stable in an intermediate temperature interval. (Author)

  18. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  19. Hierarchical graphs for rule-based modeling of biochemical systems

    Directory of Open Access Journals (Sweden)

    Hu Bin

    2011-02-01

    Full Text Available Abstract Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for

  20. A Hierarchical Risk Assessment Model Using the Evidential Reasoning Rule

    Directory of Open Access Journals (Sweden)

    Xiaoxiao Ji

    2017-02-01

    Full Text Available This paper aims to develop a hierarchical risk assessment model using the newly-developed evidential reasoning (ER) rule, which constitutes a generic conjunctive probabilistic reasoning process. In this paper, we first provide a brief introduction to the basics of the ER rule and emphasize its strengths in representing and aggregating uncertain information from multiple experts and sources. Further, we discuss the key steps of developing the hierarchical risk assessment framework systematically, including (1) formulation of the risk assessment hierarchy; (2) representation of both qualitative and quantitative information; (3) elicitation of attribute weights and information reliabilities; (4) aggregation of assessment information using the ER rule; and (5) quantification and ranking of risks using a utility-based transformation. The proposed hierarchical risk assessment framework can potentially be applied to various complex and uncertain systems. A case study on the fire/explosion risk assessment of marine vessels demonstrates the applicability of the proposed risk assessment model.
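
    The full ER rule is richer than can be shown briefly, but steps (3)-(5) can be sketched in a simplified Dempster-Shafer style: each expert's belief distribution over assessment grades is discounted by its weight/reliability, the discounted masses are combined, and the result is ranked by expected utility. The grade names, weights and utilities below are hypothetical, and this discounting-plus-Dempster combination is a stand-in for the ER rule proper:

```python
from functools import reduce

GRADES = ("low", "medium", "high")   # hypothetical assessment grades
UNKNOWN = "unknown"                  # mass left uncommitted after discounting

def discount(belief, weight):
    """Scale an expert's belief by its weight; the remainder is uncommitted."""
    m = {g: weight * belief.get(g, 0.0) for g in GRADES}
    m[UNKNOWN] = 1.0 - sum(m.values())
    return m

def combine(m1, m2):
    """Dempster-style orthogonal sum over singleton grades plus 'unknown'."""
    conflict = sum(m1[a] * m2[b] for a in GRADES for b in GRADES if a != b)
    k = 1.0 - conflict               # normalization by non-conflicting mass
    m = {g: (m1[g] * m2[g] + m1[g] * m2[UNKNOWN] + m1[UNKNOWN] * m2[g]) / k
         for g in GRADES}
    m[UNKNOWN] = m1[UNKNOWN] * m2[UNKNOWN] / k
    return m

def risk_score(beliefs, weights, utility):
    """Aggregate weighted expert beliefs and map to a utility for ranking."""
    m = reduce(combine, (discount(b, w) for b, w in zip(beliefs, weights)))
    return sum(m[g] * utility[g] for g in GRADES)

# two hypothetical experts assessing fire/explosion risk of a vessel
beliefs = [{"high": 0.7, "medium": 0.3}, {"high": 0.6, "medium": 0.4}]
score = risk_score(beliefs, [0.9, 0.8], {"low": 0.0, "medium": 0.5, "high": 1.0})
```

    Since both experts lean toward "high", the aggregated score lands well above the midpoint of the utility scale.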

  1. f(R) in Holographic and Agegraphic Dark Energy Models and the Generalized Uncertainty Principle

    Directory of Open Access Journals (Sweden)

    Barun Majumder

    2013-01-01

    Full Text Available We studied a unified approach with the holographic, new agegraphic, and f(R) dark energy models to construct the form of f(R) which in general is responsible for the curvature-driven explanation of the very early inflation along with the presently observed late-time acceleration. We considered the generalized uncertainty principle (GUP) in our approach, which incorporated corrections in the entropy-area relation and thereby modified the energy densities for the cosmological dark energy models considered. We found that holographic and new agegraphic f(R) gravity models can behave like phantom or quintessence models in the spatially flat FRW universe. We also found a distinct term in the form of f(R) which goes as R^{3/2} due to the consideration of the GUP-modified energy densities. Although the presence of this term in the action can be important in explaining the early inflationary scenario, Capozziello et al. recently showed that f(R) ~ R^{3/2} leads to an accelerated expansion, that is, a negative value for the deceleration parameter q, which fits well with SNeIa and WMAP data.

  2. Λ(t)CDM model as a unified origin of holographic and agegraphic dark energy models

    International Nuclear Information System (INIS)

    Chen Yun; Zhu Zonghong; Xu Lixin; Alcaniz, J.S.

    2011-01-01

    Motivated by the fact that any nonzero Λ can introduce a length scale or a time scale into Einstein's theory, r_Λ = c t_Λ = √(3/|Λ|), and that, conversely, any cosmological length scale or time scale can introduce a Λ(t), Λ(t) = 3/r_Λ²(t) = 3/(c² t_Λ²(t)), in this Letter we investigate the time-varying Λ(t) corresponding to the length scales, including the Hubble horizon, the particle horizon and the future event horizon, and to the time scales, including the age of the universe and the conformal time. It is found that, in this scenario, the Λ(t)CDM model can be taken as the unified origin of the holographic and agegraphic dark energy models with interaction between the matter and the dark energy, where the interacting term is determined by Q = -ρ̇_Λ. We place observational constraints on the Λ(t)CDM models originating from different cosmological length scales and time scales with the recently compiled 'Union2 compilation', which consists of 557 Type Ia supernovae (SNIa) covering a redshift range 0.015≤z≤1.4. In conclusion, an accelerating universe can be obtained in the cases taking the Hubble horizon, the future event horizon, the age of the universe and the conformal time as the length scale or the time scale.

  3. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When estimation is performed in a Bayesian framework, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC), which keeps the latent variables in the focus of the analysis, and the marginalized DIC (mDIC), which integrates them out. Despite the asymptotic and coherency difficulties of cDIC, it is the version usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. We therefore consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations to reduce the computational complexity in large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, showing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
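
    The replicate-sampling idea can be sketched on a toy random-intercept model, y_ij = b_i + e_ij with b_i ~ N(0, τ²) and e_ij ~ N(0, σ²): for each posterior draw of θ = (τ, σ), the latent intercepts are integrated out by Monte Carlo, and mDIC is assembled from the resulting marginal deviances. The data and the "posterior draws" below are invented for illustration:

```python
import math
import random

def log_norm(x, mu, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

def marg_loglik(groups, tau, sigma, rng, R=500):
    """log p(y | tau, sigma): integrate the latent intercept out of each
    group's likelihood with R replicate draws b ~ N(0, tau^2)."""
    total = 0.0
    for ys in groups:
        ll = []
        for _ in range(R):
            b = rng.gauss(0.0, tau)
            ll.append(sum(log_norm(y, b, sigma) for y in ys))
        mx = max(ll)                              # log-sum-exp for stability
        total += mx + math.log(sum(math.exp(v - mx) for v in ll) / R)
    return total

def marginalized_dic(groups, theta_draws, rng):
    """mDIC = mean marginal deviance + p_D, with
    p_D = mean deviance - deviance at the posterior mean of theta."""
    devs = [-2.0 * marg_loglik(groups, t, s, rng) for t, s in theta_draws]
    d_bar = sum(devs) / len(devs)
    t_bar = sum(t for t, _ in theta_draws) / len(theta_draws)
    s_bar = sum(s for _, s in theta_draws) / len(theta_draws)
    d_hat = -2.0 * marg_loglik(groups, t_bar, s_bar, rng)
    return 2.0 * d_bar - d_hat

# invented data (3 groups) and pretend MCMC draws of (tau, sigma)
groups = [[0.8, 1.1, 0.9], [-0.5, -0.2, -0.7], [0.1, 0.3, -0.1]]
theta_draws = [(0.9, 0.4), (1.1, 0.5), (1.0, 0.45)]
rng = random.Random(7)
mdic = marginalized_dic(groups, theta_draws, rng)
```

    In practice the Monte Carlo integration is the expensive step; the abstract's proposed approximations target exactly this cost for large samples.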

  4. Conceptual hierarchical modeling to describe wetland plant community organization

    Science.gov (United States)

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.

  5. Control of discrete event systems modeled as hierarchical state machines

    Science.gov (United States)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
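
    The efficiency argument can be illustrated in miniature: when the component machines of an asynchronous composition do not synchronize, a global configuration is reachable exactly when each component can reach its own target state locally, so per-component BFS replaces search of the exponentially larger flat product machine. This is a simplification of the paper's AHSM method (which also handles hierarchy); all names below are ours:

```python
from collections import deque

def reachable(trans, start, goal):
    """Plain BFS reachability inside one component state machine."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return True
        for t in trans.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

def ahsm_reachable(components, start_cfg, goal_cfg):
    """Global reachability for fully asynchronous, non-interacting components:
    check each component separately instead of building the flat product."""
    return all(reachable(c, s, g)
               for c, s, g in zip(components, start_cfg, goal_cfg))

# two hypothetical components (dict: state -> successor states)
m1 = {"idle": ["busy"], "busy": ["done"], "done": []}
m2 = {"off": ["on"], "on": ["off"]}
```

    The cost is linear in the sum of the component sizes rather than in their product, which is the kind of saving the AHSM representation is after.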

  6. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  7. Holographic memories

    DEFF Research Database (Denmark)

    Ramanujam, P.S.; Berg, R.H.; Hvilsted, Søren

    1999-01-01

    A two-dimensional holographic memory for archival storage is described. Assuming a coherent transfer function, an A4 page can be stored at high resolution in an area of 1 mm(2). Recently developed side-chain liquid crystalline azobenzene polyesters are found to be suitable media for holographic...

  8. Instability in interacting dark sector: an appropriate holographic Ricci dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, Ramón [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2950, Casilla 4059, Valparaíso (Chile); Hipólito-Ricaldi, W.S. [Departamento de Ciências Naturais, Universidade Federal do Espírito Santo, Rodovia BR 101 Norte, km. 60, São Mateus, Espírito Santo (Brazil); Videla, Nelson, E-mail: ramon.herrera@pucv.cl, E-mail: wiliam.ricaldi@ufes.br, E-mail: nelson.videla@ing.uchile.cl [Departamento de Física, Universidad de Chile, FCFM, Blanco Encalada 2008, Santiago (Chile)

    2016-08-01

    In this paper we investigate the consequences of phantom crossing by considering the perturbative dynamics in models with interaction in their dark sector. By means of a general study of gauge-invariant variables in the comoving gauge, we relate the sources of instabilities in the structure-formation process to the phantom crossing. In order to illustrate these relations and their consequences in more detail, we consider the specific case of a holographic dark energy interacting with dark matter. We find that, although the model is in excellent agreement with observational data at the background level, it is plagued by instabilities in its perturbative dynamics. We reconstruct the model to avoid these undesirable instabilities, and we show that this implies a modification of the concordance model at the background level. We also find drastic changes in the parameter space of our model when instabilities are avoided.

  9. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    ...values alone are propagated through layers of combat models, the final results will likely be biased, and risk underestimated. An air-to-air engagement... (Thesis by Salih Ilaslan, June 2016; Thesis Advisor: Thomas W. Lucas; Second Reader: Jeffrey ...)

  10. Metastable states in the hierarchical Dyson model drive parallel processing in the hierarchical Hopfield network

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Guerra, Francesco; Galluzzi, Andrea; Tantari, Daniele; Tavani, Flavia

    2015-01-01

    In this paper, we introduce and investigate the statistical mechanics of hierarchical neural networks. First, we approach these systems à la Mattis, by thinking of the Dyson model as a single-pattern hierarchical neural network. We also discuss the stability of different retrievable states as predicted by the related self-consistencies obtained both from a mean-field bound and from a bound that bypasses the mean-field limitation. The latter is worked out by properly reabsorbing the magnetization fluctuations related to higher levels of the hierarchy into effective fields for the lower levels. Remarkably, mixing Amit's ansatz technique for selecting candidate-retrievable states with the interpolation procedure for solving for the free energy of these states, we prove that, due to gauge symmetry, the Dyson model accomplishes both serial and parallel processing. We extend this scenario to multiple stored patterns by implementing the Hebb prescription for learning within the couplings. This results in Hopfield-like networks constrained on a hierarchical topology, for which, restricting to the low-storage regime where the number of patterns grows at most logarithmically with the number of neurons, we prove the existence of the thermodynamic limit for the free energy, and we give an explicit expression of its mean-field bound and of its related improved bound. The resulting self-consistencies for the Mattis magnetizations, which act as order parameters, are studied, and the stability of solutions is analyzed to get a picture of the overall retrieval capabilities of the system according to both mean-field and non-mean-field scenarios. Our main finding is that embedding the Hebbian rule on a hierarchical topology allows the network to accomplish both serial and parallel processing. By tuning the level of fast noise affecting it or triggering the decay of the interactions with the distance among neurons, the system may switch from sequential retrieval to

  11. Schwinger effect and negative differential conductivity in holographic models

    Directory of Open Access Journals (Sweden)

    Shankhadeep Chakrabortty

    2015-01-01

    Full Text Available The consequences of the Schwinger effect for conductivity are computed for strongly coupled systems using holography. The one-loop diagram on the flavor brane introduces an O(λN_c) imaginary part in the effective action for a Maxwell flavor gauge field. This in turn introduces a real conductivity in an otherwise insulating phase of the boundary theory. Moreover, in certain regions of parameter space the differential conductivity is negative. This is computed in the context of the Sakai–Sugimoto model.

  12. SVZ⊕1/q²-expansion versus some QCD holographic models

    Energy Technology Data Exchange (ETDEWEB)

    Jugeau, F., E-mail: frederic.jugeau@if.ufrj.br [Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, RJ 21941-972, Rio de Janeiro (Brazil); Narison, S., E-mail: snarison@yahoo.fr [Laboratoire Particules et Univers de Montpellier, CNRS-IN2P3, Case 070, Place Eugène Bataillon, 34095 Montpellier (France); Ratsimbarison, H., E-mail: herysedra@yahoo.fr [Institute of High-Energy Physics of Madagascar (iHEP-MAD), University of Antananarivo (Madagascar)

    2013-05-13

    Considering the classical two-point correlators built from (axial-)vector, scalar q̄q and gluonium currents, we confront results obtained using the SVZ⊕1/q²-expansion with the ones from some QCD holographic models in the Euclidean region and with negative dilaton Φ_i(z) = −|c_i²|z². We conclude that the presence of the 1/q²-term in the SVZ expansion due to a tachyonic gluon mass appears naturally in the Minimum Soft-Wall (MSW) and the Gauge/String Dual (GSD) models, which can also reproduce semi-quantitatively some of the higher-dimension condensate contributions appearing in the OPE. The Hard-Wall model shows a large departure from the SVZ⊕1/q²-expansion in the vector, scalar and gluonium channels due to the absence of any power corrections. The equivalence of the MSW and GSD models is manifest in the vector channel through the relation of the dilaton parameter to the tachyonic gluon mass. For approximately reproducing the phenomenological values of the dimension d=4,6 condensates, the holographic models require a tachyonic gluon mass (α_s/π)λ² ≈ −(0.12–0.14) GeV², which is about twice the fitted phenomenological value from e⁺e⁻ data. The relation of the inverse length parameter c_i to the tachyonic gluon mass also shows that c_i is channel dependent and not universal for a given holographic model. Using the MSW model and M_ρ = 0.78 GeV as input, we predict a scalar q̄q mass M_S ≈ (0.95–1.10) GeV and a scalar gluonium mass M_G ≈ (1.1–1.3) GeV.

  13. Hierarchical Models of the Nearshore Complex System

    National Research Council Canada - National Science Library

    Werner, Brad

    2004-01-01

    .... This grant was termination funding for the Werner group, specifically aimed at finishing up and publishing research related to synoptic imaging of near shore bathymetry, testing models for beach cusp...

  14. Hierarchical and coupling model of factors influencing vessel traffic flow.

    Directory of Open Access Journals (Sweden)

    Zhao Liu

    Full Text Available Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, the data obtained from Port of Tianjin is used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors; the coupling model is efficient in analyzing factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in waterway transportation system.

  15. Hierarchical and coupling model of factors influencing vessel traffic flow.

    Science.gov (United States)

    Liu, Zhao; Liu, Jingxian; Li, Huanhuan; Li, Zongzhi; Tan, Zhirong; Liu, Ryan Wen; Liu, Yi

    2017-01-01

    Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, the data obtained from Port of Tianjin is used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors; the coupling model is efficient in analyzing factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in waterway transportation system.
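
    One common quantitative form of such a coupling coefficient (a judgment call on our part; the paper's exact definition may differ) compares the geometric and arithmetic means of the weighted subsystem scores, reaching 1 only when all influencing factors develop in balance:

```python
import math

def subsystem_score(indicators, weights):
    """Weighted aggregation of normalized indicator values for one factor."""
    return sum(x * w for x, w in zip(indicators, weights))

def coupling_coefficient(scores):
    """Ratio of geometric to arithmetic mean of subsystem scores in (0, 1]:
    1.0 for perfectly balanced factors, smaller as they diverge."""
    n = len(scores)
    return math.prod(scores) ** (1.0 / n) / (sum(scores) / n)

# hypothetical normalized scores for three influencing factors
balanced = coupling_coefficient([0.6, 0.6, 0.6])
uneven = coupling_coefficient([0.9, 0.3, 0.6])
```

    The scores themselves would come from `subsystem_score` applied to each factor's quantitative indicators and elicited weights, as the abstract describes.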

  16. Petascale Hierarchical Modeling VIA Parallel Execution

    Energy Technology Data Exchange (ETDEWEB)

    Gelman, Andrew [Principal Investigator

    2014-04-14

    The research allows more effective model building. By allowing researchers to fit complex models to large datasets in a scalable manner, our algorithms and software enable more effective scientific research. In the new area of “big data,” it is often necessary to fit “big models” to adjust for systematic differences between sample and population. For this task, scalable and efficient model-fitting tools are needed, and these have been achieved with our new Hamiltonian Monte Carlo algorithm, the no-U-turn sampler, and our new C++ program, Stan. In layman’s terms, our research enables researchers to create improved mathematical models for large and complex systems.
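
    The algorithmic core mentioned here, Hamiltonian Monte Carlo (of which the no-U-turn sampler is an adaptive extension), fits in a few lines. This sketch uses a fixed-length leapfrog trajectory rather than NUTS's dynamically chosen one, demonstrated on a standard normal target:

```python
import math
import random

def hmc_step(logp, grad, q, rng, step=0.1, n_leap=20):
    """One Hamiltonian Monte Carlo transition with a leapfrog integrator."""
    p = [rng.gauss(0.0, 1.0) for _ in q]          # resample momenta
    q_new, p_new = list(q), list(p)
    g = grad(q_new)
    for _ in range(n_leap):
        p_new = [pi + 0.5 * step * gi for pi, gi in zip(p_new, g)]   # half kick
        q_new = [qi + step * pi for qi, pi in zip(q_new, p_new)]     # drift
        g = grad(q_new)
        p_new = [pi + 0.5 * step * gi for pi, gi in zip(p_new, g)]   # half kick
    # Metropolis correction keeps the target distribution exact
    h_old = -logp(q) + 0.5 * sum(pi * pi for pi in p)
    h_new = -logp(q_new) + 0.5 * sum(pi * pi for pi in p_new)
    return q_new if rng.random() < math.exp(min(0.0, h_old - h_new)) else q

# demo: sample a standard normal, logp(x) = -x^2/2, grad(logp) = -x
rng = random.Random(42)
q, samples = [0.0], []
for _ in range(2000):
    q = hmc_step(lambda x: -0.5 * x[0] * x[0], lambda x: [-x[0]], q, rng)
    samples.append(q[0])
```

    What NUTS adds on top of this is the automatic choice of trajectory length (stopping when the path starts doubling back), which is what makes Stan usable without hand-tuning `n_leap`.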

  17. Hierarchical Modelling of Flood Risk for Engineering Decision Analysis

    DEFF Research Database (Denmark)

    Custer, Rocco

    protection structures in the hierarchical flood protection system - is identified. To optimise the design of protection structures, fragility and vulnerability models must allow for consideration of decision alternatives. While such vulnerability models are available for large protection structures (e...... systems, as well as the implementation of the flood risk analysis methodology and the vulnerability modelling approach are illustrated with an example application. In summary, the present thesis provides a characterisation of hierarchical flood protection systems as well as several methodologies to model...... and robust. Traditional risk management solutions, e.g. dike construction, are not particularly flexible, as they are difficult to adapt to changing risk. Conversely, the recent concept of integrated flood risk management, entailing a combination of several structural and non-structural risk management...

  18. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstract modules such as transaction, engine, computation, control, and storage. The traditionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  19. Fully probabilistic design of hierarchical Bayesian models

    Czech Academy of Sciences Publication Activity Database

    Quinn, A.; Kárný, Miroslav; Guy, Tatiana Valentine

    2016-01-01

    Roč. 369, č. 1 (2016), s. 532-547 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Fully probabilistic design * Ideal distribution * Minimum cross-entropy principle * Bayesian conditioning * Kullback-Leibler divergence * Bayesian nonparametric modelling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/karny-0463052.pdf

  20. Wave optics modeling of real-time holographic wavefront compensation systems using OSSim

    Science.gov (United States)

    Carbon, Margarita A.; Guthals, Dennis M.; Logan, Jerry D.

    2005-08-01

    OSSim (Optical System Simulation) is a wave-optics, time-domain simulation toolbox with both optical and data processing components developed for adaptive optics (AO) systems. Diffractive wavefront control elements have recently been added that accurately model optically and electrically addressed spatial light modulators as real time holographic (RTH) devices in diffractive wavefront control systems. The developed RTH toolbox has found multiple applications for a variety of Boeing programs in solving problems of AO system analysis and design. Several complex diffractive wavefront control systems have been modeled for compensation of static and dynamic aberrations such as imperfect segmented primary mirrors and atmospheric and boundary layer turbulence. The results of OSSim simulations of RTH wavefront compensation show very good agreement with available experimental data.

  1. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on the one hand from varying consumption and on the other hand from natural variations in power production, e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology......

  2. Introduction to Hierarchical Bayesian Modeling for Ecological Data

    CERN Document Server

    Parent, Eric

    2012-01-01

    Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden states variables. When fitting the models to data, the authors gradually present the concepts a

  3. A hierarchical spatiotemporal analog forecasting model for count data.

    Science.gov (United States)

    McDermott, Patrick L; Wikle, Christopher K; Millspaugh, Joshua

    2018-01-01

    Analog forecasting is a mechanism-free nonlinear method that forecasts a system forward in time by examining how past states deemed similar to the current state moved forward. Previous applications of analog forecasting have been successful at producing robust forecasts for a variety of ecological and physical processes, but the method has typically been presented as an empirical or heuristic procedure, rather than as a formal statistical model. The methodology presented here extends the model-based analog method of McDermott and Wikle (Environmetrics, 27, 2016, 70) by placing analog forecasting within a fully hierarchical statistical framework that can accommodate count observations. Using a Bayesian approach, the hierarchical analog model is able to rigorously quantify the uncertainty associated with forecasts. Forecasting waterfowl settling patterns in the northwestern United States and Canada is conducted by applying the hierarchical analog model to a breeding population survey dataset. Sea surface temperature (SST) in the Pacific Ocean is used to help identify potential analogs for the waterfowl settling patterns.
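    The core analog idea, stripped of the hierarchical Bayesian machinery and the count likelihood, fits in a few lines: find the past states nearest the current one and average their successors. The AR(1) toy series below is invented for illustration, not the waterfowl data.

```python
import numpy as np

rng = np.random.default_rng(1)

def analog_forecast(history, current, k=3):
    """One-step analog forecast: find the k past states most similar to
    the current state and average the states they moved forward to."""
    past = history[:-1]              # candidate analog states
    successors = history[1:]         # what each analog moved forward to
    nearest = np.argsort(np.abs(past - current))[:k]
    return successors[nearest].mean()

# Toy AR(1) series standing in for an observed ecological process.
n = 500
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = 0.9 * x[i] + 0.1 * rng.standard_normal()

print(analog_forecast(x, current=x[-1]))
```

For this series the forecast should approximate the true one-step expectation `0.9 * x[-1]`, which is what the analogs' successors encode.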

  4. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
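    The flavor of the approach, tying many covariance parameters together to fight overfitting, can be sketched with a simple non-Bayesian shrinkage estimator; the paper's actual hierarchical model is more elaborate, and the `alpha` weight below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def shrink_cov(X, alpha=0.6):
    """Shrink the sample covariance toward a scaled-identity target.
    A James-Stein-flavored regularizer: the paper's hierarchical prior
    plays a similar role by introducing dependency between parameters."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    target = np.eye(p) * np.trace(S) / p
    return alpha * target + (1 - alpha) * S

# p >> n regime typical of OMICS data: 20 samples of 50 variables, so the
# raw sample covariance is singular while the shrunk estimate is not.
X = rng.standard_normal((20, 50))
S_hat = shrink_cov(X)
print(np.linalg.eigvalsh(S_hat).min())
```

The smallest eigenvalue of the shrunk estimate is strictly positive, whereas the raw sample covariance is rank-deficient in this regime.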

  5. Interacting cosmic fluids and phase transitions under a holographic modeling for dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Lepe, Samuel [Pontificia Universidad Catolica de Valparaiso, Instituto de Fisica, Facultad de Ciencias, Valparaiso (Chile); Pena, Francisco [Universidad de La Frontera, Departamento de Ciencias Fisicas, Facultad de Ingenieria y Ciencias, Temuco (Chile)

    2016-09-15

    We discuss the consequences of possible sign changes of the Q-function, which measures the transfer of energy between dark energy and dark matter. We investigate this scenario from a holographic perspective by modeling dark energy through a linear parametrization and the CPL parametrization of the equation of state (ω). By imposing the strong constraint of the second law of thermodynamics, we show that a change of sign of Q in the course of the cosmic evolution implies changes in the temperatures of dark energy and dark matter. We also discuss the phase transitions, in the past and future, experienced by dark energy and dark matter (or, equivalently, the sign changes of their heat capacities). (orig.)

  6. Direct phase derivative estimation using difference equation modeling in holographic interferometry

    International Nuclear Information System (INIS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2014-01-01

    A new method is proposed for direct phase derivative estimation from a single spatial-frequency-modulated carrier fringe pattern in holographic interferometry. The fringe intensity in a given row/column is modeled as a difference equation of intensity with spatially varying coefficients. These coefficients carry the information on the phase derivative. Accurate estimation of the coefficients is obtained by approximating them as a linear combination of predefined linearly independent basis functions. Unlike Fourier transform based fringe analysis, the method does not require filtering the Fourier spectrum of the fringe intensity. Moreover, the estimation of the carrier frequency is performed by applying the proposed method to a reference interferogram. The performance of the proposed method is insensitive to the fringe amplitude modulation and is validated with simulation results. (paper)
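    A 1-D toy of the scheme: a frequency-modulated cosine approximately satisfies a second-order difference equation whose spatially varying coefficient encodes the local phase derivative, and that coefficient can be recovered by least squares over a small polynomial basis. The signal and basis below are illustrative, not the paper's 2-D fringe formulation.

```python
import numpy as np

# A frequency-modulated cosine y[n] = cos(phase[n]) approximately obeys
#   y[n] = c(n) * y[n-1] - y[n-2],   with  c(n) = 2 * cos(omega(n)),
# so the spatially varying coefficient c(n) carries the local phase
# derivative omega(n). Approximate c(n) in a small polynomial basis and
# solve for the basis weights by linear least squares.
N = 400
n = np.arange(N)
omega = 0.2 + 0.15 * n / N            # slowly varying phase derivative
y = np.cos(np.cumsum(omega))

basis = np.vstack([np.ones(N), n / N, (n / N) ** 2]).T    # 1, x, x^2
lhs = basis[2:] * y[1:-1, None]       # c(n) * y[n-1] for n = 2..N-1
rhs = y[2:] + y[:-2]                  # y[n] + y[n-2]
w, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)

omega_hat = np.arccos(np.clip(basis @ w / 2.0, -1.0, 1.0))
print(np.mean(np.abs(omega_hat - omega)))
```

No Fourier-domain filtering appears anywhere: the phase derivative comes straight out of the fitted difference-equation coefficient.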

  7. Interacting cosmic fluids and phase transitions under a holographic modeling for dark energy

    International Nuclear Information System (INIS)

    Lepe, Samuel; Pena, Francisco

    2016-01-01

    We discuss the consequences of possible sign changes of the Q-function, which measures the transfer of energy between dark energy and dark matter. We investigate this scenario from a holographic perspective by modeling dark energy through a linear parametrization and the CPL parametrization of the equation of state (ω). By imposing the strong constraint of the second law of thermodynamics, we show that a change of sign of Q in the course of the cosmic evolution implies changes in the temperatures of dark energy and dark matter. We also discuss the phase transitions, in the past and future, experienced by dark energy and dark matter (or, equivalently, the sign changes of their heat capacities). (orig.)

  8. Hierarchical composites: Analysis of damage evolution based on fiber bundle model

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon

    2011-01-01

    A computational model of multiscale composites is developed on the basis of the fiber bundle model with the hierarchical load sharing rule, and employed to study the effect of the microstructures of hierarchical composites on their damage resistance. Two types of hierarchical materials were consi...
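    The underlying fiber bundle idea can be sketched with the simplest equal-load-sharing rule (the paper uses a hierarchical load-sharing rule instead): fibers whose stress exceeds their random strength fail, the load is redistributed among survivors, and failures may cascade.

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm_survivors(strengths, load):
    """Equal-load-sharing fiber bundle: share the total load equally among
    intact fibers, fail every fiber whose stress exceeds its strength,
    redistribute, and repeat until the cascade stops. Returns survivors."""
    alive = np.ones(len(strengths), dtype=bool)
    while True:
        n_alive = int(alive.sum())
        if n_alive == 0:
            return 0                      # complete collapse
        stress = load / n_alive
        failing = alive & (strengths < stress)
        if not failing.any():
            return n_alive                # cascade has stopped
        alive &= ~failing

N = 10_000
strengths = rng.uniform(0.0, 1.0, N)      # quenched strength disorder
# For uniform(0, 1) strengths the critical load per fiber is 1/4: below it
# part of the bundle survives, above it the failure cascade runs away.
print(fbm_survivors(strengths, 0.20 * N), fbm_survivors(strengths, 0.30 * N))
```

Just below the critical load most of the bundle survives; just above it the same bundle fails completely, the brittle transition that hierarchical load sharing modifies.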

  9. Hierarchical modeling of cluster size in wildlife surveys

    Science.gov (United States)

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
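    The size-bias mechanism described above is easy to demonstrate by simulation; the Poisson cluster sizes and the detection curve p(s) = 1 - 0.7**s below are invented for illustration, not the model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Population of 100,000 clusters with sizes ~ 1 + Poisson(3); detection
# probability rises with cluster size via the assumed curve 1 - 0.7**s.
sizes = rng.poisson(3.0, 100_000) + 1
p_detect = 1.0 - 0.7 ** sizes
detected = rng.random(sizes.size) < p_detect

# The sample of detected clusters over-represents large clusters.
print(sizes.mean(), sizes[detected].mean())
```

The detected-sample mean exceeds the population mean, so scaling up naive sample averages would overestimate individuals per cluster, exactly the bias the hierarchical model corrects.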

  10. Current Observational Constraints to Holographic Dark Energy Model with New Infrared cut-off via Markov Chain Monte Carlo Method

    OpenAIRE

    Wang, Yuting; Xu, Lixin

    2010-01-01

    In this paper, the holographic dark energy model with a new infrared (IR) cut-off, for both the flat and the non-flat case, is confronted with the combined constraints of current cosmological observations: type Ia Supernovae, Baryon Acoustic Oscillations, current Cosmic Microwave Background, and the observational Hubble data. By utilizing the Markov Chain Monte Carlo (MCMC) method, we obtain the best fit values of the parameters with $1\sigma, 2\sigma$ errors in the flat model: $\Omega_{b}h...

  11. A hierarchical community occurrence model for North Carolina stream fish

    Science.gov (United States)

    Midway, S.R.; Wagner, Tyler; Tracy, B.H.

    2016-01-01

    The southeastern USA is home to one of the richest—and most imperiled and threatened—freshwater fish assemblages in North America. For many of these rare and threatened species, conservation efforts are often limited by a lack of data. Drawing on a unique and extensive data set spanning over 20 years, we modeled occurrence probabilities of 126 stream fish species sampled throughout North Carolina, many of which occur more broadly in the southeastern USA. Specifically, we developed species-specific occurrence probabilities from hierarchical Bayesian multispecies models that were based on common land use and land cover covariates. We also used index of biotic integrity tolerance classifications as a second level in the model hierarchy; we identify this level as informative for our work, but it is flexible for future model applications. Based on the partial-pooling property of the models, we were able to generate occurrence probabilities for many imperiled and data-poor species in addition to highlighting a considerable amount of occurrence heterogeneity that supports species-specific investigations whenever possible. Our results provide critical species-level information on many threatened and imperiled species as well as information that may assist with re-evaluation of existing management strategies, such as the use of surrogate species. Finally, we highlight the use of a relatively simple hierarchical model that can easily be generalized for similar situations in which conventional models fail to provide reliable estimates for data-poor groups.

  12. Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.

    Science.gov (United States)

    Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J

    2010-12-01

    Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. 
By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km(2) hexagons), can increase the relevance of habitat models to multispecies

  13. Linguistic steganography on Twitter: hierarchical language modeling with manual interaction

    Science.gov (United States)

    Wilson, Alex; Blunsom, Phil; Ker, Andrew D.

    2014-02-01

    This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.

  14. Hierarchical Swarm Model: A New Approach to Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2010-01-01

    Full Text Available This paper presents a novel optimization model called hierarchical swarm optimization (HSO, which simulates the natural hierarchical complex system from where more complex intelligence can emerge for complex problem solving. This proposed model is intended to suggest ways that the performance of HSO-based algorithms on complex optimization problems can be significantly improved. This performance improvement is obtained by constructing the HSO hierarchies, which means that an agent in a higher-level swarm can be composed of swarms of other agents from a lower level, and different swarms of different levels evolve on different spatiotemporal scales. A novel optimization algorithm (named PS2O, based on the HSO model, is instantiated and tested to illustrate the ideas of the HSO model clearly. Experiments were conducted on a set of 17 benchmark optimization problems including both continuous and discrete cases. The results demonstrate remarkable performance of the PS2O algorithm on all chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms.

  15. Reexploration of interacting holographic dark energy model. Cases of interaction term excluding the Hubble parameter

    Energy Technology Data Exchange (ETDEWEB)

    Li, Hai-Li; Zhang, Jing-Fei; Feng, Lu [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Zhang, Xin [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Peking University, Center for High Energy Physics, Beijing (China)

    2017-12-15

    In this paper, we present a detailed analysis of five typical interacting holographic dark energy models with the interaction terms Q = 3βH{sub 0}ρ{sub de}, Q = 3βH{sub 0}ρ{sub c}, Q = 3βH{sub 0}(ρ{sub de} + ρ{sub c}), Q = 3βH{sub 0}√(ρ{sub de}ρ{sub c}), and Q = 3βH{sub 0}(ρ{sub de}ρ{sub c})/(ρ{sub de}+ρ{sub c}), respectively. We obtain observational constraints on these models by using the type Ia supernova data (the Joint Light-Curve Analysis sample), the cosmic microwave background data (Planck 2015 distance priors), the baryon acoustic oscillations data, and the direct measurement of the Hubble constant. We find that the values of χ{sub min}{sup 2} for all the five models are almost equal (around 699), indicating that the current observational data equally favor these IHDE models. In addition, a comparison with the cases of an interaction term involving the Hubble parameter H is also made. (orig.)
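    The structure of such models can be seen by integrating the coupled background conservation equations for one of the interaction terms, Q = 3βH₀ρ_c. The sketch below uses units 8πG = H₀ = 1, forward-Euler integration in e-folds, and a constant w = -1 as a stand-in for the holographic equation of state, so it is only a qualitative illustration, not the paper's constrained model.

```python
import numpy as np

beta, w, H0 = 0.02, -1.0, 1.0
rho_c = 0.3 * 3 * H0**2          # dark matter density today (Omega_c = 0.3)
rho_de = 0.7 * 3 * H0**2         # dark energy density today
dN = 1e-4                        # step in e-folds N = ln a

for _ in range(int(5 / dN)):     # evolve 5 e-folds into the future
    H = np.sqrt((rho_c + rho_de) / 3.0)
    Q = 3.0 * beta * H0 * rho_c  # energy flows from dark matter to dark energy
    rho_de += dN * (-3.0 * (1.0 + w) * rho_de + Q / H)
    rho_c += dN * (-3.0 * rho_c - Q / H)

print(rho_c, rho_de)
```

With β > 0 the dark matter density decays slightly faster than a⁻³ while the dark energy density grows, the qualitative signature that the data constrain through β.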

  16. The Realized Hierarchical Archimedean Copula in Risk Modelling

    Directory of Open Access Journals (Sweden)

    Ostap Okhrin

    2017-06-01

    Full Text Available This paper introduces the concept of the realized hierarchical Archimedean copula (rHAC. The proposed approach inherits the ability of the copula to capture the dependencies among financial time series, and combines it with additional information contained in high-frequency data. The considered model does not suffer from the curse of dimensionality, and is able to accurately predict high-dimensional distributions. This flexibility is obtained by using a hierarchical structure in the copula. The time variability of the model is provided by daily forecasts of the realized correlation matrix, which is used to estimate the structure and the parameters of the rHAC. Extensive simulation studies show the validity of the estimator based on this realized correlation matrix, and its performance, in comparison to the benchmark models. The application of the estimator to one-day-ahead Value at Risk (VaR prediction using high-frequency data exhibits good forecasting properties for a multivariate portfolio.

  17. Living on the edge: a toy model for holographic reconstruction of algebras with centers

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, William; Marolf, Donald; Michel, Ben; Wien, Jason [Department of Physics, University of California,Santa Barbara, CA 93106 (United States)

    2017-04-18

    We generalize the Pastawski-Yoshida-Harlow-Preskill (HaPPY) holographic quantum error-correcting code to provide a toy model for bulk gauge fields or linearized gravitons. The key new elements are the introduction of degrees of freedom on the links (edges) of the associated tensor network and their connection to further copies of the HaPPY code by an appropriate isometry. The result is a model in which boundary regions allow the reconstruction of bulk algebras with central elements living on the interior edges of the (greedy) entanglement wedge, and where these central elements can also be reconstructed from complementary boundary regions. In addition, the entropy of boundary regions receives both Ryu-Takayanagi-like contributions and further corrections that model the ((δArea)/(4G{sub N})) term of Faulkner, Lewkowycz, and Maldacena. Comparison with Yang-Mills theory then suggests that this ((δArea)/(4G{sub N})) term can be reinterpreted as a part of the bulk entropy of gravitons under an appropriate extension of the physical bulk Hilbert space.

  18. Living on the edge: a toy model for holographic reconstruction of algebras with centers

    International Nuclear Information System (INIS)

    Donnelly, William; Marolf, Donald; Michel, Ben; Wien, Jason

    2017-01-01

    We generalize the Pastawski-Yoshida-Harlow-Preskill (HaPPY) holographic quantum error-correcting code to provide a toy model for bulk gauge fields or linearized gravitons. The key new elements are the introduction of degrees of freedom on the links (edges) of the associated tensor network and their connection to further copies of the HaPPY code by an appropriate isometry. The result is a model in which boundary regions allow the reconstruction of bulk algebras with central elements living on the interior edges of the (greedy) entanglement wedge, and where these central elements can also be reconstructed from complementary boundary regions. In addition, the entropy of boundary regions receives both Ryu-Takayanagi-like contributions and further corrections that model the ((δArea)/(4G N )) term of Faulkner, Lewkowycz, and Maldacena. Comparison with Yang-Mills theory then suggests that this ((δArea)/(4G N )) term can be reinterpreted as a part of the bulk entropy of gravitons under an appropriate extension of the physical bulk Hilbert space.

  19. More on the holographic Ricci dark energy model: smoothing Rips through interaction effects?

    Science.gov (United States)

    Bouhmadi-López, Mariam; Errahmani, Ahmed; Ouali, Taoufik; Tavakoli, Yaser

    2018-04-01

    The background cosmological dynamics of the late Universe is analysed in the framework of a dark energy model described by a holographic Ricci dark energy component. Several kinds of interactions between the dark energy and the dark matter components are considered herein. We solve the background cosmological dynamics for the different choices of interactions with the aim of analysing not only the current evolution of the universe but also its asymptotic behaviour and, in particular, the possible removal of future singularities. We show that in most of the cases the Big Rip singularity, a fingerprint of this model in the absence of an interaction between the dark sectors, is substituted by a de Sitter or a Minkowski state. Most importantly, we found two new future bouncing solutions leading to two possible asymptotic behaviours, which we named the Little Bang and the Little Sibling of the Big Bang. At a Little Bang, as the size of the universe shrinks to zero in an infinite cosmic time, the Hubble rate and its cosmic time derivative blow up. In addition, at a Little Sibling of the Big Bang, as the size of the universe shrinks to zero in an infinite cosmic time, the Hubble rate blows up but its cosmic time derivative remains finite. These two abrupt events can happen as well in the past.

  20. Impact of an extended source in laser ablation using pulsed digital holographic interferometry and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Amer, E., E-mail: eynas.amer@ltu.se [Lulea University of Technology, Department of Applied Physics and Mechanical Engineering, SE-971 87 Lulea (Sweden); Gren, P.; Kaplan, A.F.H.; Sjoedahl, M. [Lulea University of Technology, Department of Applied Physics and Mechanical Engineering, SE-971 87 Lulea (Sweden)

    2009-08-15

    Pulsed digital holographic interferometry has been used to study the effect of the laser spot diameter on the shock wave generated in the ablation process of an Nd:YAG laser pulse on a Zn target under atmospheric pressure. For different laser spot diameters and time delays, the propagation of the expanding vapour and of the shock wave were recorded by intensity maps calculated using the recorded digital holograms. From the latter, the phase maps, the refractive index and the density field can be derived. A model was developed that approaches the density distribution, in particular the ellipsoidal expansion characteristics. The induced shock wave has an ellipsoid shape that approaches a sphere for decreasing spot diameter. The ellipsoidal shock waves have almost the same centre offset towards the laser beam and the same aspect ratio for different time steps. The model facilitates the derivation of the particle velocity field. The method provides valuable quantitative results that are discussed, in particular in comparison with the simpler point source explosion theory.
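    The point-source baseline the authors compare against is the Sedov-Taylor similarity solution, R(t) = ξ(E t²/ρ)^(1/5). The pulse energy, coupling fraction, and ξ = 1 below are assumed round numbers for illustration, not the paper's experimental values.

```python
# Sedov-Taylor point-source blast-wave scaling: R(t) = xi * (E * t**2 / rho)**0.2.
xi = 1.0          # order-unity similarity constant (assumed)
E = 0.5 * 10e-3   # 10 mJ pulse with ~50% coupled into the blast (assumed)
rho = 1.2         # ambient air density, kg/m^3

for t_us in (0.5, 1.0, 2.0):
    t = t_us * 1e-6                       # microseconds -> seconds
    R = xi * (E * t**2 / rho) ** 0.2
    print(f"t = {t_us:3.1f} us -> R = {R * 1e3:.2f} mm")
```

The spherical, centred R ∝ t^(2/5) growth of this theory is what the measured off-centre, ellipsoidal shock fronts deviate from, motivating the extended-source model.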

  1. More on the holographic Ricci dark energy model: smoothing Rips through interaction effects?

    Science.gov (United States)

    Bouhmadi-López, Mariam; Errahmani, Ahmed; Ouali, Taoufik; Tavakoli, Yaser

    2018-01-01

    The background cosmological dynamics of the late Universe is analysed in the framework of a dark energy model described by a holographic Ricci dark energy component. Several kinds of interactions between the dark energy and the dark matter components are considered herein. We solve the background cosmological dynamics for the different choices of interactions with the aim of analysing not only the current evolution of the universe but also its asymptotic behaviour and, in particular, the possible removal of future singularities. We show that in most of the cases the Big Rip singularity, a fingerprint of this model in the absence of an interaction between the dark sectors, is substituted by a de Sitter or a Minkowski state. Most importantly, we found two new future bouncing solutions leading to two possible asymptotic behaviours, which we named the Little Bang and the Little Sibling of the Big Bang. At a Little Bang, as the size of the universe shrinks to zero in an infinite cosmic time, the Hubble rate and its cosmic time derivative blow up. In addition, at a Little Sibling of the Big Bang, as the size of the universe shrinks to zero in an infinite cosmic time, the Hubble rate blows up but its cosmic time derivative remains finite. These two abrupt events can happen as well in the past.

  2. Learning Hierarchical User Interest Models from Web Pages

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We propose an algorithm for learning hierarchical user interest models according to the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum. In some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so this representation more faithfully reflects a user's interests. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models, and prune a user interest tree when the number of nodes in it exceeds a given value. Finally, we show the experimental results on a Chinese Web site.

  3. Modeling evolutionary dynamics of epigenetic mutations in hierarchically organized tumors.

    Directory of Open Access Journals (Sweden)

    Andrea Sottoriva

    2011-05-01

    Full Text Available The cancer stem cell (CSC concept is a highly debated topic in cancer research. While experimental evidence in favor of the cancer stem cell theory is apparently abundant, the results are often criticized as being difficult to interpret. An important reason for this is that most experimental data that support this model rely on transplantation studies. In this study we use a novel cellular Potts model to elucidate the dynamics of established malignancies that are driven by a small subset of CSCs. Our results demonstrate that epigenetic mutations that occur during mitosis display highly altered dynamics in CSC-driven malignancies compared to a classical, non-hierarchical model of growth. In particular, the heterogeneity observed in CSC-driven tumors is considerably higher. We speculate that this feature could be used in combination with epigenetic (methylation sequencing studies of human malignancies to prove or refute the CSC hypothesis in established tumors without the need for transplantation. Moreover, our tumor growth simulations indicate that CSC-driven tumors display evolutionary features that can be considered beneficial during tumor progression. Besides an increased heterogeneity, they also exhibit properties that allow the escape of clones from local fitness peaks. This leads to more aggressive phenotypes in the long run and makes the neoplasm more adaptable to stringent selective forces such as cancer treatment. Indeed, when therapy is applied, the clone landscape of the regrown tumor is more aggressive than that of the primary tumor, whereas the classical model demonstrated similar patterns before and after therapy. Understanding these often counter-intuitive fundamental properties of (non-hierarchically organized malignancies is a crucial step in validating the CSC concept as well as providing insight into the therapeutic consequences of this model.

  4. Tractography segmentation using a hierarchical Dirichlet processes mixture model.

    Science.gov (United States)

    Wang, Xiaogang; Grimson, W Eric L; Westin, Carl-Fredrik

    2011-01-01

    In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is learned automatically from the data via a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers. Copyright © 2010 Elsevier Inc. All rights reserved.
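
    The clustering idea in this record can be sketched with scikit-learn's truncated Dirichlet-process mixture, applied here to toy 2-D "fiber descriptors" rather than real tractography data; the component cap of 10 and all data values are illustrative assumptions, not the paper's actual spatial HDPM.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    # Toy "fiber descriptors": three latent bundles in a 2-D feature space.
    bundles = [rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in ([0, 0], [4, 0], [0, 4])]
    X = np.vstack(bundles)

    # Truncated Dirichlet-process mixture: the cap of 10 components is generous;
    # the DP prior drives the weights of unneeded components toward zero, so the
    # number of clusters is effectively learned from the data.
    dpm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(X)

    labels = dpm.predict(X)
    print("effective clusters:", len(np.unique(labels)))
    ```

    The same fitted mixture can then serve as a prior when assigning fibers from a new subject, mirroring the cluster/classify step the abstract describes.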

  5. Hierarchical decision modeling essays in honor of Dundar F. Kocaoglu

    CERN Document Server

    2016-01-01

    This volume, developed in honor of Dr. Dundar F. Kocaoglu, aims to demonstrate the applications of the Hierarchical Decision Model (HDM) in different sectors and its capacity in decision analysis. It is comprised of essays from noted scholars, academics and researchers of engineering and technology management around the world. This book is organized into four parts: Technology Assessment, Strategic Planning, National Technology Planning and Decision Making Tools. Dr. Dundar F. Kocaoglu is one of the pioneers of multiple decision models using hierarchies, and creator of the HDM in decision analysis. HDM is a mission-oriented method for evaluation and/or selection among alternatives. A wide range of alternatives can be considered, including but not limited to, different technologies, projects, markets, jobs, products, cities to live in, houses to buy, apartments to rent, and schools to attend. Dr. Kocaoglu’s approach has been adopted for decision problems in many industrial sectors, including electronics rese...

  6. The holographic entropy cone

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Ning [Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA 91125 (United States); Walter Burke Institute for Theoretical Physics, California Institute of Technology, 452-48, Pasadena, CA 91125 (United States); Nezami, Sepehr [Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94305 (United States); Ooguri, Hirosi [Walter Burke Institute for Theoretical Physics, California Institute of Technology, 452-48, Pasadena, CA 91125 (United States); Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa 277-8583 (Japan); Stoica, Bogdan [Walter Burke Institute for Theoretical Physics, California Institute of Technology, 452-48, Pasadena, CA 91125 (United States); Sully, James [Theory Group, SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025 (United States); Walter, Michael [Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94305 (United States)

    2015-09-21

    We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.
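
    The two inequalities this record names can be checked numerically for a candidate entropy vector. The sketch below uses an illustrative assignment (a perfectly correlated pair A, B plus an independent region C), not entropies computed from an actual holographic geometry.

    ```python
    # Candidate entropies S(X), indexed by frozensets of region labels. These toy
    # values describe a perfectly correlated pair (A, B) plus an independent C.
    S = {
        frozenset("A"): 1.0, frozenset("B"): 1.0, frozenset("C"): 1.0,
        frozenset("AB"): 1.0, frozenset("AC"): 2.0, frozenset("BC"): 2.0,
        frozenset("ABC"): 2.0,
    }

    def mutual_info(X, Y):
        """I(X:Y) = S(X) + S(Y) - S(XY)."""
        return S[frozenset(X)] + S[frozenset(Y)] - S[frozenset(X) | frozenset(Y)]

    # Strong subadditivity: S(AB) + S(BC) - S(B) - S(ABC) >= 0.
    ssa = S[frozenset("AB")] + S[frozenset("BC")] - S[frozenset("B")] - S[frozenset("ABC")]

    # Monogamy of mutual information: I(A:BC) - I(A:B) - I(A:C) >= 0.
    mmi = mutual_info("A", "BC") - mutual_info("A", "B") - mutual_info("A", "C")

    print(ssa >= 0, mmi >= 0)  # → True True (both saturate for this vector)
    ```

    A vector that satisfies every such inequality lies inside the holographic entropy cone; scanning candidate vectors this way is the numerical counterpart of the cone characterization described above.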

  7. The holographic entropy cone

    International Nuclear Information System (INIS)

    Bao, Ning; Nezami, Sepehr; Ooguri, Hirosi; Stoica, Bogdan; Sully, James; Walter, Michael

    2015-01-01

    We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.

  8. Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.

    Science.gov (United States)

    Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T

    2017-07-01

    Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
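
    A minimal sketch of the decision rule described above, with invented loss functions and simulated posterior draws standing in for the fitted hierarchical model; the action names, loss shapes, and constants are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Posterior draws of one hospital's safety signal (observed minus expected
    # AMDE rate); in practice these come from the fitted hierarchical model.
    signal = rng.normal(loc=0.8, scale=0.5, size=10_000)

    # Invented regulator loss functions: the cost of each action as a function
    # of the true signal.
    losses = {
        "no_action": lambda d: 10.0 * np.maximum(d, 0.0),        # ignoring a real problem
        "warn":      lambda d: 1.0 + 2.0 * np.maximum(d, 0.0),
        "recall":    lambda d: 5.0 + 3.0 * np.maximum(-d, 0.0),  # costly if spurious
    }

    # Posterior (Bayes) risk = expected loss over the posterior; act to minimize it.
    bayes_risk = {a: float(np.mean(L(signal))) for a, L in losses.items()}
    decision = min(bayes_risk, key=bayes_risk.get)
    print(decision, {a: round(r, 2) for a, r in bayes_risk.items()})
    ```

    Unlike a Z-score threshold, the chosen action depends on the whole posterior and on the asymmetry of the losses, which is why the two rules can disagree for the same safety signal.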

  9. Intelligent interaction based on holographic personalized portal

    Directory of Open Access Journals (Sweden)

    Yadong Huang

    2017-06-01

    Full Text Available Purpose – The purpose of this paper is to study the architecture of the holographic personalized portal, user modeling, commodity modeling and intelligent interaction. Design/methodology/approach – The authors propose a crowd-science industrial ecological system based on a holographic personalized portal and its interaction. The holographic personalized portal is based on holographic enterprises, commodities and consumers, and consists of accurate ontology, reliable supply, intelligent demand and smart cyberspace. Findings – The personalized portal can realize information acquisition, characteristic analysis and holographic presentation. Intelligent interaction, e.g. demand decomposition, personalized search, personalized presentation and demand prediction, is then implemented within the personalized portal. Originality/value – The authors believe that their work on intelligent interaction based on the holographic personalized portal, first proposed in this paper, is an innovation focusing on the interaction between intelligence and convenience.

  10. GSMNet: A Hierarchical Graph Model for Moving Objects in Networks

    Directory of Open Access Journals (Sweden)

    Hengcai Zhang

    2017-03-01

    Full Text Available Existing data models for moving objects in networks are often limited in how flexibly they control the granularity of network representation and the cost of location updates, and they do not encompass semantic information such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap left by traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. A set of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic. We then develop a prototype system with the native graph database system Neo4j to implement the proposed GSMNet model. In the experiments, we evaluate performance using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages in representing semantic information and controlling the cost of location updates.

  11. Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study

    NARCIS (Netherlands)

    Rijsdijk, F.V.; Vernon, P.A.; Boomsma, D.I.

    2002-01-01

    Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard

  12. Projection multiplex recording of computer-synthesised one-dimensional Fourier holograms for holographic memory systems: mathematical and experimental modelling

    Energy Technology Data Exchange (ETDEWEB)

    Betin, A Yu; Bobrinev, V I; Verenikina, N M; Donchenko, S S; Odinokov, S B [Research Institute 'Radiotronics and Laser Engineering', Bauman Moscow State Technical University, Moscow (Russian Federation); Evtikhiev, N N; Zlokazov, E Yu; Starikov, S N; Starikov, R S [National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow (Russian Federation)

    2015-08-31

    A multiplex method of recording computer-synthesised one-dimensional Fourier holograms intended for holographic memory devices is proposed. The method potentially allows increasing the recording density in the previously proposed holographic memory system based on the computer synthesis and projection recording of data page holograms. (holographic memory)

  13. MODELING THE RED SEQUENCE: HIERARCHICAL GROWTH YET SLOW LUMINOSITY EVOLUTION

    International Nuclear Information System (INIS)

    Skelton, Rosalind E.; Bell, Eric F.; Somerville, Rachel S.

    2012-01-01

    We explore the effects of mergers on the evolution of massive early-type galaxies by modeling the evolution of their stellar populations in a hierarchical context. We investigate how a realistic red sequence population set up by z ∼ 1 evolves under different assumptions for the merger and star formation histories, comparing changes in color, luminosity, and mass. The purely passive fading of existing red sequence galaxies, with no further mergers or star formation, results in dramatic changes at the bright end of the luminosity function and color-magnitude relation. Without mergers there is too much evolution in luminosity at a fixed space density compared to observations. The change in color and magnitude at a fixed mass resembles that of a passively evolving population that formed relatively recently, at z ∼ 2. Mergers among the red sequence population ('dry mergers') occurring after z = 1 build up mass, counteracting the fading of the existing stellar populations to give smaller changes in both color and luminosity for massive galaxies. By allowing some galaxies to migrate from the blue cloud onto the red sequence after z = 1 through gas-rich mergers, younger stellar populations are added to the red sequence. This manifestation of the progenitor bias increases the scatter in age and results in even smaller changes in color and luminosity between z = 1 and z = 0 at a fixed mass. The resultant evolution appears much slower, resembling the passive evolution of a population that formed at high redshift (z ∼ 3-5), and is in closer agreement with observations. We conclude that measurements of the luminosity and color evolution alone are not sufficient to distinguish between the purely passive evolution of an old population and cosmologically motivated hierarchical growth, although these scenarios have very different implications for the mass growth of early-type galaxies over the last half of cosmic history.

  14. Coupling constant corrections in a holographic model of heavy ion collisions

    NARCIS (Netherlands)

    Grozdanov, Sašo; Schee, Wilke van der

    2017-01-01

    We initiate a holographic study of coupling-dependent heavy ion collisions by analysing for the first time the effects of leading-order, inverse coupling constant corrections. In the dual description, this amounts to colliding gravitational shock waves in a theory with curvature-squared terms. We

  15. Hierarchical modeling and its numerical implementation for layered thin elastic structures

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)

    2017-05-15

    Thin elastic structures such as beam- and plate-like structures and laminates are characterized by their small thickness, which leads to classical plate and laminate theories in which the displacement field through the thickness is assumed to be a linear or higher-order polynomial. These classical theories are either insufficient to represent the complex stress variation through the thickness or encounter an accuracy-versus-computational-cost dilemma. To overcome the inherent problems of the classical theories, the concept of hierarchical modeling has emerged. In hierarchical modeling, hierarchical models with different model levels are selected and combined within a structure domain, so that the modeling error is distributed as uniformly as possible throughout the problem domain. The purpose of the current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composites. To this end, the hierarchical models are constructed and hierarchical modeling is implemented by selectively adjusting the level of the hierarchical models. In addition, the major characteristics of the hierarchical models are investigated through numerical experiments.

  16. Bayesian Hierarchical Random Effects Models in Forensic Science

    Directory of Open Access Journals (Sweden)

    Colin G. G. Aitken

    2018-04-01

    Full Text Available Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century, through the work at Bletchley Park in the Second World War, to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley, which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well developed and have become so widespread that it is timely to try to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package; references to SAILR are made as appropriate.
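
    The hierarchical random effects model Lindley introduced can be sketched numerically: under a normal-normal model the likelihood ratio for a common versus different source of two refractive index measurements reduces to a ratio of Gaussian densities. All parameter values below are illustrative assumptions, not forensic reference values.

    ```python
    import numpy as np

    def npdf(u, mean, var):
        """Univariate normal density."""
        return np.exp(-(u - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    def bvn_pdf(u, v, mean, var, cov):
        """Bivariate normal density with equal variances `var` and covariance `cov`."""
        det = var * var - cov * cov
        q = (var * (u - mean) ** 2 - 2 * cov * (u - mean) * (v - mean)
             + var * (v - mean) ** 2) / det
        return np.exp(-q / 2) / (2 * np.pi * np.sqrt(det))

    # Normal-normal random effects model: source means ~ N(mu, tau^2),
    # measurement error ~ N(0, sigma^2). Illustrative numbers only.
    mu, tau, sigma = 1.5182, 4e-3, 4e-4
    x, y = 1.5160, 1.5162          # control and recovered refractive indices

    # Same source: x, y share a latent mean, so they are jointly normal with
    # variance sigma^2 + tau^2 each and covariance tau^2.
    num = bvn_pdf(x, y, mu, sigma**2 + tau**2, tau**2)
    # Different sources: independent draws from the marginal N(mu, sigma^2 + tau^2).
    den = npdf(x, mu, sigma**2 + tau**2) * npdf(y, mu, sigma**2 + tau**2)

    lr = num / den
    print(f"likelihood ratio = {lr:.1f}")  # LR > 1 supports the same-source hypothesis
    ```

    The between-source variance tau^2 dominating the within-source variance sigma^2 is what makes two close measurements strongly support a common origin.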

  17. Bayesian Hierarchical Random Effects Models in Forensic Science.

    Science.gov (United States)

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century, through the work at Bletchley Park in the Second World War, to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley, which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well developed and have become so widespread that it is timely to try to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package; references to SAILR are made as appropriate.

  18. Renormalization group analysis of a simple hierarchical fermion model

    International Nuclear Information System (INIS)

    Dorlas, T.C.

    1991-01-01

    A simple hierarchical fermion model is constructed which gives rise to an exact renormalization transformation in a 2-dimensional parameter space. The behaviour of this transformation is studied. It has two hyperbolic fixed points, for which the existence of a global critical line is proven. The asymptotic behaviour of the transformation is used to prove the existence of the thermodynamic limit in a certain domain in parameter space. The existence of a continuum limit for these theories is also investigated using information about the asymptotic renormalization behaviour. It turns out that the 'trivial' fixed point gives rise to a two-parameter family of continuum limits corresponding to that part of parameter space where the renormalization trajectories originate at this fixed point. Although the model is not very realistic, it serves as a simple example of the application of the renormalization group to proving the existence of the thermodynamic limit and the continuum limit of lattice models. Moreover, it illustrates possible complications that can arise in global renormalization group behaviour, and that might also be present in other models for which no global analysis of the renormalization transformation has yet been achieved. (orig.)

  19. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  20. Constraints on Λ(t)CDM models as holographic and agegraphic dark energy with the observational Hubble parameter data

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Zhong-Xu; Liu, Wen-Biao [Department of Physics, Institute of Theoretical Physics, Beijing Normal University, Beijing, 100875 (China); Zhang, Tong-Jie, E-mail: zzx@mail.bnu.edu.cn, E-mail: tjzhang@bnu.edu.cn, E-mail: wbliu@bnu.edu.cn [Department of Astronomy, Beijing Normal University, Beijing, 100875 (China)

    2011-08-01

    The newly released observational H(z) data (OHD) are used to constrain Λ(t)CDM models as holographic and agegraphic dark energy. Using the length and time scales as the IR cut-off, including the Hubble horizon (HH), future event horizon (FEH), age of the universe (AU), and conformal time (CT), we obtain four different Λ(t)CDM models, each of which can describe the present cosmological acceleration. In order to compare such Λ(t)CDM models with the standard ΛCDM model, we use the information criteria (IC), the Om(z) diagnostic, and the statefinder diagnostic to measure the deviations. Furthermore, by simulating a larger Hubble parameter data sample in the redshift range 0.1 < z < 2.0, we obtain improved constraints and a more thorough comparison. We show that OHD not only plays almost the same role in constraining cosmological parameters as SNe Ia does but also provides an effective measurement of the deviation of the DE models from the standard ΛCDM model. In the holographic and agegraphic scenarios, the results indicate that the FEH scenario is preferable to the HH scenario. However, both time scenarios approximate the ΛCDM model better than the length scenarios.
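
    The parameter-constraint step this record describes amounts to a chi-square fit of a model H(z) to the observed points. A minimal sketch for the flat ΛCDM baseline, with invented data standing in for the actual OHD sample:

    ```python
    import numpy as np

    # Illustrative H(z) points (km/s/Mpc) with errors; invented, not the real OHD.
    z = np.array([0.1, 0.4, 0.9, 1.5, 2.0])
    H_obs = np.array([69.0, 82.0, 110.0, 150.0, 190.0])
    sigma = np.array([5.0, 6.0, 9.0, 12.0, 20.0])

    def H_lcdm(z, H0, Om):
        """Flat LCDM expansion rate, the baseline the models are compared against."""
        return H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

    # Chi-square grid search, the standard way OHD constrains model parameters.
    H0_grid = np.linspace(60, 80, 201)
    Om_grid = np.linspace(0.1, 0.5, 201)
    chi2 = np.array([[np.sum((H_lcdm(z, H0, Om) - H_obs) ** 2 / sigma**2)
                      for Om in Om_grid] for H0 in H0_grid])
    i, j = np.unravel_index(chi2.argmin(), chi2.shape)
    print(f"best fit: H0 = {H0_grid[i]:.1f}, Om = {Om_grid[j]:.2f}, "
          f"min chi2 = {chi2.min():.2f}")
    ```

    Swapping `H_lcdm` for a holographic or agegraphic H(z) and comparing the minimum chi-square values (plus a parameter-count penalty) is essentially the information-criteria comparison the abstract mentions.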

  1. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…

  2. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayesian modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from the Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in the open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models to each volcano's dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, and then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for each volcano's dataset as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique is demonstrated here for mobility metrics but can be applied to many other global datasets of volcanic parameters. In particular, such methods provide a means to better constrain parameters for volcanoes for which we have only sparse data, a ubiquitous problem in volcanology.
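
    The partial-pooling idea can be sketched with a simple empirical-Bayes version of the hierarchical linear model: per-volcano slopes are shrunk toward the global mean, most strongly where data are sparse. The data, noise levels, and moment-based variance estimates below are illustrative assumptions, not the FlowDat analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic mobility data: slope of log(H/L) vs. log(volume) per volcano,
    # with slopes drawn from a shared distribution and very unequal sample sizes.
    true_mu, true_tau, noise_sd = -0.25, 0.05, 0.1
    volcanoes = {"A": 40, "B": 25, "C": 4}     # "C" is the sparse-data case
    est, var = {}, {}
    for name, n in volcanoes.items():
        slope = rng.normal(true_mu, true_tau)
        x = rng.uniform(5, 9, n)
        y = slope * x + rng.normal(0, noise_sd, n)
        est[name] = np.polyfit(x, y, 1)[0]     # per-volcano least-squares slope
        var[name] = noise_sd**2 / np.sum((x - x.mean()) ** 2)  # its sampling variance

    # Empirical-Bayes partial pooling: shrink each slope toward the global mean,
    # most strongly where the per-volcano estimate is least precise.
    mu_hat = np.mean(list(est.values()))
    tau2_hat = max(np.var(list(est.values())) - np.mean(list(var.values())), 1e-6)
    pooled = {name: (est[name] / var[name] + mu_hat / tau2_hat)
                    / (1.0 / var[name] + 1.0 / tau2_hat) for name in est}
    print({name: round(v, 3) for name, v in pooled.items()})
    ```

    Each pooled slope is a precision-weighted average of the volcano's own estimate and the global mean, so the sparse-data volcano borrows the most strength from the global dataset.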

  3. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge of accidents is regarded as potentially very helpful in risk management. A large amount of defect-detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and the critical risks of defects. This paper further develops accident-causation network modeling methods and can provide guidance for specific maintenance measures.
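
    A minimal level-wise Apriori pass, of the kind the record's improved algorithm builds on, can be sketched as follows; the defect codes and support threshold are invented placeholders, not a real defect taxonomy.

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy inspection records: each tunnel section lists its observed defect codes.
    records = [
        {"crack", "leak"}, {"crack", "leak", "void"}, {"crack"},
        {"leak", "void"}, {"crack", "leak"}, {"void"},
    ]
    min_support = 2

    # Level-wise Apriori: keep frequent singletons, then build k-item candidates
    # only from frequent (k-1)-itemsets (the downward-closure property).
    counts = Counter(item for r in records for item in r)
    prev = [{i} for i, c in counts.items() if c >= min_support]
    all_frequent = list(prev)
    k = 2
    while prev:
        items = sorted(set().union(*prev))
        candidates = [set(c) for c in combinations(items, k)
                      if all(set(s) in prev for s in combinations(c, k - 1))]
        prev = [c for c in candidates
                if sum(c <= r for r in records) >= min_support]
        all_frequent.extend(prev)
        k += 1

    print(sorted(tuple(sorted(s)) for s in all_frequent))
    ```

    The frequent itemsets found this way are the raw material for a regularities table: each frequent structure-defect combination becomes a candidate rule with a measurable support.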

  4. Production optimisation in the petrochemical industry by hierarchical multivariate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Magnus; Furusjoe, Erik; Jansson, Aasa

    2004-06-01

    This project demonstrates the advantages of applying hierarchical multivariate modelling in the petrochemical industry in order to increase knowledge of the total process. The models indicate possible ways to optimise the process regarding the use of energy and raw material, which is directly linked to the environmental impact of the process. The refinery of Nynaes Refining AB (Goeteborg, Sweden) has acted as a demonstration site in this project. The models developed for the demonstration site resulted in: Detection of an unknown process disturbance and suggestions of possible causes; Indications on how to increase the yield in combination with energy savings; The possibility to predict product quality from on-line process measurements, making the results available at a higher frequency than customary laboratory analysis; Quantification of the gradually lowered efficiency of heat transfer in the furnace and increased fuel consumption as an effect of soot build-up on the furnace coils; Increased knowledge of the relation between production rate and the efficiency of the heat exchangers. This report is one of two reports from the project. It contains a technical discussion of the result with some degree of detail. A shorter and more easily accessible report is also available, see IVL report B1586-A.

  5. Intelligent holographic databases

    Science.gov (United States)

    Barbastathis, George

    Memory is a key component of intelligence. In the human brain, physical structure and functionality jointly provide diverse memory modalities at multiple time scales. How could we engineer artificial memories with similar faculties? In this thesis, we attack both hardware and algorithmic aspects of this problem. A good part is devoted to holographic memory architectures, because they meet high capacity and parallelism requirements. We develop and fully characterize shift multiplexing, a novel storage method that simplifies disk head design for holographic disks. We develop and optimize the design of compact refreshable holographic random access memories, showing several ways that 1 Tbit can be stored holographically in volume less than 1 m³, with surface density more than 20 times higher than conventional silicon DRAM integrated circuits. To address the issue of photorefractive volatility, we further develop the two-lambda (dual wavelength) method for shift multiplexing, and combine electrical fixing with angle multiplexing to demonstrate 1,000 multiplexed fixed holograms. Finally, we propose a noise model and an information theoretic metric to optimize the imaging system of a holographic memory, in terms of storage density and error rate. Motivated by the problem of interfacing sensors and memories to a complex system with limited computational resources, we construct a computer game of Desert Survival, built as a high-dimensional non-stationary virtual environment in a competitive setting. The efficacy of episodic learning, implemented as a reinforced Nearest Neighbor scheme, and the probability of winning against a control opponent improve significantly by concentrating the algorithmic effort to the virtual desert neighborhood that emerges as most significant at any time.
The generalized computational model combines the autonomous neural network and von Neumann paradigms through a compact, dynamic central representation, which contains the most salient features

  6. Studies of a general flat space/boson star transition model in a box through a language similar to holographic superconductors

    Science.gov (United States)

    Peng, Yan

    2017-07-01

We study a general flat space/boson star transition model in a quasi-local ensemble through approaches familiar from holographic superconductor theories. We manage to find a parameter ψ₂, which proves useful in disclosing properties of phase transitions. In this work, we explore effects of the scalar mass, scalar charge and Stückelberg mechanism on the critical phase transition points and the order of transitions, mainly from the behavior of the parameter ψ₂. We mention that properties of transitions in quasi-local gravity are strikingly similar to those in holographic superconductor models. We also obtain an analytical relation ψ₂ ∝ (μ − μ_c)^(1/2), which also holds for the condensed scalar operator in the holographic insulator/superconductor system, in accordance with mean field theories.
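
The quoted mean-field scaling can be checked numerically. A minimal sketch, with an invented amplitude, invented critical chemical potential, and synthetic condensate data, recovers the exponent 1/2 by a log-log fit:

```python
import numpy as np

# Synthetic condensate data generated with the known mean-field law
# psi2 = C * (mu - mu_c)^(1/2); C = 2.3 and mu_c = 1.0 are invented.
mu_c = 1.0
mu = np.linspace(1.001, 1.1, 50)
psi2 = 2.3 * np.sqrt(mu - mu_c)

# recover the critical exponent from the slope of a log-log fit
slope, intercept = np.polyfit(np.log(mu - mu_c), np.log(psi2), 1)
```

In a real computation, psi2 would come from numerically solving the coupled field equations near the transition rather than from the closed form.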

  7. Flowing holographic anyonic superfluid

    Science.gov (United States)

    Jokela, Niko; Lifschytz, Gilad; Lippert, Matthew

    2014-10-01

    We investigate the flow of a strongly coupled anyonic superfluid based on the holographic D3-D7' probe brane model. By analyzing the spectrum of fluctuations, we find the critical superfluid velocity, as a function of the temperature, at which the flow stops being dissipationless when flowing past a barrier. We find that at a larger velocity the flow becomes unstable even in the absence of a barrier.

  8. A joint model for multivariate hierarchical semicontinuous data with replications.

    Science.gov (United States)

    Kassahun-Yimer, Wondwosen; Albert, Paul S; Lipsky, Leah M; Nansel, Tonja R; Liu, Aiyi

    2017-01-01

Longitudinal data are often collected in biomedical applications in such a way that measurements on more than one response are taken from a given subject repeatedly over time. For some problems, these multiple profiles need to be modeled jointly to gain insight into the joint evolution and/or association of these responses over time. In practice, such longitudinal outcomes may have many zeros that need to be accounted for in the analysis. For example, in dietary intake studies, as we focus on in this paper, some food components are eaten daily by almost all subjects, while others are consumed episodically, where individuals have time periods where they do not eat these components followed by periods where they do. These episodically consumed foods need to be adequately modeled to account for the many zeros that are encountered. In this paper, we propose a joint model to analyze multivariate hierarchical semicontinuous data characterized by many zeros and more than one replicate observation at each measurement occasion. This approach allows for different probability mechanisms for describing the zero behavior as compared with the mean intake given that the individual consumes the food. To deal with the potentially large number of multivariate profiles, we use a pairwise model fitting approach that was developed in the context of multivariate Gaussian random effects models with a large number of multivariate components. The novelty of the proposed approach is that it incorporates: (1) multivariate, possibly correlated, response variables; (2) within-subject correlation resulting from repeated measurements taken from each subject; (3) many zero observations; (4) overdispersion; and (5) replicate measurements at each visit time.

  9. Adaptive hierarchical grid model of water-borne pollutant dispersion

    Science.gov (United States)

    Borthwick, A. G. L.; Marchant, R. D.; Copeland, G. J. M.

    Water pollution by industrial and agricultural waste is an increasingly major public health issue. It is therefore important for water engineers and managers to be able to predict accurately the local behaviour of water-borne pollutants. This paper describes the novel and efficient coupling of dynamically adaptive hierarchical grids with standard solvers of the advection-diffusion equation. Adaptive quadtree grids are able to focus on regions of interest such as pollutant fronts, while retaining economy in the total number of grid elements through selective grid refinement. Advection is treated using Lagrangian particle tracking. Diffusion is solved separately using two grid-based methods; one is by explicit finite differences, the other a diffusion-velocity approach. Results are given in two dimensions for pure diffusion of an initially Gaussian plume, advection-diffusion of the Gaussian plume in the rotating flow field of a forced vortex, and the transport of species in a rectangular channel with side wall boundary layers. Close agreement is achieved with analytical solutions of the advection-diffusion equation and simulations from a Lagrangian random walk model. An application to Sepetiba Bay, Brazil is included to demonstrate the method with complex flows and topography.
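
The advection-diffusion splitting described above (Lagrangian particle tracking for advection, a grid-free random walk standing in for the diffusion solvers) can be sketched as follows; the vortex strength, diffusivity, time step, and plume geometry are illustrative values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Split-operator transport of a Gaussian plume in a forced vortex:
# advection by Lagrangian particle tracking, diffusion by a random walk.
# omega, D, dt and the plume geometry are illustrative.
omega, D, dt, steps = 1.0, 0.01, 0.01, 100
x = rng.normal(0.5, 0.05, 1000)   # particle positions, plume off-centre
y = rng.normal(0.0, 0.05, 1000)

for _ in range(steps):
    # advection step: solid-body rotation u = (-omega*y, omega*x)
    x, y = x - omega * y * dt, y + omega * x * dt
    # diffusion step: random displacements with variance 2*D*dt
    x += rng.normal(0.0, np.sqrt(2 * D * dt), x.size)
    y += rng.normal(0.0, np.sqrt(2 * D * dt), y.size)

# the plume spreads: total variance grows by roughly 2*D*dt*steps per axis
total_var = float(np.var(x) + np.var(y))
```

The adaptive quadtree of the paper would concentrate grid cells where this particle density is steep; the sketch omits the grid and shows only the transport split.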

  10. Hierarchical statistical modeling of xylem vulnerability to cavitation.

    Science.gov (United States)

    Ogle, Kiona; Barber, Jarrett J; Willson, Cynthia; Thompson, Brenda

    2009-01-01

    Cavitation of xylem elements diminishes the water transport capacity of plants, and quantifying xylem vulnerability to cavitation is important to understanding plant function. Current approaches to analyzing hydraulic conductivity (K) data to infer vulnerability to cavitation suffer from problems such as the use of potentially unrealistic vulnerability curves, difficulty interpreting parameters in these curves, a statistical framework that ignores sampling design, and an overly simplistic view of uncertainty. This study illustrates how two common curves (exponential-sigmoid and Weibull) can be reparameterized in terms of meaningful parameters: maximum conductivity (k(sat)), water potential (-P) at which percentage loss of conductivity (PLC) =X% (P(X)), and the slope of the PLC curve at P(X) (S(X)), a 'sensitivity' index. We provide a hierarchical Bayesian method for fitting the reparameterized curves to K(H) data. We illustrate the method using data for roots and stems of two populations of Juniperus scopulorum and test for differences in k(sat), P(X), and S(X) between different groups. Two important results emerge from this study. First, the Weibull model is preferred because it produces biologically realistic estimates of PLC near P = 0 MPa. Second, stochastic embolisms contribute an important source of uncertainty that should be included in such analyses.
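
A hedged sketch of the Weibull reparameterization described above: from illustrative shape parameters b and c (not estimates from the Juniperus data), the curve PLC(P) = 100·(1 − exp(−(P/b)^c)) is inverted to obtain P(X) and differentiated to obtain the sensitivity S(X):

```python
import math

# Weibull vulnerability curve PLC(P) = 100*(1 - exp(-(P/b)^c)),
# with illustrative shape parameters; X = 50 gives the familiar P50.
b, c, X = 3.0, 4.0, 50.0

def plc(P):
    return 100.0 * (1.0 - math.exp(-(P / b) ** c))

# invert PLC(P_X) = X for the water potential at X% loss
P_X = b * (-math.log(1.0 - X / 100.0)) ** (1.0 / c)

# sensitivity index: analytic slope of the PLC curve at P_X
S_X = 100.0 * (c / b) * (P_X / b) ** (c - 1.0) * math.exp(-(P_X / b) ** c)
```

The hierarchical Bayesian step of the paper then places priors on (k_sat, P_X, S_X) across roots, stems, and populations rather than fitting each curve in isolation.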

  11. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  12. Holographic Aquaculture

    Science.gov (United States)

    Ian, Richard; King, Elisabeth

    1988-01-01

    Proposed is an exploratory study to verify the feasibility of an inexpensive micro-climate control system for both marine and freshwater pond and tank aquaculture, offering good control over water temperature, incident light flux, and bandwidth, combined with good energy efficiency. The proposed control system utilizes some familiar components of passive solar design, together with a new holographic glazing system which is currently being developed by, and proprietary to Advanced Environmental Research Group (AERG). The use of solar algae ponds and tanks to warm and purify water for fish and attached macroscopic marine algae culture is an ancient and effective technique, but limited seasonally and geographically by the availability of sunlight. Holographic Diffracting Structures (HDSs) can be made which passively track, accept and/or reject sunlight from a wide range of altitude and azimuth angles, and redirect and distribute light energy as desired (either directly or indirectly over water surface in an enclosed, insulated structure), effectively increasing insolation values by accepting sunlight which would not otherwise enter the structure.

  13. A novel Bayesian hierarchical model for road safety hotspot prediction.

    Science.gov (United States)

    Fawcett, Lee; Thorpe, Neil; Matthews, Joseph; Kremer, Karsten

    2017-02-01

    In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation - commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period - to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our
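
The classical empirical Bayes formulation that the paper extends can be sketched with a gamma-Poisson (negative binomial) shrinkage estimator; the dispersion value and counts below are illustrative, not from the Halle data:

```python
# Gamma-Poisson (negative binomial) empirical Bayes shrinkage:
# an accident prediction model (APM) supplies the expected count mu and
# a dispersion k; the observed count y pulls the estimate away from mu.
# All numbers are illustrative.
def eb_estimate(mu, k, y, years=1.0):
    """Posterior mean frequency under a Gamma(k, k/mu) prior on the rate."""
    weight = k / (k + mu * years)        # credibility given to the APM
    return weight * mu + (1.0 - weight) * (y / years)

# a site observed well above its model prediction is shrunk toward mu,
# which is how regression-to-mean is accounted for
shrunk = eb_estimate(mu=2.0, k=1.5, y=7, years=1.0)
```

The proposed model generalizes exactly this weighting: counts from multiple past years enter the estimate, with recent years weighted more heavily.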

  14. Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    National Research Council Canada - National Science Library

    Rodriguez, June F

    2008-01-01

    .... More specifically, investigating how to accurately aggregate hierarchical lower-level (higher resolution) models into the next higher-level in order to reduce the complexity of the overall simulation model...

  15. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost … optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  16. A Bayesian hierarchical model for demand curve analysis.

    Science.gov (United States)

    Ho, Yen-Yi; Nhu Vo, Tien; Chu, Haitao; Luo, Xianghua; Le, Chap T

    2018-07-01

Drug self-administration experiments are a frequently used approach to assessing the abuse liability and reinforcing property of a compound. It has been used to assess the abuse liabilities of various substances such as psychomotor stimulants and hallucinogens, food, nicotine, and alcohol. The demand curve generated from a self-administration study describes how demand of a drug or non-drug reinforcer varies as a function of price. With the approval of the 2009 Family Smoking Prevention and Tobacco Control Act, demand curve analysis provides crucial evidence to inform the US Food and Drug Administration's policy on tobacco regulation, because it produces several important quantitative measurements to assess the reinforcing strength of nicotine. The conventional approach popularly used to analyze demand curve data is individual-specific non-linear least squares regression. The non-linear least squares approach sets out to minimize the residual sum of squares for each subject in the dataset; however, this one-subject-at-a-time approach does not allow for the estimation of between- and within-subject variability in a unified model framework. In this paper, we review the existing approaches to analyzing demand curve data, namely non-linear least squares regression and mixed effects regression, and propose a new Bayesian hierarchical model. We conduct simulation analyses to compare the performance of these three approaches and illustrate the proposed approaches in a case study of nicotine self-administration in rats. We present simulation results and discuss the benefits of using the proposed approaches.
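
One demand curve commonly used in this literature is the Hursh-Silberberg exponential model; the sketch below (illustrative parameter values, not the paper's data) evaluates it and locates Pmax, the price at which demand elasticity reaches −1, numerically:

```python
import numpy as np

# Hursh-Silberberg exponential demand equation:
#   log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * C) - 1)
# Q0 = consumption at zero price, alpha = rate of decline, k = range.
# Parameter values are illustrative.
Q0, alpha, k = 50.0, 0.01, 3.0

def log10_Q(C):
    return np.log10(Q0) + k * (np.exp(-alpha * Q0 * C) - 1.0)

# Pmax: the price at which demand elasticity (slope of the log-log
# curve) reaches -1, located numerically on a dense price grid.
C = np.linspace(0.01, 2.0, 2000)
elasticity = np.gradient(log10_Q(C), np.log10(C))
Pmax = float(C[np.argmin(np.abs(elasticity + 1.0))])
```

The hierarchical Bayesian model of the paper would place subject-level priors on Q0 and alpha rather than fitting each animal's curve separately.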

  17. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

Railway maintenance planners require a predictive model that can assess railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the two main quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial quality parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry for the main Portuguese railway line, Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures leads to a better model fit.

  18. Hierarchical functional model for automobile development; Jidosha kaihatsu no tame no kaisogata kino model

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, S [U-shin Ltd., Tokyo (Japan); Nagamatsu, M; Maruyama, K [Hokkaido Institute of Technology, Sapporo (Japan); Hiramatsu, S [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

A new approach to modeling is put forward in order to compose the virtual prototype which is indispensable for fully computer-integrated concurrent development of an automobile product. A basic concept of the hierarchical functional model is proposed as the concrete form of this new modeling technology. This model is used mainly for explaining and simulating the functions and efficiencies of both the parts and the total automobile product. All engineers engaged in the design and development of automobiles can collaborate with one another using this model. Some application examples are shown, and the usefulness of this model is demonstrated. 5 refs., 5 figs.

  19. Recognizing Chinese characters in digital ink from non-native language writers using hierarchical models

    Science.gov (United States)

    Bai, Hao; Zhang, Xi-wen

    2017-06-01

When Chinese is learned as a second language, its characters are taught step by step, from strokes to components to radicals, together with their complex relations. Chinese characters in digital ink from non-native writers are seriously deformed, so global recognition approaches perform poorly. A progressive bottom-up approach is therefore presented, based on hierarchical models. The hierarchical information includes strokes and hierarchical components, and each Chinese character is modeled as a hierarchical tree. Strokes in a Chinese character in digital ink are classified with Hidden Markov Models and concatenated into a stroke symbol sequence. The structure of components in the ink character is then extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored, and the recognition candidates are listed in descending order of score. The method is validated by testing 19,815 samples of handwritten Chinese characters written by foreign students.

  20. New aerial survey and hierarchical model to estimate manatee abundance

    Science.gov (United States)

Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.

    2011-01-01

    Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability
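
The group-size-dependent detection described above can be sketched with an assumed logistic form; the coefficients, and the independence between the two observers, are illustrative assumptions, not estimates from the study:

```python
import math

# Assumed logistic detection model: the probability that an observer
# detects a manatee group increases with group size. The coefficients
# below are invented for illustration.
def p_detect(group_size, b0=-0.5, b1=0.8):
    """Single-observer detection probability as a function of group size."""
    z = b0 + b1 * math.log(group_size)
    return 1.0 / (1.0 + math.exp(-z))

# dual-observer design: the group is missed only if both observers
# miss it (observer independence is assumed in this sketch)
def p_detect_dual(group_size):
    miss = 1.0 - p_detect(group_size)
    return 1.0 - miss * miss
```

In the paper's hierarchical model, these detection probabilities are estimated jointly with abundance from the dual-observer counts rather than fixed in advance.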

  1. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures.

    Science.gov (United States)

    Schargott, M

    2009-06-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface.

  2. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures

    Energy Technology Data Exchange (ETDEWEB)

Schargott, M [Institute of Mechanics, Technische Universitaet Berlin, Str. des 17. Juni 135, 10623 Berlin (Germany)], E-mail: martin.schargott@tu-berlin.de

    2009-06-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface.

  3. A mechanical model of biomimetic adhesive pads with tilted and hierarchical structures

    International Nuclear Information System (INIS)

    Schargott, M

    2009-01-01

    A 3D model for hierarchical biomimetic adhesive pads is constructed. It is based on the main principles of the adhesive pads of the Tokay gecko and consists of hierarchical layers of vertical or tilted beams, where each layer is constructed in such a way that no cohesion between adjacent beams can occur. The elastic and adhesive properties are calculated analytically and numerically. For the adhesive contact on stochastically rough surfaces, the maximum adhesion force increases with increasing number of hierarchical layers. Additional calculations show that the adhesion force also depends on the height spectrum of the rough surface

  4. Holographic effective field theories

    Energy Technology Data Exchange (ETDEWEB)

Martucci, Luca [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova, and INFN - Sezione di Padova, Via Marzolo 8, I-35131 Padova (Italy); Zaffaroni, Alberto [Dipartimento di Fisica, Università di Milano-Bicocca, and INFN - Sezione di Milano-Bicocca, I-20126 Milano (Italy)

    2016-06-28

    We derive the four-dimensional low-energy effective field theory governing the moduli space of strongly coupled superconformal quiver gauge theories associated with D3-branes at Calabi-Yau conical singularities in the holographic regime of validity. We use the dual supergravity description provided by warped resolved conical geometries with mobile D3-branes. Information on the baryonic directions of the moduli space is also obtained by using wrapped Euclidean D3-branes. We illustrate our general results by discussing in detail their application to the Klebanov-Witten model.

  5. Phases of Holographic QCD

    International Nuclear Information System (INIS)

    Lippert, Matthew

    2009-01-01

    We investigated the Sakai-Sugimoto model of large N QCD at nonzero temperature and baryon chemical potential and in the presence of background electric and magnetic fields. We studied the holographic representation of baryons and the deconfinement, chiral-symmetry breaking, and nuclear matter phase transitions. In a background electric field, chiral-symmetry breaking corresponds to an insulator-conductor transition. A magnetic field both catalyzes chiral-symmetry breaking and generates, in the confined phase, a pseudo-scalar gradient or, in the deconfined phase, an axial current. The resulting phase diagram is in qualitative agreement with studies of hot, dense QCD.

  6. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

Evaluation of software quality is an important aspect of controlling and managing software. By such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term 'usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on a detailed comparison of the proposed model with existing usability models.

  7. Bottom-up learning of hierarchical models in a class of deterministic POMDP environments

    Directory of Open Access Journals (Sweden)

    Itoh Hideaki

    2015-09-01

The theory of partially observable Markov decision processes (POMDPs) is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches for building such agents when the environments of the agents are unknown and large. To learn hierarchical models, bottom-up learning methods in which learning takes place in a layer-by-layer manner from the lowest to the highest layer are already extensively used in some research fields such as hidden Markov models and neural networks. However, little attention has been paid to bottom-up approaches for learning POMDP models. In this paper, we present a novel bottom-up learning algorithm for hierarchical POMDP models and prove that, by using this algorithm, a perfect model (i.e., a model that can perfectly predict future observations) can be learned at least in a class of deterministic POMDP environments.

  8. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    Science.gov (United States)

Sun, Kaiqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  9. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    Science.gov (United States)

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including occurrence or occupancy models for estimating species distribution; abundance models based on many sampling protocols, including distance sampling; capture-recapture models with individual effects; spatial capture-recapture models based on camera trapping and related methods; population and metapopulation dynamic models; and models of biodiversity, community structure and dynamics.

  10. Holographic complexity and spacetime singularities

    Energy Technology Data Exchange (ETDEWEB)

    Barbón, José L.F. [Instituto de Física Teórica IFT UAM/CSIC,C/ Nicolás Cabrera 13, Campus Universidad Autónoma de Madrid,Madrid 28049 (Spain); Rabinovici, Eliezer [Racah Institute of Physics, The Hebrew University,Jerusalem 91904 (Israel); Laboratoire de Physique Théorique et Hautes Energies, Université Pierre et Marie Curie, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2016-01-15

    We study the evolution of holographic complexity in various AdS/CFT models containing cosmological crunch singularities. We find that a notion of complexity measured by extremal bulk volumes tends to decrease as the singularity is approached in CFT time, suggesting that the corresponding quantum states have simpler entanglement structure at the singularity.

  11. Holographic complexity and spacetime singularities

    International Nuclear Information System (INIS)

    Barbón, José L.F.; Rabinovici, Eliezer

    2016-01-01

    We study the evolution of holographic complexity in various AdS/CFT models containing cosmological crunch singularities. We find that a notion of complexity measured by extremal bulk volumes tends to decrease as the singularity is approached in CFT time, suggesting that the corresponding quantum states have simpler entanglement structure at the singularity.

  12. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.

    Science.gov (United States)

    Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana

    2016-01-01

The omnipresent need for optimisation requires constant improvements of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually performed by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effective business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for the ranking of BPSS based on their technical characteristics, employing DEX and qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS with respect to currently available results.
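
The hierarchical aggregation underlying such decision models can be sketched as a weighted roll-up over a criteria tree; DEX itself aggregates with qualitative utility rules, so the weighted averages here, along with the tree, weights, and leaf scores, are invented stand-ins for illustration:

```python
# Invented criteria tree: leaf scores in [0, 1] roll up through
# weighted averages to an overall ranking score per BPSS tool.
leaf_scores = {"visual": 0.7, "simulation": 0.9,
               "statistics": 0.5, "reporting": 0.6}
tree = {
    "capability": (["simulation", "statistics"], [0.6, 0.4]),
    "usability": (["visual", "reporting"], [0.5, 0.5]),
    "overall": (["capability", "usability"], [0.6, 0.4]),
}

def score(node):
    """Recursively aggregate a node's score from its children."""
    if node in leaf_scores:
        return leaf_scores[node]
    children, weights = tree[node]
    return sum(w * score(c) for c, w in zip(children, weights))

overall = score("overall")
```

Ranking a set of tools then amounts to evaluating `score("overall")` once per tool and sorting.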

  13. Robust Real-Time Music Transcription with a Compositional Hierarchical Model.

    Science.gov (United States)

    Pesek, Matevž; Leonardis, Aleš; Marolt, Matija

    2017-01-01

    The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of the input data; transparency, which enables insights into the learned representation; and robustness and speed, which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to the hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers, with statistical co-occurrence acting as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance on multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.

  14. The AdS/CFT Correspondence and Holographic QCD

    International Nuclear Information System (INIS)

    Erlich, J.

    2012-01-01

    Holographic QCD is an extra-dimensional approach to modeling QCD resonances and their interactions. Holographic models encode information about chiral symmetry breaking, Weinberg sum rules, vector meson dominance, and other phenomenological features of QCD. There are two complementary approaches to holographic model building: a top-down approach which begins with string-theory brane configurations, and a bottom-up approach which is more phenomenological. In this talk I will describe the AdS/CFT correspondence, which motivates Holographic QCD, and the techniques used to build holographic models of QCD and to calculate observables in those models. I will also discuss an intriguing light cone approach to Holographic QCD discovered by Brodsky and De Teramond. (author)

  15. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model, with only the mild restriction that there is no hierarchical model on the item side. This result is valuable as it makes available all the well-developed modelling tools and extensions of generalized linear factor models. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real-data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  16. Neutrinos in the holographic dark energy model: constraints from latest measurements of expansion history and growth of structure

    International Nuclear Information System (INIS)

    Zhang, Jing-Fei; Zhao, Ming-Ming; Li, Yun-He; Zhang, Xin

    2015-01-01

    The model of holographic dark energy (HDE) with massive neutrinos and/or dark radiation is investigated in detail. The background and perturbation evolutions in the HDE model are calculated. We employ the parameterized post-Friedmann (PPF) approach to overcome the gravitational instability (divergence of the dark energy perturbations) caused by the equation-of-state parameter w evolving across the phantom divide w = −1 in the HDE model with c < 1. We thus derive the evolutions of the density perturbations of the various components and of the metric fluctuations in the HDE model. The impacts of massive neutrinos and dark radiation on the CMB anisotropy power spectrum and the matter power spectrum in the HDE scenario are discussed. Furthermore, we constrain the models of HDE with massive neutrinos and/or dark radiation by using the latest measurements of the expansion history and the growth of structure, including the Planck CMB temperature data, the baryon acoustic oscillation data, the JLA supernova data, the Hubble constant direct measurement, the cosmic shear data of weak lensing, the Planck CMB lensing data, and the redshift space distortions data. We find ∑m_ν < 0.186 eV (95% CL) and N_eff = 3.75 (+0.28, −0.32) in the HDE model from the constraints of these data.

  17. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    Science.gov (United States)

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best-fitting hierarchical linear model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for the growth trajectory.

  18. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    Science.gov (United States)

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of students' Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on the proportion of variance explained at the teacher level. We employed a two-level hierarchical linear model (HLM), with student and teacher data in the level-1 and level-2 models, respectively…
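A two-level HLM of this kind partitions outcome variance into a teacher (level-2) and a student (level-1) component. As a hedged sketch of that variance decomposition, not the study's actual fit, the following computes the intraclass correlation (share of variance at the teacher level) via a one-way random-effects ANOVA, the simplest special case of such a model; the GPA values are invented for illustration.

```python
# Variance decomposition for a balanced two-level design:
# between-teacher variance vs. within-teacher (student) variance.
from statistics import mean

classes = {  # teacher -> student GPAs (hypothetical data)
    "teacher_1": [3.2, 3.4, 3.1, 3.3],
    "teacher_2": [2.1, 2.3, 2.2, 2.4],
    "teacher_3": [2.8, 2.6, 2.7, 2.9],
}

def variance_components(groups):
    """Return (between-group, within-group) variance estimates."""
    n = len(next(iter(groups.values())))   # students per teacher (balanced design)
    k = len(groups)                        # number of teachers
    grand = mean(v for g in groups.values() for v in g)
    means = {t: mean(g) for t, g in groups.items()}
    msb = n * sum((m - grand) ** 2 for m in means.values()) / (k - 1)
    msw = sum((v - means[t]) ** 2
              for t, g in groups.items() for v in g) / (k * (n - 1))
    sigma2_between = max((msb - msw) / n, 0.0)  # teacher-level variance component
    return sigma2_between, msw

sb, sw = variance_components(classes)
icc = sb / (sb + sw)  # intraclass correlation: share of variance at teacher level
print(round(icc, 3))
```

A full HLM would additionally let slopes of student-level predictors vary by teacher; the ICC above is only the intercept-variance special case.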

  19. Heuristics for Hierarchical Partitioning with Application to Model Checking

    DEFF Research Database (Denmark)

    Möller, Michael Oliver; Alur, Rajeev

    2001-01-01

    Given a collection of connected components, it is often desired to cluster together parts of strong correspondence, yielding a hierarchical structure. We address the automation of this process and apply heuristics to battle the combinatorial and computational complexity. We define a cost function that captures the quality of a structure relative to the connections and favors shallow structures with a low degree of branching. Finding a structure with minimal cost is NP-complete. We present a greedy polynomial-time algorithm that approximates good solutions incrementally by local evaluation of a heuristic function. We argue for a heuristic function based on four criteria: the number of enclosed connections, the number of components, the number of touched connections, and the depth of the structure. We report on an application in the context of formal verification, where our algorithm serves as a preprocessor…
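The greedy scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: the score below rewards enclosed connections, penalizes touched (boundary-crossing) connections, and adds a size term standing in for the paper's component-count and depth criteria; the graph is invented.

```python
# Greedy bottom-up clustering driven by a local heuristic score.
from itertools import combinations

edges = {("a", "b"), ("b", "c"), ("c", "a"), ("d", "e")}  # illustrative graph

def score(cluster, edges):
    """Enclosed connections minus touched ones, minus a size penalty
    (the size term stands in for the component-count/depth criteria)."""
    enclosed = sum(1 for u, v in edges if u in cluster and v in cluster)
    touched = sum(1 for u, v in edges if (u in cluster) != (v in cluster))
    return enclosed - touched - 0.9 * (len(cluster) - 1)

def greedy_partition(nodes, edges):
    """Repeatedly merge the pair of clusters whose union scores best,
    while merging still improves on keeping the clusters separate."""
    clusters = [frozenset([n]) for n in nodes]
    while len(clusters) > 1:
        best = max(combinations(clusters, 2),
                   key=lambda p: score(p[0] | p[1], edges))
        merged = best[0] | best[1]
        if score(merged, edges) <= max(score(best[0], edges),
                                       score(best[1], edges)):
            break  # no local improvement: stop, keeping the structure shallow
        clusters = [c for c in clusters if c not in best] + [merged]
    return clusters

print(greedy_partition("abcde", edges))  # the triangle and the edge separate out
```

Each step evaluates the heuristic only on candidate pairs, which is what keeps the procedure polynomial even though exact minimization is NP-complete.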

  20. New holographic reconstruction of scalar-field dark-energy models in the framework of chameleon Brans-Dicke cosmology

    International Nuclear Information System (INIS)

    Chattopadhyay, Surajit; Pasqua, Antonio; Khurshudyan, Martiros

    2014-01-01

    Motivated by the work of Yang et al. (Mod. Phys. Lett. A 26:191, 2011), we report on a study of the new holographic dark energy (NHDE) model with energy density given by ρ_D = (3φ²/(4ω))(μH² + νH) in the framework of chameleon Brans-Dicke cosmology. We have studied the correspondence between the quintessence, the DBI-essence, and the tachyon scalar-field models with the NHDE model in the framework of chameleon Brans-Dicke cosmology. Deriving an expression for the Hubble parameter H and, accordingly, ρ_D in the context of chameleon Brans-Dicke cosmology, we have reconstructed the potentials and dynamics for these scalar-field models. Furthermore, we have examined the stability of the obtained solutions of the crossing of the phantom divide under a quantum correction of massless conformally invariant fields, and we have seen that the quantum correction could be small when the phantom crossing occurs and the obtained solutions of the phantom crossing could be stable under the quantum correction. It has also been noted that the potential increases as the matter-chameleon coupling gets stronger with the evolution of the universe. (orig.)

  1. New holographic reconstruction of scalar-field dark-energy models in the framework of chameleon Brans-Dicke cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Chattopadhyay, Surajit [Pailan College of Management and Technology, Kolkata (India); Pasqua, Antonio [University of Trieste, Department of Physics, Trieste (Italy); Khurshudyan, Martiros [Yerevan State University, Department of Theoretical Physics, Yerevan (Armenia); Potsdam-Golm Science Park, Max Planck Institute of Colloids and Interfaces, Potsdam (Germany)

    2014-09-15

    Motivated by the work of Yang et al. (Mod. Phys. Lett. A 26:191, 2011), we report on a study of the new holographic dark energy (NHDE) model with energy density given by ρ{sub D} = (3φ{sup 2})/(4ω)(μH{sup 2} + νH) in the framework of chameleon Brans-Dicke cosmology. We have studied the correspondence between the quintessence, the DBI-essence, and the tachyon scalar-field models with the NHDE model in the framework of chameleon Brans-Dicke cosmology. Deriving an expression for the Hubble parameter H and, accordingly, ρ{sub D} in the context of chameleon Brans-Dicke cosmology, we have reconstructed the potentials and dynamics for these scalar-field models. Furthermore, we have examined the stability of the obtained solutions of the crossing of the phantom divide under a quantum correction of massless conformally invariant fields, and we have seen that the quantum correction could be small when the phantom crossing occurs and the obtained solutions of the phantom crossing could be stable under the quantum correction. It has also been noted that the potential increases as the matter-chameleon coupling gets stronger with the evolution of the universe. (orig.)

  2. Cosmological model from the holographic equipartition law with a modified Renyi entropy

    Energy Technology Data Exchange (ETDEWEB)

    Komatsu, Nobuyoshi [Kanazawa University, Department of Mechanical Systems Engineering, Kanazawa, Ishikawa (Japan)

    2017-04-15

    Cosmological equations were recently derived by Padmanabhan from the expansion of cosmic space due to the difference between the degrees of freedom on the surface and in the bulk of a region of space. In this study, a modified Renyi entropy is applied to Padmanabhan's 'holographic equipartition law', by regarding the Bekenstein-Hawking entropy as a nonextensive Tsallis entropy and using the logarithmic formula of the original Renyi entropy. Consequently, an acceleration equation including an extra driving term (such as a time-varying cosmological term) can be derived in a homogeneous, isotropic, and spatially flat universe. When a specific condition is mathematically satisfied, the extra driving term is found to behave like a constant, as if it were a cosmological constant. Interestingly, the order of magnitude of this constant-like term is naturally consistent with the order of the cosmological constant measured by observations, because the specific condition constrains the value of the constant-like term. (orig.)
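The entropy substitution the abstract describes can be written compactly. The expression below is the standard Tsallis-Renyi relation with nonextensivity parameter λ = 1 − q, given here as a hedged sketch rather than a formula quoted from the paper:

```latex
S_{BH} = \frac{A}{4G}, \qquad
S_{R} = \frac{1}{\lambda}\,\ln\!\left(1 + \lambda\, S_{BH}\right),
\qquad \lim_{\lambda \to 0} S_{R} = S_{BH}.
```

In the λ → 0 limit the modified entropy reduces to the Bekenstein-Hawking area law, which is why corrections controlled by λ can mimic a small, constant-like driving term in the acceleration equation.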

  3. The Hierarchical Trend Model for property valuation and local price indices

    NARCIS (Netherlands)

    Francke, M.K.; Vos, G.A.

    2002-01-01

    This paper presents a hierarchical trend model (HTM) for selling prices of houses, addressing three main problems: the spatial and temporal dependence of selling prices and the dependency of price index changes on housing quality. In this model the general price trend, cluster-level price trends,

  4. Measuring Service Quality in Higher Education: Development of a Hierarchical Model (HESQUAL)

    Science.gov (United States)

    Teeroovengadum, Viraiyan; Kamalanabhan, T. J.; Seebaluck, Ashley Keshwar

    2016-01-01

    Purpose: This paper aims to develop and empirically test a hierarchical model for measuring service quality in higher education. Design/methodology/approach: The first phase of the study consisted of qualitative research methods and a comprehensive literature review, which allowed the development of a conceptual model comprising 53 service quality…

  5. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    Science.gov (United States)

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…

  6. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Chad Babcock; Andrew O. Finley; John B. Bradford; Randy Kolka; Richard Birdsey; Michael G. Ryan

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both...

  7. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  8. Galactic chemical evolution in hierarchical formation models - I. Early-type galaxies in the local Universe

    NARCIS (Netherlands)

    Arrigoni, Matías; Trager, Scott C.; Somerville, Rachel S.; Gibson, Brad K.

    We study the metallicities and abundance ratios of early-type galaxies in cosmological semi-analytic models (SAMs) within the hierarchical galaxy formation paradigm. To achieve this we implemented a detailed galactic chemical evolution model and can now predict abundances of individual elements for

  9. Galactic chemical evolution in hierarchical formation models : I. Early-type galaxies in the local Universe

    NARCIS (Netherlands)

    Arrigoni, Matias; Trager, Scott C.; Somerville, Rachel S.; Gibson, Brad K.

    2010-01-01

    We study the metallicities and abundance ratios of early-type galaxies in cosmological semi-analytic models (SAMs) within the hierarchical galaxy formation paradigm. To achieve this we implemented a detailed galactic chemical evolution model and can now predict abundances of individual elements for

  10. Hierarchical Bayesian modeling of the space - time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Osei, F.B.; Duker, Alfred A.; Stein, A.

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  11. A Hybrid PO - Higher-Order Hierarchical MoM Formulation using Curvilinear Geometry Modeling

    DEFF Research Database (Denmark)

    Jørgensen, E.; Meincke, Peter; Breinbjerg, Olav

    2003-01-01

    which implies a very modest memory requirement. Nevertheless, the hierarchical feature of the basis functions maintains the ability to treat small geometrical details efficiently. In addition, the scatterer is modelled with higher-order curved patches which allows accurate modelling of curved surfaces...

  12. Soft tissue deformation using a Hierarchical Finite Element Model.

    Science.gov (United States)

    Faraci, Alessandro; Bello, Fernando; Darzi, Ara

    2004-01-01

    Simulating soft tissue deformation in real time has become increasingly important in order to provide a realistic virtual environment for training surgical skills. Several methods have been proposed with the aim of rendering the mechanical and physiological behaviour of human organs in real time, one of the most popular being the Finite Element Method (FEM). In this paper we present a new approach to the solution of the FEM problem, introducing the concepts of parent and child meshes within the development of a hierarchical FEM. The online selection of the child mesh is presented, with the purpose of adapting the mesh hierarchy in real time. This permits further refinement of the child mesh, increasing the detail of the deformation without slowing down the simulation, and makes it possible to integrate force feedback. The results presented demonstrate the application of our proposed framework using a desktop virtual reality (VR) system that incorporates stereo vision with co-located haptics via a desktop Phantom force-feedback device.

  13. Transformation of renormalization groups in 2N-component fermion hierarchical model

    International Nuclear Information System (INIS)

    Stepanov, R.G.

    2006-01-01

    The 2N-component fermion model on a hierarchical lattice is studied. Explicit formulae are presented for the renormalization-group transformation in the space of coefficients that define the Grassmann-valued density of the free measure. The inverse renormalization-group transformation is calculated. Finding the fixed points of the renormalization group is reduced to solving a set of algebraic equations. An interesting connection between the renormalization-group transformations in the boson and fermion hierarchical models is found: one transformation is obtained from the other by the substitution of −N for N. [ru]

  14. New holographic dark energy model with constant bulk viscosity in modified f(R,T) gravity theory

    Science.gov (United States)

    Srivastava, Milan; Singh, C. P.

    2018-06-01

    The aim of this paper is to study a new holographic dark energy (HDE) model in the modified f(R,T) gravity theory within the framework of a flat Friedmann-Robertson-Walker model with bulk viscous matter content. The negative pressure caused by the bulk viscosity can play the role of a dark energy component and drive the accelerating expansion of the universe; observing such phenomena with bulk viscosity is the motivation of this paper. In the specific model f(R,T) = R + λT, where R is the Ricci scalar, T the trace of the energy-momentum tensor, and λ a constant, we find solutions for the non-viscous and viscous new HDE models. We analyze the new HDE model with constant bulk viscosity, ζ = ζ_0 = const., to explain the present accelerated expansion of the universe. We classify all possible scenarios (deceleration, acceleration, and their transition) over the possible positive and negative ranges of λ, subject to the constraint on ζ_0, to analyze the evolution of the universe. We obtain solutions for the scale factor and the deceleration parameter and discuss the evolution of the universe. We observe future finite-time singularities of type I and III under certain constraints on λ. We also investigate the statefinder and Om diagnostics of the viscous new HDE model to discriminate it from other existing dark energy models. At late times the viscous new HDE model approaches the ΛCDM model. We also discuss the thermodynamics and entropy of the model and find that it satisfies the second law of thermodynamics.

  15. Fuzzy hierarchical model for risk assessment principles, concepts, and practical applications

    CERN Document Server

    Chan, Hing Kai

    2013-01-01

    Risk management is often complicated by situational uncertainties and the subjective preferences of decision makers. Fuzzy Hierarchical Model for Risk Assessment introduces a fuzzy-based hierarchical approach that solves risk management problems by considering both qualitative and quantitative criteria to tackle imprecise information. The approach is illustrated through a number of case studies using examples from the food, fashion and electronics sectors, covering a range of applications including supply chain management, green product design and green initiatives. These practical examples explore how the method can be adapted and fine-tuned to fit other industries as well. Supported by an extensive literature review, Fuzzy Hierarchical Model for Risk Assessment comprehensively introduces a new method for project managers across all industries, as well as for researchers in risk management.

  16. The Holographic Electron Density Theorem, de-quantization, re-quantization, and nuclear charge space extrapolations of the Universal Molecule Model

    Science.gov (United States)

    Mezey, Paul G.

    2017-11-01

    Two strongly related theorems on non-degenerate ground-state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive-volume density fragment is already complete; the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn theorem is already provided, in full, by any positive-volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.

  17. Experiments in Error Propagation within Hierarchal Combat Models

    Science.gov (United States)

    2015-09-01

    …a stochastic Lanchester campaign model that contains 18 Blue and 25 Red submarines. The outputs of the campaign model are analyzed statistically. Inputs are sampled in a variety of ways, including just the mean, and used to calculate the attrition coefficients for the stochastic Lanchester campaign model.

  18. INFOGRAPHIC MODELING OF THE HIERARCHICAL STRUCTURE OF THE MANAGEMENT SYSTEM EXPOSED TO AN INNOVATIVE CONFLICT

    Directory of Open Access Journals (Sweden)

    Chulkov Vitaliy Olegovich

    2012-12-01

    This article deals with the infographic modeling of hierarchical management systems exposed to innovative conflicts. The authors analyze the factors that serve as conflict drivers in the construction management environment. The causes of innovative conflicts include changes in the hierarchical structures of management systems, workers' adjustment to new management conditions, changes in ideology, etc. The conflicts under consideration may involve contradictions between customers' requests and the legislation, risks originating from such contradictions, conflicts arising from failures to comply with accepted standards of conduct, etc. One of the main objectives of the theory of hierarchical structures is to develop a model capable of projecting potential innovative conflicts. The models described in the paper reflect dynamic changes in the patterns of external impacts within the conflict area. The simplest model element is a monad, an indivisible set of characteristics of the participants at a pre-set level. The interaction between two monads forms a dyad. Modeling situations that involve different numbers of monads, dyads, resources, and impacts can improve the methods used to control and manage hierarchical structures in the construction industry. However, in the absence of mathematical models to simulate conflict-related events, processes, and situations, any research into, projection of, and management of interpersonal and group-to-group conflicts must be performed in the legal environment.

  19. Application of hierarchical genetic models to Raven and WAIS subtests: a Dutch twin study.

    Science.gov (United States)

    Rijsdijk, Frühling V; Vernon, P A; Boomsma, Dorret I

    2002-05-01

    Hierarchical models of intelligence are highly informative and widely accepted. Application of these models to twin data, however, is sparse. This paper addresses the question of how well a genetic hierarchical model fits the Wechsler Adult Intelligence Scale (WAIS) subtests and the Raven Standard Progressive Matrices test score, collected in 194 pairs of 18-year-old Dutch twins. We investigated whether first-order group factors possess genetic and environmental variance independent of the higher-order general factor, and whether the hierarchical structure is significant for all sources of variance. A hierarchical model with the 3 Cohen group factors (verbal comprehension, perceptual organisation and freedom-from-distractibility) and a higher-order g factor showed the best fit to the phenotypic data and to the additive genetic influences (A), whereas the unique environmental source of variance (E) could be modeled by a single general factor and specifics. There was no evidence for common environmental influences. The covariation among the WAIS group factors and the covariation between the group factors and the Raven are predominantly influenced by a second-order genetic factor and strongly support the notion of a biological basis of g.

  20. Constructive use of holographic projections

    International Nuclear Information System (INIS)

    Schroer, Bert

    2008-01-01

    Revisiting the old problem of existence of interacting models of QFT with new conceptual ideas and mathematical tools, one arrives at a novel view about the nature of QFT. The recent success of algebraic methods in establishing the existence of factorizing models suggests new directions for a more intrinsic constructive approach beyond Lagrangian quantization. Holographic projection simplifies certain properties of the bulk theory and hence is a promising new tool for these new attempts. (author)

  1. Constructive use of holographic projections

    Energy Technology Data Exchange (ETDEWEB)

    Schroer, Bert [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Institut fuer Theoretische Physik der FU, Berlin (Germany)

    2008-07-01

    Revisiting the old problem of existence of interacting models of QFT with new conceptual ideas and mathematical tools, one arrives at a novel view about the nature of QFT. The recent success of algebraic methods in establishing the existence of factorizing models suggests new directions for a more intrinsic constructive approach beyond Lagrangian quantization. Holographic projection simplifies certain properties of the bulk theory and hence is a promising new tool for these new attempts. (author)

  2. Holographic multiverse and conformal invariance

    Energy Technology Data Exchange (ETDEWEB)

    Garriga, Jaume [Departament de Física Fonamental i Institut de Ciències del Cosmos, Universitat de Barcelona, Martí i Franquès 1, 08193 Barcelona (Spain); Vilenkin, Alexander, E-mail: jaume.garriga@ub.edu, E-mail: vilenkin@cosmos.phy.tufts.edu [Institute of Cosmology, Department of Physics and Astronomy, Tufts University, 212 College Ave., Medford, MA 02155 (United States)

    2009-11-01

    We consider a holographic description of the inflationary multiverse, according to which the wave function of the universe is interpreted as the generating functional for a lower dimensional Euclidean theory. We analyze a simple model where transitions between inflationary vacua occur through bubble nucleation, and the inflating part of spacetime consists of de Sitter regions separated by thin bubble walls. In this model, we present some evidence that the dual theory is conformally invariant in the UV.

  3. Holographic multiverse and conformal invariance

    International Nuclear Information System (INIS)

    Garriga, Jaume; Vilenkin, Alexander

    2009-01-01

    We consider a holographic description of the inflationary multiverse, according to which the wave function of the universe is interpreted as the generating functional for a lower dimensional Euclidean theory. We analyze a simple model where transitions between inflationary vacua occur through bubble nucleation, and the inflating part of spacetime consists of de Sitter regions separated by thin bubble walls. In this model, we present some evidence that the dual theory is conformally invariant in the UV

  4. A Hierarchical Bayesian Model to Predict Self-Thinning Line for Chinese Fir in Southern China.

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    Self-thinning is a dynamic equilibrium between forest growth and mortality at full site occupancy. Parameters of self-thinning lines are often confounded by differences across stand and site conditions. To overcome the problem of hierarchical and repeated measures, we used a hierarchical Bayesian method to estimate the self-thinning line. The results showed that the self-thinning line for Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantations was not sensitive to the initial planting density. The uncertainty of the model predictions was mostly due to within-subject variability. The simulation precision of the hierarchical Bayesian method was better than that of the stochastic frontier function (SFF). The hierarchical Bayesian method provided a reasonable explanation of the impact of other variables (site quality, soil type, aspect, etc.) on the self-thinning line and yielded the posterior distributions of the parameters of the self-thinning line. Research on the self-thinning relationship could benefit from the use of the hierarchical Bayesian method.
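For orientation, the self-thinning line itself is a straight line in log-log space between stand density and mean tree size. The sketch below fits it by ordinary least squares on invented data lying near the classical −3/2 power law; the record's point is that a hierarchical Bayesian fit handles grouped, repeated measurements better than such a pooled fit or an SFF, which this toy does not attempt.

```python
# Ordinary least-squares fit of a self-thinning line in log-log space.
import math

# hypothetical (mean tree size, stems per hectare) pairs near a -3/2 power law
stands = [(0.01, 10000), (0.02, 3536), (0.04, 1250), (0.08, 442)]

xs = [math.log(s) for s, _ in stands]
ys = [math.log(n) for _, n in stands]

def ols(xs, ys):
    """Slope and intercept of a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

slope, intercept = ols(xs, ys)
print(round(slope, 2))  # close to the classical -3/2 self-thinning exponent
```

A hierarchical version would give each stand its own intercept drawn from a common distribution, shrinking noisy stand-level estimates toward the population line.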

  5. Time to failure of hierarchical load-transfer models of fracture

    DEFF Research Database (Denmark)

    Vázquez-Prada, M; Gómez, J B; Moreno, Y

    1999-01-01

    The time to failure, T, of dynamical models of fracture for a hierarchical load-transfer geometry is studied. Using a probabilistic strategy and juxtaposing hierarchical structures of height n, we devise an exact method to compute T for structures of height n+1. Bounding T for large n, we are able to deduce that the time to failure tends to a nonzero value as n tends to infinity. This numerical conclusion is deduced for both power-law and exponential breakdown rules.

  6. Interacting viscous entropy-corrected holographic scalar field models of dark energy with time-varying G in modified FRW cosmology

    International Nuclear Information System (INIS)

    Adabi, Farzin; Karami, Kayoomars; Felegary, Fereshte; Azarmi, Zohre

    2012-01-01

    We study the entropy-corrected version of the holographic dark energy (HDE) model in the framework of modified Friedmann-Robertson-Walker cosmology. We consider a non-flat universe filled with an interacting viscous entropy-corrected HDE (ECHDE) with dark matter. Also included in our model is the case of the variable gravitational constant G. We obtain the equation of state and the deceleration parameters of the interacting viscous ECHDE. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting viscous ECHDE model with time-varying G.

  7. Diagnosing holographic type dark energy models with the Statefinder hierarchy, composite null diagnostic and w-w' pair

    Science.gov (United States)

    Zhao, Ze; Wang, Shuang

    2018-03-01

    The main purpose of this work is to distinguish various holographic type dark energy (DE) models, including the ΛHDE, HDE, NADE, and RDE models, by using various diagnostic tools. The first diagnostic tool is the Statefinder hierarchy, in which the evolution of the Statefinder hierarchy parameters S3^(1)(z) and S4^(1)(z) is studied. The second is the composite null diagnostic (CND), in which the trajectories of {S3^(1), ε} and {S4^(1), ε} are investigated, where ε is the fractional growth parameter. The last is the w-w' analysis, where w is the equation of state for DE and the prime denotes the derivative with respect to ln a. In the analysis we consider two cases: varying the current fractional DE density Ωde0 and varying the DE model parameter C. We find that: (1) both the Statefinder hierarchy and the CND have a qualitative impact on ΛHDE, but only a quantitative impact on HDE. (2) S4^(1) can lead to larger differences than S3^(1), while the CND pair has a stronger ability to distinguish different models than the Statefinder hierarchy. (3) For the case of varying C, the {w, w'} pair has a qualitative impact on ΛHDE; for the case of varying Ωde0, the {w, w'} pair has only a quantitative impact; these results differ from the cases of HDE, RDE, and NADE, in which the {w, w'} pair has only a quantitative impact on these models. In conclusion, compared with HDE, RDE, and NADE, the ΛHDE model can be easily distinguished by using these diagnostic tools.
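For orientation, the lowest member of the Statefinder hierarchy has a simple closed form in the textbook special case of a constant equation of state w; the holographic models above have evolving w(z), so their S3^(1)(z) must be obtained by integrating the background equations instead.

```python
def S3_1(omega_de, w):
    """Statefinder S3^(1) for a dark energy component with constant w:
    S3^(1) = 1 + (9/2) * w * (1 + w) * Omega_de.
    Equals 1 for LambdaCDM (w = -1), which is what makes it a null diagnostic."""
    return 1.0 + 4.5 * w * (1.0 + w) * omega_de
```

Any departure of a model's trajectory from the ΛCDM value S3^(1) = 1 is what the diagnostics above exploit.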

  8. A holographic model of reminiscence in the poetry of Czesław Miłosz

    Directory of Open Access Journals (Sweden)

    Agnieszka Rydz

    2011-01-01

    Full Text Available For the model of nostalgic memory in the poetry of Czesław Miłosz, based on the psychological phenomenon of reminiscence, an allegorical counterpart can be identified in the hologram metaphor (Douwe Draaisma). The question “Who am I?” reappears in Miłosz's late lyrical poetry when he ponders both his own biography and the biographies of others. A response is provided, for instance, in the concept of the dialectic human biography (of subject and object) formulated by Paul Ricoeur in his philosophical analyses. Human memory remains equally dialectic, poised in the antinomy between memory and oblivion. Still, retrieving a remembered detail evokes the whole experience along with its rich context. That is the holographic effect, described in the literature as the “ghost image”. In poetry, too, the effacing of a memory trace does not constitute a barrier to the restitution of recollection. “The Sun of Memory” beams through the lyric poetry of the author of the collection “This”.

  9. Holographic Moire Contouring

    Science.gov (United States)

    Sciammarella, C. A.; Sainov, Ventseslav; Simova, Eli

    1990-04-01

    Theoretical analysis and experimental results on holographic moire contouring (HMC) of diffusely reflecting objects are presented. The sensitivity and application constraints of the method are discussed. A high signal-to-noise ratio and high fringe contrast are achieved through the use of high quality silver halide holographic plates HP-650. Good agreement between theoretical and experimental results is observed.
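In two-wavelength holographic contouring, the depth sensitivity follows from the synthetic wavelength of the two recordings. A quick sketch of the standard relation — the 514.5 nm and 488 nm argon-ion lines are example values, and normal illumination/viewing is assumed; the paper's geometry may differ:

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic (equivalent) wavelength of a two-wavelength recording."""
    return lam1 * lam2 / abs(lam1 - lam2)

def contour_interval(lam1, lam2):
    """Depth difference between adjacent contour fringes, assuming normal
    illumination and viewing (one fringe per half synthetic wavelength)."""
    return synthetic_wavelength(lam1, lam2) / 2.0

# Example: the 514.5 nm and 488 nm argon-ion laser lines
dz = contour_interval(514.5e-9, 488e-9)   # a few micrometres per fringe
```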

  10. From Playability to a Hierarchical Game Usability Model

    OpenAIRE

    Nacke, Lennart E.

    2010-01-01

    This paper presents a brief review of current game usability models. This leads to the conception of a high-level game development-centered usability model that integrates current usability approaches in game industry and game research.

  11. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  12. Predicting Examination Performance Using an Expanded Integrated Hierarchical Model of Test Emotions and Achievement Goals

    Science.gov (United States)

    Putwain, Dave; Deveney, Carolyn

    2009-01-01

    The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…

  13. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    Science.gov (United States)

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

  14. The Hierarchical Factor Model of ADHD: Invariant across Age and National Groupings?

    Science.gov (United States)

    Toplak, Maggie E.; Sorge, Geoff B.; Flora, David B.; Chen, Wai; Banaschewski, Tobias; Buitelaar, Jan; Ebstein, Richard; Eisenberg, Jacques; Franke, Barbara; Gill, Michael; Miranda, Ana; Oades, Robert D.; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph; Sonuga-Barke, Edmund; Steinhausen, Hans-Christoph; Thompson, Margaret; Tannock, Rosemary; Asherson, Philip; Faraone, Stephen V.

    2012-01-01

    Objective: To examine the factor structure of attention-deficit/hyperactivity disorder (ADHD) in a clinical sample of 1,373 children and adolescents with ADHD and their 1,772 unselected siblings recruited from different countries across a large age range. Hierarchical and correlated factor analytic models were compared separately in the ADHD and…

  15. Symptom structure of PTSD: support for a hierarchical model separating core PTSD symptoms from dysphoria

    NARCIS (Netherlands)

    Rademaker, Arthur R.; van Minnen, Agnes; Ebberink, Freek; van Zuiden, Mirjam; Hagenaars, Muriel A.; Geuze, Elbert

    2012-01-01

    As of yet, no collective agreement has been reached regarding the precise factor structure of posttraumatic stress disorder (PTSD). Several alternative factor-models have been proposed in the last decades. The current study examined the fit of a hierarchical adaptation of the Simms et al. (2002)

  16. Hierarchical models for informing general biomass equations with felled tree data

    Science.gov (United States)

    Brian J. Clough; Matthew B. Russell; Christopher W. Woodall; Grant M. Domke; Philip J. Radtke

    2015-01-01

    We present a hierarchical framework that uses a large multispecies felled tree database to inform a set of general models for predicting tree foliage biomass, with accompanying uncertainty, within the FIA database. Results suggest significant prediction uncertainty for individual trees and reveal higher errors when predicting foliage biomass for larger trees and for...

  17. Perfect observables for the hierarchical non-linear O(N)-invariant σ-model

    International Nuclear Information System (INIS)

    Wieczerkowski, C.; Xylander, Y.

    1995-05-01

    We compute moving eigenvalues and the eigenvectors of the linear renormalization group transformation for observables along the renormalized trajectory of the hierarchical non-linear O(N)-invariant σ-model by means of perturbation theory in the running coupling constant. Moving eigenvectors are defined as solutions to a Callan-Symanzik type equation. (orig.)

  18. Intraclass Correlation Coefficients in Hierarchical Designs: Evaluation Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko

    2011-01-01

    Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…
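For comparison with the latent variable approach, the classical one-way ANOVA (moment-based) estimator of the intraclass correlation for balanced two-level data is a few lines; this is the textbook estimator, not the procedure of the paper.

```python
import numpy as np

def icc_anova(y):
    """One-way ANOVA estimate of the intraclass correlation for balanced
    two-level data y[j, i]: j indexes clusters, i indexes members."""
    J, n = y.shape
    group_means = y.mean(axis=1)
    grand = y.mean()
    msb = n * ((group_means - grand) ** 2).sum() / (J - 1)   # between clusters
    msw = ((y - group_means[:, None]) ** 2).sum() / (J * (n - 1))  # within
    return (msb - msw) / (msb + (n - 1) * msw)
```

With cluster variance tau^2 and residual variance sigma^2, the estimand is tau^2 / (tau^2 + sigma^2).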

  19. Polychromatic holographic plasma diagnostics

    International Nuclear Information System (INIS)

    Zhiglinskij, A.G.; Morozov, A.O.

    1992-01-01

    The properties of holographic interferometry are reviewed and the advantages of this method for plasma diagnostics are indicated. The main results obtained by holographic interferometry in studies of various types of plasma are considered. Special attention is paid to multiwave plasma diagnostics, the need for which arises, as a rule, from the multicomponent composition of the plasma. Eight laser and gas-discharge sources, and the holographic schemes that make it possible to realize polychromatic holographic interferometry of plasma, are considered. The advantages of the method are demonstrated by examples of polychromatic holographic diagnostics of an arc discharge and of a discharge in a hollow cathode. Theoretical works determining the domain of applicability of resonance polychromatic interferometry are reviewed.

  20. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Science.gov (United States)

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multilevel structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results than standard models when the data set has a structure going as far down as individuals,…

  1. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies could be included in these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
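The core re-weighting step can be sketched in a few lines: given one log-likelihood per candidate age model (scored against the other data sources), a uniform prior over the ensemble updates to posterior weights via Bayes' rule. This is a minimal sketch of the idea, not the paper's full hierarchical sampler.

```python
import numpy as np

def update_age_model_weights(log_lik):
    """Posterior probabilities of an ensemble of age models under a uniform
    prior: weights proportional to exp(log-likelihood), computed stably."""
    log_lik = np.asarray(log_lik, dtype=float)
    w = np.exp(log_lik - log_lik.max())   # subtract max for numerical stability
    return w / w.sum()
```

Equal log-likelihoods recover the current de facto standard of equal weighting; any information in the data tilts the weights away from uniform.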

  2. A hierarchical causal modeling for large industrial plants supervision

    International Nuclear Information System (INIS)

    Dziopa, P.; Leyval, L.

    1994-01-01

    A supervision system has to analyse the current state of the process and the way it will evolve after a modification of the inputs or a disturbance. It is proposed to base this analysis on a hierarchy of models, which differ in the number of variables involved and in the abstraction level used to describe their temporal evolution. In a first step, special attention is paid to building the causal models, starting from the most abstract one. Once the hierarchy of models has been built, the parameters of the most detailed model are estimated. Several models of different abstraction levels can be used for on-line prediction. These methods have been applied to a nuclear reprocessing plant. The abstraction level can be chosen on line by the operator. Moreover, when an abnormal process behaviour is detected, a more detailed model is automatically triggered in order to focus the operator's attention on the suspected subsystem. (authors). 11 refs., 11 figs

  3. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of … The kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function
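Property (i) — predictive accuracy improving as data arrive — is easiest to see in the conjugate Gamma-Poisson update that underlies Bayesian Poisson rate models. This generic sketch omits the kernel component of the proposed model.

```python
def gamma_poisson_update(alpha, beta, counts):
    """Conjugate Bayesian update for a Poisson rate with Gamma(alpha, beta)
    prior: observing counts y_1..y_n yields posterior
    Gamma(alpha + sum(y), beta + n). Each new batch of data sharpens the
    posterior, which is the dynamic-updating property noted above."""
    n = len(counts)
    return alpha + sum(counts), beta + n

def posterior_mean(alpha, beta):
    """Posterior mean rate of a Gamma(alpha, beta) distribution."""
    return alpha / beta
```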

  4. Hierarchical modelling of line commutated power systems used in particle accelerators using Saber

    International Nuclear Information System (INIS)

    Reimund, J.A.

    1993-01-01

    This paper discusses the use of hierarchical simulation models in the program Saber™ for predicting the magnet ripple currents generated by the power supply/output filter combination. Modeling of an entire power system connected to output filters and particle accelerator ring magnets is presented. Special emphasis is placed on the modeling of power source imbalances caused by transformer impedance imbalances and utility variances. The effect of these imbalances on the harmonic content of the ripple current is also investigated.

  5. A test of the hierarchical model of litter decomposition

    DEFF Research Database (Denmark)

    Bradford, Mark A.; Veen, G. F.; Bonis, Anne

    2017-01-01

    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature...

  6. Baryon physics in holographic QCD

    Directory of Open Access Journals (Sweden)

    Alex Pomarol

    2009-03-01

    Full Text Available In a simple holographic model for QCD in which the Chern–Simons term is incorporated to take into account the QCD chiral anomaly, we show that baryons arise as stable solitons which are the 5D analogs of 4D skyrmions. Contrary to 4D skyrmions and previously considered holographic scenarios, these solitons have sizes larger than the inverse cut-off of the model, and therefore they are predictable within our effective field theory approach. We perform a numerical determination of several static properties of the nucleons and find a satisfactory agreement with data. We also calculate the amplitudes of “anomalous” processes induced by the Chern–Simons term in the meson sector, such as ω→πγ and ω→3π. A combined fit to baryonic and mesonic observables leads to an agreement with experiments within 16%.

  7. Simulating individual-based models of epidemics in hierarchical networks

    NARCIS (Netherlands)

    Quax, R.; Bader, D.A.; Sloot, P.M.A.

    2009-01-01

    Current mathematical modeling methods for the spreading of infectious diseases are too simplified and do not scale well. We present the Simulator of Epidemic Evolution in Complex Networks (SEECN), an efficient simulator of detailed individual-based models by parameterizing separate dynamics
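The individual-based flavor of such simulators can be sketched as a discrete-time SIR process on an explicit contact network. This is a generic toy, not SEECN's parameterized dynamics.

```python
import random

def simulate_sir(adj, p_infect=0.3, p_recover=0.1, seed_node=0, rng=None):
    """Discrete-time SIR epidemic on a contact network.

    adj: dict mapping node -> list of neighbours. Each step, every currently
    infected node infects each susceptible neighbour with prob p_infect, then
    recovers with prob p_recover. Returns the number of nodes ever infected.
    """
    rng = rng or random.Random(0)
    status = {v: "S" for v in adj}
    status[seed_node] = "I"
    ever = {seed_node}
    while any(s == "I" for s in status.values()):
        infected = [v for v, s in status.items() if s == "I"]
        for v in infected:
            for u in adj[v]:
                if status[u] == "S" and rng.random() < p_infect:
                    status[u] = "I"
                    ever.add(u)
            if rng.random() < p_recover:
                status[v] = "R"
    return len(ever)
```

Hierarchical network structure enters through how `adj` is generated (e.g., dense intra-community and sparse inter-community links).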

  8. A three-component, hierarchical model of executive attention

    OpenAIRE

    Whittle, Sarah; Pantelis, Christos; Testa, Renee; Tiego, Jeggan; Bellgrove, Mark

    2017-01-01

    Executive attention refers to the goal-directed control of attention. Existing models of executive attention distinguish between three correlated, but empirically dissociable, factors related to selectively attending to task-relevant stimuli (Selective Attention), inhibiting task-irrelevant responses (Response Inhibition), and actively maintaining goal-relevant information (Working Memory Capacity). In these models, Selective Attention and Response Inhibition are moderately strongly correlate...

  9. An open-population hierarchical distance sampling model

    Science.gov (United States)

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B.; Royle, J. Andrew; Sillett, T. Scott

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  10. An open-population hierarchical distance sampling model.

    Science.gov (United States)

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott

    2015-02-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
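The detection-correction step at the heart of distance sampling can be sketched for a single survey with a half-normal detection function (a common choice, assumed here). The open-population model above additionally links abundance across surveys through population dynamics, which this sketch omits.

```python
import math
import numpy as np

def half_normal_pbar(sigma, width):
    """Average detection probability over [0, width] for a half-normal
    detection function g(d) = exp(-d^2 / (2 sigma^2))."""
    return sigma * math.sqrt(math.pi / 2) * math.erf(width / (sigma * math.sqrt(2))) / width

def estimate_abundance(distances, width, n_grid=400):
    """Grid maximum likelihood for sigma, then correct the raw count by the
    mean detection probability: N_hat = n / pbar(sigma_hat)."""
    d = np.asarray(distances, dtype=float)
    sigmas = np.linspace(0.02, 2.0, n_grid) * width

    def loglik(s):
        # log f(d) = log g(d) - log(width * pbar); the constant width term drops
        return -np.sum(d**2) / (2 * s**2) - len(d) * math.log(half_normal_pbar(s, width))

    sigma = max(sigmas, key=loglik)
    return len(d) / half_normal_pbar(sigma, width), sigma
```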

  11. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  12. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla. CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  13. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    Science.gov (United States)

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

    The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model at every branching point of the thematic tree, and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance which can shed light on "black-box" machine-learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
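The local-classifier-per-branching-point idea can be sketched independently of Random Forests. In this hypothetical, dependency-free version, a nearest-centroid rule stands in for the local RF at each node of the class tree, and prediction walks the tree from root to leaf.

```python
import numpy as np

class HierarchicalNearestCentroid:
    """Sketch of local-classifier-per-parent-node hierarchical classification:
    at every branching point of the class tree a separate (here: nearest
    centroid) model picks a child; prediction descends root-to-leaf."""

    def __init__(self, tree):
        self.tree = tree          # dict: parent label -> list of child labels
        self.centroids = {}       # (parent, child) -> feature centroid

    def fit(self, X, paths):
        # paths[i] is the root-to-leaf label sequence of training sample i
        for parent, children in self.tree.items():
            for child in children:
                rows = [x for x, p in zip(X, paths) if child in p]
                self.centroids[(parent, child)] = np.mean(rows, axis=0)
        return self

    def predict(self, x):
        node = "root"
        while node in self.tree:   # descend until we reach a leaf class
            node = min(self.tree[node],
                       key=lambda c: np.linalg.norm(x - self.centroids[(node, c)]))
        return node
```

Swapping the nearest-centroid rule for a Random Forest at each node recovers the structure of the approach described above.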

  14. The application of a hierarchical Bayesian spatiotemporal model for ...

    Indian Academy of Sciences (India)

    Process (GP) model by using the Gibbs sampling method. The result for ... good indicator of the HBST method. The statistical ... summary and discussion of future works are given .... spatiotemporal package in R language (R core team. 2013).

  15. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    Since the publication of the first edition, many new Bayesian tools and methods have been developed for space-time data analysis, the predictive modeling of health outcomes, and other spatial biostatistical areas...

  16. Hierarchical models and iterative optimization of hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)

    2016-06-08

    A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.

  17. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
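The autoregressive error model enters the inversion through the likelihood. For AR(1) errors the Gaussian likelihood is conveniently evaluated by whitening, as in this sketch (a fixed AR(1) order is an assumption here; the paper samples the error-model parameters hierarchically):

```python
import numpy as np

def ar1_loglik(residuals, phi, sigma):
    """Gaussian log-likelihood of a residual series under an AR(1) error model,
    via whitening: e_t - phi*e_{t-1} are iid N(0, sigma^2), and the first
    sample follows the stationary distribution N(0, sigma^2/(1-phi^2))."""
    e = np.asarray(residuals, dtype=float)
    innov = e[1:] - phi * e[:-1]
    var0 = sigma**2 / (1.0 - phi**2)           # stationary variance of e_0
    ll = -0.5 * (np.log(2 * np.pi * var0) + e[0]**2 / var0)
    ll += -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + innov**2 / sigma**2)
    return ll
```

Setting phi = 0 recovers the usual independent-error likelihood, so correlated and uncorrelated error models can be compared within one sampler.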

  18. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    Science.gov (United States)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
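The Weibull/ELS baseline that the hybrid model builds on has a well-known closed form for the bundle stress-strain curve (Weibull-distributed fibre strengths, equal load sharing). The helical geometry and hierarchical load-sharing corrections of the paper are not included in this sketch.

```python
import numpy as np

def els_stress(strain, E=1.0, eps0=1.0, m=2.0):
    """Equal-Load-Sharing fibre bundle with Weibull fibre strengths:
    bundle stress = (surviving fraction) * (elastic fibre stress)
                  = exp(-(eps/eps0)^m) * E * eps.
    Reproduces the rise, peak and post-peak softening of the bundle."""
    eps = np.asarray(strain, dtype=float)
    return E * eps * np.exp(-(eps / eps0) ** m)
```

The peak (bundle strength) occurs at eps = eps0 * (1/m)^(1/m); beyond it, fibre failures outpace the elastic load increase and the curve softens.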

  19. A hierarchical stress release model for synthetic seismicity

    Science.gov (United States)

    Bebbington, Mark

    1997-06-01

    We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to …
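
    The load-transfer cascade described above can be sketched as a toy simulation. This is our own illustrative reduction (one segment, no hazard function, uncalibrated parameters), not the authors' model:

```python
import random

def simulate_fault(n_nodes=50, steps=2000, strength=1.0,
                   transfer=0.5, rate=0.01, seed=1):
    """Toy 1-D stress release model: random stress input at each node;
    when a node's stress exceeds its strength, a fraction `transfer`
    of the excess goes to each neighbour (the rest is dissipated at
    the boundaries) and the node partially relaxes. Returns a list of
    (time, event_size) pairs."""
    rng = random.Random(seed)
    stress = [0.0] * n_nodes
    events = []
    for t in range(steps):
        for i in range(n_nodes):
            stress[i] += rng.expovariate(1.0 / rate)  # slow tectonic loading
        # cascade: keep relaxing until no node is over-stressed
        unstable = [i for i in range(n_nodes) if stress[i] > strength]
        size = 0
        while unstable:
            i = unstable.pop()
            excess = stress[i] - strength
            if excess <= 0:          # already relaxed earlier in cascade
                continue
            stress[i] = strength * 0.5   # partial stress drop
            size += 1
            for j in (i - 1, i + 1):     # transfer excess to neighbours
                if 0 <= j < n_nodes:
                    stress[j] += transfer * excess
                    if stress[j] > strength:
                        unstable.append(j)
        if size:
            events.append((t, size))
    return events
```

    Event sizes from such a cascade are the quantity one would compare, after calibration, against the magnitude statistics of a historical catalogue.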

  20. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    Science.gov (United States)

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…

  1. A hierarchical modeling of information seeking behavior of school ...

    African Journals Online (AJOL)

    The aim of this study was to investigate the information-seeking behavior of school teachers in public primary schools in rural areas of Nigeria and to draw up a model of their information-seeking behavior. A cross-sectional survey design was employed to carry out the research. Findings showed that the ...

  2. Generic Database Cost Models for Hierarchical Memory Systems

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is ...

  3. Generic database cost models for hierarchical memory systems

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is more ...

  4. Bayesian Hierarchical Distributed Lag Models for Summer Ozone Exposure and Cardio-Respiratory Mortality

    OpenAIRE

    Yi Huang; Francesca Dominici; Michelle Bell

    2004-01-01

    In this paper, we develop Bayesian hierarchical distributed lag models for estimating associations between daily variations in summer ozone levels and daily variations in cardiovascular and respiratory (CVDRESP) mortality counts for 19 large U.S. cities included in the National Morbidity Mortality Air Pollution Study (NMMAPS) for the period 1987-1994. At the first stage, we define a semi-parametric distributed lag Poisson regression model to estimate city-specific relative rates of CVDRESP ...
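
    The first-stage model described above regresses daily counts on current and lagged exposure. As a hedged, bare-bones sketch of the distributed-lag idea (no smooth confounder terms, no overdispersion, no hierarchy across cities; all names ours):

```python
import numpy as np

def lag_matrix(x, max_lag):
    """Design matrix whose columns are x lagged by 0..max_lag days;
    the first max_lag rows (incomplete history) are dropped."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([x[max_lag - l: len(x) - l]
                            for l in range(max_lag + 1)])

def fit_poisson(X, y, iters=30):
    """Poisson log-linear regression fitted by Newton-Raphson."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        mu = np.exp(X1 @ beta)
        # Newton step: (X' W X)^-1 X' (y - mu), with W = diag(mu)
        beta += np.linalg.solve(X1.T @ (X1 * mu[:, None]), X1.T @ (y - mu))
    return beta
```

    The sum of the fitted lag coefficients plays the role of the overall relative-rate effect that the second, hierarchical stage would then pool across cities.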

  5. Weak-interacting holographic QCD

    International Nuclear Information System (INIS)

    Gazit, D.; Yee, H.-U.

    2008-06-01

    We propose a simple prescription for including low-energy weak interactions in the framework of holographic QCD, based on the standard AdS/CFT dictionary of double-trace deformations. As our proposal enables us to calculate various electroweak observables involving strongly coupled QCD, it opens a new perspective on phenomenological applications of holographic QCD. We illustrate the efficiency and usefulness of our method by performing a few exemplary calculations: neutron beta decay, charged pion weak decay, and meson-nucleon parity non-conserving (PNC) couplings. The idea is general enough to be implemented in both the Sakai-Sugimoto as well as Hard/Soft Wall models. (author)

  6. Holographic Chern-Simons defects

    International Nuclear Information System (INIS)

    Fujita, Mitsutoshi; Melby-Thompson, Charles M.; Meyer, René; Sugimoto, Shigeki

    2016-01-01

    We study SU(N) Yang-Mills-Chern-Simons theory in the presence of defects that shift the Chern-Simons level from a holographic point of view by embedding the system in string theory. The model is a D3-D7 system in Type IIB string theory, whose gravity dual is given by the AdS soliton background with probe D7 branes attaching to the AdS boundary along the defects. We holographically renormalize the free energy of the defect system with sources, from which we obtain the correlation functions for certain operators naturally associated to these defects. We find interesting phase transitions when the separation of the defects as well as the temperature are varied. We also discuss some implications for the Fractional Quantum Hall Effect and for 2-dimensional QCD.

  7. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis; and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.

  8. Generic Database Cost Models for Hierarchical Memory Systems

    OpenAIRE

    Manegold, Stefan; Boncz, Peter; Kersten, Martin

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is more and more becoming a significant---if not the major---cost component of database operations. If used properly, fast but small cache memories---usually organized in cascading hierarchy between CPU ...

  9. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling.

    Science.gov (United States)

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.

  10. Statistical shear lag model - unraveling the size effect in hierarchical composites.

    Science.gov (United States)

    Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D

    2015-05-01

    Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents on their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogeneous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  11. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  12. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
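
    The core benefit of the hierarchical structure is that subject-level precision (inverse covariance) estimates borrow strength from the group. A very loose, non-Bayesian sketch of that pooling idea (our own illustration; the paper's model infers the pooling weight rather than fixing it):

```python
import numpy as np

def pooled_precision(subject_data, shrink=0.5):
    """Crude two-level estimate: each subject's sample precision
    matrix is shrunk toward the group mean precision by a fixed
    factor `shrink`, loosely mimicking hierarchical pooling.
    subject_data: list of (n_samples, n_regions) arrays."""
    precs = []
    for X in subject_data:
        S = np.cov(X, rowvar=False)
        S += 1e-3 * np.eye(S.shape[0])   # small ridge for invertibility
        precs.append(np.linalg.inv(S))
    group = np.mean(precs, axis=0)       # population-level estimate
    pooled = [shrink * group + (1 - shrink) * P for P in precs]
    return pooled, group
```

    In the Bayesian version, `shrink` is effectively learned per connection, which is what allows sparsity priors or explicit network-structure priors to operate at the group level.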

  13. Latent Variable Regression 4-Level Hierarchical Model Using Multisite Multiple-Cohorts Longitudinal Data. CRESST Report 801

    Science.gov (United States)

    Choi, Kilchan

    2011-01-01

    This report explores a new latent variable regression 4-level hierarchical model for monitoring school performance over time using multisite multiple-cohorts longitudinal data. This kind of data set has a 4-level hierarchical structure: time-series observation nested within students who are nested within different cohorts of students. These…

  14. Principal-subordinate hierarchical multi-objective programming model of initial water rights allocation

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2009-06-01

    The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.

  15. Use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio

    Directory of Open Access Journals (Sweden)

    Fidel Ernesto Castro Morales

    2016-03-01

    Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within up to 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, even allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
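
    Allometric scaling is typically fitted as a power law, i.e. a linear regression on log-transformed data, and the abstract's MCMC step can be illustrated with a minimal random-walk Metropolis sampler. This is a toy stand-in with flat priors, a known noise scale, and no confounders; all of these choices are ours, not the authors':

```python
import numpy as np

def metropolis_allometry(logx, logy, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for the allometric regression
    log(y) = a + b*log(x) + noise, noise ~ N(0, sigma^2) with sigma
    assumed known. Returns post-burn-in samples of (a, b)."""
    rng = np.random.default_rng(seed)
    sigma = 0.2                        # assumed known noise std dev
    a, b = 0.0, 1.0                    # arbitrary starting point

    def loglik(a_, b_):
        resid = logy - (a_ + b_ * logx)
        return -0.5 * np.sum(resid**2) / sigma**2

    cur = loglik(a, b)
    out = []
    for _ in range(n_iter):
        a_p = a + step * rng.normal()
        b_p = b + step * rng.normal()
        prop = loglik(a_p, b_p)
        if np.log(rng.random()) < prop - cur:   # Metropolis accept step
            a, b, cur = a_p, b_p, prop
        out.append((a, b))
    return np.array(out[n_iter // 2:])          # crude burn-in discard
```

    A hierarchical version would place priors on `a`, `b` and `sigma` and add confounder terms, which is where the flexibility claimed in the Conclusions comes from.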

  16. Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.

    Directory of Open Access Journals (Sweden)

    Gregor Moenke

    Important biological processes like cell signalling and gene expression have noisy components and are very complex at the same time. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011) which is formulated in observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics in dependence on cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.

  17. Probabilistic inference: Task dependency and individual differences of probability weighting revealed by hierarchical Bayesian modelling

    Directory of Open Access Journals (Sweden)

    Moritz eBoos

    2016-05-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
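
    The "(inverted) S-shaped probability weighting functions" mentioned above have well-known closed forms. As a hedged illustration, here is one standard member of that family (the Tversky-Kahneman 1992 form) together with an illustrative distorted two-hypothesis update; the exact combination rule is ours, not necessarily the authors':

```python
import numpy as np

def tk_weight(p, gamma):
    """Tversky-Kahneman (1992) probability weighting function.
    gamma < 1 overweights small probabilities and underweights large
    ones (inverted-S shape); gamma = 1 is the identity."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def distorted_posterior(prior, lik, gamma_prior, gamma_lik):
    """Two-hypothesis Bayesian update in which prior and likelihood
    are first passed through (possibly different) weighting
    functions, an illustrative form of 'distorted subjective
    probabilities'."""
    wp, wl = tk_weight(prior, gamma_prior), tk_weight(lik, gamma_lik)
    return wp * wl / (wp * wl + (1.0 - wp) * (1.0 - wl))
```

    In the hierarchical setting, each participant's `gamma` parameters would be drawn from weakly informative group-level distributions, which is how task dependency and individual differences are separated.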

  18. Modeling when people quit: Bayesian censored geometric models with hierarchical and latent-mixture extensions.

    Science.gov (United States)

    Okada, Kensuke; Vandekerckhove, Joachim; Lee, Michael D

    2018-02-01

    People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
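
    The censored geometric idea has a simple maximum-likelihood form that makes the untangling explicit. A deliberately minimal, non-Bayesian sketch of the basic model, without the paper's hierarchical or latent-mixture extensions (variable names are ours):

```python
import numpy as np

def censored_geom_mle(counts, limits):
    """MLE of the per-item quitting probability theta in a censored
    geometric model: person i collects counts[i] items out of
    limits[i] available. counts[i] == limits[i] is censored: we only
    know the person survived limits[i]-1 quitting opportunities,
    because the items then ran out."""
    counts = np.asarray(counts)
    limits = np.asarray(limits)
    d = np.sum(counts < limits)          # observed (uncensored) quits
    # total quitting opportunities faced: k-1 "continue" decisions per
    # person, plus one "quit" decision per uncensored person
    trials = np.sum(counts - 1) + d
    return d / trials
```

    This is the standard quits-over-decisions estimator obtained by maximizing the censored geometric likelihood; the Bayesian hierarchical version replaces the point estimate with a posterior and lets theta vary across people or situations.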

  19. The Case for A Hierarchal System Model for Linux Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M; Gorda, B

    2009-06-05

    The computer industry today is no longer driven, as it was in the 40s, 50s and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership-class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (on the order of tens of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which are approximately doubling with every generation, are funded by investments from multiple markets: enterprise, desktops, games, embedded and specialized devices. Over and above these base technology investments are investments for critical technology elements such as microprocessor, chipset and memory ASIC components. Investments for these components are spread across the same markets as the base semiconductor process investments. These second-tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier 3 technology elements include networking (SAN, WAN and LAN), interconnects and large scalable SMP designs. Above these, in tier 4, are relatively small investments necessary to build very large, scalable high-end or Leadership-class systems. Primary among these are the specialized network designs of vertically integrated systems, etc.

  20. The holographic Weyl semi-metal

    Directory of Open Access Journals (Sweden)

    Karl Landsteiner

    2016-02-01

    We present a holographic model of a Weyl semi-metal. We show evidence that upon varying a mass parameter the model undergoes a sharp crossover at low temperature from a topologically non-trivial state to a trivial one. The order parameter is the anomalous Hall effect (AHE), and we find that it is very strongly suppressed above a critical value of the mass parameter. This can be taken as a hint of an underlying topological quantum phase transition. We give an interpretation of the results in terms of a holographic RG flow and compare to a weakly coupled field-theoretical model. Since there are no fermionic quasiparticle excitations in the strongly coupled holographic model, the presence of an anomalous Hall effect cannot be tied to notions of topology in momentum space.

  1. The holographic Weyl semi-metal

    Energy Technology Data Exchange (ETDEWEB)

    Landsteiner, Karl, E-mail: karl.landsteiner@csic.es; Liu, Yan, E-mail: yan.liu@csic.es

    2016-02-10

    We present a holographic model of a Weyl semi-metal. We show evidence that upon varying a mass parameter the model undergoes a sharp crossover at low temperature from a topologically non-trivial state to a trivial one. The order parameter is the anomalous Hall effect (AHE), and we find that it is very strongly suppressed above a critical value of the mass parameter. This can be taken as a hint of an underlying topological quantum phase transition. We give an interpretation of the results in terms of a holographic RG flow and compare to a weakly coupled field-theoretical model. Since there are no fermionic quasiparticle excitations in the strongly coupled holographic model, the presence of an anomalous Hall effect cannot be tied to notions of topology in momentum space.

  2. Emotional intelligence is a second-stratum factor of intelligence: evidence from hierarchical and bifactor models.

    Science.gov (United States)

    MacCann, Carolyn; Joseph, Dana L; Newman, Daniel A; Roberts, Richard D

    2014-04-01

    This article examines the status of emotional intelligence (EI) within the structure of human cognitive abilities. To evaluate whether EI is a 2nd-stratum factor of intelligence, data were fit to a series of structural models involving 3 indicators each for fluid intelligence, crystallized intelligence, quantitative reasoning, visual processing, and broad retrieval ability, as well as 2 indicators each for emotion perception, emotion understanding, and emotion management. Unidimensional, multidimensional, hierarchical, and bifactor solutions were estimated in a sample of 688 college and community college students. Results suggest adequate fit for 2 models: (a) an oblique 8-factor model (with 5 traditional cognitive ability factors and 3 EI factors) and (b) a hierarchical solution (with cognitive g at the highest level and EI representing a 2nd-stratum factor that loads onto g at λ = .80). The acceptable relative fit of the hierarchical model confirms the notion that EI is a group factor of cognitive ability, marking the expression of intelligence in the emotion domain. The discussion proposes a possible expansion of Cattell-Horn-Carroll theory to include EI as a 2nd-stratum factor of similar standing to factors such as fluid intelligence and visual processing.

  3. Action detection by double hierarchical multi-structure space-time statistical matching model

    Science.gov (United States)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To handle the complex information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, the multi-scale composite template extends the model's application to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  4. Oscillatory Critical Amplitudes in Hierarchical Models and the Harris Function of Branching Processes

    Science.gov (United States)

    Costin, Ovidiu; Giacomin, Giambattista

    2013-02-01

    Oscillatory critical amplitudes have been repeatedly observed in hierarchical models and, in the cases that have been taken into consideration, these oscillations are so small as to be hardly detectable. Hierarchical models are tightly related to iteration of maps and, in fact, very similar phenomena have been repeatedly reported in many fields of mathematics, like combinatorial evaluations and discrete branching processes. It is precisely in the context of branching processes with bounded offspring that T. Harris, in 1948, first set forth the possibility that the logarithm of the moment generating function of the rescaled population size, in the super-critical regime, does not grow near infinity as a power, but has an oscillatory prefactor (the Harris function). These oscillations were observed numerically only much later and, while their origin is clearly tied to the discrete character of the iteration, the amplitude size is not so well understood. The purpose of this note is to reconsider the issue for hierarchical models in what is arguably the most elementary setting, the pinning model, which actually just boils down to iteration of polynomial maps (and, notably, quadratic maps). In this note we show that the oscillatory critical amplitude for pinning models and the Harris function coincide. Moreover, we make explicit the link between these oscillatory functions and the geometry of the Julia set of the map, thus making rigorous and quantitative some ideas set forth in Derrida et al. (Commun. Math. Phys. 94:115-132, 1984).

  5. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows the use of these models for detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  6. Pulse holographic measurement techniques

    International Nuclear Information System (INIS)

    Kim, Cheol Jung; Baik, Seong Hoon; Hong, Seok Kyung; Kim, Jeong Moog; Kim, Duk Hyun

    1992-01-01

    With the development of lasers, remote inspection techniques using laser light have been growing. Inspection and measurement by pulse holography are well-established techniques for precise measurement and are now widely used in various fields of industry. In the nuclear industry, this technology is used in practice because holographic inspection is a remote, noncontact, and precise measurement technique. In relation to remote inspection technology in the nuclear industry, the state of the art of pulse HNDT (holographic non-destructive testing) and holographic measurement techniques is examined. First of all, the fundamental principles as well as practical problems for applications are briefly described. The fields of pulse holography are divided into HNDT, flow visualization and distribution study, and other application techniques. Additionally, holographic particle study, bubble chamber holography, and applications to other visualization techniques are described. Lastly, the current status of research on and applications of pulse holography in the nuclear industry, pursued actively in Europe and the USA, is described. (Author)

  7. A hierarchical lattice spring model to simulate the mechanics of 2-D materials-based composites

    Directory of Open Access Journals (Sweden)

    Lucas Brely

    2015-07-01

    Full Text Available In the field of engineering materials, strength and toughness are typically two mutually exclusive properties. Structural biological materials such as bone, tendon or dentin have resolved this conflict and show unprecedented damage tolerance, toughness and strength levels. The common feature of these materials is their hierarchical heterogeneous structure, which contributes to increased energy dissipation before failure, occurring at different scale levels. These structural properties are the key to exceptional bioinspired material mechanical properties, in particular for nanocomposites. Here, we develop a numerical model in order to simulate the mechanisms involved in damage progression and energy dissipation at different size scales in nano- and macro-composites, which depend both on the heterogeneity of the material and on the type of hierarchical structure. Both these aspects have been incorporated into a 2-dimensional model based on a Lattice Spring Model, accounting for geometrical nonlinearities and including statistically-based fracture phenomena. The model has been validated by comparing numerical results to continuum and fracture mechanics results as well as finite element simulations, and then employed to study how structural aspects impact on hierarchical composite material properties. Results obtained with the numerical code highlight the dependence of stress distributions on matrix properties and on reinforcement dispersion, geometry and properties, and show how failure of sacrificial elements is directly involved in the damage tolerance of the material. Thanks to the rapidly developing field of nanocomposite manufacture, it is already possible to artificially create materials with multi-scale hierarchical reinforcements. The developed code could be a valuable support in the design and optimization of these advanced materials, drawing inspiration from and going beyond biological materials with exceptional mechanical properties.
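    The damage-progression mechanism described above can be illustrated with a much simpler cousin of the lattice spring model: a parallel bundle of springs with statistically distributed failure strains (a fiber-bundle sketch). The stiffness, bundle size and Weibull thresholds below are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A parallel bundle of linear springs; each spring breaks irreversibly
# once the imposed strain exceeds its random threshold.
n = 1000                          # number of springs in the bundle
k = 1.0                           # stiffness of each spring
thresholds = rng.weibull(5.0, n)  # random breaking strains (hypothetical)

strains = np.linspace(0.0, 2.0, 400)
intact = np.array([(thresholds > e).sum() for e in strains])
force = k * strains * intact      # total load carried at each imposed strain

peak = force.argmax()
print(f"peak load {force[peak]:.1f} at strain {strains[peak]:.2f}, "
      f"{intact[peak]} of {n} springs surviving")
```

The load rises while springs survive and collapses as failures accumulate, giving the characteristic peak followed by softening that fracture models of this kind reproduce.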

  8. Loss Performance Modeling for Hierarchical Heterogeneous Wireless Networks With Speed-Sensitive Call Admission Control

    DEFF Research Database (Denmark)

    Huang, Qian; Huang, Yue-Cai; Ko, King-Tim

    2011-01-01

    A hierarchical overlay structure is an alternative solution that integrates existing and future heterogeneous wireless networks to provide subscribers with better mobile broadband services. Traffic loss performance in such integrated heterogeneous networks is necessary for an operator's network... This approach avoids unnecessary and frequent handoff between cells and reduces signaling overheads. An approximation model with guaranteed accuracy and low computational complexity is presented for the loss performance of multiservice traffic. The accuracy of numerical results is validated by comparing...

  9. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., whether an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients
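    The mixing construction behind these models is easy to demonstrate numerically: drawing each site's Poisson mean from a gamma distribution yields over-dispersed (negative binomial) marginal counts. The shape, mean and sample size below are hypothetical illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Poisson-gamma hierarchy: lambda_i ~ Gamma, count_i ~ Poisson(lambda_i).
shape, mean = 2.0, 5.0                          # gamma shape; target mean count
lam = rng.gamma(shape, mean / shape, 100_000)   # site-specific Poisson means
counts = rng.poisson(lam)                       # observed "crash" counts

m, v = counts.mean(), counts.var()
# For this hierarchy: Var = mean + mean^2/shape = 5 + 25/2 = 17.5
print(f"mean {m:.2f}, variance {v:.2f} (a pure Poisson would give variance ~ mean)")
```

Swapping the gamma for a lognormal or inverse-gamma mixing distribution gives the PLN and PIGam variants; the heavier the mixing tail, the heavier the tail of the marginal counts.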

  10. Holographic anyonic superfluidity

    Science.gov (United States)

    Jokela, Niko; Lifschytz, Gilad; Lippert, Matthew

    2013-10-01

    Starting with a holographic construction for a fractional quantum Hall state based on the D3-D7' system, we explore alternative quantization conditions for the bulk gauge fields. This gives a description of a quantum Hall state with various filling fractions. For a particular alternative quantization of the bulk gauge fields, we obtain a holographic anyon fluid in a vanishing background magnetic field. We show that this system is a superfluid, exhibiting the relevant gapless excitation.

  11. Bulk viscosity in holographic Lifshitz hydrodynamics

    International Nuclear Information System (INIS)

    Hoyos, Carlos; Kim, Bom Soo; Oz, Yaron

    2014-01-01

    We compute the bulk viscosity in holographic models dual to theories with Lifshitz scaling and/or hyperscaling violation, using a generalization of the bulk viscosity formula derived in arXiv:1103.1657 from the null focusing equation. We find that only a class of models with massive vector fields are truly Lifshitz scale invariant, and have a vanishing bulk viscosity. For other holographic models with scalars and/or massless vector fields we find a universal formula in terms of the dynamical exponent and the hyperscaling violation exponent

  12. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models comprising multi-level hierarchical Markov processes and Bayesian updating have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper; in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions, and its application is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds, it is concluded that decision support concerning sow replacement only makes sense with parameters...
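    The kind of optimization such replacement models perform can be sketched with a toy stationary Markov decision process solved by value iteration (the paper's actual model is a multi-level hierarchic Markov process with a far richer state space). All rewards, costs and the discount factor here are hypothetical.

```python
import numpy as np

# States are parity (age) classes of a sow; actions are "keep" or "replace".
# Replacing pays a cost, yields the new animal's first-litter reward, and
# restarts the process in parity class 1. Numbers are hypothetical.
n_states = 6
reward_keep = np.array([2.0, 2.2, 2.1, 1.8, 1.4, 0.9])  # net litter value per parity
replace_cost = 1.5
gamma = 0.95                    # discount factor

V = np.zeros(n_states)
for _ in range(500):            # value iteration to (numerical) convergence
    V_new = np.empty(n_states)
    for s in range(n_states):
        nxt = min(s + 1, n_states - 1)
        keep = reward_keep[s] + gamma * V[nxt]
        replace = -replace_cost + reward_keep[0] + gamma * V[1]
        V_new[s] = max(keep, replace)
    V = V_new

policy = []
for s in range(n_states):       # greedy policy from the converged values
    nxt = min(s + 1, n_states - 1)
    keep = reward_keep[s] + gamma * V[nxt]
    replace = -replace_cost + reward_keep[0] + gamma * V[1]
    policy.append("keep" if keep >= replace else "replace")
print(policy)
```

The optimal rule is a threshold in parity: keep young, productive animals and replace once expected litter value falls off, which is the qualitative behaviour replacement models formalize.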

  13. Topics in Computational Bayesian Statistics With Applications to Hierarchical Models in Astronomy and Sociology

    Science.gov (United States)

    Sahai, Swupnil

    This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting big hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernovae data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.

  14. Exploring Neural Network Models with Hierarchical Memories and Their Use in Modeling Biological Systems

    Science.gov (United States)

    Pusuluri, Sai Teja

    Energy landscapes are often used as metaphors for phenomena in biology, social sciences and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past, neural network models were used to mimic the storage and retrieval of memories (patterns) in the brain. With advances in the field, these models are now used in machine learning, deep learning and the modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation considers neural network models from both perspectives. I first show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase is deterministic. Secondly, I explore the static features of landscapes generated using neural network models, such as the sizes of basins of attraction and the densities of metastable states. The simulation results show that the static landscape features are strongly dependent on the correlation strength and correlation structure between patterns; using different hierarchical structures for the correlation between patterns affects the landscape features.
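    The storage-and-retrieval behaviour described above can be sketched with a minimal Hopfield-style attractor network: Hebbian weights, a corrupted cue, and asynchronous updates that descend the energy landscape. The network size, pattern count and corruption level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hebbian storage of P random +/-1 patterns in an N-unit network.
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N        # Hebbian weight matrix
np.fill_diagonal(W, 0.0)               # no self-coupling

def energy(s):
    return -0.5 * s @ W @ s            # landscape height at state s

cue = patterns[0].copy()
flip = rng.choice(N, 30, replace=False)
cue[flip] *= -1                        # corrupt 15% of the bits

s = cue.copy()
for _ in range(5):                     # a few sweeps of asynchronous updates
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1

overlap = (s @ patterns[0]) / N
print(f"overlap with stored pattern: {overlap:.2f}")
```

Each asynchronous flip can only lower the energy, so the dynamics roll downhill into the basin of the stored pattern; the well-known capacity limit (roughly 0.14 N patterns) is the point where these basins start to fragment.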

  15. Hierarchical Model Predictive Control for Plug-and-Play Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2012-01-01

    This chapter deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on the one hand from varying consumption and on the other hand from natural variations in power production, e.g. from wind turbines. The proposed method can also be applied to supply chain management systems, where the challenge is to balance demand and supply, using a number of storages each with a maximal...

  16. Market Competitiveness Evaluation of Mechanical Equipment with a Pairwise Comparisons Hierarchical Model.

    Science.gov (United States)

    Hou, Fujun

    2016-01-01

    This paper provides a description of how market competitiveness evaluations concerning mechanical equipment can be made in the context of multi-criteria decision environments. It is assumed that, when evaluating market competitiveness, there is a limited number of candidates with the required qualifications, and that the alternatives are pairwise compared on a ratio scale. The qualifications are depicted as criteria in a hierarchical structure. A hierarchical decision model called PCbHDM was used in this study based on an analysis of its desirable traits. Illustration and comparison show that the PCbHDM provides a convenient and effective tool for evaluating the market competitiveness of mechanical equipment. Researchers and practitioners might use the findings of this paper in applications of PCbHDM.
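    The pairwise-comparison step can be sketched as follows: a reciprocal comparison matrix on a ratio scale is reduced to a priority (weight) vector via its principal eigenvector, with Saaty's consistency index as a sanity check. This is a generic pairwise-comparison sketch, not the PCbHDM itself, and the comparison values are hypothetical.

```python
import numpy as np

# A[i, j] expresses how strongly candidate i outranks candidate j on a
# ratio scale; the matrix is reciprocal by construction.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()              # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # normalized priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)   # Saaty's consistency index
print("weights:", np.round(w, 3), " CI:", round(ci, 4))
```

A perfectly consistent matrix gives lambda_max = n and CI = 0; small CI values indicate the decision maker's ratio judgments are nearly transitive.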

  17. Hierarchical relaxation dynamics in a tilted two-band Bose-Hubbard model

    Science.gov (United States)

    Cosme, Jayson G.

    2018-04-01

    We numerically examine slow and hierarchical relaxation dynamics of interacting bosons described by a tilted two-band Bose-Hubbard model. The system is found to exhibit signatures of quantum chaos within the spectrum and the validity of the eigenstate thermalization hypothesis for relevant physical observables is demonstrated for certain parameter regimes. Using the truncated Wigner representation in the semiclassical limit of the system, dynamics of relevant observables reveal hierarchical relaxation and the appearance of prethermalized states is studied from the perspective of statistics of the underlying mean-field trajectories. The observed prethermalization scenario can be attributed to different stages of glassy dynamics in the mode-time configuration space due to dynamical phase transition between ergodic and nonergodic trajectories.

  18. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  19. Improving Hierarchical Models Using Historical Data with Applications in High-Throughput Genomics Data Analysis.

    Science.gov (United States)

    Li, Ben; Li, Yunxiao; Qin, Zhaohui S

    2017-06-01

    Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, thus the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available and should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrated superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
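    The over-correction phenomenon discussed above is easy to reproduce in the simplest hierarchical setting, a normal-normal model with empirical Bayes shrinkage: pooling helps on average, but it pulls a genuinely extreme feature too far toward the bulk. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Per-feature effects, with one genuinely extreme feature, observed with noise.
n_features = 1000
true = rng.normal(0.0, 1.0, n_features)
true[0] = 6.0                           # one truly large effect
sigma = 1.0                             # known observation noise
obs = true + rng.normal(0.0, sigma, n_features)

# Empirical Bayes posterior mean: shrink toward the grand mean by a factor
# estimated from the data (method of moments).
grand = obs.mean()
tau2 = max(obs.var() - sigma**2, 1e-9)  # estimated between-feature variance
shrink = tau2 / (tau2 + sigma**2)
post = grand + shrink * (obs - grand)

err_raw = abs(obs[0] - true[0])
err_shrunk = abs(post[0] - true[0])
print(f"extreme feature: raw error {err_raw:.2f}, shrunk error {err_shrunk:.2f}")
print(f"average error:   raw {np.abs(obs - true).mean():.2f}, "
      f"shrunk {np.abs(post - true).mean():.2f}")
```

The shrunken estimates beat the raw ones on average, yet the extreme feature is dragged roughly halfway to zero; priors informed by historical data, as the paper proposes, are one way to relax exactly this kind of over-correction.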

  20. Hierarchical modelling of temperature and habitat size effects on population dynamics of North Atlantic cod

    DEFF Research Database (Denmark)

    Mantzouni, Irene; Sørensen, Helle; O'Hara, Robert B.

    2010-01-01

    Understanding how temperature affects cod (Gadus morhua) ecology is important for forecasting how populations will develop as climate changes in future. The effects of spawning-season temperature and habitat size on cod recruitment dynamics have been investigated across the North Atlantic. Ricker and Beverton and Holt stock–recruitment (SR) models were extended by applying hierarchical methods, mixed-effects models, and Bayesian inference to incorporate the influence of these ecosystem factors on model parameters representing cod maximum reproductive rate and carrying capacity. We identified...
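    The two SR curves named above have simple closed forms: Ricker recruitment peaks at S = 1/beta and then declines, while Beverton-Holt saturates toward an asymptote. A quick numerical check, with hypothetical parameter values:

```python
import numpy as np

# Standard stock-recruitment forms; alpha is the maximum reproductive rate
# (slope at the origin), the second parameter sets density dependence.
def ricker(S, alpha, beta):
    return alpha * S * np.exp(-beta * S)

def beverton_holt(S, alpha, K):
    return alpha * S / (1.0 + S / K)

S = np.linspace(0.0, 400.0, 401)
R_ricker = ricker(S, alpha=4.0, beta=0.01)
R_bh = beverton_holt(S, alpha=4.0, K=100.0)

S_peak = S[R_ricker.argmax()]          # Ricker peaks at S = 1/beta = 100
print(f"Ricker peak {R_ricker.max():.1f} at S = {S_peak:.0f}; "
      f"Beverton-Holt asymptote -> {4.0 * 100.0:.0f}")
```

In the hierarchical setting of the paper, alpha and the density-dependence parameter vary across stocks as functions of temperature and habitat size rather than being fixed constants.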

  1. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.X.; Wang, X.; Gao, Y.W., E-mail: ywgao@lzu.edu.cn; Zhou, Y.H.

    2013-11-15

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • The numerical model is set up through ABAQUS to verify and enhance the theoretical model. • We calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. • Elastic–plastic behaviour dominates the loading curve, and friction between adjacent strands plays a significant role in the strain distribution. -- Abstract: Unexpected degradation frequently occurs in superconducting cable-in-conduit conductors (CICC) due to the mechanical response (deformation) to electromagnetic and thermal loads during operation. Because of the cable's hierarchical twisted configuration, it is difficult to model the mechanical response quantitatively. In addition, local mechanical characteristics such as the strain distribution can hardly be monitored experimentally. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows an algorithm advancing successively from stage n + 1 (e.g. the 3 × 3 × 5 subcable) to stage n (e.g. the 3 × 3 subcable). No complicated numerical procedures are required in this model. Meanwhile, a numerical model is set up through ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate the two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. We find that the global displacement–load curve is dominated by elastic–plastic behaviour, with higher-level cables showing enhanced nonlinear characteristics. As for the local distribution, friction among adjacent strands plays a significant role: its magnitude strongly influences the regularity of the distribution at different twist stages. More detailed results are presented in this paper.

  2. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    International Nuclear Information System (INIS)

    Li, Y.X.; Wang, X.; Gao, Y.W.; Zhou, Y.H.

    2013-01-01

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • The numerical model is set up through ABAQUS to verify and enhance the theoretical model. • We calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. • Elastic–plastic behaviour dominates the loading curve, and friction between adjacent strands plays a significant role in the strain distribution. -- Abstract: Unexpected degradation frequently occurs in superconducting cable-in-conduit conductors (CICC) due to the mechanical response (deformation) to electromagnetic and thermal loads during operation. Because of the cable's hierarchical twisted configuration, it is difficult to model the mechanical response quantitatively. In addition, local mechanical characteristics such as the strain distribution can hardly be monitored experimentally. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows an algorithm advancing successively from stage n + 1 (e.g. the 3 × 3 × 5 subcable) to stage n (e.g. the 3 × 3 subcable). No complicated numerical procedures are required in this model. Meanwhile, a numerical model is set up through ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate the two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. We find that the global displacement–load curve is dominated by elastic–plastic behaviour, with higher-level cables showing enhanced nonlinear characteristics. As for the local distribution, friction among adjacent strands plays a significant role: its magnitude strongly influences the regularity of the distribution at different twist stages. More detailed results are presented in this paper.

  3. Anomalous transport and holographic momentum relaxation

    Science.gov (United States)

    Copetti, Christian; Fernández-Pendás, Jorge; Landsteiner, Karl; Megías, Eugenio

    2017-09-01

    The chiral magnetic and vortical effects denote the generation of dissipationless currents due to magnetic fields or rotation. They can be studied in holographic models with Chern-Simons couplings dual to anomalies in field theory. We study a holographic model with translation symmetry breaking based on linear massless scalar field backgrounds. We compute the electric DC conductivity and find that it can vanish for certain values of the translation symmetry breaking couplings. Then we compute the chiral magnetic and chiral vortical conductivities. They are completely independent of the holographic disorder couplings and take the usual values in terms of chemical potential and temperature. To arrive at this result we suggest a new definition of the energy-momentum tensor in the presence of the gravitational Chern-Simons coupling.

  4. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in literature. In other application areas, like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.

  5. A hierarchical modeling methodology for the definition and selection of requirements

    Science.gov (United States)

    Dufresne, Stephane

    This dissertation describes the development of a requirements analysis methodology that takes into account the concept of operations and the hierarchical decomposition of aerospace systems. At the core of the methodology, the Analytic Network Process (ANP) is used to ensure the traceability between the qualitative and quantitative information present in the hierarchical model. The proposed methodology is applied to the requirements definition of a hurricane-tracking Unmanned Aerial Vehicle. Three research objectives are identified in this work: (1) improve the requirements mapping process by matching the stakeholder expectations with the concept of operations, systems and available resources; (2) reduce the epistemic uncertainty surrounding the requirements and requirements mapping; and (3) improve the requirements down-selection process by taking into account the level of importance of the criteria and the available resources. Several challenges are associated with the identification and definition of requirements. The complexity of the system implies that a large number of requirements are needed to define it. These requirements are defined early in the conceptual design, where the level of knowledge is relatively low and the level of uncertainty is large. The proposed methodology intends to increase the level of knowledge and reduce the level of uncertainty by guiding the design team through a structured process. To address these challenges, a new methodology is created to flow down the requirements from the stakeholder expectations to the system alternatives. A taxonomy of requirements is created to classify the information gathered during the problem definition. Subsequently, the operational and systems functions and measures of effectiveness are integrated into a hierarchical model to allow the traceability of the information. Monte Carlo methods are used to evaluate the variations of the hierarchical model elements and consequently reduce the

  6. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using terrestrial laser scanners is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce the full geometry of architecture based on a novel combination of facade structure detection, detailed window propagation, and hierarchical model consolidation. Our method highlights the automatic generation of geometric models that fit the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region, extracted by projecting the points into its corresponding two-dimensional plane, is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond accurately to the architectural design information, but also satisfy the requirements for visualization and analysis.
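    The facade-detection stage rests on fitting planes to noisy scan points. A minimal least-squares version (SVD of the centred points, taking the direction of least variance as the plane normal) can be sketched as follows; the synthetic wall geometry and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "facade" scan: a vertical wall at y = 2 with scanner noise.
n = 500
pts = np.empty((n, 3))
pts[:, 0] = rng.uniform(0, 10, n)            # facade extent in x
pts[:, 2] = rng.uniform(0, 6, n)             # facade height in z
pts[:, 1] = 2.0 + 0.01 * rng.normal(size=n)  # wall depth plus noise

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
normal = vt[-1]                              # direction of least variance

d = np.abs((pts - centroid) @ normal)        # point-to-plane residuals
print(f"|normal| components ~ {np.round(np.abs(normal), 3)}, "
      f"max residual {d.max():.3f}")
```

Real pipelines wrap a fit like this in RANSAC or region growing so that multiple planes (and outliers from windows, vegetation, etc.) can be separated before boundary extraction.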

  7. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

    Full Text Available The large-scale adoption of electric vehicles (EVs) and hybrid renewable energy systems (HRESs), together with increasing loads, will bring significant challenges to the microgrid. A methodology to model microgrids with high EV and HRES penetration is the key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, no previous single modelling approach is sufficient. Therefore, this paper proposes the methodology named Hierarchical Agent-based Integrated Modelling Approach (HAIMA). With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes a comprehensive microgrid operation system, through which the assessment of the proposed model and of the impact of EV adoption is achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  8. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is corrected by a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process for the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach in comparison with three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
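The core idea of hierarchical kriging — scale the low-fidelity (LF) model to match the high-fidelity (HF) trend, then interpolate the remaining discrepancy at the HF sample points — can be sketched as follows. The toy functions are invented, and a plain Gaussian-RBF interpolator stands in for the actual kriging predictor; this is not the paper's ASM-IHK method:

```python
import numpy as np

def rbf_interpolate(x_train, y_train, x_query, length=0.3):
    """Exact Gaussian-RBF interpolation (a stand-in for the kriging predictor)."""
    K = np.exp(-((x_train[:, None] - x_train[None, :]) / length) ** 2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(x_train)), y_train)
    k = np.exp(-((x_query[:, None] - x_train[None, :]) / length) ** 2)
    return k @ w

# Low- and high-fidelity versions of the same response (toy example)
lf = lambda x: 0.5 * np.sin(8 * x) + 0.2
hf = lambda x: np.sin(8 * x) + 0.1 * x

x_hf = np.linspace(0, 1, 6)                 # few expensive HF samples
rho = np.polyfit(lf(x_hf), hf(x_hf), 1)[0]  # scale LF to match the HF trend
delta = hf(x_hf) - rho * lf(x_hf)           # discrepancy at the HF points

x_test = np.linspace(0, 1, 101)
pred = rho * lf(x_test) + rbf_interpolate(x_hf, delta, x_test)
err = np.max(np.abs(pred - hf(x_test)))     # far smaller than using LF alone
```

Because the discrepancy delta is much smoother than the HF response itself, six HF samples suffice where a direct fit would need many more — the same economy that motivates the adaptive sampling in the record above.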

  9. Holographic characterization of colloidal particles in turbid media

    Science.gov (United States)

    Cheong, Fook Chiong; Kasimbeg, Priya; Ruffner, David B.; Hlaing, Ei Hnin; Blusewicz, Jaroslaw M.; Philips, Laura A.; Grier, David G.

    2017-10-01

    Holographic particle characterization uses in-line holographic microscopy and the Lorenz-Mie theory of light scattering to measure the diameter and the refractive index of individual colloidal particles in their native dispersions. This wealth of information has proved invaluable in fields as diverse as soft-matter physics, biopharmaceuticals, wastewater management, and food science, but so far has been available only for dispersions in transparent media. Here, we demonstrate that holographic characterization can yield precise and accurate results even when the particles of interest are dispersed in turbid media. By elucidating how multiple light scattering contributes to image formation in holographic microscopy, we establish the range of conditions under which holographic characterization can reliably probe turbid samples. We validate the technique with measurements on model colloidal spheres dispersed in commercial nanoparticle slurries.

  10. Shrinkage Simulation of Holographic Grating Using Diffusion Model in PQ-PMMA Photopolymer

    Directory of Open Access Journals (Sweden)

    Wei Zepeng

    2015-01-01

    Full Text Available An extended model based on the nonlocal polymerization-driven diffusion model is derived by introducing a shrinkage process to describe the photopolymerization dynamics in PQ-PMMA photopolymer. The kinetic parameters, the polymerization rate and the diffusion rate, are experimentally determined to enable quantitative simulation. The numerical results show that the fringes at the edge of the grating are shifted first, which leads to a contrast reduction of the holograms. Finally, the theoretical results are checked experimentally against the temporal evolution of the diffraction efficiency, and a shrinkage coefficient of approximately 0.5% is obtained under an incident intensity of 25.3 mW/cm². This work enhances the applicability of the diffusion model and contributes to a reasonable description of grating formation in the photopolymer.
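A minimal one-dimensional sketch of polymerization-driven diffusion (not the authors' extended shrinkage model; the rate constants are hypothetical) shows the basic mechanism of grating formation: monomer is consumed in the bright fringes and replenished by diffusion, so immobile polymer accumulates there:

```python
import numpy as np

nx, nt = 200, 2000
dx, dt = 1.0 / nx, 1e-5
D, F0 = 1.0, 500.0                  # diffusion and polymerization rates (hypothetical)
x = np.arange(nx) / nx              # periodic domain [0, 1)
I = 0.5 * (1 + np.cos(2 * np.pi * 5 * x))   # five-fringe interference pattern
u = np.ones(nx)                     # free monomer concentration
poly = np.zeros(nx)                 # immobile polymer concentration

for _ in range(nt):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    rate = F0 * I * u               # polymerization ∝ local intensity × monomer
    u += dt * (D * lap - rate)      # monomer diffuses and is consumed
    poly += dt * rate               # consumed monomer becomes polymer
```

Total material (monomer plus polymer) is conserved by construction; the shrinkage process of the record above would additionally deform the grid as polymer forms, which is what shifts the edge fringes.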

  11. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Directory of Open Access Journals (Sweden)

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy fetuses and the other non-healthy ones. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
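The Chinese restaurant process underlying these nonparametric mixtures can be sampled in a few lines. This is the standard CRP (customer i joins an occupied table in proportion to its size, or opens a new one in proportion to the concentration alpha), not the finite-capacity CRFC variant used in the paper:

```python
import random

def chinese_restaurant_process(n_customers, alpha, seed=0):
    """Sample a random partition of n_customers from a CRP with concentration alpha."""
    rng = random.Random(seed)
    tables = []        # tables[k] = number of customers seated at table k
    assignment = []    # assignment[i] = table index of customer i
    for i in range(n_customers):
        # Join table k with probability tables[k]/(i + alpha),
        # or open a new table with probability alpha/(i + alpha).
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[k] += 1
                assignment.append(k)
                break
        else:
            tables.append(1)
            assignment.append(len(tables) - 1)
    return assignment, tables

assignment, tables = chinese_restaurant_process(200, alpha=2.0)
```

In a mixture model each table corresponds to one mixture component, so the number of components is inferred from the data rather than fixed in advance — the property that makes these models attractive for heterogeneous FHR recordings.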

  12. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
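The generative process HDDM estimates can be sketched with a plain Euler-Maruyama simulation of the drift-diffusion model: evidence accumulates with drift v and noise until it hits one of two boundaries. This is not HDDM's own sampler, and the parameter values are illustrative:

```python
import numpy as np

def simulate_ddm(n_trials, v=0.5, a=2.0, z=0.5, t0=0.3, dt=1e-3, sigma=1.0, seed=0):
    """Simulate choices and response times from a drift-diffusion model.

    v: drift rate, a: boundary separation, z: relative starting point,
    t0: non-decision time. Returns (rt, choice); choice 1 = upper boundary.
    """
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a:                        # accumulate until a boundary is hit
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + t0)                        # decision time plus non-decision time
        choices.append(1 if x >= a else 0)
    return np.array(rts), np.array(choices)

rt, choice = simulate_ddm(200)
```

With a positive drift rate, the upper boundary is reached on most trials and response times show the characteristic right-skewed distribution; hierarchical estimation inverts exactly this process from observed (rt, choice) pairs.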

  13. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs

  14. Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.

    Directory of Open Access Journals (Sweden)

    Andrew Cron

    Full Text Available Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a

  15. Hierarchical Models for Type Ia Supernova Light Curves in the Optical and Near Infrared

    Science.gov (United States)

    Mandel, Kaisey; Narayan, G.; Kirshner, R. P.

    2011-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova optical and near infrared light curves. Since the near infrared light curves are excellent standard candles and are less sensitive to dust extinction and reddening, the combination of near infrared and optical data better constrains the host galaxy extinction and improves the precision of distance predictions to SNe Ia. A hierarchical probabilistic model coherently accounts for multiple random and uncertain effects, including photometric error, intrinsic supernova light curve variations and correlations across phase and wavelength, dust extinction and reddening, peculiar velocity dispersion, and distances. An improved BayeSN MCMC code is implemented for computing probabilistic inferences for individual supernovae and the SN Ia and host galaxy dust populations. I use this hierarchical model to analyze nearby Type Ia supernovae with optical and near infrared data from the PAIRITEL, CfA3, and CSP samples and the literature. Using cross-validation to test the robustness of the model predictions, I find that the rms Hubble diagram scatter of predicted distance moduli is 0.11 mag for SNe with optical and near infrared data versus 0.15 mag for SNe with only optical data. Accounting for the dispersion expected from random peculiar velocities, the rms intrinsic prediction error is 0.08-0.10 mag for SNe with both optical and near infrared light curves. I discuss results for the inferred intrinsic correlation structures of the optical-NIR SN Ia light curves and the host galaxy dust distribution captured by the hierarchical model. The continued observation and analysis of Type Ia SNe in the optical and near infrared is important for improving their utility as precise and accurate cosmological distance indicators.
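The benefit of hierarchical pooling that drives such models can be seen in a toy version: shrink each noisy distance-modulus measurement toward the sample mean, with a weight set by the intrinsic population scatter versus the measurement error. All numbers below are hypothetical illustrations, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
mu_pop, tau = 35.0, 0.12      # population mean and intrinsic scatter (mag)
sigma = 0.20                  # per-object measurement error (mag)
true_mu = rng.normal(mu_pop, tau, n)          # latent distance moduli
obs = true_mu + rng.normal(0.0, sigma, n)     # noisy measurements

# Shrinkage weight from the two variance components: the noisier the
# measurement relative to the population scatter, the stronger the pull
# toward the pooled mean (a posterior-mean-style estimate).
w = tau**2 / (tau**2 + sigma**2)
shrunk = w * obs + (1 - w) * obs.mean()

rms_raw = np.sqrt(np.mean((obs - true_mu) ** 2))
rms_shrunk = np.sqrt(np.mean((shrunk - true_mu) ** 2))
```

Partial pooling reduces the rms error relative to the raw measurements, which is the same mechanism by which the hierarchical light curve model sharpens individual SN distance predictions by borrowing strength from the population.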

  16. A conceptual modeling framework for discrete event simulation using hierarchical control structures.

    Science.gov (United States)

    Furian, N; O'Sullivan, M; Walker, C; Vössner, S; Neubacher, D

    2015-08-01

    Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework, which pays more attention to the identification of a model's system behavior, control policies, and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.

  17. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ₂-regularization) are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of using both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm by fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  18. Collapse and revival in holographic quenches

    International Nuclear Information System (INIS)

    Silva, Emilia da; Lopez, Esperanza; Mas, Javier; Serantes, Alexandre

    2015-01-01

    We study holographic models related to global quantum quenches in finite size systems. The holographic setup naturally describes a CFT, which we consider on a circle and on a sphere. The enhanced symmetry of the conformal group on the circle motivates us to compare the evolution in both cases. Depending on the initial conditions, the dual geometry exhibits oscillations that we holographically interpret as revivals of the initial field theory state. On the sphere, this only happens when the energy density created by the quench is small compared to the system size. On the circle, however, considerably larger energy densities are compatible with revivals. Two different timescales emerge in this latter case: a collapse time, when the system appears to have dephased, and the revival time, when after rephasing the initial state is partially recovered. The ratio of these two times depends upon the initial conditions in a similar way to what is observed in some experimental setups exhibiting collapse and revivals.

  19. Holographic Spherically Symmetric Metrics

    Science.gov (United States)

    Petri, Michael

    The holographic principle (HP) conjectures, that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes. It has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy-density, temperature and entropy-density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter-state must follow well defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy-density. Thus the holographic metric derived in this article can serve as simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter-distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass-density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with the recent GRB-data.

  20. Reconstructions of f(T) gravity from entropy-corrected holographic and new agegraphic dark energy models in power-law and logarithmic versions

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Pameli; Debnath, Ujjal [Indian Institute of Engineering Science and Technology, Department of Mathematics, Howrah (India)

    2016-09-15

    Here, we peruse the cosmological usage of the most promising candidates of dark energy in the framework of f(T) gravity theory, where T represents the torsion scalar of teleparallel gravity. We reconstruct different f(T) modified gravity models in the spatially flat Friedmann-Robertson-Walker universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models with power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation-of-state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime, as indicated by recent observations, or can lie entirely in the phantom region. Also, using these models, we investigate the stability regions with the help of the squared speed of sound. (orig.)

  1. Melting spectral functions of the scalar and vector mesons in a holographic QCD model

    International Nuclear Information System (INIS)

    Fujita, Mitsutoshi; Kikuchi, Toru; Fukushima, Kenji; Misumi, Tatsuhiro; Murata, Masaki

    2010-01-01

    We investigate the finite-temperature spectral functions of heavy quarkonia by using the soft-wall anti-de Sitter/QCD model. We discuss the scalar, the pseudoscalar, the vector, and the axial-vector mesons and compare their qualitative features of the melting temperature and growing width. We find that the axial-vector meson melts earlier than the vector meson, while there appears only a slight difference between the scalar and pseudoscalar mesons, which also melt earlier than the vector meson.

  2. Hierarchical Self Assembly of Patterns from the Robinson Tilings: DNA Tile Design in an Enhanced Tile Assembly Model.

    Science.gov (United States)

    Padilla, Jennifer E; Liu, Wenyan; Seeman, Nadrian C

    2012-06-01

    We introduce a hierarchical self assembly algorithm that produces the quasiperiodic patterns found in the Robinson tilings and suggest a practical implementation of this algorithm using DNA origami tiles. We modify the abstract Tile Assembly Model (aTAM) to include active signaling and glue activation in response to signals, to coordinate the hierarchical assembly of Robinson patterns of arbitrary size from a small set of tiles according to the tile substitution algorithm that generates them. Enabling coordinated hierarchical assembly in the aTAM makes possible the efficient encoding of the recursive process of tile substitution.

  3. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  4. Interacting holographic dark energy with logarithmic correction

    International Nuclear Information System (INIS)

    Jamil, Mubasher; Farooq, M. Umar

    2010-01-01

    The holographic dark energy (HDE) is considered to be the most promising candidate of dark energy. Its definition is motivated by the entropy-area relation, which depends on the theory of gravity under consideration. Recently a new definition of HDE has been proposed with the help of quantum corrections to the entropy-area relation in the setup of loop quantum cosmology. Employing this new definition, we investigate the model of interacting dark energy and derive its effective equation of state. Finally, we establish a correspondence between the generalized Chaplygin gas and entropy-corrected holographic dark energy

  5. TYPE Ia SUPERNOVA LIGHT CURVE INFERENCE: HIERARCHICAL MODELS IN THE OPTICAL AND NEAR-INFRARED

    International Nuclear Information System (INIS)

    Mandel, Kaisey S.; Narayan, Gautham; Kirshner, Robert P.

    2011-01-01

    We have constructed a comprehensive statistical model for Type Ia supernova (SN Ia) light curves spanning optical through near-infrared (NIR) data. A hierarchical framework coherently models multiple random and uncertain effects, including intrinsic supernova (SN) light curve covariances, dust extinction and reddening, and distances. An improved BayeSN Markov chain Monte Carlo code computes probabilistic inferences for the hierarchical model by sampling the global probability density of parameters describing individual SNe and the population. We have applied this hierarchical model to optical and NIR data of 127 SNe Ia from PAIRITEL, CfA3, the Carnegie Supernova Project, and the literature. We find an apparent population correlation between the host galaxy extinction A_V and the ratio of total-to-selective dust absorption R_V. For SNe with low dust extinction we find R_V ≈ 2.5-2.9, while at high extinctions, A_V ≳ 1, low values of R_V < 2 are favored. The NIR luminosities are excellent standard candles and are less sensitive to dust extinction. They exhibit low correlation with optical peak luminosities, and thus provide independent information on distances. The combination of NIR and optical data constrains the dust extinction and improves the predictive precision of individual SN Ia distances by about 60%. Using cross-validation, we estimate an rms distance modulus prediction error of 0.11 mag for SNe with optical and NIR data versus 0.15 mag for SNe with optical data alone. Continued study of SNe Ia in the NIR is important for improving their utility as precise and accurate cosmological distance indicators.

  6. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Full Text Available Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
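The two binding operators compared in this record can be sketched directly in NumPy. The vector dimension and the decoding checks are illustrative, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2048
unit = lambda v: v / np.linalg.norm(v)
a, b = unit(rng.standard_normal(d)), unit(rng.standard_normal(d))

# Circular convolution binding (holographic reduced representation),
# computed via the FFT; decoding by circular correlation with the cue
# yields a noisy copy of the bound item.
bound = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), d)
decoded = np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(bound), d)
sim_conv = decoded @ b            # similarity to the true item, well above chance

# Random permutation binding: order is encoded by permuting one operand;
# decoding applies the inverse permutation to the memory trace.
perm = rng.permutation(d)
inv = np.argsort(perm)
trace = a + b[perm]               # stores "a, then b" in a single trace
sim_perm = trace[inv] @ b         # recovers b plus permuted-a noise
chance = abs(a @ b)               # baseline similarity of unrelated vectors
```

Both operators return a vector of the same dimensionality as the inputs, which is what lets either be stacked into a single distributed memory trace; permutation is the cheaper of the two (an index shuffle versus an FFT round trip).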

  7. Process-based modelling of tree and stand growth: towards a hierarchical treatment of multiscale processes

    International Nuclear Information System (INIS)

    Makela, A.

    2003-01-01

    A generally accepted method has not emerged for managing the different temporal and spatial scales in a forest ecosystem. This paper reviews a hierarchical-modular modelling tradition, with the main focus on individual tree growth throughout the rotation. At this scale, model performance requires (i) realistic long-term dynamic properties, (ii) realistic responses of growth and mortality of competing individuals, and (iii) realistic responses to ecophysiological inputs. Model development and validation are illustrated through allocation patterns, height growth, and size-related feedbacks. Empirical work to test the approach is reviewed. In this approach, finer scale effects are embedded in parameters calculated using more detailed, interacting modules. This is exemplified by (i) the within-year effect of weather on annual photosynthesis, (ii) the effects of fast soil processes on carbon allocation and photosynthesis, and (iii) the utilization of detailed stem structure to predict wood quality. Prevailing management paradigms are reflected in growth modelling. A shift of emphasis has occurred from productivity in homogeneous canopies towards, e.g., wood quality versus total yield, spatially more explicit models, and growth decline in old-growth forests. The new problems emphasize the hierarchy of the system and interscale interactions, suggesting that the hierarchical-modular approach could prove constructive. (author)

  8. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  9. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

    Full Text Available Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases as well as global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long dependency feature extraction model based on the bidirectional long short-term memory (BiLSTM) neural network model is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish training, validation, and test datasets, containing 72,532, 18,133, and 2,679 mechanical patent documents, respectively, and then evaluate the performance of HFEM. Finally, we compare the results of the proposed HFEM with those of three single neural network models, namely CNN, long short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.
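The word-level n-grams that the convolutional extractor slides over can be produced with a small helper; the example sentence is, of course, a hypothetical patent-style phrase:

```python
def word_ngrams(text, n):
    """Return word-level n-grams: the local phrase windows a CNN filter of width n sees."""
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

grams = word_ngrams("A gear pump transfers fluid by rotating meshed gears", 3)
```

In the HFEM pipeline these local windows feed the CNN stage, while the BiLSTM stage consumes the resulting higher-level sequence to capture long-range dependencies.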

  10. A hierarchical Bayesian model to quantify uncertainty of stream water temperature forecasts.

    Directory of Open Access Journals (Sweden)

    Guillaume Bal

    Full Text Available Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding the ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts, as it does not disentangle seasonality and long-term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time-varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) a simulated example and (ii) application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach better fits the data and, contrary to the linear regression, does not exhibit forecasting bias in long-term trends. The new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around forecasts. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
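
The core seasonal-signal idea can be sketched with an ordinary least-squares toy (not the authors' hierarchical Bayesian model): regressing water temperature on a linear trend plus an annual sine/cosine pair disentangles seasonality from the long-term trend. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 6 * 365) / 365.0                # six years of daily data, in years
true_trend = 0.03                                # synthetic warming, deg C per year
water = 12 + true_trend * t + 5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# Design matrix: intercept, linear trend, annual sine/cosine pair
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, water, rcond=None)
print(round(coef[1], 3))                         # recovered long-term trend, deg C/yr
```

Because the sinusoidal terms absorb the seasonal cycle, the trend coefficient is estimated without the seasonal bias a bare linear fit would suffer.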

  11. A conceptual modeling framework for discrete event simulation using hierarchical control structures

    Science.gov (United States)

    Furian, N.; O’Sullivan, M.; Walker, C.; Vössner, S.; Neubacher, D.

    2015-01-01

    Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework, which pays more attention to the identification of a model's system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step by step through the modeling process and is illustrated by a worked example. PMID:26778940

  12. Holographic Entanglement Entropy

    CERN Document Server

    Rangamani, Mukund

    2016-01-01

    We review the developments in the past decade on holographic entanglement entropy, a subject that has garnered much attention owing to its potential to teach us about the emergence of spacetime in holography. We provide an introduction to the concept of entanglement entropy in quantum field theories, review the holographic proposals for computing the same, providing some justification for where these proposals arise from in the first two parts. The final part addresses recent developments linking entanglement and geometry. We provide an overview of the various arguments and technical developments that teach us how to use field theory entanglement to detect geometry. Our discussion is by design eclectic; we have chosen to focus on developments that appear to us most promising for further insights into the holographic map. This is a preliminary draft of a few chapters of a book which will appear sometime in the near future, to be published by Springer. The book in addition contains a discussion of application o...

  13. The holographic universe

    CERN Document Server

    Talbot, Michael

    1991-01-01

    'There is evidence to suggest that our world and everything in it - from snowflakes to maple trees to falling stars and spinning electrons - are only ghostly images, projections from a level of reality literally beyond both space and time.' This is the astonishing idea behind the holographic theory of the universe, pioneered by two eminent thinkers: physicist David Bohm, a former protege of Albert Einstein, and quantum physicist Karl Pribram. The holographic theory of the universe encompasses consciousness and reality as we know them, but can also explain such hitherto unexplained phenomena as telepathy, out-of-body experiences and even miraculous healing. In this remarkable book, Michael Talbot reveals the extraordinary depth and power of the holographic theory of the universe, illustrating how it makes sense of the entire range of experiences within our universe - and in other universes beyond our own.

  14. Photopolymer holographic recording material

    Science.gov (United States)

    Lawrence, J. R.; O'Neill, F. T.; Sheridan, J. T.

    Photopolymers are promising materials for use in holography. They have many advantages, such as ease of preparation, and are capable of efficiencies of up to 100%. A disadvantage of these materials is their inability to record high-spatial-frequency gratings compared to other materials such as dichromated gelatin and silver halide photographic emulsion. Until recently, the drop-off in the material response at high spatial frequencies was not predicted by any of the available diffusion-based models. It has recently been proposed that this effect is due to polymer chains growing away from their initiation point and causing a smeared profile to be recorded. This is termed a non-local material response. Simple analytic expressions have been derived using this model, and fits to experimental data have allowed values to be estimated for material parameters such as the diffusion coefficient of monomer, the ratio of polymerisation rate to diffusion rate, and the distance that the polymer chains spread during holographic recording. The model predicts that the spatial frequency response might be improved by decreasing the mean polymer chain lengths and/or by increasing the mobility of the molecules used in the material. The experimental work carried out to investigate these predictions is reported here. This work involved (a) changing the molecular weights of chemical components within the material (dyes and binders) and (b) adding a chemical retarder in order to shorten the polymer chains, thereby decreasing the extent of the non-local effect. Although no significant improvement in spatial frequency response was observed, the model appears to offer an improved understanding of the operation of the material.

  15. Hierarchical modeling of plasma and transport phenomena in a dielectric barrier discharge reactor

    Science.gov (United States)

    Bali, N.; Aggelopoulos, C. A.; Skouras, E. D.; Tsakiroglou, C. D.; Burganos, V. N.

    2017-12-01

    A novel dual-time hierarchical approach is developed to link the plasma process to macroscopic transport phenomena in the interior of a dielectric barrier discharge (DBD) reactor that has been used for soil remediation (Aggelopoulos et al 2016 Chem. Eng. J. 301 353-61). The generation of active species by plasma reactions is simulated at the microsecond (µs) timescale, whereas convection and thermal conduction are simulated at the macroscopic (minutes) timescale. This hierarchical model is implemented in order to investigate the influence of the plasma DBD process on the transport and reaction mechanisms during remediation of polluted soil. In the microscopic model, the variables of interest include the plasma-induced reactive species concentrations, while in the macroscopic model they include the temperature distribution and the velocity field, both inside the discharge gap and within the polluted soil. For the latter model, the Navier-Stokes and Darcy-Brinkman equations for the transport phenomena in the porous domain are solved numerically using FEM software. Effective medium theory is employed to provide estimates of the effective, time-evolving, three-phase transport properties in the soil sample. Model predictions of the temporal evolution of the plasma remediation process are presented and compared with corresponding experimental data.

  16. A model of shape memory materials with hierarchical twinning: statics and dynamics

    International Nuclear Information System (INIS)

    Saxena, A.; Bishop, A.R.; Wu, Y.; Lookman, T.

    1995-01-01

    We consider a model of shape memory materials in which hierarchical twinning near the habit plane (austenite-martensite interface) is a new and crucial ingredient. The model includes (1) a triple-well potential (φ⁶ model) in the local shear strain, (2) strain gradient terms up to second order in strain and fourth order in gradient, and (3) all symmetry-allowed compositional-fluctuation-induced strain gradient terms. The last term favors hierarchy, which enables communication between macroscopic (cm) and microscopic (Å) regions essential for shape memory. Hierarchy also stabilizes tweed formation (criss-cross patterns of twins). External stress or pressure modulates (''patterns'') the spacing of domain walls. Therefore the ''pattern'' is encoded in the modulated hierarchical variation of the depth and width of the twins. This hierarchy of length scales provides a related hierarchy of time scales and thus the possibility of non-exponential decay. The four processes of the complete shape memory cycle (write, record, erase and recall) are explained within this model. Preliminary results based on 2D molecular dynamics are shown for tweed and hierarchy formation. (orig.)
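
The triple-well ingredient can be illustrated with a minimal φ⁶-type potential in the shear strain; the functional form and coefficients below are illustrative, not those fitted in the paper.

```python
import numpy as np

# Minimal triple-well (phi^6-type) free energy in the shear strain e:
# minima at e = 0 (austenite) and e = +/-1 (the two martensite twin variants).
def V(e):
    return e**2 * (e**2 - 1.0)**2

e = np.linspace(-1.5, 1.5, 3001)
v = V(e)
# Grid search for strict local minima of the potential
minima = e[1:-1][(v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])]
print(np.round(minima, 2))   # the three wells
```

The degenerate martensite wells at ±1 are what twinning selects between; gradient and fluctuation terms (not sketched here) then organize the twins hierarchically.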

  17. Relating Memory To Functional Performance In Normal Aging to Dementia Using Hierarchical Bayesian Cognitive Processing Models

    Science.gov (United States)

    Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.

    2012-01-01

    Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
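
The embedded SDT component reduces, in its equal-variance textbook form, to two latent parameters computed from hit and false-alarm rates. A minimal plug-in sketch follows (the rates are made up, and the paper estimates these parameters hierarchically rather than by this closed-form shortcut).

```python
from statistics import NormalDist

def sdt_params(hit_rate, fa_rate):
    """Equal-variance signal detection theory: discriminability d'
    (memory process) and response bias c (executive function)
    from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    bias = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, bias

d, c = sdt_params(0.85, 0.20)   # hypothetical recognition performance
print(round(d, 2), round(c, 2))
```

Hierarchical estimation pools such parameters across patients and severity groups, which is what lets the latent parameters separate the FAST stages even when raw accuracy does not.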

  18. Hierarchical Bayesian Markov switching models with application to predicting spawning success of shovelnose sturgeon

    Science.gov (United States)

    Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.

    2009-01-01

    The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon, which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.
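
The eigenvalue predictor can be illustrated on a toy two-state transition matrix (the probabilities below are hypothetical): the second eigenvalue of the matrix summarizes behavioural-state persistence in a single number suitable as a regression covariate.

```python
import numpy as np

# Hypothetical two-state transition probability matrix:
# state 0 = "quiescent", state 1 = "active" movement behaviour.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

eigvals = np.sort(np.linalg.eigvals(P).real)[::-1]
# The leading eigenvalue of a stochastic matrix is always 1; the second,
# lambda_2 = p00 + p11 - 1, measures persistence of the behavioural states.
print(round(eigvals[1], 2))
```

A fish switching states rarely (large diagonal entries) has lambda_2 near 1; rapid switching drives it toward 0, so the eigenvalue compresses the longitudinal telemetry signal into one predictor.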

  19. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which are missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
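
The hierarchy amounts to an additive covariance structure: replicates of the same gene share a gene-level kernel, and each replicate adds its own kernel on the diagonal blocks. A minimal sketch with an RBF kernel (the hyperparameters are illustrative, not fitted values):

```python
import numpy as np

def rbf(t1, t2, variance, lengthscale):
    """Squared-exponential (RBF) covariance between two sets of times."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Two replicates of one gene observed at the same 5 time points:
# cov between replicates i != j is K_gene; within a replicate it is
# K_gene + K_replicate.
t = np.linspace(0, 1, 5)
Kg = rbf(t, t, 1.0, 0.3)          # shared gene-level variation
Kr = rbf(t, t, 0.2, 0.1)          # replicate-specific variation
K = np.block([[Kg + Kr, Kg],
              [Kg, Kg + Kr]])     # joint covariance of the two replicates
# A valid covariance matrix must be symmetric positive semi-definite:
print(np.all(np.linalg.eigvalsh(K) > -1e-9))
```

Because the replicates need not share time points, the same construction handles irregular sampling: just evaluate `rbf` on each replicate's own times.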

  20. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    Science.gov (United States)

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.
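
The case-deletion idea is the hierarchical analogue of classical influence diagnostics. A minimal OLS sketch using Cook's distance stands in for the Bayesian trial-by-arm diagnostics developed in the paper (the data are simulated, with one planted outlier):

```python
import numpy as np

def cooks_distance(X, y):
    """Case-deletion influence for ordinary least squares: how much the
    fit changes when each observation is dropped in turn."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
    h = np.diag(H)                                # leverages
    resid = y - H @ y
    s2 = resid @ resid / (n - p)
    return resid**2 * h / (s2 * p * (1 - h)**2)

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(20), rng.normal(size=20)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(0, 0.1, 20)
y[0] += 3.0                                       # plant one outlier
D = cooks_distance(X, y)
print(int(D.argmax()))                            # most influential observation
```

The paper's point that low-leverage outliers need not be influential is visible in this formula: the distance scales with both the squared residual and the leverage term h/(1-h)².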

  1. Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

    Science.gov (United States)

    Wilson, T.L.; Odei, J.B.; Hooten, M.B.; Edwards, T.C.

    2010-01-01

    Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation is rarely included, thereby limiting statistical inference from resulting distribution maps. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.

  2. Phases of kinky holographic nuclear matter

    Energy Technology Data Exchange (ETDEWEB)

    Elliot-Ripley, Matthew; Sutcliffe, Paul; Zamaklar, Marija [Department of Mathematical Sciences, Durham University,South Road, Durham (United Kingdom)

    2016-10-17

    Holographic QCD at finite baryon number density and zero temperature is studied within the five-dimensional Sakai-Sugimoto model. We introduce a new approximation that models a smeared crystal of solitonic baryons by assuming spatial homogeneity to obtain an effective kink theory in the holographic direction. The kink theory correctly reproduces a first order phase transition to lightly bound nuclear matter. As the density is further increased the kink splits into a pair of half-kink constituents, providing a concrete realization of the previously suggested dyonic salt phase, where the bulk soliton splits into constituents at high density. The kink model also captures the phenomenon of baryonic popcorn, in which a first order phase transition generates an additional soliton layer in the holographic direction. We find that this popcorn transition takes place at a density below the dyonic salt phase, making the latter energetically unfavourable. However, the kink model predicts only one pop, rather than the sequence of pops suggested by previous approximations. In the kink model the two layers produced by the single pop form the surface of a soliton bag that increases in size as the baryon chemical potential is increased. The interior of the bag is filled with abelian electric potential and the instanton charge density is localized on the surface of the bag. The soliton bag may provide a holographic description of a quarkyonic phase.

  3. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    Science.gov (United States)

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
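
The compound Poisson frailty mechanism is easy to simulate: each participant's frailty is the sum of a Poisson number of gamma-distributed exposure contributions, so a point mass at zero (never exposed, no risk of seroconversion) arises naturally. The rate and shape parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def compound_poisson_frailty(n, rate=0.8, shape=2.0, scale=0.5):
    """Frailty z_i = sum of N_i gamma variables, with N_i ~ Poisson(rate).
    Participants with N_i = 0 have exactly zero frailty."""
    counts = rng.poisson(rate, size=n)
    return np.array([rng.gamma(shape, scale, k).sum() for k in counts])

z = compound_poisson_frailty(10_000)
frac_unexposed = np.mean(z == 0)
print(round(frac_unexposed, 2))   # should be near exp(-0.8) ~ 0.45
```

That structural zero is what distinguishes this frailty from the usual gamma frailty and lets the model separate "at risk but protected" from "never exposed".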

  4. Hierarchical competition models with the Allee effect II: the case of immigration.

    Science.gov (United States)

    Assas, Laila; Dennis, Brian; Elaydi, Saber; Kwessi, Eddy; Livadiotis, George

    2015-01-01

    This is part II of an earlier paper that dealt with hierarchical models with the Allee effect but with no immigration. In this paper, we greatly simplify the proofs in part I and provide a proof of the global dynamics of the non-hyperbolic cases that were previously conjectured. Then, we show how immigration to one of the species or to both would, drastically, change the dynamics of the system. It is shown that if the level of immigration to one or to both species is above a specified level, then there will be no extinction region where both species go to extinction.
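
The qualitative effect of immigration can be seen in a one-species toy map with an Allee effect (a deliberate simplification of the paper's hierarchical two-species system; the map form and parameters are illustrative):

```python
def allee_map(x, r=3.0, h=0.0):
    """Toy Allee map x -> r*x^2/(1 + x^2) + h: the quadratic numerator
    gives low per-capita growth at low density (Allee effect), and h is
    a constant immigration term."""
    return r * x**2 / (1 + x**2) + h

def iterate(x0, h, steps=200):
    x = x0
    for _ in range(steps):
        x = allee_map(x, h=h)
    return x

# Without immigration a small population collapses toward extinction;
# with sufficient immigration the extinction region disappears.
print(round(iterate(0.1, h=0.0), 3), round(iterate(0.1, h=0.5), 3))
```

This mirrors the abstract's claim: above a threshold level of immigration, the extinction basin is destroyed and the population is pushed to a positive equilibrium.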

  5. High-accuracy critical exponents for O(N) hierarchical 3D sigma models

    International Nuclear Information System (INIS)

    Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.

    2006-01-01

    The critical exponent γ and its subleading exponent Δ in the 3D O(N) Dyson's hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ-vector.φ-vector-1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with Polchinski equation and the equivalent Litim equation, in the local potential approximation, with at least 4 significant digits
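
The flavor of extracting an exponent and its subleading correction from numerical data can be sketched with a toy fit of the form A*N^(-1)*(1 + B/N) to synthetic 1/N-expansion data (the coefficients are invented; this is not the paper's RG calculation):

```python
import numpy as np

# Synthetic "measurements" following c(N) = A/N + B/N^2 exactly,
# mimicking the first two coefficients of a 1/N expansion.
A_true, B_true = 2.0, -0.7
N = np.arange(4, 21, dtype=float)
c = A_true / N + B_true / N**2

# Linear least squares in the basis (1/N, 1/N^2) recovers A and B.
X = np.column_stack([1.0 / N, 1.0 / N**2])
(A_hat, B_hat), *_ = np.linalg.lstsq(X, c, rcond=None)
print(round(A_hat, 6), round(B_hat, 6))
```

With exact data the recovery is essentially machine-precision; with Monte Carlo noise one would weight the fit by the per-N uncertainties.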

  6. A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs

    DEFF Research Database (Denmark)

    Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard

    2016-01-01

    Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs...... for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on...
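
The flavor of the decision model can be conveyed by a toy finite-horizon dynamic program over weight classes with feed/market actions. All states, prices, and transition probabilities below are invented for illustration and are far simpler than the paper's three-level hierarchical formulation.

```python
import numpy as np

# Toy MDP: state = pig weight class (0 = light, 1 = medium, 2 = heavy).
P_feed = np.array([[0.3, 0.6, 0.1],   # weight transitions if we keep feeding
                   [0.0, 0.4, 0.6],
                   [0.0, 0.0, 1.0]])
sell_price = np.array([60.0, 90.0, 110.0])   # revenue if marketed now
feed_cost = 4.0                              # cost of one more feeding period
horizon = 8

V = sell_price.copy()                 # terminal value: forced marketing
policy = []
for _ in range(horizon):
    q_feed = -feed_cost + P_feed @ V  # expected value of feeding one more period
    q_sell = sell_price               # value of marketing now
    policy.append((q_sell >= q_feed).astype(int))   # 1 = market, 0 = feed
    V = np.maximum(q_feed, q_sell)

print(V)            # optimal expected value per current weight class
print(policy[-1])   # first-period decision per weight class
```

Backward induction makes the trade-off explicit: feeding is worthwhile only while the expected weight-class upgrade outweighs the feed cost, so heavy pigs are marketed immediately.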

  7. Computer generated holographic microtags

    International Nuclear Information System (INIS)

    Sweatt, W.C.

    1998-01-01

    A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs

  8. Design and evaluation of daylighting applications of holographic glazings

    Energy Technology Data Exchange (ETDEWEB)

    Papamichael, K.; Ehrlich, C.; Ward, G.

    1996-12-01

    According to the contractual agreement, BTP would develop a computer model of the POC holographic structures and then simulate the performance of alternative designs using the RADIANCE lighting and rendering computer program [Ward 1990]. The RADIANCE model would then be used to evaluate the daylight performance of alternative designs of holographic glazings in a prototypical office space. The simulation process would be validated against actual photometric measurements of holographic glazing samples developed by POC. The results would be used to evaluate the potential for increased electric lighting savings through increased daylight illuminance levels at distances more than 15 ft--20 ft (4.6 m--6.1 m) from the window wall.

  9. Large-scale model of flow in heterogeneous and hierarchical porous media

    Science.gov (United States)

    Chabanon, Morgan; Valdés-Parada, Francisco J.; Ochoa-Tapia, J. Alberto; Goyeau, Benoît

    2017-11-01

    Heterogeneous porous structures are very often encountered in natural environments and in bioremediation processes, among many others. Reliable models for momentum transport are crucial whenever mass transport or convective heat transfer occurs in these systems. In this work, we derive a large-scale average model for incompressible single-phase flow in heterogeneous and hierarchical soil porous media composed of two distinct porous regions embedding a solid impermeable structure. The model, based on the local mechanical equilibrium assumption between the porous regions, results in a unique momentum transport equation in which the global effective permeability naturally depends on the permeabilities at the intermediate mesoscopic scales and therefore includes the complex hierarchical structure of the soil. The associated closure problem is numerically solved for various configurations and properties of the heterogeneous medium. The results clearly show that the effective permeability increases with the volume fraction of the most permeable porous region. It is also shown that the effective permeability is sensitive to the dimensionality and spatial arrangement of the porous regions and, in particular, depends on the contact between the impermeable solid and the two porous regions.
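
A first intuition for how the effective permeability depends on the volume fractions of the two porous regions comes from the classical Wiener bounds, a much cruder estimate than the closure problem solved in the paper (the permeability values below are arbitrary):

```python
def wiener_bounds(k1, k2, phi1):
    """Classical Wiener bounds on the effective permeability of a
    two-region medium: the arithmetic mean (regions layered parallel
    to the flow) and the harmonic mean (regions in series)."""
    phi2 = 1.0 - phi1
    upper = phi1 * k1 + phi2 * k2           # arithmetic mean
    lower = 1.0 / (phi1 / k1 + phi2 / k2)   # harmonic mean
    return lower, upper

# k1: permeable region, k2: tight region, phi1: volume fraction of k1
lo, up = wiener_bounds(k1=1e-10, k2=1e-14, phi1=0.4)
print(lo, up)
```

Both bounds increase with the volume fraction of the permeable region, consistent with the trend reported for the full closure-problem solution; the actual spatial arrangement decides where between the bounds the effective value falls.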

  10. Evolutionary-Hierarchical Bases of the Formation of Cluster Model of Innovation Economic Development

    Directory of Open Access Journals (Sweden)

    Yuliya Vladimirovna Dubrovskaya

    2016-10-01

    Full Text Available The functioning of a modern economic system is based on the interaction of objects at different hierarchical levels. Thus, the study of innovation processes that takes into account the mutual influence of the activities of these economic actors becomes important. The paper dwells on the evolutionary bases for the formation of models of innovation development using micro- and macroeconomic analysis. Most concepts recognize that, despite a large number of diverse models, the coordination of relations between economic agents is of crucial importance for successful innovation development. Based on the results of the evolutionary-hierarchical analysis, the authors reveal key phases of the development of forms of cooperation between business, science and government in the domestic economy. This has become the starting point for conceptualizing the characteristics of interaction in cluster models of innovation development of the economy. Considerable expectations for improvement of the national innovation system are connected with the development of cluster and network structures. The main objective of government authorities is the formation of mechanisms and institutions that will foster cooperation between members of the clusters. The article explains that clusters cannot become factors in the growth of the national economy without being an effective tool for interaction between the actors of regional innovation systems.

  11. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar; Halldorsson, Benedikt; Hrafnkelsson, Birgir; Jonsson, Sigurjon

    2018-01-01

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.
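
The decomposition into event, station, and residual terms can be sketched with a simulated moment-based toy; the paper instead fits the hierarchical model by MCMC, and all standard deviations below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_events, n_stations = 40, 10
event = rng.normal(0.0, 0.30, n_events)        # inter-event terms
station = rng.normal(0.0, 0.15, n_stations)    # inter-station terms
noise = rng.normal(0.0, 0.10, (n_events, n_stations))  # event-station residuals
log_pga = 1.0 + event[:, None] + station[None, :] + noise

# Moment-based estimates of the random effects (a cheap stand-in for MCMC):
event_hat = log_pga.mean(axis=1) - log_pga.mean()
station_hat = log_pga.mean(axis=0) - log_pga.mean()
print(round(event_hat.std(), 2), round(station_hat.std(), 2))
```

A nonzero spread of `station_hat` across the array is the simulated analogue of the paper's finding that individual station effects differ even on a nominally uniform lava-rock site.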

  12. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar

    2018-04-17

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.

  13. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Full Text Available Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional (e.g., ∼6,000 in a leading model) feature space, and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD, which includes PDs of object geometry and spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100 of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with a near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw of many other models, namely categorizing objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures and statistical operations on these PDs may underlie the human ability to rapidly categorize scenes.

  14. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    Science.gov (United States)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.

  15. Merging information from multi-model flood projections in a hierarchical Bayesian framework

    Science.gov (United States)

    Le Vine, Nataliya

    2016-04-01

    Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions with which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
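A minimal stand-in for this kind of pooling is a random-effects combination of per-model flood estimates, here via the DerSimonian-Laird moment estimator rather than the paper's fully Bayesian treatment; all numbers are hypothetical:

```python
import numpy as np

# Hypothetical 100-year flood estimates (m^3/s) and standard errors from
# five models; the values are invented, not taken from the study.
est = np.array([780.0, 990.0, 740.0, 1060.0, 900.0])
se = np.array([60.0, 90.0, 70.0, 120.0, 80.0])

# DerSimonian-Laird moment estimate of the shared between-model
# discrepancy tau^2, from the excess of the heterogeneity statistic Q
# over its degrees of freedom
w = 1.0 / se**2
mu_fixed = np.sum(w * est) / np.sum(w)
Q = np.sum(w * (est - mu_fixed)**2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(est) - 1)) / c)

# Pooled mean re-weighted to include the between-model variance, so the
# pooled uncertainty does not shrink to zero as per-model errors shrink
w_star = 1.0 / (se**2 + tau2)
mu_pooled = np.sum(w_star * est) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
```

The nonzero `tau2` plays the role of the "shared multi-model discrepancy" in the abstract: reducing it, not the individual model errors, is what drives down `se_pooled`.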

  16. Hierarchical neural network model of the visual system determining figure/ground relation

    Science.gov (United States)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from the background region extending behind the objects. Previously the author proposed a neural network model of figure/ground separation constructed on the standpoint that local geometric features such as curvatures and outer angles at corners are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, such a processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines the figure/ground relation within a short period (Zhou et al., 2000). In order to speed up the determination of figure/ground, this study incorporates a hierarchical architecture into the previous model. The effect of this hierarchization on computation time was confirmed by simulation: as the number of layers increased, the required computation time decreased. However, the speed-up effect saturated as the number of layers grew beyond a certain point. This study attempted to explain this saturation effect using the notion of average distance between vertices from the field of complex networks, and succeeded in reproducing the saturation effect by computer simulation.

  17. Toward combining thematic information with hierarchical multiscale segmentations using tree Markov random field model

    Science.gov (United States)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi

    2017-09-01

    It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial-resolution remote sensing (HR) images. However, it remains a great challenge to automatically select the proper segmentation scale(s) from the image information alone. In this study, we propose a novel way of information fusion at the object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined to be as consistent as possible with the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; (2) exploiting the hierarchical context naturally embedded in the multiscale segmentations. HR images of both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework on these two aspects.

  18. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.
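The "smoothing by roughening" idea can be sketched as early-stopped EM-type updates of a gridded prior, starting from a flat (maximally smooth) distribution; the normal likelihood, grid, mixture of null and signal genes, and iteration count below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gene effects: 80% nulls at 0, 20% signals at 2 (invented),
# observed with unit-variance noise, y_i ~ Normal(theta_i, 1)
theta = np.where(rng.random(500) < 0.8, 0.0, 2.0)
y = theta + rng.normal(size=500)

grid = np.linspace(-4, 6, 201)            # support for the nonparametric prior
g = np.full(grid.size, 1.0 / grid.size)   # start from a flat ("smooth") prior

# Roughening: each sweep replaces the prior by the average posterior over
# genes; stopping after a few sweeps keeps the estimated prior smooth
lik = np.exp(-0.5 * (y[:, None] - grid[None, :])**2)   # f(y_i | theta_j)
for _ in range(20):
    post = lik * g                        # unnormalized posterior per gene
    post /= post.sum(axis=1, keepdims=True)
    g = post.mean(axis=0)                 # updated prior estimate

# Empirical Bayes posterior means shrink raw estimates toward prior mass,
# which is what drives the ranking/selection behavior in the abstract
post = lik * g
post /= post.sum(axis=1, keepdims=True)
theta_hat = post @ grid
```

The shrunken `theta_hat` typically estimate the true effects better than the raw `y`, which is the rationale for ranking genes by empirical Bayes estimates rather than raw statistics.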

  19. Hierarchical modeling of genome-wide Short Tandem Repeat (STR) markers infers native American prehistory.

    Science.gov (United States)

    Lewis, Cecil M

    2010-02-01

    This study examines a genome-wide dataset of 678 Short Tandem Repeat loci characterized in 444 individuals representing 29 Native American populations as well as the Tundra Netsi and Yakut populations from Siberia. Using these data, the study tests four current hypotheses regarding the hierarchical distribution of neutral genetic variation in native South American populations: (1) the western region of South America harbors more variation than the eastern region of South America, (2) Central American and western South American populations cluster exclusively, (3) populations speaking the Chibchan-Paezan and Equatorial-Tucanoan language stock emerge as a group within an otherwise South American clade, (4) Chibchan-Paezan populations in Central America emerge together at the tips of the Chibchan-Paezan cluster. This study finds that hierarchical models with the best fit place Central American populations, and populations speaking the Chibchan-Paezan language stock, at a basal position or separated from the South American group, which is more consistent with a serial founder effect into South America than that previously described. Western (Andean) South America is found to harbor similar levels of variation as eastern (Equatorial-Tucanoan and Ge-Pano-Carib) South America, which is inconsistent with an initial west coast migration into South America. Moreover, in all relevant models, the estimates of genetic diversity within geographic regions suggest a major bottleneck or founder effect occurring within the North American subcontinent, before the peopling of Central and South America. 2009 Wiley-Liss, Inc.

  20. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies, due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
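As a toy version of the offline-training idea, one can fit a Gaussian to forward-model simulations of each state of nature and classify a new measurement by its posterior; the two states, two-feature measurements, and all parameters here are invented, and a single Gaussian per state stands in for the full mixture described above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forward-model simulations for two states of nature of a
# cased well ("good" vs "poor" cement bond); features/means are invented.
sim = {"good": rng.normal([1.0, 0.5], 0.2, (1000, 2)),
       "poor": rng.normal([0.4, 0.9], 0.2, (1000, 2))}

# Fit a Gaussian to each state's simulated measurement cloud
params = {s: (x.mean(axis=0), np.cov(x.T)) for s, x in sim.items()}

def log_gauss(x, mean, cov):
    # Log-density of x under N(mean, cov), up to a shared constant
    d = x - mean
    return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

def classify(x):
    # Maximum-posterior state for one measurement vector, equal priors
    logp = {s: log_gauss(x, *params[s]) for s in params}
    return max(logp, key=logp.get)
```

A measurement near a state's training cloud, e.g. `classify(np.array([0.95, 0.55]))`, is assigned to that state; the learned parameters let this run in real time, as the abstract describes.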

  1. Holographic optical security systems

    Science.gov (United States)

    Fagan, William F.

    1990-06-01

    One of the most successful applications of Holography, in recent years, has been its use as an optical security technique. Indeed the general public's awareness of holograms has been greatly enhanced by the incorporation of holographic elements into the VISA and MASTERCHARGE credit cards. Optical techniques related to Holography are also being used to protect the currencies of several countries against the counterfeiter. The mass production of high quality holographic images is by no means a trivial task, as a considerable degree of expertise is required together with an optical laboratory and embossing machinery. This paper will present an overview of the principal holographic and related optical techniques used for security purposes. Worldwide, over thirty companies are involved in the production of security elements utilising holographic and related optical technologies. Counterfeiting of many products is a major criminal activity with severe consequences not only for the manufacturer but for the public in general, as defective automobile parts, aircraft components, and pharmaceutical products, to cite only a few of the more prominent examples, have at one time or another been illegally copied.

  2. Holographic renormalization and supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Genolini, Pietro Benetti [Mathematical Institute, University of Oxford,Woodstock Road, Oxford OX2 6GG (United Kingdom); Cassani, Davide [LPTHE, Sorbonne Universités UPMC Paris 6 and CNRS, UMR 7589,F-75005, Paris (France); Martelli, Dario [Department of Mathematics, King’s College London,The Strand, London, WC2R 2LS (United Kingdom); Sparks, James [Mathematical Institute, University of Oxford,Woodstock Road, Oxford OX2 6GG (United Kingdom)

    2017-02-27

    Holographic renormalization is a systematic procedure for regulating divergences in observables in asymptotically locally AdS spacetimes. For dual boundary field theories which are supersymmetric it is natural to ask whether this defines a supersymmetric renormalization scheme. Recent results in localization have brought this question into sharp focus: rigid supersymmetry on a curved boundary requires specific geometric structures, and general arguments imply that BPS observables, such as the partition function, are invariant under certain deformations of these structures. One can then ask if the dual holographic observables are similarly invariant. We study this question in minimal N=2 gauged supergravity in four and five dimensions. In four dimensions we show that holographic renormalization precisely reproduces the expected field theory results. In five dimensions we find that no choice of standard holographic counterterms is compatible with supersymmetry, which leads us to introduce novel finite boundary terms. For a class of solutions satisfying certain topological assumptions we provide some independent tests of these new boundary terms, in particular showing that they reproduce the expected VEVs of conserved charges.

  3. Bayesian hierarchical models for smoothing in two-phase studies, with application to small area estimation.

    Science.gov (United States)

    Ross, Michelle; Wakefield, Jon

    2015-10-01

    Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the research triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.

  4. A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk

    Directory of Open Access Journals (Sweden)

    Lewei Duan

    2013-01-01

    Full Text Available A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending upon their frequency and size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.
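The latent-indicator idea can be illustrated, for a small number of SNPs, by enumerating all submodels instead of running the paper's MCMC; BIC weights stand in for marginal likelihoods, and the genotype data and effects are simulated:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

# Hypothetical data: 8 SNPs (genotype counts 0/1/2), of which only
# SNPs 0 and 3 truly affect the quantitative outcome
n, p = 300, 8
X = rng.binomial(2, 0.3, (n, p)).astype(float)
beta = np.zeros(p)
beta[[0, 3]] = [0.8, -0.6]
y = X @ beta + rng.normal(0.0, 1.0, n)

def bic(idx):
    # BIC of the OLS submodel using the SNPs in idx (plus an intercept)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
    r = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return n * np.log(r @ r / n) + Xs.shape[1] * np.log(n)

# Score every vector of latent inclusion indicators gamma in {0,1}^p
scores = {gamma: bic([j for j in range(p) if gamma[j]])
          for gamma in product([0, 1], repeat=p)}
best = min(scores.values())
w = np.array([np.exp(-0.5 * (s - best)) for s in scores.values()])
w /= w.sum()

# Posterior inclusion probability of each SNP under the BIC weights
pip = np.array([sum(wi for gamma, wi in zip(scores, w) if gamma[j])
                for j in range(p)])
```

The truly causal SNPs receive inclusion probabilities near one; exhaustive enumeration is only feasible for small p, which is why the paper samples the indicators by MCMC instead.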

  5. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  6. Covariant generalized holographic dark energy and accelerating universe

    Science.gov (United States)

    Nojiri, Shin'ichi; Odintsov, S. D.

    2017-08-01

    We propose the generalized holographic dark energy model where the infrared cutoff is identified with the combination of the FRW universe parameters: the Hubble rate, particle and future horizons, cosmological constant, the universe lifetime (if finite) and their derivatives. It is demonstrated that with the corresponding choice of the cutoff one can map such holographic dark energy to modified gravity or gravity with a general fluid. Explicitly, F(R) gravity and the general perfect fluid are worked out in detail and the corresponding infrared cutoff is found. Using this correspondence, we get realistic inflation or viable dark energy or a unified inflationary-dark energy universe in terms of covariant holographic dark energy.

  7. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    Science.gov (United States)

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear Ursus arctos abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and with more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance; (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.

  8. How does aging affect recognition-based inference? A hierarchical Bayesian modeling approach.

    Science.gov (United States)

    Horn, Sebastian S; Pachur, Thorsten; Mata, Rui

    2015-01-01

    The recognition heuristic (RH) is a simple strategy for probabilistic inference according to which recognized objects are judged to score higher on a criterion than unrecognized objects. In this article, a hierarchical Bayesian extension of the multinomial r-model is applied to measure use of the RH on the individual participant level and to re-evaluate differences between younger and older adults' strategy reliance across environments. Further, it is explored how individual r-model parameters relate to alternative measures of the use of recognition and other knowledge, such as adherence rates and indices from signal-detection theory (SDT). Both younger and older adults used the RH substantially more often in an environment with high than low recognition validity, reflecting adaptivity in strategy use across environments. In extension of previous analyses (based on adherence rates), hierarchical modeling revealed that in an environment with low recognition validity, (a) older adults had a stronger tendency than younger adults to rely on the RH and (b) variability in RH use between individuals was larger than in an environment with high recognition validity; variability did not differ between age groups. Further, the r-model parameters correlated moderately with an SDT measure expressing how well people can discriminate cases where the RH leads to a correct vs. incorrect inference; this suggests that the r-model and the SDT measures may offer complementary insights into the use of recognition in decision making. In conclusion, younger and older adults are largely adaptive in their application of the RH, but cognitive aging may be associated with an increased tendency to rely on this strategy. Copyright © 2014 Elsevier B.V. All rights reserved.
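A moment-matched beta-binomial gives a lightweight flavor of the hierarchical shrinkage of individual RH-use rates (the r-model itself is a multinomial processing tree fitted by MCMC, not this simplification); the trial counts and true distribution of RH use are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: 40 participants, 30 paired-comparison trials each;
# k counts the RH-consistent choices per participant
n_trials = 30
p_true = rng.beta(6.0, 3.0, 40)           # invented individual RH-use rates
k = rng.binomial(n_trials, p_true)

# Moment-matched beta prior: subtract binomial sampling noise from the
# observed spread to estimate true between-person variability
phat = k / n_trials
m, v = phat.mean(), phat.var(ddof=1)
v_between = max(v - m * (1.0 - m) / n_trials, 1e-4)
c = m * (1.0 - m) / v_between - 1.0       # prior "concentration" alpha+beta
a, b = m * c, (1.0 - m) * c

# Each individual estimate is pulled toward the group mean, more strongly
# when between-person variability is small -- the essence of hierarchical
# modeling of individual strategy use
p_shrunk = (k + a) / (n_trials + a + b)
```

Separate group-level fits (younger vs. older; high vs. low recognition validity) would then let the group means and spreads be compared directly, as in the abstract.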

  9. Hierarchical Colored Petri Nets for Modeling and Analysis of Transit Signal Priority Control Systems

    Directory of Open Access Journals (Sweden)

    Yisheng An

    2018-01-01

    Full Text Available In this paper, we consider the problem of developing a model for traffic signal control with transit priority using Hierarchical Colored Petri nets (HCPN. Petri nets (PN are useful for state analysis of discrete event systems due to their powerful modeling capability and mathematical formalism. This paper focuses on their use to formalize the transit signal priority (TSP control model. In a four-phase traffic signal control model, the transit detection and two kinds of transit priority strategies are integrated to obtain the HCPN-based TSP control models. One of the advantages of using these models is the clear presentation of traffic light behaviors in terms of conditions and events that cause the detection of a priority request by a transit vehicle. Another advantage of the resulting models is that the correctness and reliability of the proposed strategies are easily analyzed. After their full reachable states are generated, the boundedness, liveness, and fairness of the proposed models are verified. Experimental results show that the proposed control model provides transit vehicles with better effectiveness at intersections. This work helps advance the state of the art in the design of signal control models related to the intersection of roadways.
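The reachability-based verification mentioned above can be sketched for a plain (uncolored) Petri net; the three-transition toy net below, with a transit priority request token, is invented for illustration:

```python
# Toy Petri net for a signal with a transit priority request. Places:
# green, red, request. Each transition maps (pre-tokens, post-tokens);
# it fires only when its input places hold enough tokens.
transitions = {
    "to_red":   ({"green": 1}, {"red": 1}),
    "to_green": ({"red": 1}, {"green": 1}),
    "grant":    ({"red": 1, "request": 1}, {"green": 1}),  # priority strategy
}

def fire(marking, pre, post):
    """Return the successor marking, or None if the transition is disabled."""
    if not all(marking.get(p, 0) >= n for p, n in pre.items()):
        return None
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def reachable(m0):
    """Exhaustively generate the reachability set, as used for verification."""
    seen = {tuple(sorted(m0.items()))}
    states, frontier = [m0], [m0]
    while frontier:
        m = frontier.pop()
        for pre, post in transitions.values():
            m2 = fire(m, pre, post)
            if m2 is not None and tuple(sorted(m2.items())) not in seen:
                seen.add(tuple(sorted(m2.items())))
                states.append(m2)
                frontier.append(m2)
    return states

states = reachable({"green": 1, "red": 0, "request": 1})
# Boundedness: no place ever accumulates more than one token in this net
assert all(v <= 1 for m in states for v in m.values())
```

Properties like boundedness and mutual exclusion (exactly one of green/red marked) are checked by scanning the generated state set, which is what "after their full reachable states are generated" refers to; colored and hierarchical nets enrich the tokens and structure but verify the same way.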

  10. Prion Amplification and Hierarchical Bayesian Modeling Refine Detection of Prion Infection

    Science.gov (United States)

    Wyckoff, A. Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J.; Pulford, Bruce; Wild, Margaret; Antolin, Michael; Vercauteren, Kurt; Zabel, Mark

    2015-02-01

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.

  11. Prion amplification and hierarchical Bayesian modeling refine detection of prion infection.

    Science.gov (United States)

    Wyckoff, A Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J; Pulford, Bruce; Wild, Margaret; Antolin, Michael; VerCauteren, Kurt; Zabel, Mark

    2015-02-10

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.
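To see why accounting for imperfect sensitivity raises the prevalence estimate, a simple Rogan-Gladen correction (not the paper's hierarchical Bayesian model) can be applied to the abstract's point estimates; the specificity value here is an assumption, since it is not quoted above:

```python
# Rogan-Gladen correction of apparent prevalence for an imperfect test
se_ihc = 0.7656    # IHC (obex) sensitivity, from the abstract
sp = 0.99          # assumed specificity (not reported in the abstract)
apparent = 0.129   # previously reported apparent prevalence

# true = (apparent + specificity - 1) / (sensitivity + specificity - 1)
true_prev = (apparent + sp - 1.0) / (se_ihc + sp - 1.0)
```

With these numbers the corrected prevalence comes out near 0.16, above the apparent 12.9%: an insensitive test understates infection, in the same direction as the 18.90% posterior estimate reported above.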

  12. Multilevel Hierarchical Modeling of Benthic Macroinvertebrate Responses to Urbanization in Nine Metropolitan Regions across the Conterminous United States

    Science.gov (United States)

    Kashuba, Roxolana; Cha, YoonKyung; Alameddine, Ibrahim; Lee, Boknam; Cuffney, Thomas F.

    2010-01-01

    Multilevel hierarchical modeling methodology has been developed for use in ecological data analysis. The effect of urbanization on stream macroinvertebrate communities was measured across a gradient of basins in each of nine metropolitan regions across the conterminous United States. The hierarchical nature of this dataset was harnessed in a multi-tiered model structure, predicting both invertebrate response at the basin scale and differences in invertebrate response at the region scale. Ordination site scores, total taxa richness, Ephemeroptera, Plecoptera, Trichoptera (EPT) taxa richness, and richness-weighted mean tolerance of organisms at a site were used to describe invertebrate responses. Percentage of urban land cover was used as a basin-level predictor variable. Regional mean precipitation, air temperature, and antecedent agriculture were used as region-level predictor variables. Multilevel hierarchical models were fit to both levels of data simultaneously, borrowing statistical strength from the complete dataset to reduce uncertainty in regional coefficient estimates. Additionally, whereas non-hierarchical regressions were only able to show differing relations between invertebrate responses and urban intensity separately for each region, the multilevel hierarchical regressions were able to explain and quantify those differences within a single model. In this way, this modeling approach directly establishes the importance of antecedent agricultural conditions in masking the response of invertebrates to urbanization in metropolitan regions such as Milwaukee-Green Bay, Wisconsin; Denver, Colorado; and Dallas-Fort Worth, Texas. Also, these models show that regions with high precipitation, such as Atlanta, Georgia; Birmingham, Alabama; and Portland, Oregon, start out with better regional background conditions of invertebrates prior to urbanization but experience faster negative rates of change with urbanization. Ultimately, this urbanization
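Partial pooling of regional coefficients, the core of the multilevel approach described above, can be sketched with an empirical-Bayes shrinkage of per-region slopes; the region counts, slopes, and noise levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 9 regions, each with a true slope of invertebrate
# response vs. urban cover; only 8 basins per region, so per-region fits
# are noisy
n_regions, n_basins, sigma = 9, 8, 1.0
true_slope = rng.normal(-2.0, 0.5, n_regions)
x = rng.uniform(0.0, 1.0, (n_regions, n_basins))        # urban land fraction
y = true_slope[:, None] * x + rng.normal(0.0, sigma, (n_regions, n_basins))

# No-pooling OLS slope per region (through the origin, for simplicity)
b = (x * y).sum(axis=1) / (x**2).sum(axis=1)
se2 = sigma**2 / (x**2).sum(axis=1)

# Empirical-Bayes partial pooling: shrink each noisy regional slope toward
# the across-region mean, borrowing strength from the complete dataset as
# the multilevel model does
mu = b.mean()
tau2 = max(b.var(ddof=1) - se2.mean(), 0.01)   # crude between-region variance
shrink = tau2 / (tau2 + se2)                   # per-region shrinkage weight
b_pooled = mu + shrink * (b - mu)
```

A full multilevel model goes further by regressing `mu` itself on region-level predictors (precipitation, temperature, antecedent agriculture), so differences between regional slopes are explained within a single model rather than compared post hoc.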

  13. Epigenetic change detection and pattern recognition via Bayesian hierarchical hidden Markov models.

    Science.gov (United States)

    Wang, Xinlei; Zang, Miao; Xiao, Guanghua

    2013-06-15

    Epigenetics is the study of changes to the genome that can switch genes on or off and determine which proteins are transcribed without altering the DNA sequence. Recently, epigenetic changes have been linked to the development and progression of diseases such as psychiatric disorders. High-throughput epigenetic experiments have enabled researchers to measure genome-wide epigenetic profiles and yield data consisting of intensity ratios of immunoprecipitation versus reference samples. The intensity ratios can provide a view of genomic regions where protein binding occurs under one experimental condition and further allow us to detect epigenetic alterations through comparison between two different conditions. However, such experiments can be expensive, with only a few replicates available. Moreover, epigenetic data are often spatially correlated with high noise levels. In this paper, we develop a Bayesian hierarchical model, combined with hidden Markov processes with four states for modeling spatial dependence, to detect genomic sites with epigenetic changes from two-sample experiments with paired internal control. One attractive feature of the proposed method is that the four states of the hidden Markov process have well-defined biological meanings and allow us to directly call the change patterns based on the corresponding posterior probabilities. In contrast, none of the existing methods offers this advantage. In addition, the proposed method offers great power in statistical inference by spatial smoothing (via hidden Markov modeling) and information pooling (via hierarchical modeling). Both simulation studies and real data analysis in a cocaine addiction study illustrate the reliability and success of this method. Copyright © 2012 John Wiley & Sons, Ltd.
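    The core of the record above is posterior pattern calling in a four-state HMM. The sketch below is a generic scaled forward-backward pass with Gaussian emissions on a difference-of-log-ratios track; the state labels, emission parameters, and observation values are hypothetical illustrations, not the paper's actual specification (which places a Bayesian hierarchical model on top).

    ```python
    import numpy as np

    def forward_backward(obs, pi, A, mu, sigma):
        """Posterior state probabilities for a Gaussian-emission HMM
        (scaled forward-backward)."""
        K, T = len(pi), len(obs)
        # Gaussian emission likelihoods, shape (T, K)
        e = np.exp(-0.5 * ((obs[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        alpha = np.zeros((T, K)); beta = np.zeros((T, K)); c = np.zeros(T)
        alpha[0] = pi * e[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * e[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (e[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)

    # Hypothetical 4 states: 0 = no signal, 1 = unchanged binding,
    # 2 = binding gained in condition 2, 3 = binding lost in condition 2.
    pi = np.array([0.7, 0.1, 0.1, 0.1])
    A = np.full((4, 4), 0.05) + np.eye(4) * 0.8   # sticky transitions -> spatial smoothing
    mu = np.array([0.0, 0.0, 1.5, -1.5])          # mean log-ratio difference per state
    sigma = np.array([0.3, 0.8, 0.5, 0.5])
    obs = np.array([0.1, -0.1, 1.4, 1.6, 1.5, 0.0, -1.4, -1.6])
    post = forward_backward(obs, pi, A, mu, sigma)
    calls = post.argmax(axis=1)                   # direct pattern calls from posteriors
    ```

    The "sticky" diagonal of the transition matrix is what delivers the spatial smoothing the abstract mentions: isolated noisy probes are pulled toward the state of their neighbours before patterns are called.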

  14. HOMES - Holographic Optical Method for Exoplanet Spectroscopy

    Data.gov (United States)

    National Aeronautics and Space Administration — HOMES (Holographic Optical Method for Exoplanet Spectroscopy) is a space telescope that employs a double dispersion architecture, using a holographic optical element...

  15. Hierarchical modeling of bycatch rates of sea turtles in the western North Atlantic

    Science.gov (United States)

    Gardner, B.; Sullivan, P.J.; Epperly, S.; Morreale, S.J.

    2008-01-01

    Previous studies indicate that the locations of the endangered loggerhead Caretta caretta and critically endangered leatherback Dermochelys coriacea sea turtles are influenced by water temperatures, and that incidental catch rates in the pelagic longline fishery vary by region. We present a Bayesian hierarchical model to examine the effects of environmental variables, including water temperature, on the number of sea turtles captured in the US pelagic longline fishery in the western North Atlantic. The modeling structure is highly flexible, utilizes a Bayesian model selection technique, and is fully implemented in the software program WinBUGS. The number of sea turtles captured is modeled as a zero-inflated Poisson distribution and the model incorporates fixed effects to examine region-specific differences in the parameter estimates. Results indicate that water temperature, region, bottom depth, and target species are all significant predictors of the number of loggerhead sea turtles captured. For leatherback sea turtles, the model with only target species had the most posterior model weight, though a re-parameterization of the model indicates that temperature influences the zero-inflation parameter. The relationship between the number of sea turtles captured and the variables of interest all varied by region. This suggests that management decisions aimed at reducing sea turtle bycatch may be more effective if they are spatially explicit. © Inter-Research 2008.
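    The observation model at the heart of this record is the zero-inflated Poisson. A minimal sketch of its likelihood follows; the catch counts and parameter values are hypothetical, and the covariate regressions on temperature, region, and depth that the paper fits in WinBUGS are omitted.

    ```python
    from math import lgamma, exp, log

    def pois_logpmf(y, lam):
        """Poisson log-pmf without external dependencies."""
        return y * log(lam) - lam - lgamma(y + 1)

    def zip_loglik(ys, lam, psi):
        """Log-likelihood of counts under a zero-inflated Poisson: with
        probability psi a haul is a structural zero (no turtles catchable);
        otherwise the count is Poisson(lam)."""
        ll = 0.0
        for y in ys:
            if y == 0:
                ll += log(psi + (1 - psi) * exp(-lam))
            else:
                ll += log(1 - psi) + pois_logpmf(y, lam)
        return ll

    # Hypothetical haul-level counts of captured turtles (mostly zeros).
    catch = [0, 0, 0, 1, 0, 2, 0, 0, 0, 0]
    lam, psi = 1.2, 0.6
    mean_catch = (1 - psi) * lam   # the ZIP mean is (1 - psi) * lam
    ```

    The re-parameterization result in the abstract (temperature entering through the zero-inflation parameter) corresponds to putting a regression on psi rather than on lam.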

  16. A hierarchical updating method for finite element model of airbag buffer system under landing impact

    Directory of Open Access Journals (Sweden)

    He Huan

    2015-12-01

    Full Text Available In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model has been formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results to resolve the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present the impact response confidence factors (IRCFs) to evaluate how consistent the computational and experimental results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR) to serve as a modified objective function. A radial basis function (RBF) is introduced to construct updating variables for a surrogate model for updating the objective function, thereby converting the FE model updating problem to a soluble optimization problem. Finally, the developed method has been validated using an experimental and computational study on the impact dynamics of a classic airbag landing buffer system.
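    The RBF surrogate step can be sketched compactly: fit an interpolant through a handful of evaluations of the (expensive) objective, then optimize the cheap interpolant instead. The Gaussian kernel, the two updating variables, and the quadratic stand-in objective below are illustrative assumptions, not the paper's actual FE objective.

    ```python
    import numpy as np

    def rbf_fit(X, y, eps=1.0):
        """Fit a Gaussian RBF interpolant through samples (X, y)."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        Phi = np.exp(-(eps * d) ** 2)
        return np.linalg.solve(Phi, y)

    def rbf_eval(X, w, Xq, eps=1.0):
        """Evaluate the fitted interpolant at query points Xq."""
        d = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
        return np.exp(-(eps * d) ** 2) @ w

    # Stand-in objective: squared mismatch between computed and measured
    # response as a function of two hypothetical updating variables.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    f = lambda x: (x[:, 0] - 0.3) ** 2 + (x[:, 1] - 0.7) ** 2
    w = rbf_fit(X, f(X))
    approx = rbf_eval(X, w, np.array([[0.25, 0.6]]))[0]
    ```

    By construction the surrogate reproduces the sampled objective values exactly at the design points, which is what makes it usable as a drop-in objective for the optimizer.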

  17. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    Science.gov (United States)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
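    The three formulations compared above differ only in how much parameter variation they allow between datasets, and the hierarchical case can be sketched with the classic normal-normal partial-pooling formula. The dataset means, sample sizes, and variance components below are hypothetical.

    ```python
    import numpy as np

    def partial_pool(means, ns, sigma2, tau2, mu0):
        """Shrink per-dataset estimates toward a shared hyper-mean mu0.
        tau2 -> 0 recovers the global analysis (no variation);
        tau2 -> inf recovers the separate analysis (independent variation)."""
        means, ns = np.asarray(means, float), np.asarray(ns, float)
        w = tau2 / (tau2 + sigma2 / ns)   # 1 -> no pooling, 0 -> complete pooling
        return w * means + (1 - w) * mu0

    # Hypothetical growth-rate estimates from three plankton datasets.
    means = [0.9, 1.4, 2.1]
    ns = [5, 20, 5]
    est = partial_pool(means, ns, sigma2=0.5, tau2=0.1, mu0=1.5)
    ```

    Note how the well-sampled middle dataset is barely shrunk while the two small datasets borrow strength from the shared distribution; the abstract's caveat about hyperparameter priors corresponds to the sensitivity of tau2 and mu0 here.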

  18. Holographic renormalization group and cosmology in theories with quasilocalized gravity

    International Nuclear Information System (INIS)

    Csaki, Csaba; Erlich, Joshua; Hollowood, Timothy J.; Terning, John

    2001-01-01

    We study the long distance behavior of brane theories with quasilocalized gravity. The five-dimensional (5D) effective theory at large scales follows from a holographic renormalization group flow. As intuitively expected, the graviton is effectively four dimensional at intermediate scales and becomes five dimensional at large scales. However, in the holographic effective theory the essentially 4D radion dominates at long distances and gives rise to scalar antigravity. The holographic description shows that at large distances the Gregory-Rubakov-Sibiryakov (GRS) model is equivalent to the model recently proposed by Dvali, Gabadadze, and Porrati (DGP), where a tensionless brane is embedded into 5D Minkowski space, with an additional induced 4D Einstein-Hilbert term on the brane. In the holographic description the radion of the GRS model is automatically localized on the tensionless brane, and provides the ghostlike field necessary to cancel the extra graviton polarization of the DGP model. Thus, there is a holographic duality between these theories. This analysis provides physical insight into how the GRS model works at intermediate scales; in particular it sheds light on the width of the graviton resonance, and also demonstrates how the holographic renormalization group can be used as a practical tool for calculations.

  19. Real-time holographic endoscopy

    Science.gov (United States)

    Smigielski, Paul; Albe, Felix; Dischli, Bernard

    1992-08-01

    Some new experiments concerning holographic endoscopy are presented. Quantitative measurements of object deformations are obtained by the double-exposure, double-reference-beam method, using either a cw laser or a pulsed laser. Qualitative experiments using an argon laser with time-average holographic endoscopy are also presented. For the first time, a video film of real-time endoscopic holographic interferometry was recorded with the help of a frequency-doubled YAG laser working at 25 Hz.

  20. Using hierarchical linear growth models to evaluate protective mechanisms that mediate science achievement

    Science.gov (United States)

    von Secker, Clare Elaine

    The study of students at risk is a major topic of science education policy and discussion. Much research has focused on describing conditions and problems associated with the statistical risk of low science achievement among individuals who are members of groups characterized by problems such as poverty and social disadvantage. But outcomes attributed to these factors do not explain the nature and extent of mechanisms that account for differences in performance among individuals at risk. There is ample theoretical and empirical evidence that demographic differences should be conceptualized as social contexts, or collections of variables, that alter the psychological significance and social demands of life events, and affect subsequent relationships between risk and resilience. The hierarchical linear growth models used in this dissertation provide greater specification of the role of social context and the protective effects of attitude, expectations, parenting practices, peer influences, and learning opportunities on science achievement. While the individual influences of these protective factors on science achievement were small, their cumulative effect was substantial. Meta-analysis conducted on the effects associated with psychological and environmental processes that mediate risk mechanisms in sixteen social contexts revealed twenty-two significant differences between groups of students. Positive attitudes, high expectations, and more intense science course-taking had positive effects on achievement of all students, although these factors were not equally protective in all social contexts. In general, effects associated with authoritative parenting and peer influences were negative, regardless of social context. An evaluation comparing the performance and stability of hierarchical linear growth models with traditional repeated measures models is included as well.

  1. An Integrated Risk Index Model Based on Hierarchical Fuzzy Logic for Underground Risk Assessment

    Directory of Open Access Journals (Sweden)

    Muhammad Fayaz

    2017-10-01

    Full Text Available Available space in congested cities is getting scarce due to growing urbanization in the recent past. The utilization of underground space is considered as a solution to the limited space in smart cities. The number of underground facilities is growing day by day in the developing world. Typical underground facilities include the transit subway, parking lots, electric lines, water supply and sewer lines. The likelihood of the occurrence of accidents due to underground facilities is a random phenomenon. To avoid any accidental loss, a risk assessment method is required to conduct the continuous risk assessment and report any abnormality before it happens. In this paper, we have proposed a hierarchical fuzzy-inference-based model for underground risk assessment. The proposed hierarchical fuzzy inference architecture reduces the total number of rules in the rule base. Rule reduction is important because the curse of dimensionality damages transparency and interpretability: it is very hard to understand and justify hundreds or thousands of fuzzy rules. Computation time also grows with the number of rules. The proposed model takes 175 rules having eight input parameters to compute the risk index, whereas conventional fuzzy logic requires 390,625 rules with the same number of input parameters. Hence, the proposed model significantly reduces the curse of dimensionality. Rule design for fuzzy logic is also a tedious task. In this paper, we have also introduced new rule schemes, namely maximum rule-based and average rule-based; both schemes can be used interchangeably according to the logic needed for rule design. The experimental results show that the proposed method is a sound choice for risk index calculation when the number of input variables is large.
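    The rule-count arithmetic behind the abstract's numbers is easy to verify: with 5 membership functions per variable, a flat system over 8 inputs needs 5^8 = 390,625 rules, while one hierarchical decomposition that reaches exactly 175 rules is a binary tree of seven 2-input inference units (4 leaves, 2 middle, 1 top), each with 5^2 = 25 rules. The tree shape is an assumption consistent with the abstract's totals, not the paper's stated architecture.

    ```python
    def flat_rules(n_inputs, n_mf):
        """Rule count for a single flat fuzzy inference system."""
        return n_mf ** n_inputs

    def hierarchical_rules(units, n_mf):
        """Rule count when inference is split into small cascaded units,
        each entry in `units` giving that unit's number of inputs."""
        return sum(n_mf ** k for k in units)

    n_mf = 5
    flat = flat_rules(8, n_mf)                 # 390625
    # Binary tree over 8 inputs: 4 leaf units + 2 mid + 1 top, all 2-input.
    tree = hierarchical_rules([2] * 7, n_mf)   # 175
    ```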

  2. Spatial patterns of breeding success of grizzly bears derived from hierarchical multistate models.

    Science.gov (United States)

    Fisher, Jason T; Wheatley, Matthew; Mackenzie, Darryl

    2014-10-01

    Conservation programs often manage populations indirectly through the landscapes in which they live. Empirically, linking reproductive success with landscape structure and anthropogenic change is a first step in understanding and managing the spatial mechanisms that affect reproduction, but this link is not sufficiently informed by data. Hierarchical multistate occupancy models can forge these links by estimating spatial patterns of reproductive success across landscapes. To illustrate, we surveyed the occurrence of grizzly bears (Ursus arctos) in the Canadian Rocky Mountains, Alberta, Canada. We deployed camera traps for 6 weeks at 54 survey sites in different types of land cover. We used hierarchical multistate occupancy models to estimate probability of detection, grizzly bear occupancy, and probability of reproductive success at each site. Grizzly bear occupancy varied among cover types and was greater in herbaceous alpine ecotones than in low-elevation wetlands or mid-elevation conifer forests. The conditional probability of reproductive success given grizzly bear occupancy was 30% (SE = 0.14). Grizzly bears with cubs had a higher probability of detection than grizzly bears without cubs, but sites were correctly classified as being occupied by breeding females 49% of the time based on raw data and thus would have been underestimated by half. Repeated surveys and multistate modeling reduced the probability of misclassifying sites occupied by breeders as unoccupied to <2%. The probability of breeding grizzly bear occupancy varied across the landscape. The patches with the highest probabilities of breeding occupancy, herbaceous alpine ecotones, were small and highly dispersed and are projected to shrink as treelines advance due to climate warming. Understanding spatial correlates in breeding distribution is a key requirement for species conservation in the face of climate change and can help identify priorities for landscape management and protection. © 2014 Society
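    The gain from repeated surveys reported above follows from simple probability: if each occasion independently detects the breeding state with probability p, a truly breeding-occupied site is misclassified only when every occasion misses. The per-occasion detection probability below is a hypothetical value chosen so that one occasion misses about half the sites and six weekly occasions land under the abstract's 2%.

    ```python
    def misclass_prob(p_detect, k_surveys):
        """Probability that a site truly occupied by a female with cubs is
        never observed in the breeding state across k independent occasions."""
        return (1.0 - p_detect) ** k_surveys

    p = 0.5                         # hypothetical per-occasion detection of cubs
    single = misclass_prob(p, 1)    # 0.5 -> about half of breeding sites missed
    six = misclass_prob(p, 6)       # ~0.016 -> under 2%, echoing the abstract
    ```

    The multistate model additionally propagates this detection uncertainty into the occupancy estimates rather than treating the raw classifications as truth.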

  3. Reduced Rank Mixed Effects Models for Spatially Correlated Hierarchical Functional Data

    KAUST Repository

    Zhou, Lan

    2010-03-01

    Hierarchical functional data are widely seen in complex studies where sub-units are nested within units, which in turn are nested within treatment groups. We propose a general framework of functional mixed effects model for such data: within unit and within sub-unit variations are modeled through two separate sets of principal components; the sub-unit level functions are allowed to be correlated. Penalized splines are used to model both the mean functions and the principal components functions, where roughness penalties are used to regularize the spline fit. An EM algorithm is developed to fit the model, while the specific covariance structure of the model is utilized for computational efficiency to avoid storage and inversion of large matrices. Our dimension reduction with principal components provides an effective solution to the difficult tasks of modeling the covariance kernel of a random function and modeling the correlation between functions. The proposed methodology is illustrated using simulations and an empirical data set from a colon carcinogenesis study. Supplemental materials are available online.

  4. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011

  5. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
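    The divide-and-conquer computation the abstract describes can be sketched at a single level: a gating network produces input-dependent mixture weights over expert GLMs, and the prediction is their weighted combination. All parameter values below are hypothetical, the experts are plain linear models, and the full hierarchical (multi-level) gating and the MCMC estimation of the paper are omitted.

    ```python
    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def moe_predict(x, gate_W, experts):
        """Mixture-of-experts prediction: the gating network assigns
        input-dependent weights to expert GLMs and mixes their outputs."""
        g = softmax(x @ gate_W)                        # (n, n_experts)
        preds = np.stack([x @ w for w in experts], 1)  # (n, n_experts)
        return (g * preds).sum(axis=1)

    x = np.array([[1.0, -2.0], [1.0, 2.0]])            # column 0 is a bias term
    experts = [np.array([0.0, 1.0]), np.array([5.0, -1.0])]
    gate_W = np.array([[0.0, 0.0], [-4.0, 4.0]])       # the feature routes the input
    y = moe_predict(x, gate_W, experts)
    ```

    The pruning step in the article corresponds to estimating how much posterior weight the gate ever assigns to each expert and removing those that stay near zero.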

  6. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT models with stochastic (or uncertain constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT models (such as log-normal, log-Cauchy, and log-logistic FT models as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt prior, which elicits a priori uncertain constraint and develops Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  7. Soft Pomeron in Holographic QCD

    CERN Document Server

    Ballon-Bayona, Alfonso; Costa, Miguel S; Djurić, Marko

    2016-01-01

    We study the graviton Regge trajectory in Holographic QCD as a model for high energy scattering processes dominated by soft pomeron exchange. This is done by considering spin J fields from the closed string sector that are dual to glueball states of even spin and parity. In particular, we construct a model that governs the analytic continuation of the spin J field equation to the region of real J < 2, which includes the scattering domain of negative Mandelstam variable t. The model leads to approximately linear Regge trajectories and is compatible with the measured values of 1.08 for the intercept and 0.25 GeV$^{-2}$ for the slope of the soft pomeron. The intercept of the secondary pomeron trajectory is in the same region as the subleading trajectories, made of mesons, proposed by Donnachie and Landshoff, and should therefore be taken into account.

  8. Class hierarchical test case generation algorithm based on expanded EMDPN model

    Institute of Scientific and Technical Information of China (English)

    LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang

    2006-01-01

    A model of event- and message-driven Petri networks (EMDPN), based on the characteristics of class interaction for message passing between two objects, was extended. Using the EMDPN interaction graph, a class hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems arising from the class inheritance mechanism in object-oriented software testing, such as oracles, message transfer errors, and unreachable statements. Finally, testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases generated by the proposed automatic copath-generation algorithm satisfy the synchronization message sequence testing criteria, and the algorithm therefore achieves a good coverage rate.

  9. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data.

    Science.gov (United States)

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    Magnetic- and electric-evoked brain responses have traditionally been analyzed by comparing the peaks or mean amplitudes of signals from selected channels and averaged across trials. More recently, tools have been developed to investigate single trial response variability (e.g., EEGLAB) and to test differences between averaged evoked responses over the entire scalp and time dimensions (e.g., SPM, Fieldtrip). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data. In addition, LIMO EEG provides robust parametric tests, therefore providing a new and complementary tool in the analysis of neural evoked responses.

  10. Deriving covariant holographic entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)

    2016-11-07

    We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
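    The proposal this record argues for can be stated compactly. Writing $A$ for a boundary region and $\mathcal{E}_A$ for the bulk extremal surface anchored on $\partial A$ and homologous to $A$, the covariant (HRT) prescription and the replica limit discussed in the abstract read:

    ```latex
    S_A \;=\; \frac{\operatorname{Area}\!\left(\mathcal{E}_A\right)}{4\,G_N},
    \qquad
    S_n \;=\; \frac{1}{1-n}\,\log \operatorname{Tr}\rho_A^{\,n}
    \;\xrightarrow{\;n \to 1\;}\; S_A .
    ```

    The replica construction computes the Rényi entropies $S_n$ from saddle points of the bulk Schwinger-Keldysh contour; the abstract's local analysis shows that in the $n \to 1$ limit these saddles localize on the extremal surface $\mathcal{E}_A$.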

  11. Holographic Optical Data Storage

    Science.gov (United States)

    Timucin, Dogan A.; Downie, John D.; Norvig, Peter (Technical Monitor)

    2000-01-01

    Although the basic idea may be traced back to the earlier X-ray diffraction studies of Sir W. L. Bragg, the holographic method as we know it was invented by D. Gabor in 1948 as a two-step lensless imaging technique to enhance the resolution of electron microscopy, for which he received the 1971 Nobel Prize in physics. The distinctive feature of holography is the recording of the object phase variations that carry the depth information, which is lost in conventional photography where only the intensity (= squared amplitude) distribution of an object is captured. Since all photosensitive media necessarily respond to the intensity incident upon them, an ingenious way had to be found to convert object phase into intensity variations, and Gabor achieved this by introducing a coherent reference wave along with the object wave during exposure. Gabor's in-line recording scheme, however, required the object in question to be largely transmissive, and could provide only marginal image quality due to unwanted terms simultaneously reconstructed along with the desired wavefront. Further handicapped by the lack of a strong coherent light source, optical holography thus seemed fated to remain just another scientific curiosity, until the field was revolutionized in the early 1960s by some major breakthroughs: the proposition and demonstration of the laser principle, the introduction of off-axis holography, and the invention of volume holography. Consequently, the remainder of that decade saw an exponential growth in research on theory, practice, and applications of holography. Today, holography not only boasts a wide variety of scientific and technical applications (e.g., holographic interferometry for strain, vibration, and flow analysis, microscopy and high-resolution imagery, imaging through distorting media, optical interconnects, holographic optical elements, optical neural networks, three-dimensional displays, data storage, etc.), but has also become prominent in advertising.

  12. Probabilistic daily ILI syndromic surveillance with a spatio-temporal Bayesian hierarchical model.

    Directory of Open Access Journals (Sweden)

    Ta-Chien Chan

    Full Text Available BACKGROUND: For daily syndromic surveillance to be effective, an efficient and sensible algorithm would be expected to detect aberrations in influenza illness, and alert public health workers prior to any impending epidemic. This detection or alert surely contains uncertainty, and thus should be evaluated with a proper probabilistic measure. However, traditional monitoring mechanisms simply provide a binary alert, failing to adequately address this uncertainty. METHODS AND FINDINGS: Based on the Bayesian posterior probability of influenza-like illness (ILI visits, the intensity of outbreak can be directly assessed. The numbers of daily emergency room ILI visits at five community hospitals in Taipei City during 2006-2007 were collected and fitted with a Bayesian hierarchical model containing meteorological factors such as temperature and vapor pressure, spatial interaction with conditional autoregressive structure, weekend and holiday effects, seasonality factors, and previous ILI visits. The proposed algorithm recommends an alert for action if the posterior probability is larger than 70%. External data from January to February of 2008 were retained for validation. The decision rule detects successfully the peak in the validation period. When comparing the posterior probability evaluation with the modified Cusum method, results show that the proposed method is able to detect the signals 1-2 days prior to the rise of ILI visits. CONCLUSIONS: This Bayesian hierarchical model not only constitutes a dynamic surveillance system but also constructs a stochastic evaluation of the need to call for alert. The monitoring mechanism provides earlier detection as well as a complementary tool for current surveillance programs.
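    The decision rule above is a direct probability statement: alert when the posterior probability of an elevated visit rate exceeds 70%, instead of a binary threshold on the raw counts. A minimal sketch with synthetic posterior draws follows; the baseline, draw distributions, and counts are hypothetical, standing in for MCMC output of the paper's spatio-temporal model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def outbreak_alert(posterior_draws, baseline, threshold=0.70):
        """Alert when the posterior probability that today's expected ILI
        visits exceed the seasonal baseline is above the threshold."""
        p = float(np.mean(posterior_draws > baseline))
        return p, p > threshold

    # Hypothetical MCMC draws of today's expected emergency-room ILI visits.
    quiet = rng.normal(48.0, 4.0, size=5000)   # centred near the baseline
    surge = rng.normal(62.0, 4.0, size=5000)   # clearly elevated
    p_q, alert_q = outbreak_alert(quiet, baseline=50.0)
    p_s, alert_s = outbreak_alert(surge, baseline=50.0)
    ```

    Unlike a binary CUSUM flag, the probability p itself is reported, so public health workers can grade the intensity of the signal rather than only its presence.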

  13. Interneuronal Mechanism for Tinbergen’s Hierarchical Model of Behavioral Choice

    Science.gov (United States)

    Pirger, Zsolt; Crossley, Michael; László, Zita; Naskar, Souvik; Kemenes, György; O’Shea, Michael; Benjamin, Paul R.; Kemenes, Ildikó

    2014-01-01

    Summary Recent studies of behavioral choice support the notion that the decision to carry out one behavior rather than another depends on the reconfiguration of shared interneuronal networks [1]. We investigated another decision-making strategy, derived from the classical ethological literature [2, 3], which proposes that behavioral choice depends on competition between autonomous networks. According to this model, behavioral choice depends on inhibitory interactions between incompatible hierarchically organized behaviors. We provide evidence for this by investigating the interneuronal mechanisms mediating behavioral choice between two autonomous circuits that underlie whole-body withdrawal [4, 5] and feeding [6] in the pond snail Lymnaea. Whole-body withdrawal is a defensive reflex that is initiated by tactile contact with predators. As predicted by the hierarchical model, tactile stimuli that evoke whole-body withdrawal responses also inhibit ongoing feeding in the presence of feeding stimuli. By recording neurons from the feeding and withdrawal networks, we found no direct synaptic connections between the interneuronal and motoneuronal elements that generate the two behaviors. Instead, we discovered that behavioral choice depends on the interaction between two unique types of interneurons with asymmetrical synaptic connectivity that allows withdrawal to override feeding. One type of interneuron, the Pleuro-Buccal (PlB), is an extrinsic modulatory neuron of the feeding network that completely inhibits feeding when excited by touch-induced monosynaptic input from the second type of interneuron, Pedal-Dorsal12 (PeD12). PeD12 plays a critical role in behavioral choice by providing a synaptic pathway joining the two behavioral networks that underlies the competitive dominance of whole-body withdrawal over feeding. PMID:25155505

  14. Hierarchical Bayesian Spatio Temporal Model Comparison on the Earth Trapped Particle Forecast

    International Nuclear Information System (INIS)

    Suparta, Wayan; Gusrizal

    2014-01-01

    We compared two hierarchical Bayesian spatio-temporal (HBST) models, a Gaussian process (GP) model and an autoregressive (AR) model, for forecasting the Earth's trapped particles. Both models were applied to the South Atlantic Anomaly (SAA) region. Electron flux of >30 keV (mep0e1) from National Oceanic and Atmospheric Administration (NOAA) 15-18 satellite data was chosen as the modeled particle population. We used two weeks of data to fit the models on a 5°x5° grid of longitude and latitude, with 31 August 2007 set as the forecast date. Three statistical validation metrics were computed, i.e., the root mean square error (RMSE), mean absolute percentage error (MAPE) and bias (BIAS). The statistical analysis showed that the GP model performed better than the AR model, with averages of RMSE = 0.38 and 0.63, MAPE = 11.98 and 17.30, and BIAS = 0.32 and 0.24 for GP and AR, respectively. Visual comparison of both models with the NOAA maps also confirmed the superiority of the GP model over the AR model. The variance of the log flux minimum was 0.09 and 1.09, and of the log flux maximum 1.15 and 1.35, for GP and AR, respectively
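The three validation metrics reported in this record have standard definitions; a minimal sketch follows (the BIAS sign convention, prediction minus observation, is an assumption, since the record does not define it):

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mape(obs, pred):
    """Mean absolute percentage error, in percent (observations must be nonzero)."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

def bias(obs, pred):
    """Mean signed error, prediction minus observation."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

obs = [2.0, 4.0, 5.0, 8.0]
pred = [2.5, 3.5, 5.5, 7.0]
print(rmse(obs, pred), mape(obs, pred), bias(obs, pred))
```

RMSE and MAPE penalize spread in either direction, while BIAS reveals a systematic over- or under-prediction that the other two hide.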

  15. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.

  16. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  17. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566
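The prior above is built from Dirichlet process mixtures; one standard way to draw the mixture weights of such a prior is truncated stick-breaking. A minimal sketch (the truncation level and concentration parameter are illustrative, not from the paper):

```python
import random

def stick_breaking(alpha, n_atoms, rng=None):
    """Truncated stick-breaking weights for a Dirichlet process prior:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    rng = rng or random.Random(0)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)  # fraction broken off the remaining stick
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

w = stick_breaking(alpha=2.0, n_atoms=200)
print(sum(w))  # close to 1 for a long enough truncation
```

Smaller `alpha` concentrates mass on a few atoms (few effective mixture components); larger `alpha` spreads it across many.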

  18. Holographic Raman lidar

    International Nuclear Information System (INIS)

    Andersen, G.

    2000-01-01

    Full text: We have constructed a Raman lidar system that incorporates a holographic optical element (HOE). By resolving just 3 nitrogen lines in the resonance Raman spectroscopy (RRS) spectrum, temperature fits as good as 1% at altitudes of 20 km can be made in 30 minutes. Due to the narrowband selectivity of the HOE, the lidar provides measurements over a continuous 24 hr period. By adding a 4th channel to capture the Rayleigh backscattered light, temperature profiles can be extended to 80 km

  19. A holographic perspective on phonons and pseudo-phonons

    Energy Technology Data Exchange (ETDEWEB)

    Amoretti, Andrea [Institute of Theoretical Physics and Astrophysics, University of Würzburg, 97074 Würzburg (Germany); Physique Théorique et Mathématique and International Solvay Institutes, Université Libre de Bruxelles, C.P. 231, 1050 Brussels (Belgium)]; Areán, Daniel [Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, D-80805, Munich (Germany)]; Argurio, Riccardo [Physique Théorique et Mathématique and International Solvay Institutes, Université Libre de Bruxelles, C.P. 231, 1050 Brussels (Belgium)]; Musso, Daniele [Departamento de Física de Partículas, Universidade de Santiago de Compostela and Instituto Galego de Física de Altas Enerxías (IGFAE), E-15782, Santiago de Compostela (Spain)]; Zayas, Leopoldo A. Pando [Michigan Center for Theoretical Physics, Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]

    2017-05-10

    We analyze the concomitant spontaneous breaking of translation and conformal symmetries by introducing in a CFT a complex scalar operator that acquires a spatially dependent expectation value. The model, inspired by the holographic Q-lattice, provides a privileged setup to study the emergence of phonons from a spontaneous translational symmetry breaking in a conformal field theory and offers valuable hints for the treatment of phonons in QFT at large. We first analyze the Ward identity structure by means of standard QFT techniques, considering both spontaneous and explicit symmetry breaking. Next, by implementing holographic renormalization, we show that the same set of Ward identities holds in the holographic Q-lattice. Eventually, relying on the holographic and QFT results, we study the correlators realizing the symmetry breaking pattern and how they encode information about the low-energy spectrum.

  20. Scaling local species-habitat relations to the larger landscape with a hierarchical spatial count model

    Science.gov (United States)

    Thogmartin, W.E.; Knutson, M.G.

    2007-01-01

    Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.

  1. Hierarchical Kinematic Modelling and Optimal Design of a Novel Hexapod Robot with Integrated Limb Mechanism

    Directory of Open Access Journals (Sweden)

    Guiyang Xin

    2015-09-01

    Full Text Available This paper presents a novel hexapod robot, hereafter named PH-Robot, with three degrees of freedom (3-DOF) parallel leg mechanisms based on the concept of an integrated limb mechanism (ILM) for the integration of legged locomotion and arm manipulation. The kinematic model plays an important role in the parametric optimal design and motion planning of robots. However, models of parallel mechanisms are often difficult to obtain because of the implicit relationship between the motions of actuated joints and the motion of a moving platform. In order to derive the kinematic equations of the proposed hexapod robot, an extended hierarchical kinematic modelling method is proposed. According to the kinematic model, the geometrical parameters of the leg are optimized utilizing a comprehensive objective function that considers both dexterity and payload. Through a comparison of performance indices, PH-Robot is shown to have distinct advantages in accuracy and load capacity over a robot with serial leg mechanisms. The reachable workspace of the leg verifies its ability to walk and manipulate. The results of the trajectory tracking experiment demonstrate the correctness of the kinematic model of the hexapod robot.

  2. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    Science.gov (United States)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding importance of the different processes, which is also useful for development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that
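The calibration above relies on adaptive MCMC. A minimal 1-D sketch of random-walk Metropolis with Robbins-Monro step-size adaptation follows; the target density, tuning constants, and 44% acceptance target are illustrative, not the sqHIMMELI setup:

```python
import math
import random

def adaptive_metropolis(logpost, x0, n_iter=5000, target=0.44, seed=0):
    """1-D random-walk Metropolis whose proposal scale adapts toward a
    target acceptance rate via a decaying-gain (Robbins-Monro) recursion."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    log_step, samples, accepts = 0.0, [], 0
    for i in range(1, n_iter + 1):
        prop = x + rng.gauss(0.0, math.exp(log_step))
        lp_prop = logpost(prop)
        accepted = math.log(1.0 - rng.random()) < lp_prop - lp
        if accepted:
            x, lp = prop, lp_prop
            accepts += 1
        # grow the step after accepts, shrink it after rejects; the i**-0.6
        # gain decays so the chain's stationary distribution is preserved
        log_step += i ** -0.6 * ((1.0 if accepted else 0.0) - target)
        samples.append(x)
    return samples, accepts / n_iter

# toy "posterior": a standard normal log-density
samples, acc = adaptive_metropolis(lambda t: -0.5 * t * t, x0=3.0)
print(acc, sum(samples[2500:]) / 2500)
```

The decaying adaptation gain is what makes this "diminishing adaptation": early iterations tune the step size aggressively, later ones barely touch it.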

  3. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
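Model assessment above uses a 10-fold cross-validation procedure; a generic sketch of the fold construction and out-of-fold error (the mean-only toy "model" is illustrative, not the spatial regression of the study):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and deal them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_mspe(xs, ys, fit, predict, k=10):
    """Mean squared prediction error over held-out folds."""
    sse = 0.0
    for fold in k_fold_indices(len(xs), k):
        hold = set(fold)
        # refit on everything outside the fold, score on the fold
        model = fit([x for i, x in enumerate(xs) if i not in hold],
                    [y for i, y in enumerate(ys) if i not in hold])
        sse += sum((ys[i] - predict(model, xs[i])) ** 2 for i in fold)
    return sse / len(xs)

# toy "model": predict the training-set mean everywhere
fit = lambda xs, ys: sum(ys) / len(ys)
predict = lambda m, x: m
print(cv_mspe(list(range(20)), [2.0] * 20, fit, predict))
```

Because every prediction is made on data the model never saw, this score penalizes over-fitting that in-sample goodness-of-fit criteria reward, which is exactly the discrepancy the study observed.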

  4. The Hubble IR cutoff in holographic ellipsoidal cosmologies

    Energy Technology Data Exchange (ETDEWEB)

    Cataldo, Mauricio [Universidad del Bio-Bio, Departamento de Fisica, Facultad de Ciencias, Concepcion (Chile); Cruz, Norman [Grupo de Cosmologia y Gravitacion-UBB, Concepcion (Chile)

    2018-01-15

    It is well known that for spatially flat FRW cosmologies, the holographic dark energy disfavors the Hubble parameter as a candidate for the IR cutoff. To overcome this problem, we explore the use of this cutoff in holographic ellipsoidal cosmological models, and derive the general ellipsoidal metric induced by such a holographic energy density. Despite the drawbacks that this cutoff presents in homogeneous and isotropic universes, based on this general metric, we developed a suitable ellipsoidal holographic cosmological model filled with dark matter and dark energy components. At late time stages, the cosmic evolution is dominated by a holographic anisotropic dark energy with barotropic equations of state. The cosmologies expand in all directions in an accelerated manner. Since the ellipsoidal cosmologies given here are not asymptotically FRW, the deviation from homogeneity and isotropy of the universe on large cosmological scales remains constant during the entire cosmic evolution. This feature allows the studied holographic ellipsoidal cosmologies to be ruled by an equation of state ω = p/ρ whose range belongs to quintessence or even phantom matter. (orig.)

  5. Hierarchical Bayesian models to assess between- and within-batch variability of pathogen contamination in food.

    Science.gov (United States)

    Commeau, Natalie; Cornu, Marie; Albert, Isabelle; Denis, Jean-Baptiste; Parent, Eric

    2012-03-01

    Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure, comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. Models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon, and contamination by the same microorganism on cold-smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level, that is, between-batch variability is relatively strong, whereas in the second, a structure also exists but is less marked. © 2012 Society for Risk Analysis.
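The within- versus between-batch decomposition the record emphasizes can be illustrated with a simple simulation and one-way ANOVA-style moment estimators (the log-scale parameter values below are invented for illustration, not from the case studies):

```python
import random
import statistics

def simulate_batches(n_batches, n_per_batch, mu=2.0, sd_between=1.0,
                     sd_within=0.3, seed=1):
    """Simulated log10 concentrations: batch means vary between batches,
    and individual units vary around their batch mean."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_batches):
        m = rng.gauss(mu, sd_between)
        data.append([rng.gauss(m, sd_within) for _ in range(n_per_batch)])
    return data

def variance_components(data):
    """Moment estimators: within = mean of per-batch sample variances,
    between = variance of batch means minus within / n_per_batch."""
    n = len(data[0])
    means = [statistics.mean(b) for b in data]
    within = statistics.mean(statistics.variance(b) for b in data)
    return within, max(statistics.variance(means) - within / n, 0.0)

w, b = variance_components(simulate_batches(200, 10))
print(w, b)  # should land near the true 0.3**2 = 0.09 and 1.0**2 = 1.0
```

When `b` dominates `w`, as in the diced-bacon case study, testing many units from one batch adds little information and a sampling plan should spread effort across batches.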

  6. Bayesian Uncertainty Quantification for Subsurface Inversion Using a Multiscale Hierarchical Model

    KAUST Repository

    Mondal, Anirban

    2014-07-03

    We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a random field (spatial or temporal). The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources and provide a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loeve expansion is used for dimension reduction of the random field. Furthermore, we use a hierarchical Bayes model to inject multiscale data in the modeling framework. In this Bayesian framework, we show that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in total variation norm. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g., in the context of MCMC) and are compounded by high dimensionality of the posterior. We develop a two-stage reversible jump MCMC that has the ability to screen out bad proposals in the first, inexpensive stage. Numerical results are presented by analyzing simulated as well as real data from a hydrocarbon reservoir. This article has supplementary material available online. © 2014 American Statistical Association and the American Society for Quality.

  7. Comparison of Extreme Precipitation Return Levels using Spatial Bayesian Hierarchical Modeling versus Regional Frequency Analysis

    Science.gov (United States)

    Love, C. A.; Skahill, B. E.; AghaKouchak, A.; Karlovits, G. S.; England, J. F.; Duren, A. M.

    2017-12-01

    We compare gridded extreme precipitation return levels obtained using spatial Bayesian hierarchical modeling (BHM) with their respective counterparts from a traditional regional frequency analysis (RFA) using the same set of extreme precipitation data. Our study area is the 11,478 square mile Willamette River basin (WRB) located in northwestern Oregon, a major tributary of the Columbia River whose 187-mile-long main stem, the Willamette River, flows northward between the Coastal and Cascade Ranges. The WRB contains approximately two-thirds of Oregon's population and 20 of the 25 most populous cities in the state. The U.S. Army Corps of Engineers (USACE) Portland District operates thirteen dams, and extreme precipitation estimates are required to support risk-informed hydrologic analyses as part of the USACE Dam Safety Program. Our intent is to profile for the USACE an alternate methodology to an RFA that was developed in 2008 due to the lack of an official NOAA Atlas 14 update for the state of Oregon. We analyze 24-hour annual precipitation maxima data for the WRB utilizing the spatial BHM R package "spatial.gev.bma", which has been shown to be efficient in developing coherent maps of extreme precipitation by return level. Our BHM modeling analysis involved application of leave-one-out cross validation (LOO-CV), which not only supported model selection but also a comprehensive assessment of location-specific model performance. The LOO-CV results will provide a basis for the BHM-RFA comparison.
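Both the BHM and RFA approaches ultimately report return levels from a generalized extreme value (GEV) fit to annual maxima, and the standard return-level formula is easy to sketch. The parameter values below are hypothetical, not from the WRB analysis:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) annual-maximum model
    (xi != 0): z_T = mu + (sigma / xi) * (y_T**(-xi) - 1),
    where y_T = -log(1 - 1/T) is the reduced variate for exceedance
    probability 1/T in any given year."""
    y = -math.log(1.0 - 1.0 / T)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# hypothetical 24-hour precipitation GEV parameters (location, scale, shape)
for T in (2, 10, 100):
    print(T, gev_return_level(2.5, 0.6, 0.1, T))
```

In the Bayesian setting, evaluating this formula over posterior draws of (mu, sigma, xi) at each grid cell yields a full distribution for each return level rather than a point estimate.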

  8. TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model

    Science.gov (United States)

    Meurice, Y.

    2007-06-01

    We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large symmetry group drastically simplifies the blockspinning procedure. Several equivalent forms of the recursion formula are presented with unified notation. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points. This question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).

  9. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yiming Yan

    2017-01-01

    Full Text Available In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, these methods rely heavily on the completeness of offline-constructed building models, which is not easily guaranteed since modern cities contain buildings of a wide variety of types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings was introduced. There are two key problems with this framework. The first is how to accurately extract the buildings from the DSM. Most segmentation methods are limited either by terrain factors or by the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and then a recently proposed ‘occlusions of random textures model’ is used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Synergizing with the corresponding texture images, we propose a roof-contour-guided interpolation of building facades. The 3D reconstruction results achieved with airborne-like and satellite images are compared. Experiments show that the segmentation method performs well, that 3D reconstruction is easily performed within our framework, and that better visualization results are obtained with airborne-like images, which can be further replaced by UAV images.

  10. A hierarchical model for structure learning based on the physiological characteristics of neurons

    Institute of Scientific and Technical Information of China (English)

    WEI Hui

    2007-01-01

    Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. The characteristics of typical ANN models are fixed connections, with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach. This makes those models deficient in robustness, learning efficiency, capacity, resistance to interference between training sets, and correlativity of samples, etc. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations on neuron fan-in and fan-out, dynamic behavior of neurons, and samples represented through different sub-circuits of neurons specialized into different response patterns. At the end of this paper, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. This paper demonstrates the feasibility and advantages of structure learning and representation. This model can serve as a fundamental element of cognitive systems such as perception and associative memory. Keywords: structure learning, representation, associative memory, computational neuroscience

  11. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.

    Science.gov (United States)

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-06-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.

  12. Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework†

    Science.gov (United States)

    Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana

    2014-01-01

    Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd. PMID:25505370

  13. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    Science.gov (United States)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
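The bias discussed here arises when fitting a corner-frequency spectral model. A minimal grid-search least-squares fit of an omega-square-type spectrum illustrates the setup; the spectral form, parameter grids, and noiseless synthetic data are illustrative, and the paper's Bayesian hierarchical fit is far more elaborate:

```python
def spectrum(f, omega0, fc, n):
    """Corner-frequency source spectrum: omega0 / (1 + (f / fc)**n)."""
    return omega0 / (1.0 + (f / fc) ** n)

def grid_fit(freqs, amps, fc_grid, n_grid):
    """Least-squares grid search over corner frequency fc and falloff rate n."""
    best = None
    for fc in fc_grid:
        for n in n_grid:
            sse = sum((a - spectrum(f, 1.0, fc, n)) ** 2
                      for f, a in zip(freqs, amps))
            if best is None or sse < best[0]:
                best = (sse, fc, n)
    return best[1], best[2]

freqs = [0.1 * i for i in range(1, 101)]            # 0.1 to 10 Hz
amps = [spectrum(f, 1.0, 1.5, 1.6) for f in freqs]  # synthetic data: fc=1.5, n=1.6
fc_grid = [0.5 + 0.1 * i for i in range(31)]
n_grid = [1.0 + 0.1 * i for i in range(16)]
print(grid_fit(freqs, amps, fc_grid, n_grid))       # recovers the true (fc, n)
```

Re-running the fit with `n_grid=[2.0]` illustrates the record's warning: with the falloff rate pinned at 2, the corner frequency must absorb the misfit, which is the mechanism behind the biased fc estimates described above.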

  14. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    Science.gov (United States)

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by year of HIV infection, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
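
    The two-level structure can be made concrete with a forward simulator (a toy with constant, hypothetical rates and no temporal priors; the paper estimates these quantities, with priors encoding temporal dependence, rather than fixing them):

```python
import math
import random

def simulate_surveillance(years=6, incidence=50.0, test_rate=0.3,
                          aids_rate=0.1, seed=7):
    """Level 1: latent HIV infections per year ~ Poisson(incidence).
    Level 2: each infected person, in every year from infection onward,
    either progresses to an AIDS diagnosis, is diagnosed AIDS-free via
    testing, or remains undiagnosed. All rates here are invented."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's inversion sampler; fine for moderate lam.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    hiv_dx = [0] * years           # AIDS-free HIV diagnoses per year
    aids_dx = [0] * years          # AIDS diagnoses per year
    undiagnosed, total = 0, 0
    for y0 in range(years):
        cohort = poisson(incidence)
        total += cohort
        for _ in range(cohort):
            for y in range(y0, years):
                u = rng.random()
                if u < aids_rate:
                    aids_dx[y] += 1
                    break
                elif u < aids_rate + test_rate:
                    hiv_dx[y] += 1
                    break
            else:
                undiagnosed += 1
    return hiv_dx, aids_dx, undiagnosed, total
```

Inference in the paper runs this logic in reverse: given the observed diagnosis counts, it recovers posterior distributions for the incidence and testing rates.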

  15. Exploring the Effects of Congruence and Holland's Personality Codes on Job Satisfaction: An Application of Hierarchical Linear Modeling Techniques

    Science.gov (United States)

    Ishitani, Terry T.

    2010-01-01

    This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included a nationally representative sample of 1,462 female and 1,280 male college graduates who…

  16. Factors associated with leisure time physical inactivity in black individuals: hierarchical model

    Directory of Open Access Journals (Sweden)

    Francisco José Gondim Pitanga

    2014-09-01

    Background. A number of studies have shown that the black population exhibits higher levels of leisure-time physical inactivity (LTPI), but few have investigated the factors associated with this behavior. Objective. The aim of this study was to analyze associated factors and the explanatory model proposed for LTPI in black adults. Methods. The design was cross-sectional with a sample of 2,305 adults from 20–96 years of age, 902 (39.1%) men, living in the city of Salvador, Brazil. LTPI was analyzed using the International Physical Activity Questionnaire (IPAQ). A hierarchical model was built with the possible factors associated with LTPI, distributed in distal (age and sex), intermediate 1 (socioeconomic status, educational level and marital status), intermediate 2 (perception of safety/violence in the neighborhood, racial discrimination in private settings and physical activity at work) and proximal (smoking and participation in Carnival block rehearsals) blocks. We estimated crude and adjusted odds ratios (OR) using logistic regression. Results. The variables inversely associated with LTPI were male gender, socioeconomic status and secondary/university education, although the proposed model explains only 4.2% of LTPI. Conclusions. We conclude that male gender, higher education and socioeconomic status can reduce LTPI in black adults.

  17. An Integrated Model Based on a Hierarchical Indices System for Monitoring and Evaluating Urban Sustainability

    Directory of Open Access Journals (Sweden)

    Xulin Guo

    2013-02-01

    Over 50% of the world’s population presently resides in cities, and this number is expected to rise to ~70% by 2050. Increasing urbanization problems, including population growth, urban sprawl, land use change, unemployment, and environmental degradation, have markedly impacted urban residents’ Quality of Life (QOL). Therefore, urban sustainability and its measurement have gained increasing attention from administrators, urban planners, and scientific communities throughout the world with respect to improving urban development and human well-being. The widely accepted definition of urban sustainability emphasizes the balanced development of three primary domains (urban economy, society, and environment). This article attempts to improve the aforementioned definition of urban sustainability by incorporating a human well-being dimension. Major problems identified in existing urban sustainability indicator (USI) models include a weak integration of potential indicators, poor measurement and quantification, and insufficient spatial-temporal analysis. To tackle these challenges, an integrated USI model based on a hierarchical indices system was established for monitoring and evaluating urban sustainability. This model can be applied by quantifying indicators using both traditional statistical approaches and advanced geomatic techniques based on satellite imagery and census data, and it aims to provide a theoretical basis for a comprehensive assessment of urban sustainability from a spatial-temporal perspective.

  18. A bayesian hierarchical model for classification with selection of functional predictors.

    Science.gov (United States)

    Zhu, Hongxiao; Vannucci, Marina; Cox, Dennis D

    2010-06-01

    In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts, or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when predictors consist of multiple functions, some of which may be redundant. The above issues arise in a real data application where we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that takes into account random batch effects and selects effective functions among multiple functional predictors. Fixed effects or predictors in nonfunctional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling, we use a hybrid Metropolis-Hastings/Gibbs sampler, which suffers from slow mixing. An evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.

  19. Teacher characteristics and student performance: An analysis using hierarchical linear modelling

    Directory of Open Access Journals (Sweden)

    Paula Armstrong

    2015-12-01

    This research makes use of hierarchical linear modelling to investigate which teacher characteristics are significantly associated with student performance. Using data from the SACMEQ III study of 2007, an interesting and potentially important finding is that younger teachers are better able to improve the mean mathematics performance of their students. Furthermore, younger teachers themselves perform better on subject tests than do their older counterparts. Identical models are run for Sub-Saharan countries bordering South Africa, as well as for Kenya, and the strong relationship between teacher age and student performance is not observed. Similarly, the model is run for South Africa using data from SACMEQ II (conducted in 2002) and the relationship between teacher age and student performance is also not observed. It must be noted that South African teachers were not tested in SACMEQ II, so it was not possible to observe differences in subject knowledge amongst teachers in different cohorts, nor to control for teachers’ level of subject knowledge when observing the relationship between teacher age and student performance. Changes in teacher education in the late 1990s and early 2000s may explain the differences in the performance of younger teachers relative to their older counterparts observed in the later dataset.

  20. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    Science.gov (United States)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. 
Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United
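
    One of the parameter-likelihood measures named above, the Nash-Sutcliffe efficiency, is compact enough to state directly: NSE = 1 − Σ(Qobs − Qsim)² / Σ(Qobs − mean(Qobs))², so 1 is a perfect fit and 0 is no better than predicting the observed mean. A minimal sketch with made-up flow values:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the
    observed mean, negative = worse than the mean predictor."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [3.1, 4.7, 9.2, 6.0, 4.4]                   # invented flow series
print(nash_sutcliffe(obs, obs))                   # perfect fit: 1.0
print(nash_sutcliffe(obs, [sum(obs) / 5.0] * 5))  # mean-only predictor: 0.0
```

In the approach described above, such measures serve as (pseudo-)likelihoods from which correlated samples of the rainfall-runoff parameters are drawn.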

  1. Fear of Failure, 2x2 Achievement Goal and Self-Handicapping: An Examination of the Hierarchical Model of Achievement Motivation in Physical Education

    Science.gov (United States)

    Chen, Lung Hung; Wu, Chia-Huei; Kee, Ying Hwa; Lin, Meng-Shyan; Shui, Shang-Hsueh

    2009-01-01

    In this study, the hierarchical model of achievement motivation [Elliot, A. J. (1997). Integrating the "classic" and "contemporary" approaches to achievement motivation: A hierarchical model of approach and avoidance achievement motivation. In P. Pintrich & M. Maehr (Eds.), "Advances in motivation and achievement"…

  2. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling
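
    The source-mixture logic behind such unmixing models reduces, in the simplest deterministic case, to a tracer mass balance. The sketch below (hypothetical tracer values; MixSIAR instead places posterior distributions on the proportions and adds the fixed and random effects discussed above) recovers the mixing proportion of two sources in closed form:

```python
def unmix_two_sources(mixture, source_a, source_b):
    """Closed-form least-squares proportion of source_a in a two-source,
    multi-tracer mass balance: mixture[t] ~ p*source_a[t] + (1-p)*source_b[t].
    A deliberately tiny deterministic sketch of the unmixing idea."""
    num = sum((m - b) * (a - b) for m, a, b in zip(mixture, source_a, source_b))
    den = sum((a - b) ** 2 for a, b in zip(source_a, source_b))
    p = num / den
    return max(0.0, min(1.0, p))       # proportions must stay in [0, 1]

topsoil  = [12.0, 3.5, 40.0]           # hypothetical tracer signature, source A
channel  = [4.0, 1.0, 10.0]            # hypothetical tracer signature, source B
sediment = [0.7 * a + 0.3 * b for a, b in zip(topsoil, channel)]
print(unmix_two_sources(sediment, topsoil, channel))   # recovers ~0.7
```

With noisy tracers, overlapping source signatures and more than two sources, the problem becomes ill-posed, which is exactly where the Bayesian treatment of uncertainty earns its keep.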

  3. A holographic bound for D3-brane

    Energy Technology Data Exchange (ETDEWEB)

    Momeni, Davood; Myrzakul, Aizhan; Myrzakulov, Ratbay [Eurasian National University, Eurasian International Center for Theoretical Physics, Astana (Kazakhstan); Eurasian National University, Department of General Theoretical Physics, Astana (Kazakhstan); Faizal, Mir [University of British Columbia-Okanagan, Irving K. Barber School of Arts and Sciences, Kelowna, BC (Canada); University of Lethbridge, Department of Physics and Astronomy, Lethbridge, AB (Canada); Bahamonde, Sebastian [University College London, Department of Mathematics, London (United Kingdom)]

    2017-06-15

    In this paper, we will regularize the holographic entanglement entropy, holographic complexity and fidelity susceptibility for a configuration of D3-branes. We will also study the regularization of the holographic complexity from the action for a configuration of D3-branes. It will be demonstrated that for a spherical shell of D3-branes the regularized holographic complexity is always greater than or equal to the regularized fidelity susceptibility. Furthermore, we will also demonstrate that the regularized holographic complexity is related to the regularized holographic entanglement entropy for this system. Thus, we will obtain a holographic bound involving regularized holographic complexity, regularized holographic entanglement entropy and regularized fidelity susceptibility of a configuration of D3-brane. We will also discuss a bound for regularized holographic complexity from action, for a D3-brane configuration. (orig.)

  4. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and the Geographic Information System (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions are usually different between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to measure comprehensively the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurements is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. 
The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  5. Holographic Jet Quenching

    Science.gov (United States)

    Ficnar, Andrej

    In this dissertation we study the phenomenon of jet quenching in quark-gluon plasma using the AdS/CFT correspondence. We start with a weakly coupled, perturbative QCD approach to energy loss, and present a Monte Carlo code for computation of the DGLV radiative energy loss of quarks and gluons at an arbitrary order in opacity. We use the code to compute the radiated gluon distribution up to n=9 order in opacity, and compare it to the thin plasma (n=1) and the multiple soft scattering (n=infinity) approximations. We furthermore show that the gluon distribution at finite opacity depends in detail on the screening mass mu and the mean free path lambda. In the next part, we turn to the studies of how heavy quarks, represented as "trailing strings" in AdS/CFT, lose energy in a strongly coupled plasma. We study how the heavy quark energy loss gets modified in a "bottom-up" non-conformal holographic model, constructed to reproduce some properties of QCD at finite temperature and constrained by fitting the lattice gauge theory results. The energy loss of heavy quarks is found to be strongly sensitive to the medium properties. We use this model to compute the nuclear modification factor RAA of charm and bottom quarks in an expanding plasma with Glauber initial conditions, and comment on the range of validity of the model. The central part of this thesis is the energy loss of light quarks in a strongly coupled plasma. Using the standard model of "falling strings", we present an analytic derivation of the stopping distance of light quarks, previously available only through numerical simulations, and also apply it to the case of Gauss-Bonnet higher derivative gravity. We then present a general formula for computing the instantaneous energy loss in non-stationary string configurations. Application of this formula to the case of falling strings reveals interesting phenomenology, including a modified Bragg-like peak at late times and an approximately linear path dependence. 
Based

  6. Developing a Hierarchical Decision Model to Evaluate Nuclear Power Plant Alternative Siting Technologies

    Science.gov (United States)

    Lingga, Marwan Mossa

    A strong trend of returning to nuclear power is evident in different places in the world. Forty-five countries are planning to add nuclear power to their grids and more than 66 nuclear power plants are under construction. Nuclear power plants that generate electricity and steam need to improve safety to become more acceptable to governments and the public. One novel practical solution to increase nuclear power plants' safety factor is to build them away from urban areas, such as offshore or underground. To date, Land-Based siting is the dominant option for siting all commercial operational nuclear power plants. However, the literature reveals several options for building nuclear power plants in safer sitings than Land-Based sitings. The alternatives are several and each has advantages and disadvantages, and it is difficult to distinguish among them and choose the best for a specific project. In this research, we recall the old idea of using the alternatives of offshore and underground sitings for new nuclear power plants and propose a tool to help in choosing the best siting technology. This research involved the development of a decision model for evaluating several potential nuclear power plant siting technologies, both those that are currently available and future ones. The decision model was developed based on the Hierarchical Decision Modeling (HDM) methodology. The model considers five major dimensions, social, technical, economic, environmental, and political (STEEP), and their related criteria and sub-criteria. The model was designed and developed by the author, and its elements' validation and evaluation were done by a large number of experts in the field of nuclear energy. The decision model was applied in evaluating five potential siting technologies and ranked the Natural Island as the best in comparison to Land-Based, Floating Plant, Artificial Island, and Semi-Embedded plant.
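
    The HDM aggregation step itself is a weighted sum over the hierarchy of dimensions and criteria. A stripped-down sketch (two alternatives, top-level STEEP weights only; all weights and scores below are invented for illustration, not the expert judgements elicited in the study):

```python
def hdm_score(weights, criterion_scores):
    """Aggregate a hierarchical decision model: each top-level dimension has
    a weight, each alternative a normalised score per dimension; the
    composite score is the weighted sum."""
    return {alt: sum(weights[d] * s[d] for d in weights)
            for alt, s in criterion_scores.items()}

# Hypothetical STEEP weights (would come from expert elicitation in HDM).
steep_weights = {"social": 0.25, "technical": 0.20, "economic": 0.20,
                 "environmental": 0.20, "political": 0.15}
# Hypothetical per-dimension scores for two of the five siting alternatives.
alternatives = {
    "Land-Based":     {"social": 0.5, "technical": 0.9, "economic": 0.8,
                       "environmental": 0.4, "political": 0.5},
    "Natural Island": {"social": 0.8, "technical": 0.7, "economic": 0.6,
                       "environmental": 0.8, "political": 0.7},
}
scores = hdm_score(steep_weights, alternatives)
best = max(scores, key=scores.get)
```

In the full model each STEEP dimension is itself decomposed into weighted criteria and sub-criteria, and the weights are validated across a panel of experts.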

  7. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them, a 3D FE model of hybrid composites with a nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them, the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels and using the nanoreinforcement effects. The main future directions…

  8. Enriching the hierarchical model of achievement motivation: autonomous and controlling reasons underlying achievement goals.

    Science.gov (United States)

    Michou, Aikaterini; Vansteenkiste, Maarten; Mouratidis, Athanasios; Lens, Willy

    2014-12-01

    The hierarchical model of achievement motivation presumes that achievement goals channel the achievement motives of need for achievement and fear of failure towards motivational outcomes. Yet, less is known about whether autonomous and controlling reasons underlying the pursuit of achievement goals can serve as additional pathways between achievement motives and outcomes. We tested whether mastery approach, performance approach, and performance avoidance goals and their underlying autonomous and controlling reasons would jointly explain the relation between achievement motives (i.e., fear of failure and need for achievement) and learning strategies (Study 1). Additionally, we examined whether the autonomous and controlling reasons underlying learners' dominant achievement goal would account for the link between achievement motives and the educational outcomes of learning strategies and cheating (Study 2). Six hundred and six Greek adolescent students (Mage = 15.05, SD = 1.43) and 435 university students (Mage = 20.51, SD = 2.80) participated in Studies 1 and 2, respectively. In both studies, a correlational design was used and the hypotheses were tested via path modelling. Autonomous and controlling reasons underlying the pursuit of achievement goals mediated, respectively, the relation of need for achievement and fear of failure to aspects of learning outcomes. Autonomous and controlling reasons underlying achievement goals could further explain learners' functioning in achievement settings. © 2014 The British Psychological Society.

  9. The SIS Model of Epidemic Spreading in a Hierarchical Social Network

    International Nuclear Information System (INIS)

    Grabowski, A.; Kosinski, R.A.

    2005-01-01

    The phenomenon of epidemic spreading in a population with a hierarchical structure of interpersonal interactions is described and investigated numerically. The SIS model with temporal immunity to a disease and a time of incubation is used. In our model, spatial localization of individuals belonging to different social groups, the effectiveness of different interpersonal interactions and the mobility of a contemporary community are taken into account. The structure of interpersonal connections is based on a scale-free network. The influence of the structure of the social network on typical relations characterizing the spreading process, such as the range of the epidemic and the epidemic curves, is discussed. The probability that an endemic state occurs is also calculated. Surprisingly, it turns out that less contagious diseases have a greater chance to survive. The influence of preventive vaccinations on the spreading process is investigated, and the critical range of vaccinations that is sufficient for the suppression of an epidemic is calculated. Our results of numerical calculations are compared with the solutions of the master equation for the spreading process, and good agreement is found. (author)
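
    The ingredients of such a simulation, a scale-free contact network plus a stochastic SIS update, fit in a short sketch. Below, preferential attachment stands in for the scale-free structure, and the incubation time and temporary immunity of the paper's model are omitted; all parameter values are invented:

```python
import random

def barabasi_albert(n, m, rng):
    """Scale-free network via preferential attachment (a common proxy for
    the scale-free contact structure the abstract describes)."""
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))            # sampling pool, weighted by degree
    for v in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
            repeated += [v, t]
    return adj

def sis_step(adj, infected, beta, gamma, rng):
    """One synchronous SIS update: infected nodes recover w.p. gamma;
    susceptible nodes are infected w.p. 1-(1-beta)^k given k infected
    neighbours."""
    nxt = set()
    for v in adj:
        if v in infected:
            if rng.random() >= gamma:     # fails to recover, stays infected
                nxt.add(v)
        else:
            k = sum(1 for u in adj[v] if u in infected)
            if k and rng.random() < 1.0 - (1.0 - beta) ** k:
                nxt.add(v)
    return nxt

rng = random.Random(0)
net = barabasi_albert(200, 2, rng)
infected = set(range(5))                  # initial seed cases
for _ in range(50):
    infected = sis_step(net, infected, beta=0.2, gamma=0.1, rng=rng)
prevalence = len(infected) / 200
```

Sweeping beta and gamma over many seeded runs yields the epidemic curves and the endemic-state probability discussed in the abstract.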

  10. Interacting holographic dark energy with logarithmic correction

    OpenAIRE

    Jamil, Mubasher; Farooq, M. Umar

    2010-01-01

    The holographic dark energy (HDE) is considered to be the most promising candidate of dark energy. Its definition is originally motivated from the entropy-area relation which depends on the theory of gravity under consideration. Recently a new definition of HDE is proposed with the help of quantum corrections to the entropy-area relation in the setup of loop quantum cosmology. Using this new definition, we investigate the model of interacting dark energy and derive its effective equation of state.

  11. Holographic processing of track chamber data

    Energy Technology Data Exchange (ETDEWEB)

    Bykovsky, Y A; Larkin, A I; Markilov, A A; Starikov, S N [Moskovskij Fiziko-Tekhnicheskij Inst. (USSR)]

    1975-12-01

    The holographic pattern recognition method was applied to the processing of track chamber photographs. Experiments on the detection of events such as a track in a definite direction, an angle formed by two tracks, a three-pronged star, and a track with a definite curvature were performed using models. It is proposed to recognize these events in a film shot by the shape of the correlation signals. An experiment to recognize an event in a real bubble chamber film shot was also carried out, and requirements for the films to be processed were determined.

  12. Motivation, Classroom Environment, and Learning in Introductory Geology: A Hierarchical Linear Model

    Science.gov (United States)

    Gilbert, L. A.; Hilpert, J. C.; Van Der Hoeven Kraft, K.; Budd, D.; Jones, M. H.; Matheney, R.; Mcconnell, D. A.; Perkins, D.; Stempien, J. A.; Wirth, K. R.

    2013-12-01

    Prior research has indicated that highly motivated students perform better and that learning increases in innovative, reformed classrooms, but untangling the student effects from the instructor effects is essential to understanding how to best support student learning. Using a hierarchical linear model, we examine these effects separately and jointly. We use data from nearly 2,000 undergraduate students surveyed by the NSF-funded GARNET (Geoscience Affective Research NETwork) project in 65 different introductory geology classes at research universities, public masters-granting universities, liberal arts colleges and community colleges across the US. Student level effects were measured as increases in expectancy and self-regulation using the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al., 1991). Instructor level effects were measured using the Reformed Teaching Observation Protocol (RTOP; Sawada et al., 2000), with higher RTOP scores indicating a more reformed, student-centered classroom environment. Learning was measured by learning gains on a Geology Concept Inventory (GCI; Libarkin and Anderson, 2005) and normalized final course grade. The hierarchical linear model yielded significant results at several levels. At the student level, increases in expectancy and self-regulation are significantly and positively related to higher grades regardless of instructor; the higher the increase, the higher the grade. At the instructor level, RTOP scores are positively related to normalized average GCI learning gains. The higher the RTOP score, the higher the average class GCI learning gains. Across both levels, average class GCI learning gains are significantly and positively related to student grades; the higher the GCI learning gain, the higher the grade. Further, the RTOP scores are significantly and negatively related to the relationship between expectancy and course grade. The lower the RTOP score, the higher the correlation between change in

  13. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up by components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up by similar turbines. Optimal management of wind farms is an important task due to high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, process observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.
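
    At the heart of any POMDP-based monitoring scheme is the Bayesian belief update over latent component states. A minimal sketch (a hypothetical 3-state degradation chain with fixed, known probabilities; MU-POMDP instead treats these parameters as dependent random variables shared across similar components):

```python
def belief_update(belief, action, obs, T, O):
    """Bayes filter at the core of POMDP monitoring: propagate the belief
    through the transition model T[a][s][s'], then condition on the
    observation via the emission model O[s'][o]."""
    n = len(belief)
    pred = [sum(belief[s] * T[action][s][s2] for s in range(n))
            for s2 in range(n)]
    post = [pred[s2] * O[s2][obs] for s2 in range(n)]
    z = sum(post)
    return [p / z for p in post]

# Hypothetical component states: 0 = intact, 1 = worn, 2 = failed.
T = {"operate": [[0.90, 0.09, 0.01],
                 [0.00, 0.85, 0.15],
                 [0.00, 0.00, 1.00]]}
# Hypothetical observations: 0 = "looks fine", 1 = "alarm".
O = [[0.95, 0.05],
     [0.60, 0.40],
     [0.10, 0.90]]

b = belief_update([1.0, 0.0, 0.0], "operate", 1, T, O)   # alarm observed
```

Maintenance actions are then chosen against this belief; the hierarchical layer of MU-POMDP lets observations from one turbine sharpen the transition and emission parameters of its neighbours.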

  14. Laser adaptive holographic hydrophone

    Energy Technology Data Exchange (ETDEWEB)

    Romashko, R V; Kulchin, Yu N; Bezruk, M N; Ermolaev, S A [Institute of Automation and Control Processes, Far Eastern Branch of the Russian Academy of Sciences, Vladivostok (Russian Federation)]

    2016-03-31

    A new type of laser hydrophone based on dynamic holograms formed in a photorefractive crystal is proposed and studied. It is shown that the use of dynamic holograms makes it unnecessary to use complex optical schemes and systems for electronic stabilisation of the interferometer operating point. This essentially simplifies the scheme of the laser hydrophone while preserving its high sensitivity, which makes it possible to use the device under strong variations of environmental parameters. The implemented laser adaptive holographic hydrophone has a sensitivity of 3.3 mV Pa⁻¹ in the frequency range from 1 to 30 kHz.

  15. Volume holographic memory

    Directory of Open Access Journals (Sweden)

    Cornelia Denz

    2000-05-01

    Volume holography represents a promising alternative to existing storage technologies. Its parallel data storage leads to high capacities combined with short access times and high transfer rates. The design and realization of a compact volume holographic storage demonstrator is presented. The technique of phase-coded multiplexing, implemented to superimpose many data pages in a single location, enables storing up to 480 holograms per storage location without any moving parts. Results of analog and digital data storage are shown and real-time optical image processing is demonstrated.
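    Phase-coded multiplexing superimposes pages with mutually orthogonal phase codes and recovers any one page by correlating with its code. A purely numerical sketch (Hadamard ±1 codes standing in for the optical phase patterns; this is not a model of the photorefractive recording physics):

```python
# Toy illustration of phase-coded multiplexing: data pages are superimposed
# with orthogonal +/-1 codes (rows of a Hadamard matrix) and recovered by
# correlating with the right code.

def hadamard(n):
    """Sylvester construction: n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

codes = hadamard(4)                                     # 4 orthogonal codes
pages = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [0, 1, 0]]    # 4 "data pages"

# Record: one superposed exposure per code slot k
hologram = [[sum(codes[i][k] * pages[i][px] for i in range(4))
             for px in range(3)] for k in range(4)]

# Read out page j by correlating the exposures with code j;
# orthogonality (sum_k codes[j][k]*codes[i][k] = 4 if i == j, else 0)
# cancels all other pages exactly.
def readout(j):
    return [sum(codes[j][k] * hologram[k][px] for k in range(4)) / 4
            for px in range(3)]

print(readout(1))   # recovers pages[1]: [4.0, 5.0, 6.0]
```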

  16. Holographic magnetisation density waves

    Energy Technology Data Exchange (ETDEWEB)

    Donos, Aristomenis [Centre for Particle Theory and Department of Mathematical Sciences, Durham University,Stockton Road, Durham, DH1 3LE (United Kingdom); Pantelidou, Christiana [Departament de Fisica Quantica i Astrofisica & Institut de Ciencies del Cosmos (ICC),Universitat de Barcelona,Marti i Franques 1, 08028 Barcelona (Spain)

    2016-10-10

    We numerically construct asymptotically AdS black brane solutions of D=4 Einstein theory coupled to a scalar and two U(1) gauge fields. The solutions are holographically dual to d=3 CFTs in a constant external magnetic field along one of the U(1)’s. Below a critical temperature the system’s magnetisation density becomes inhomogeneous, leading to spontaneous formation of current density waves. We find that the transition can be of second order and that the solutions which minimise the free energy locally in the parameter space of solutions have the averaged stress tensor of a perfect fluid.

  17. Use of hierarchical models to analyze European trends in congenital anomaly prevalence

    DEFF Research Database (Denmark)

    Cavadino, Alana; Prieto-Merino, David; Addor, Marie-Claude

    2016-01-01

    BACKGROUND: Surveillance of congenital anomalies is important to identify potential teratogens. Despite known associations between different anomalies, current surveillance methods examine trends within each subgroup separately. We aimed to evaluate whether hierarchical statistical methods that c...

  18. Constraining holographic cosmology using Planck data

    Science.gov (United States)

    Afshordi, Niayesh; Gould, Elizabeth; Skenderis, Kostas

    2017-06-01

    Holographic cosmology offers a novel framework for describing the very early Universe in which cosmological predictions are expressed in terms of the observables of a three-dimensional quantum field theory (QFT). This framework includes conventional slow-roll inflation, which is described in terms of a strongly coupled QFT, but it also allows for qualitatively new models for the very early Universe, where the dual QFT may be weakly coupled. The new models describe a universe which is nongeometric at early times. While standard slow-roll inflation leads to a (near-) power-law primordial power spectrum, perturbative super-renormalizable QFTs yield a new holographic spectral shape. Here, we compare the two predictions against cosmological observations. We use CosmoMC to determine the best fit parameters, and MultiNest for Bayesian evidence, comparing the likelihoods. We find that the dual QFT should be nonperturbative at the very low multipoles (l ≲ 30), while for higher multipoles (l ≳ 30) the new holographic model, based on perturbative QFT, fits the data just as well as the standard power-law spectrum assumed in ΛCDM cosmology. This finding opens the door to applications of nonperturbative QFT techniques, such as lattice simulations, to observational cosmology on gigaparsec scales and beyond.

  19. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    Science.gov (United States)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real-time (on the fly), providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features

  20. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evident in the diagnostic measures. In contrast, the diagnostics and aggregated performance measures indicate that the French Broad has a homogeneous catchment response, making the single model adequate to capture that response.
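    The core HME mechanism — a gate that adaptively weights expert predictions according to an indicator variable — can be sketched with two toy runoff "experts". The expert functions and gate parameters below are invented for illustration; they are not calibrated HBV structures.

```python
import math

# Two hypothetical "expert" runoff models blended by a logistic gate on an
# indicator variable (e.g. antecedent wetness). Purely illustrative values.

def expert_fast(precip):      # dominant in wet, flashy conditions
    return 0.8 * precip

def expert_slow(precip):      # dominant in dry, baseflow-controlled conditions
    return 0.2 * precip

def gate(indicator, w=2.0, b=-1.0):
    """Probability of routing weight to the fast expert."""
    return 1.0 / (1.0 + math.exp(-(w * indicator + b)))

def hme_predict(precip, indicator):
    g = gate(indicator)
    return g * expert_fast(precip) + (1 - g) * expert_slow(precip)

# Wet state routes most weight to the fast expert (~7.72 of a possible 8.0)
print(hme_predict(10.0, indicator=2.0))
# Dry state routes weight to the slow expert (~2.04, near the slow value 2.0)
print(hme_predict(10.0, indicator=-2.0))
```

    A full HME stacks such gates hierarchically and fits gate and expert parameters jointly, so the "dominant process" is inferred from the data rather than fixed in advance.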

  1. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-01-01

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
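    The Poisson-Gamma hierarchy is equivalent to a negative binomial distribution and inflates the variance above the Poisson mean (Var = m + m²/shape), which is exactly the overdispersion the paper observes in real coverage data. A small simulation of that hierarchy, with illustrative parameter values rather than anything fitted to the malaria data:

```python
import math
import random

# Simulate the Poisson-Gamma hierarchy: each position's coverage is Poisson
# with a Gamma-distributed rate. Parameters are illustrative only.
random.seed(1)
shape, mean_cov = 5.0, 50.0          # Gamma shape; target mean coverage
scale = mean_cov / shape

draws = []
for _ in range(20000):
    lam = random.gammavariate(shape, scale)     # per-position rate
    # Poisson draw via CDF inversion (adequate for moderate lambda)
    k, p, target = 0, math.exp(-lam), random.random()
    cum = p
    while cum < target:
        k += 1
        p *= lam / k
        cum += p
    draws.append(k)

m = sum(draws) / len(draws)
v = sum((d - m) ** 2 for d in draws) / len(draws)
print(m, v)   # empirical variance far exceeds the mean: overdispersion
# Theory: Var = mean + mean^2 / shape = 50 + 2500/5 = 550, versus 50 for
# a plain Poisson model with the same mean.
```

    Under a plain Poisson assumption the variance would equal the mean, so coverage thresholds derived from it would flag far too many positions; the hierarchical model absorbs the extra spread.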

  2. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    Science.gov (United States)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.
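    The "~3σ" figure quoted for the tension is the difference of two independent estimates in units of their combined uncertainty — which is precisely why the tails of the H0 likelihoods dominate the Bayesian model comparison. A minimal calculation with illustrative values (a local ladder estimate of 73.2 ± 1.7 versus a CMB estimate of 67.8 ± 0.9; these stand in for, and are not, the paper's hierarchical posterior):

```python
import math

# Number-of-sigma tension between two independent Gaussian estimates.
def tension_sigma(x1, s1, x2, s2):
    return abs(x1 - x2) / math.sqrt(s1 ** 2 + s2 ** 2)

# Illustrative local-ladder vs CMB values of H0 in km/s/Mpc
print(round(tension_sigma(73.2, 1.7, 67.8, 0.9), 2))   # ~2.81 sigma
```

    The paper's point is that this Gaussian summary can mislead: once non-Gaussian tails and outliers are modelled, the Bayesian odds against ΛCDM are far less dramatic than the raw sigma count suggests.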

  3. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    Science.gov (United States)

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.

  4. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno

    2013-02-26

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.

  5. Chemical and morphological gradient scaffolds to mimic hierarchically complex tissues: From theoretical modeling to their fabrication.

    Science.gov (United States)

    Marrella, Alessandra; Aiello, Maurizio; Quarto, Rodolfo; Scaglione, Silvia

    2016-10-01

    Porous multiphase scaffolds have been proposed in different tissue engineering applications because of their potential to artificially recreate the heterogeneous structure of hierarchically complex tissues. Recently, graded scaffolds have also been realized, offering a continuum at the interface among different phases for enhanced structural stability of the scaffold. However, their internal architecture is often obtained empirically and the architectural parameters are rarely predetermined. The aim of this work is to offer a theoretical model as a tool for the design and fabrication of functional and structural complex graded scaffolds with predicted morphological and chemical features, to overcome the time-consuming trial-and-error experimental method. The developed mathematical model uses laws of motion, Stokes equations, and viscosity laws to describe the dependence between centrifugation speed and fiber/particle sedimentation velocity over time, which finally affects the fiber packing, and thus the total porosity, of the 3D scaffolds. The efficacy of the theoretical model was tested by realizing engineered graded grafts for osteochondral tissue engineering applications. The procedure, based on a combined centrifugation and freeze-drying technique, was applied to both polycaprolactone (PCL) and collagen-type-I (COL) to test the versatility of the entire process. A functional gradient was combined with the morphological one by adding hydroxyapatite (HA) powders to mimic the bone mineral phase. Results show that 3D bioactive, morphologically and chemically graded grafts can be properly designed and realized in agreement with the theoretical model. Biotechnol. Bioeng. 2016;113:2286-2297. © 2016 Wiley Periodicals, Inc.
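    The model's key physical ingredient is Stokes' law: a small sphere of radius r settles through a fluid of viscosity μ at v = 2r²(ρp − ρf)a / (9μ), where a is the gravitational or centrifugal acceleration (a = ω²R). A sketch with generic values, not the paper's PCL/collagen parameters:

```python
import math

# Stokes settling velocity; a is the acceleration (gravity or centrifugal).
def stokes_velocity(r, rho_p, rho_f, mu, a):
    return 2.0 * r ** 2 * (rho_p - rho_f) * a / (9.0 * mu)

g = 9.81                         # m/s^2, gravity
omega = 2 * math.pi * 50         # rad/s for a 50 rev/s spin (illustrative)
R = 0.05                         # m, rotor radius (illustrative)
a_centrifugal = omega ** 2 * R

# A 5-micron particle, 20% denser than water, in a water-like fluid
v_g = stokes_velocity(5e-6, 1200.0, 1000.0, 1e-3, g)
v_c = stokes_velocity(5e-6, 1200.0, 1000.0, 1e-3, a_centrifugal)
print(v_c / v_g)   # centrifugation accelerates sedimentation ~500x here
```

    Because v scales linearly with a, tuning the centrifugation speed directly tunes how fast fibers and particles stratify, and hence the porosity gradient that freezes into the scaffold.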

  6. Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model

    Science.gov (United States)

    Mukhopadhyay, S.; Arumugam, S.

    2017-12-01

    Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as they enable water managers to decide short-term releases (15-30 days) while holding water for seasonal needs (e.g., irrigation and municipal supply) and meeting end-of-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Authority (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at the S2S time scale. The hidden Markov model also captures both the spatial and temporal hierarchy in predictors that operate at the S2S time scale, with model parameters estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely a single-site model and a multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single-site model. For the multi-site model, we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at the S2S time scale. We consider precipitation forecasts obtained from the NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts, along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of the reservoirs is also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. The skill of the proposed approach is tested using split-sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors. From the posterior distribution of the inflow forecasts, we also highlight different system behaviors.
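    The hidden-Markov backbone of such a model can be illustrated with the standard forward (filtering) recursion. The two-state "dry/wet regime" matrices below are invented for illustration; they are not the fitted TVA model.

```python
# Standard HMM forward recursion with per-step normalisation: returns the
# filtered probability of each hidden state after each observation.

def forward_filter(pi, T, E, observations):
    n = len(pi)
    alpha = [pi[s] * E[s][observations[0]] for s in range(n)]
    z = sum(alpha); alpha = [a / z for a in alpha]
    history = [alpha]
    for obs in observations[1:]:
        # Predict with transition matrix, then correct with emission matrix
        pred = [sum(alpha[s] * T[s][sp] for s in range(n)) for sp in range(n)]
        alpha = [pred[s] * E[s][obs] for s in range(n)]
        z = sum(alpha); alpha = [a / z for a in alpha]
        history.append(alpha)
    return history

pi = [0.5, 0.5]                 # states: 0 = dry regime, 1 = wet regime
T = [[0.8, 0.2], [0.3, 0.7]]    # regimes are persistent
E = [[0.9, 0.1], [0.4, 0.6]]    # observations: 0 = low inflow, 1 = high inflow

filtered = forward_filter(pi, T, E, [1, 1, 1])
print(filtered[-1])             # a run of high inflows pushes belief to "wet"
```

    The Bayesian hierarchical layer of the BHHMM places priors on T and E (and ties them across sites), so the filtered regime probabilities come with full posterior uncertainty rather than point estimates.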

  7. Properties of multilayer nonuniform holographic structures

    International Nuclear Information System (INIS)

    Pen, E F; Rodionov, Mikhail Yu

    2010-01-01

    Experimental results and analysis of the properties of multilayer nonuniform holographic structures formed in photopolymer materials are presented. The theoretical hypothesis is confirmed that the angular selectivity characteristics of the considered structures have a set of local maxima, whose number and width are determined by the thicknesses of the intermediate layers and deep holograms, and that the envelope of the maxima coincides with the selectivity contour of a single holographic grating. It is also shown experimentally that hologram nonuniformities substantially distort the shapes of the selectivity characteristics: they become asymmetric, the local maxima differ in size, and the depths of the local minima are reduced. The modelling results are brought into agreement with the experimental data by an appropriate choice of the nonuniformity parameters.

  8. Reheating of the Universe as holographic thermalization

    Energy Technology Data Exchange (ETDEWEB)

    Kawai, Shinsuke, E-mail: shinsuke.kawai@gmail.com [Department of Physics, Sungkyunkwan University, Suwon 16419 (Korea, Republic of); Nakayama, Yu [California Institute of Technology, 452-48, Pasadena, CA 91125 (United States); Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, Kashiwa, Chiba 277-8583 (Japan)

    2016-08-10

    Assuming gauge/gravity correspondence, we study reheating of the Universe using its holographic dual. Inflaton decay and thermalisation of the decay products correspond to the collapse of a spherical shell and the formation of a black hole in the dual anti-de Sitter (AdS) spacetime. The reheating temperature is computed as the Hawking temperature of the developed black hole probed by a dynamical boundary, and is determined by the inflaton energy density and the AdS radius, with corrections from the dynamics of the shell collapse. For a given initial energy density of the inflaton field, the holographic model typically gives a lower reheating temperature than the instant reheating scenario, while it is shown to be safely within phenomenological bounds.

  9. Reheating of the Universe as holographic thermalization

    Directory of Open Access Journals (Sweden)

    Shinsuke Kawai

    2016-08-01

    Assuming gauge/gravity correspondence, we study reheating of the Universe using its holographic dual. Inflaton decay and thermalisation of the decay products correspond to the collapse of a spherical shell and the formation of a black hole in the dual anti-de Sitter (AdS) spacetime. The reheating temperature is computed as the Hawking temperature of the developed black hole probed by a dynamical boundary, and is determined by the inflaton energy density and the AdS radius, with corrections from the dynamics of the shell collapse. For a given initial energy density of the inflaton field, the holographic model typically gives a lower reheating temperature than the instant reheating scenario, while it is shown to be safely within phenomenological bounds.

  10. Predictors of Drinking Water Boiling and Bottled Water Consumption in Rural China: A Hierarchical Modeling Approach.

    Science.gov (United States)

    Cohen, Alasdair; Zhang, Qi; Luo, Qing; Tao, Yong; Colford, John M; Ray, Isha

    2017-06-20

    Approximately two billion people drink unsafe water. Boiling is the most commonly used household water treatment (HWT) method globally and in China. HWT can make water safer, but sustained adoption is rare and bottled water consumption is growing. To successfully promote HWT, an understanding of associated socioeconomic factors is critical. We collected survey data and water samples from 450 rural households in Guangxi Province, China. Covariates were grouped into blocks to hierarchically construct modified Poisson models and estimate risk ratios (RR) associated with boiling methods, bottled water, and untreated water. Female-headed households were most likely to boil (RR = 1.36, p water, or use electric kettles if they boiled. Our findings show that boiling is not an undifferentiated practice, but one with different methods of varying effectiveness, environmental impact, and adoption across socioeconomic strata. Our results can inform programs to promote safer and more efficient boiling using electric kettles, and suggest that if rural China's economy continues to grow then bottled water use will increase.
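    Risk ratios such as the RR = 1.36 quoted above compare the probability of an outcome (here, boiling) between two groups. A minimal calculation from a hypothetical 2×2 table (counts invented, chosen only so the point estimate matches 1.36; this is not the Guangxi survey data or the paper's covariate-adjusted model):

```python
import math

# Risk ratio with a Wald 95% confidence interval on the log scale.
def risk_ratio(a, n1, b, n0):
    """a events among n1 exposed; b events among n0 unexposed."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)   # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical: 68/100 female-headed households boil vs 50/100 others
rr, ci = risk_ratio(68, 100, 50, 100)
print(round(rr, 2), [round(x, 2) for x in ci])
```

    The paper's modified Poisson regression estimates such ratios while adjusting for blocks of socioeconomic covariates, but the interpretation of the resulting RR is the same.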

  11. Modelling the dynamics of an experimental host-pathogen microcosm within a hierarchical Bayesian framework.

    Directory of Open Access Journals (Sweden)

    David Lunn

    The advantages of Bayesian statistical approaches, such as flexibility and the ability to acknowledge uncertainty in all parameters, have made them the prevailing method for analysing the spread of infectious diseases in human or animal populations. We introduce a Bayesian approach to experimental host-pathogen systems that shares these attractive features. Since uncertainty in all parameters is acknowledged, existing information can be accounted for through prior distributions, rather than through fixing some parameter values. The non-linear dynamics, multi-factorial design, multiple measurements of responses over time and sampling error that are typical features of experimental host-pathogen systems can also be naturally incorporated. We analyse the dynamics of the free-living protozoan Paramecium caudatum and its specialist bacterial parasite Holospora undulata. Our analysis provides strong evidence for a saturable infection function, and we were able to reproduce the two waves of infection apparent in the data by separating the initial inoculum from the parasites released after the first cycle of infection. In addition, the parameter estimates from the hierarchical model can be combined to infer variations in the parasite's basic reproductive ratio across experimental groups, enabling us to make predictions about the effect of resources and host genotype on the ability of the parasite to spread. Even though the high level of variability between replicates limited the resolution of the results, this Bayesian framework has strong potential to be used more widely in experimental ecology.

  12. Determination of a Differential Item Functioning Procedure Using the Hierarchical Generalized Linear Model

    Directory of Open Access Journals (Sweden)

    Tülin Acar

    2012-01-01

    The aim of this research is to compare the results of determining differential item functioning (DIF) with the hierarchical generalized linear model (HGLM) technique against the results obtained with the logistic regression (LR) and item response theory-likelihood ratio (IRT-LR) techniques on the test items. To this end, it is first determined whether students encounter DIF according to socioeconomic status (SES) with the HGLM, LR, and IRT-LR techniques in the Turkish, Social Sciences, and Science subtest items of the Secondary School Institutions Examination. When inspecting the correlations among the techniques in terms of identifying the items having DIF, a significant correlation was found between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR techniques found significant. DIF analyses can be carried out on test items with other DIF analysis techniques not included in the scope of this research, and the results obtained with these techniques in different sample sizes can be compared.

  13. Hierarchical modeling of indoor radon concentration: how much do geology and building factors matter?

    Science.gov (United States)

    Borgoni, Riccardo; De Francesco, Davide; De Bartolo, Daniela; Tzavidis, Nikos

    2014-12-01

    Radon is a natural gas known to be the main contributor to natural background radiation exposure and second only to smoking as a leading cause of lung cancer. The main concern is in indoor environments, where the gas tends to accumulate and can reach high concentrations. The primary source of this gas in a building is the soil, although architectural characteristics, such as building materials, can largely affect concentration values. Understanding the factors affecting the concentration in dwellings and workplaces is important both in prevention, when the construction of a new building is being planned, and in mitigation, when the amount of radon detected inside a building is too high. In this paper we investigate how several factors, such as geologic typologies of the soil and a range of building characteristics, impact on indoor concentration, focusing in particular on how concentration changes as a function of the floor level. Adopting a mixed effects model to account for the hierarchical nature of the data, we also quantify the extent to which such measurable factors manage to explain the variability of indoor radon concentration. Copyright © 2014 Elsevier Ltd. All rights reserved.
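    In mixed-model terms, "how much does geology matter?" is a variance-partition question, answered by the intraclass correlation ICC = σ²(between) / (σ²(between) + σ²(within)). A toy computation on invented log-concentration values grouped by hypothetical geological units (not the paper's data):

```python
# One-way, balanced variance partition: the share of total variability
# attributable to the grouping level (here, geology). Data are invented.
groups = {
    "alluvium": [3.9, 4.1, 4.0, 4.2],
    "moraine":  [4.8, 5.0, 5.1, 4.9],
    "bedrock":  [5.9, 6.1, 6.0, 6.2],
}

grand = [x for v in groups.values() for x in v]
grand_mean = sum(grand) / len(grand)
means = {g: sum(v) / len(v) for g, v in groups.items()}

# Within-group and between-group variance components
var_within = sum((x - means[g]) ** 2
                 for g, v in groups.items() for x in v) / len(grand)
var_between = sum(len(v) * (means[g] - grand_mean) ** 2
                  for g, v in groups.items()) / len(grand)

icc = var_between / (var_between + var_within)
print(round(icc, 3))   # near 1: geology explains most of the variation here
```

    A fitted mixed effects model estimates the same two variance components from unbalanced, covariate-adjusted data, but the interpretation of the ratio is identical.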

  14. Subjective value of risky foods for individual domestic chicks: a hierarchical Bayesian model.

    Science.gov (United States)

    Kawamori, Ai; Matsushima, Toshiya

    2010-05-01

    For animals to decide which prey to attack, the gain and delay of the food item must be integrated in a value function. However, the subjective value is not obtained by expected profitability when it is accompanied by risk. To estimate the subjective value, we examined choices in a cross-shaped maze with two colored feeders in domestic chicks. When tested by a reversal in food amount or delay, chicks changed choices similarly in both conditions (experiment 1). We therefore examined risk sensitivity for amount and delay (experiment 2) by supplying one feeder with food of fixed profitability and the alternative feeder with high- or low-profitability food at equal probability. Profitability varied in amount (groups 1 and 2 at high and low variance) or in delay (group 3). To find the equilibrium, the amount (groups 1 and 2) or delay (group 3) of the food in the fixed feeder was adjusted in a total of 18 blocks. The Markov chain Monte Carlo method was applied to a hierarchical Bayesian model to estimate the subjective value. Chicks undervalued the variable feeder in group 1 and were indifferent in group 2 but overvalued the variable feeder in group 3 at a population level. Re-examination without the titration procedure (experiment 3) suggested that the subjective value was not absolute for each option. When the delay was varied, the variable option was often given a paradoxically high value depending on fixed alternative. Therefore, the basic assumption of the uniquely determined value function might be questioned.
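    The paradoxical overvaluation of variable delays has a classic candidate explanation: if value is hyperbolically discounted, V = A/(1 + kd), then V is convex in the delay d, so by Jensen's inequality a 50/50 mix of short and long delays is worth more than the mean delay. A quick numerical check (the discount rate k and delays are arbitrary illustrations, not estimates from the chick data):

```python
# Hyperbolic discounting: value of amount A delivered after delay d.
def value(amount, delay, k=0.5):
    return amount / (1.0 + k * delay)

A = 1.0
short, long_, mean_delay = 1.0, 9.0, 5.0   # variable option: 50/50 mix

v_variable = 0.5 * value(A, short) + 0.5 * value(A, long_)
v_fixed = value(A, mean_delay)
print(v_variable, v_fixed)   # the variable-delay option is worth more
```

    The same convexity argument does not apply to variability in amount (value is linear in A), consistent with the asymmetry between groups 1-2 and group 3 reported above.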

  15. Factors influencing the occupational injuries of physical therapists in Taiwan: A hierarchical linear model approach.

    Science.gov (United States)

    Tao, Yu-Hui; Wu, Yu-Lung; Huang, Wan-Yun

    2017-01-01

    The literature suggests that physical therapy practitioners face a high probability of acquiring work-related injuries, but only a few studies have specifically investigated Taiwanese physical therapy practitioners. This study was conducted to determine the relationships among individual- and hospital-level factors that contribute to the medical expenses for the occupational injuries of physical therapy practitioners in Taiwan. Physical therapy practitioners in Taiwan with occupational injuries were selected from the 2013 National Health Insurance Research Databases (NHIRD). The age, gender, job title, hospital attributes, and outpatient data of physical therapy practitioners who sustained an occupational injury in 2013 were obtained with SAS 9.3. SPSS 20.0 and HLM 7.01 were used to conduct descriptive and hierarchical linear model analyses, respectively. The job title of physical therapy practitioners at the individual level and the hospital type at the group level exert positive effects on per person medical expenses. Hospital hierarchy moderates the individual-level relationships of age and job title with the per person medical expenses. Considering that age, job title, and hospital hierarchy affect medical expenses for the occupational injuries of physical therapy practitioners, we suggest strengthening related safety education and training and raising practitioners' awareness of occupational injury risk in order to reduce and prevent such injuries.

  16. Hierarchical modeling of indoor radon concentration: how much do geology and building factors matter?

    International Nuclear Information System (INIS)

    Borgoni, Riccardo; De Francesco, Davide; De Bartolo, Daniela; Tzavidis, Nikos

    2014-01-01

    Radon is a natural gas known to be the main contributor to natural background radiation exposure and second only to smoking as a leading cause of lung cancer. The main concern is in indoor environments, where the gas tends to accumulate and can reach high concentrations. The primary source of this gas in a building is the soil, although architectural characteristics, such as building materials, can largely affect concentration values. Understanding the factors affecting the concentration in dwellings and workplaces is important both in prevention, when the construction of a new building is being planned, and in mitigation, when the amount of radon detected inside a building is too high. In this paper we investigate how several factors, such as the geologic typology of the soil and a range of building characteristics, affect indoor concentration, focusing in particular on how concentration changes as a function of the floor level. Adopting a mixed effects model to account for the hierarchical nature of the data, we also quantify the extent to which such measurable factors manage to explain the variability of indoor radon concentration. - Highlights: • It is assessed how the variability of indoor radon concentration depends on buildings and lithologies. • The lithological component has been found less relevant than the building one. • Radon-prone lithologies have been identified. • The effect of the floor on which the room is located has been estimated. • Indoor radon concentration has been predicted for different dwelling typologies
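    The variance partitioning that a random-intercept (mixed effects) model performs can be sketched with a method-of-moments calculation on simulated grouped data; the building/room structure and variance components below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-radon readings: J buildings, n rooms each (hypothetical).
J, n = 200, 8
sigma_b, sigma_w = 1.0, 1.0          # between- / within-building std dev
b = rng.normal(0.0, sigma_b, J)      # building random intercepts
y = b[:, None] + rng.normal(0.0, sigma_w, (J, n))

# One-way ANOVA method-of-moments estimates (balanced design).
grand = y.mean()
msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (J - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (J * (n - 1))
var_b = max((msb - msw) / n, 0.0)    # between-building variance
icc = var_b / (var_b + msw)          # share of variance due to the building
print(round(icc, 2))
```

The intraclass correlation `icc` is the quantity a mixed effects analysis uses to report how much of the total variability is explained by the grouping level (here, the building).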

  17. Visualization and Hierarchical Analysis of Flow in Discrete Fracture Network Models

    Science.gov (United States)

    Aldrich, G. A.; Gable, C. W.; Painter, S. L.; Makedonska, N.; Hamann, B.; Woodring, J.

    2013-12-01

    Flow and transport in low-permeability fractured rock occur primarily in interconnected fracture networks. Prediction and characterization of flow and transport in fractured rock have important implications for underground repositories for hazardous materials (e.g. nuclear and chemical waste), contaminant migration and remediation, groundwater resource management, and hydrocarbon extraction. We have developed methods to explicitly model flow in discrete fracture networks and track flow paths using passive particle tracking algorithms. Visualization and analysis of particle trajectories through the fracture network are important to understanding fracture connectivity, flow patterns, potential contaminant pathways and fast paths through the network. However, occlusion due to the large number of highly tessellated and intersecting fracture polygons precludes the effective use of traditional visualization methods. We would also like quantitative analysis methods to characterize the trajectories of a large number of particle paths. We have solved these problems by defining a hierarchical flow network representing the topology of particle flow through the fracture network. This approach allows us to analyze the flow and the dynamics of the system as a whole. We are able to easily query the flow network, and use a paint-and-link-style framework to filter the fracture geometry and particle traces based on the flow analytics. This allows us to greatly reduce occlusion while emphasizing salient features such as the principal transport pathways. Examples are shown that demonstrate the methodology and highlight how this new method allows quantitative analysis and characterization of flow and transport in a number of representative fracture networks.
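    The core reduction — from occluding geometry to a queryable flow network — amounts to aggregating particle traces into a directed graph of fracture-to-fracture links. A minimal sketch, with invented fracture IDs and traces standing in for real particle-tracking output:

```python
from collections import Counter

# Hypothetical particle traces: each is the ordered list of fracture IDs
# a particle visited on its way through the network.
traces = [
    ["f1", "f3", "f7"],
    ["f1", "f3", "f5", "f7"],
    ["f2", "f3", "f7"],
    ["f1", "f3", "f7"],
]

# Aggregate particle counts on each directed fracture-to-fracture link.
edges = Counter()
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        edges[(a, b)] += 1

# Rank links by traffic: the top entries are candidate principal pathways.
for (a, b), count in edges.most_common(3):
    print(f"{a} -> {b}: {count} particles")
```

Filtering the rendered geometry to the highest-traffic edges of such a graph is what lets the visualization suppress occluding fractures while keeping the principal transport pathways visible.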

  18. Assessing exposure to violence using multiple informants: application of hierarchical linear model.

    Science.gov (United States)

    Kuo, M; Mohler, B; Raudenbush, S L; Earls, F J

    2000-11-01

    The present study assesses the effects of demographic risk factors on children's exposure to violence (ETV) and how these effects vary by informants. Data on exposure to violence of 9-, 12-, and 15-year-olds were collected from both child participants (N = 1880) and parents (N = 1776), as part of the assessment of the Project on Human Development in Chicago Neighborhoods (PHDCN). A two-level hierarchical linear model (HLM) with multivariate outcomes was employed to analyze information obtained from these two different groups of informants. The findings indicate that parents generally report less ETV than do their children and that associations of age, gender, and parent education with ETV are stronger in the self-reports than in the parent reports. The findings support a multivariate approach when information obtained from different sources is being integrated. The application of HLM allows an assessment of interactions between risk factors and informants and uses all available data, including data from one informant when data from the other informant is missing.

  19. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    Science.gov (United States)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach will allow glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it will also allow for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model built on exact analytical solutions of the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the use of a statistical model to correct the errors induced by the numerical solution. This error-correcting process models numerical errors that accumulate forward in time, as well as the spatial variation of numerical errors between the dome, interior, and margin of a glacier.
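    To make the physical layer of such a model concrete: the SIA on a flat bed reduces to a nonlinear diffusion equation for the ice thickness. The following explicit finite-difference step is a generic textbook-style scheme, not the paper's novel method, and the lumped rate constant `gamma` is illustrative rather than physically calibrated:

```python
import numpy as np

# Explicit finite-difference step for the 1D shallow ice approximation
# on a flat bed (surface h = thickness H), Glen exponent n = 3:
#   dH/dt = d/dx( gamma * H**(n+2) * |dh/dx|**(n-1) * dh/dx ) + M
nx, dx, dt = 101, 1000.0, 10.0      # grid cells, spacing [m], step [yr]
gamma = 1e-13                        # illustrative lumped rate constant
x = np.linspace(-50e3, 50e3, nx)
H = np.maximum(0.0, 3000.0 * (1 - (x / 40e3) ** 2))  # parabolic dome

def step(H, mass_balance=0.0):
    dhdx = np.diff(H) / dx                       # surface slope at cell faces
    Hmid = 0.5 * (H[1:] + H[:-1])                # thickness at cell faces
    flux = -gamma * Hmid**5 * np.abs(dhdx)**2 * dhdx
    dHdt = np.zeros_like(H)
    dHdt[1:-1] = -(flux[1:] - flux[:-1]) / dx    # zero flux at the margins
    return np.maximum(0.0, H + dt * (dHdt + mass_balance))

mass0 = H.sum() * dx
for _ in range(100):
    H = step(H)
print(abs(H.sum() * dx - mass0) / mass0)   # relative mass drift (tiny)
```

A statistical error model of the kind the paper proposes would then be fitted to the discrepancy between such numerical states and the exact Bueler-type solutions.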

  20. The traveltime holographic principle

    KAUST Repository

    Huang, Y.; Schuster, Gerard T.

    2014-01-01

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.

  1. The traveltime holographic principle

    KAUST Repository

    Huang, Y.

    2014-11-06

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.

  2. The traveltime holographic principle

    Science.gov (United States)

    Huang, Yunsong; Schuster, Gerard T.

    2015-01-01

    Fermat's interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat's interferometric principle. We denote this principle as the `traveltime holographic principle', by analogy with the holographic principle in cosmology where information in a volume is encoded on the region's boundary.
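    One common statement of the interferometric idea is that the interior time is the stationary (maximal) difference of exterior times over boundary sources, τpq = max over s of (τsq − τsp), which follows from the triangle inequality for traveltimes. A numerical sketch under simplifying assumptions (homogeneous medium, circular boundary; the exact formulation in the paper may differ):

```python
import numpy as np

v = 2.0                                   # homogeneous velocity [km/s]
p, q = np.array([-3.0, 0.0]), np.array([4.0, 1.0])   # interior points

# Sources s on a circular boundary B enclosing the volume V.
theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
s = 10.0 * np.column_stack([np.cos(theta), np.sin(theta)])

tau_sp = np.linalg.norm(s - p, axis=1) / v   # exterior traveltimes s -> p
tau_sq = np.linalg.norm(s - q, axis=1) / v   # exterior traveltimes s -> q

# Interior time as the stationary difference over boundary sources:
# no ray tracing between p and q is needed.
tau_pq = np.max(tau_sq - tau_sp)
print(tau_pq, np.linalg.norm(p - q) / v)
```

The maximum is attained for the source aligned with the ray through p and q, so the estimate matches the direct traveltime to within the angular discretization of the boundary.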

  3. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    Science.gov (United States)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information through the prior distribution and to combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists of selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists of assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented as an open-source R package, facilitating easy use by other practitioners.
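    The two-step recipe can be sketched end to end; the site database below is invented, and simple standardized-distance selection stands in for the paper's clustering and full hierarchical model:

```python
import numpy as np

# Hypothetical database: per-site features (e.g. porosity, depth) and a
# log10 hydraulic conductivity estimate for each previously studied site.
features = np.array([[0.30, 12.0], [0.28, 15.0], [0.10, 80.0],
                     [0.32, 10.0], [0.12, 75.0]])
log_K = np.array([-3.1, -3.4, -6.0, -2.9, -5.7])

target = np.array([0.29, 13.0])       # features of the new, data-scarce site

# Step 1 (data selection): pick the sites most similar to the target.
z = (features - features.mean(0)) / features.std(0)
zt = (target - features.mean(0)) / features.std(0)
similar = np.argsort(np.linalg.norm(z - zt, axis=1))[:3]

# Step 2 (data assimilation): fit a Gaussian informative prior from them.
mu, sd = log_K[similar].mean(), log_K[similar].std(ddof=1)
print(similar, round(mu, 2), round(sd, 2))
```

The resulting normal prior on log10 K would then enter a Bayesian inversion at the target site in place of an uninformative one.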

  4. 3D hierarchical computational model of wood as a cellular material with fibril reinforced, heterogeneous multiple layers

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A 3D hierarchical computational model of deformation and stiffness of wood, which takes into account the structures of wood at several scale levels (cellularity, multilayered nature of cell walls, composite-like structures of the wall layers) is developed. At the mesoscale, the softwood cell...... cellular model. With the use of the developed hierarchical model, the influence of the microstructure, including microfibril angles (MFAs, which characterizes the orientation of the cellulose fibrils with respect to the cell axis), the thickness of the cell wall, the shape of the cell cross...... is presented as a 3D hexagon-shape-tube with multilayered walls. The layers in the softwood cell are considered as composites reinforced by microfibrils (celluloses). The elastic properties of the layers are determined with Halpin–Tsai equations, and introduced into mesoscale finite element...
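    The Halpin–Tsai homogenization mentioned for the wall layers takes this standard form; the fibril and matrix moduli below are illustrative literature-range values, not the paper's inputs:

```python
# Halpin-Tsai estimate of a fibril-reinforced layer's longitudinal
# modulus. xi is a geometry parameter, often taken proportional to the
# fibre aspect ratio for aligned reinforcement.
def halpin_tsai(E_f, E_m, v_f, xi):
    eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
    return E_m * (1.0 + xi * eta * v_f) / (1.0 - eta * v_f)

E_cellulose = 134.0   # GPa, crystalline cellulose fibril (illustrative)
E_matrix = 2.0        # GPa, hemicellulose/lignin matrix (illustrative)
E_layer = halpin_tsai(E_cellulose, E_matrix, v_f=0.45, xi=20.0)
print(round(E_layer, 1))
```

The homogenized layer modulus always lies between the matrix and fibril moduli, and such per-layer values are what get fed into the mesoscale finite element cell model.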

  5. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    Science.gov (United States)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally
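    The nested shrinking-tolerance idea at the heart of ABC-SubSim can be illustrated with a toy sequential ABC sampler; this sketch uses plain resampling with a perturbation kernel as a crude stand-in for the component-wise Metropolis moves of the real algorithm, on an invented one-parameter system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: observed output generated by theta_true with noise.
theta_true, n = 3.0, 50
y_obs = rng.normal(theta_true, 1.0, n)
s_obs = y_obs.mean()                       # summary statistic

def discrepancy(theta):
    """Distance between simulated and observed summary statistics."""
    return abs(rng.normal(theta, 1.0, n).mean() - s_obs)

# Nested sequence of shrinking data-approximating regions.
thetas = rng.uniform(-10.0, 10.0, 2000)    # prior samples
for eps in [2.0, 0.5, 0.1]:
    d = np.array([discrepancy(t) for t in thetas])
    survivors = thetas[d < eps]
    # Repopulate around survivors (stand-in for Metropolis moves).
    thetas = rng.choice(survivors, 2000) + rng.normal(0.0, eps / 4, 2000)

print(round(thetas.mean(), 2))
```

Each level conditions on a tighter data-approximating region, so the sample cloud contracts toward parameter values whose predicted output matches the observation, mimicking the multi-level structure described in the abstract.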

  6. A hierarchical spatial model of avian abundance with application to Cerulean Warblers

    Science.gov (United States)

    Thogmartin, Wayne E.; Sauer, John R.; Knutson, Melinda G.

    2004-01-01

    Surveys collecting count data are the primary means by which abundance is indexed for birds. These counts are confounded, however, by nuisance effects including observer effects and spatial correlation between counts. Current methods poorly accommodate both observer and spatial effects because modeling these spatially autocorrelated counts within a hierarchical framework is not practical using standard statistical approaches. We propose a Bayesian approach to this problem and provide as an example of its implementation a spatial model of predicted abundance for the Cerulean Warbler (Dendroica cerulea) in the Prairie-Hardwood Transition of the upper midwestern United States. We used an overdispersed Poisson regression with fixed and random effects, fitted by Markov chain Monte Carlo methods. We used 21 years of North American Breeding Bird Survey counts as the response in a loglinear function of explanatory variables describing habitat, spatial relatedness, year effects, and observer effects. The model included a conditional autoregressive term representing potential correlation between adjacent route counts. Categories of explanatory habitat variables in the model included land cover composition and configuration, climate, terrain heterogeneity, and human influence. The inherent hierarchy in the model was from counts occurring, in part, as a function of observers within survey routes within years. We found that the percentage of forested wetlands, an index of wetness potential, and an interaction between mean annual precipitation and deciduous forest patch size best described Cerulean Warbler abundance. Based on a map of relative abundance derived from the posterior parameter estimates, we estimated that only 15% of the species' population occurred on federal land, necessitating active engagement of public landowners and state agencies in the conservation of the breeding habitat for this species. Models of this type can be applied to any data in which the response

  7. The CP-odd sector and $θ$ dynamics in holographic QCD

    NARCIS (Netherlands)

    Arean, Daniel; Iatrakis, Ioannis; Jarvinen, Matti; Kiritsis, Elias

    2017-01-01

    The holographic model of V-QCD is used to analyze the physics of QCD in the Veneziano large-N limit. An unprecedented analysis of the CP-odd physics is performed, going beyond the level of effective field theories. The structure of holographic saddle points at finite $\theta$ is determined, as well

  8. Constraints on holographic dark energy from type Ia supernova observations

    International Nuclear Information System (INIS)

    Zhang Xin; Wu Fengquan

    2005-01-01

    In this paper, we use the type Ia supernovae data to constrain the holographic dark energy model proposed by Li. We also apply a cosmic age test to this analysis. We consider in this paper a spatially flat Friedmann-Robertson-Walker universe with a matter component and a holographic dark energy component. The fit favors the case c < 1, with Ω_m0 = 0.28 and h = 0.65, which lead to the present equation of state of dark energy w_0 = -1.03 and the deceleration/acceleration transition redshift z_T = 0.63. Finally, an expected supernova/acceleration probe simulation using ΛCDM as a fiducial model is performed on this model, and the result shows that the holographic dark energy model takes on c < 1 (c = 0.92) even though the dark energy is indeed a cosmological constant
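    In Li's model, the dark energy equation of state and its evolution follow from simple closed-form relations, so quantities like w_0 and the transition redshift can be computed directly. A sketch using illustrative parameter values (c = 0.9, Ω_de = 0.72 today), not the paper's fit:

```python
import numpy as np

# Flat FRW universe with matter plus Li's holographic dark energy:
#   w(z) = -1/3 - (2 / (3 c)) * sqrt(Omega_de)
#   dOmega_de/dz = -Omega_de (1 - Omega_de) (1 + (2/c) sqrt(Omega_de)) / (1 + z)
c, omega0 = 0.9, 0.72                # illustrative values, not fitted ones
w0 = -1.0 / 3.0 - 2.0 * np.sqrt(omega0) / (3.0 * c)

# Euler-integrate Omega_de back in redshift and locate the transition
# where the deceleration parameter q = (1 + 3 w Omega_de) / 2 hits zero.
omega, z, dz, z_t = omega0, 0.0, 1e-4, None
while z < 2.0 and z_t is None:
    w = -1.0 / 3.0 - 2.0 * np.sqrt(omega) / (3.0 * c)
    if 1.0 + 3.0 * w * omega >= 0.0:
        z_t = z
    omega -= dz * omega * (1.0 - omega) * (1.0 + 2.0 * np.sqrt(omega) / c) / (1.0 + z)
    z += dz

print(round(w0, 2), round(z_t, 2))
```

With these inputs the model gives w_0 slightly above -1 and a transition redshift below 1, the same qualitative regime as the fit quoted in the abstract.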

  9. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    Science.gov (United States)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ~21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were
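    At a single grid cell, the combination of a simulation-derived prior with Gaussian proxy information reduces to a conjugate normal update, which can be sketched directly; all numbers below are invented for illustration:

```python
import numpy as np

# Gaussian prior for the LGM temperature anomaly at one grid cell,
# derived from climate-simulation output (illustrative values, deg C).
mu0, sd0 = -10.0, 4.0

# Local proxy-based estimates (e.g. pollen transfer-function output),
# each with its own Gaussian uncertainty.
proxy_mean = np.array([-14.0, -12.5, -15.0])
proxy_sd = np.array([3.0, 2.5, 4.0])

# Conjugate normal update: precision-weighted combination of the prior
# (process/prior stages) and the proxy likelihoods (data stage).
prec = 1.0 / sd0**2 + np.sum(1.0 / proxy_sd**2)
mu_post = (mu0 / sd0**2 + np.sum(proxy_mean / proxy_sd**2)) / prec
sd_post = prec**-0.5
print(round(mu_post, 2), round(sd_post, 2))
```

The posterior mean lies between the simulation prior and the proxy estimates, and its standard deviation is smaller than any single source's — the basic mechanism a full spatial BHM applies jointly across all grid cells.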

  10. Magnonic holographic imaging of magnetic microstructures

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, D.; Chiang, H.; Bhowmick, T.; Volodchenkov, A.D.; Ranjbar, M.; Liu, G.; Jiang, C.; Warren, C. [Department of Electrical and Computer Engineering, University of California - Riverside, Riverside, CA 92521 (United States); Khivintsev, Y.; Filimonov, Y. [Kotelnikov Institute of Radioengineering and Electronics of Russian Academy of Sciences, Saratov Branch, Saratov 410019 (Russian Federation); Saratov State University, Saratov 410012 (Russian Federation); Garay, J.; Lake, R.; Balandin, A.A. [Department of Electrical and Computer Engineering, University of California - Riverside, Riverside, CA 92521 (United States); Khitun, A., E-mail: akhitun@engr.ucr.edu [Department of Electrical and Computer Engineering, University of California - Riverside, Riverside, CA 92521 (United States)

    2017-04-15

    We propose and demonstrate a technique for magnetic microstructure imaging via their interaction with propagating spin waves. In this approach, the object of interest is placed on top of a magnetic testbed made of material with low spin wave damping. There are micro-antennas incorporated in the testbed. Two of these antennas are used for spin wave excitation while another one is used for detecting the inductive voltage produced by the interfering spin waves. The measurements are repeated for different phase differences between the spin wave generating antennas, which is equivalent to changing the angle of illumination. The collected data appear as a 3D plot – the holographic image of the object. We present experimental data showing magnonic holographic images of a low-coercivity Si/Co sample, a high-coercivity sample made of SrFe{sub 12}O{sub 19} and a diamagnetic copper sample. We also present images of three samples containing different amounts of SrFe{sub 12}O{sub 19} powder. The imaging was accomplished on a Y{sub 3}Fe{sub 2}(FeO{sub 4}){sub 3} testbed at room temperature. The obtained data reveal the unique magnonic signatures of the objects. The experimental data are complemented by the results of numerical modeling, which qualitatively explain the characteristic features of the images. Potentially, magnonic holographic imaging may complement existing techniques and be utilized for non-destructive in-situ magnetic object characterization. The fundamental physical limits of this approach are also discussed. - Highlights: • A technique for magnetic microstructure imaging via their interaction with propagating spin waves is proposed. • In this technique, magnetic structures appear as 3D objects. • Several holographic images of magnetic microstructures are presented.
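    The role of the swept phase difference can be seen in an idealized two-source interference model; the wavelength and path lengths below are invented and the model ignores damping and the object's scattering:

```python
import numpy as np

# Two spin-wave sources with a controllable phase offset interfere at a
# detector; sweeping the offset traces out the interference pattern used
# to build the holographic image (idealized, lossless toy model).
k = 2 * np.pi / 50e-6            # wavenumber for a 50 um spin wavelength
r1, r2 = 310e-6, 335e-6          # source-to-detector path lengths [m]

dphi = np.linspace(0, 2 * np.pi, 361)
amplitude = np.abs(np.exp(1j * k * r1) + np.exp(1j * (k * r2 + dphi)))

# The phase offset that maximizes the output compensates the path
# difference modulo 2*pi: k*(r2 - r1) + dphi = 0 (mod 2*pi).
best = dphi[np.argmax(amplitude)]
print(round(np.degrees(best), 1))   # → 180.0
```

An object on the testbed perturbs the effective path lengths, so the phase offset at which the detected voltage peaks shifts — that shift, recorded across offsets, is the object's magnonic signature.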

  11. A holographic waveguide based eye tracker

    Science.gov (United States)

    Liu, Changgeng; Pazzucconi, Beatrice; Liu, Juan; Liu, Lei; Yao, Xincheng

    2018-02-01

    We demonstrated the feasibility of using a holographic waveguide for eye tracking. A custom-built holographic waveguide, a 20 mm x 60 mm x 3 mm flat glass substrate with integrated in- and out-couplers, was used for the prototype development. The in- and out-couplers, photopolymer films with holographic fringes, induced total internal reflection in the glass substrate. Diffractive optical elements were integrated into the in-coupler to serve as an optical collimator. The waveguide captured images of the anterior segment of the eye right in front of it and guided the images to a processing unit distant from the eye. The vector connecting the pupil center (PC) and the corneal reflex (CR) of the eye was used to compute eye position in the socket. An eye model, made of a high quality prosthetic eye, was used for prototype validation. The benchtop prototype demonstrated a linear relationship between the angular eye position and the PC/CR vector over a range of 60 horizontal degrees and 30 vertical degrees at a resolution of 0.64-0.69 degrees/pixel by simple pixel count. The uncertainties of the measurements at different angular positions were within 1.2 pixels, which indicated that the prototype exhibited a high level of repeatability. These results confirmed that the holographic waveguide technology could be a feasible platform for developing a wearable eye tracker. Further development can lead to a compact, see-through eye tracker, which allows continuous monitoring of eye movement during real life tasks, and thus benefits diagnosis of oculomotor disorders.
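    The linear PC/CR-to-angle relationship the prototype exploits amounts to a per-axis least-squares calibration, which can be sketched on simulated data; the gains and noise level below are hypothetical, chosen near the reported ~0.65 deg/pixel resolution:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated calibration: the eye rotates to known angles; the pupil-center /
# corneal-reflex (PC/CR) vector responds roughly linearly in pixels.
true_gain = np.array([0.66, 0.67])            # deg per pixel (x, y), assumed
angles = np.array([[ax, ay] for ax in (-30, -15, 0, 15, 30)
                            for ay in (-15, 0, 15)], float)
pcr = angles / true_gain + rng.normal(0, 0.3, (15, 2))  # pixels + noise

# Least-squares per-axis linear fit: angle = gain * pcr + offset.
fit = [np.polyfit(pcr[:, i], angles[:, i], 1) for i in (0, 1)]
gains = np.array([f[0] for f in fit])
print(np.round(gains, 2))
```

Once calibrated, inverting the fit maps any measured PC/CR vector back to an angular eye position, which is how pixel-level repeatability translates into sub-degree tracking precision.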

  12. Type Ia Supernova Light Curve Inference: Hierarchical Models for Nearby SN Ia in the Optical and Near Infrared

    Science.gov (United States)

    Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.

    2010-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (σ(M_H) = 0.11 ± 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework coherently incorporates multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, peculiar velocity dispersion and distances, for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov Chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.
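    The Gibbs-sampling machinery behind a code like BayeSN can be illustrated on a stripped-down two-level normal model (this is a generic toy, not BayeSN itself; the noise scales and population parameters are invented, and tau and sigma are treated as known to keep the conditionals simple):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hierarchy:
#   y_j ~ N(theta_j, sigma^2)   observed SN peak magnitudes (known noise)
#   theta_j ~ N(mu, tau^2)      population of true magnitudes
sigma, tau = 0.15, 0.11
mu_true = -19.1
theta_true = rng.normal(mu_true, tau, 40)
y = rng.normal(theta_true, sigma)

mu, draws = 0.0, []
for it in range(2000):
    # theta_j | mu, y : conjugate normal conditional (shrinkage toward mu)
    prec = 1 / sigma**2 + 1 / tau**2
    mean = (y / sigma**2 + mu / tau**2) / prec
    theta = rng.normal(mean, prec**-0.5)
    # mu | theta : flat prior on the population mean
    mu = rng.normal(theta.mean(), tau / np.sqrt(len(y)))
    if it >= 500:                     # discard burn-in
        draws.append(mu)

print(round(np.mean(draws), 2))   # posterior mean of mu, near -19.1
```

Alternating draws from the two full conditionals is exactly the Gibbs pattern; the real model simply has many more blocks (light-curve shapes, dust, distances) sampled in turn.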

  13. Construction of Hierarchical Models for Fluid Dynamics in Earth and Planetary Sciences : DCMODEL project

    Science.gov (United States)

    Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.

    2012-12-01

    Toward the understanding of fluid motions of planetary atmospheres and planetary interiors by performing multiple numerical experiments with multiple models, we are now proceeding with the ``dcmodel project'', where a series of hierarchical numerical models of varying complexity is developed and maintained. In the ``dcmodel project'', a series of numerical models is developed taking care of the following points: 1) a common ``style'' of program codes assuring readability of the software, 2) open source codes of the models to the public, 3) scalability of the models assuring execution on various scales of computational resources, 4) stressing the importance of documentation and presenting a method for writing reference manuals. The lineup of the models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library, which provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive data format netCDF is adopted as the IO format of Gtool5. The interfaces of the gtool5 library can reduce the number of operation steps for data IO in the program code of the models compared with the interfaces of the raw netCDF library. Further, by use of the gtool5 library, procedures for data IO and addition of metadata for post-processing can be easily implemented in the program codes in a consolidated form independent of the size and complexity of the models. ``ISPACK'' is the spectral transformation library and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic function naming rules, which enables us to write code in a form easily deduced from the mathematical expressions of the governing equations. ``SPMODEL'' (Takehiro et al., 2006

  14. Effect of quintessence on holographic fermionic spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Kuang, Xiao-Mei [Yangzhou University, Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou (China); Pontificia Universidad Catolica de Valparaiso, Instituto de Fisica, Valparaiso (Chile); Wu, Jian-Pin [Bohai University, Institute of Gravitation and Cosmology, Department of Physics, School of Mathematics and Physics, Jinzhou (China)

    2017-10-15

    In this letter, we investigate the holographic fermionic spectrum, without and with dipole coupling, dual to the Reissner-Nordström anti-de Sitter (RN-AdS) black brane surrounded by quintessence. We find that the low-energy excitation of the fermionic system without dipole coupling behaves as a non-Fermi liquid. In particular, the introduction of quintessence aggravates the degree of deviation from a Fermi liquid. For the system with dipole coupling, a phase transition from a (non-)Fermi liquid to a Mott phase can be observed. The ratio between the width of the gap and the critical temperature beyond which the gap closes is also worked out. We find that this ratio is larger than that of the holographic fermionic system dual to the RN-AdS black brane, and even larger than that of the material VO{sub 2}. This means that our holographic system with quintessence can model new phenomena of condensed matter systems and provide new insights into them. (orig.)

  15. Holographic sensors for diagnostics of solution components

    International Nuclear Information System (INIS)

    Kraiskii, A V; Sultanov, T T; Postnikov, V A; Khamidulin, A V

    2010-01-01

    The properties of two types of holographic sensors are studied. The sensors are based on a three-dimensional polymer-network matrix of copolymers of acrylamide, acrylic acid (sensitive to the medium acidity and to bivalent metal ions) and aminophenylboronic acid (sensitive to glucose). It is found that a change in the ionic composition of a solution results in changes in the distance between layers and in the diffraction efficiency of the holograms. Variations in the shape of spectral lines, attributed to the inhomogeneity of the sensitive layer, and nonmonotonic changes in the emulsion thickness and diffraction efficiency were observed during transient processes. The composition of the hydrogel medium is selected for systems that can serve as a base for glucose sensors, with the mean holographic response in the region of physiological glucose concentrations in model solutions reaching 40 nm/(mmol L{sup -1}). It is shown that the developed holographic sensors can be used for the visual and instrumental determination of medium acidity, alcohol content, ionic strength, bivalent metal salts and water quality, in particular of drinking water. (laser applications and other topics in quantum electronics)
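The "holographic response" reported above is a shift of the Bragg reflection peak as the hydrogel swells and the grating spacing grows. A minimal sketch of that relation, assuming normal incidence and an effective refractive index (both numbers below are illustrative, not values from the paper):

```python
# Bragg condition for a reflection hologram at normal incidence:
#   lambda = 2 * n * d,  with n the effective refractive index
#   and d the spacing between recorded layers.
def bragg_wavelength(n_refr, spacing_nm):
    return 2.0 * n_refr * spacing_nm

lam0 = bragg_wavelength(1.35, 200.0)   # unswollen grating
lam1 = bragg_wavelength(1.35, 210.0)   # 5% swelling shifts the peak toward red
shift = lam1 - lam0                    # the "nm of response" the sensor reports
```

Because the shift is linear in the spacing, a swelling response of tens of nanometres per mmol/L of analyte, as quoted in the abstract, is directly readable as a colour change.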

  16. Magnonic holographic imaging of magnetic microstructures

    Science.gov (United States)

    Gutierrez, D.; Chiang, H.; Bhowmick, T.; Volodchenkov, A. D.; Ranjbar, M.; Liu, G.; Jiang, C.; Warren, C.; Khivintsev, Y.; Filimonov, Y.; Garay, J.; Lake, R.; Balandin, A. A.; Khitun, A.

    2017-04-01

    We propose and demonstrate a technique for imaging magnetic microstructures via their interaction with propagating spin waves. In this approach, the object of interest is placed on top of a magnetic testbed made of a material with low spin-wave damping. Micro-antennas are incorporated in the testbed: two of them are used for spin-wave excitation, while a third detects the inductive voltage produced by the interfering spin waves. The measurements are repeated for different phase differences between the spin-wave-generating antennas, which is equivalent to changing the angle of illumination. The collected data appear as a 3D plot - the holographic image of the object. We present experimental data showing magnonic holographic images of a low-coercivity Si/Co sample, a high-coercivity sample made of SrFe12O19, and a diamagnetic copper sample. We also present images of three samples containing different amounts of SrFe12O19 powder. The imaging was accomplished on a Y3Fe2(FeO4)3 testbed at room temperature. The obtained data reveal the unique magnonic signatures of the objects. The experimental data are complemented by the results of numerical modeling, which qualitatively explain the characteristic features of the images. Potentially, magnonic holographic imaging may complement existing techniques and be utilized for non-destructive in-situ characterization of magnetic objects. The fundamental physical limits of the approach are also discussed.
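The detection scheme above is two-source interference: sweeping the relative phase of the generating antennas traces an interference curve at the detector, and an object that imprints an extra phase on one propagation path shifts that curve. A toy model of this readout, with all amplitudes and phase shifts chosen purely for illustration:

```python
import numpy as np

# Two coherent spin-wave sources with controllable relative phase phi;
# a magnetic object is modeled as an extra phase shift on path 2.
# (Illustrative model only; parameters are not the experimental values.)

def detected_amplitude(phi, a1=1.0, a2=1.0, dphase=0.0):
    # amplitude of the superposed waves at the detecting antenna
    return np.abs(a1 + a2 * np.exp(1j * (phi + dphase)))

phi = np.linspace(0, 2 * np.pi, 361)
free = detected_amplitude(phi)                    # reference curve, no object
obj = detected_amplitude(phi, dphase=np.pi / 3)   # object shifts the minima
# The displacement of the interference minima is the "magnonic signature"
# that the phase sweep (the analogue of varying the illumination angle) reveals.
```

Repeating such sweeps for many source configurations is what builds up the 3D holographic plot described in the abstract.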

  17. Computer assisted holographic moire contouring

    Science.gov (United States)

    Sciammarella, Cesar A.

    2000-01-01

    Theoretical analyses and experimental results on holographic moire contouring on diffusely reflecting objects are presented. The sensitivity and limitations of the method are discussed. Particular emphasis is put on computer-assisted data retrieval, processing, and recording.

  18. Monitoring Farmland Loss Caused by Urbanization in Beijing from Modis Time Series Using Hierarchical Hidden Markov Model

    Science.gov (United States)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we propose a method to map urban encroachment onto farmland using satellite image time series (SITS), based on a hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels: the land cover level, the vegetation phenology level, and the SITS level. A three-level HHMM is then constructed to model this multi-level semantic structure of the farmland change process. Once the HHMM is established, a change from farmland to built-up land can be detected by inferring the hidden state sequence most likely to have generated the input time series. The performance of the method is evaluated on MODIS time series over Beijing. Results on both simulated and real datasets demonstrate that our method improves change detection accuracy compared with an HMM-based method.
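The core inference step — recovering the most likely hidden state sequence from a vegetation-index time series — can be sketched with a deliberately simplified flat two-state HMM decoded by Viterbi. The paper's model adds phenology sub-states in a hierarchy; all transition and emission numbers below are made up for illustration.

```python
import numpy as np

# Flat two-state HMM: state 0 = farmland (high greenness), 1 = built-up (low).
# A farmland-to-built-up conversion appears as the decoded change point.
log_pi = np.log([0.9, 0.1])            # pixels start mostly as farmland
log_A = np.log([[0.95, 0.05],          # farmland may convert...
                [0.01, 0.99]])         # ...but built-up rarely reverts
means, std = np.array([0.7, 0.2]), 0.1 # Gaussian emissions on an NDVI-like index

def log_emit(obs):
    # per-state Gaussian log-likelihood (up to an additive constant)
    return -0.5 * ((obs - means) / std) ** 2

def viterbi(series):
    V = log_pi + log_emit(series[0])
    back = []
    for obs in series[1:]:
        trans = V[:, None] + log_A          # best score into each next state
        back.append(np.argmax(trans, axis=0))
        V = trans.max(axis=0) + log_emit(obs)
    path = [int(np.argmax(V))]
    for ptr in reversed(back):              # backtrace the optimal sequence
        path.append(int(ptr[path[-1]]))
    return path[::-1]

ndvi = np.array([0.72, 0.68, 0.71, 0.69, 0.25, 0.22, 0.18, 0.21])
path = viterbi(ndvi)                        # sustained drop decodes as conversion
change_idx = path.index(1) if 1 in path else None
```

The asymmetric transition matrix encodes the same prior the HHMM uses: conversion to built-up is near-irreversible, so a brief greenness dip is absorbed while a sustained drop flips the decoded state.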

  19. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and the electrode positions.

  20. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
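The partitioning used above rests on the fact that, for conditionally independent observations, the global DIC decomposes into per-observation terms. A minimal sketch, assuming an i.i.d. Gaussian likelihood with known variance and posterior draws of the mean supplied as an array (the model, data, and "crude posterior" below are invented for illustration, not the paper's spatial model):

```python
import numpy as np

# Local DIC: per-observation deviance contributions whose sum is the global DIC.
#   DIC_i = 2 * E[D_i(theta) | y] - D_i(theta_bar),  D_i = -2 * log p(y_i | theta)

def local_dic(y, mu_draws, sigma=1.0):
    def dev(mu):  # deviance contribution of each observation, shape (n_obs,)
        return np.log(2 * np.pi * sigma**2) + ((y - mu) / sigma) ** 2
    dbar_i = np.mean([dev(mu) for mu in mu_draws], axis=0)  # posterior mean deviance
    dhat_i = dev(np.mean(mu_draws))                         # deviance at posterior mean
    return 2 * dbar_i - dhat_i

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=20)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=500)  # crude posterior
dic_i = local_dic(y, mu_draws)     # one local DIC per observation
dic = dic_i.sum()                  # global DIC recovered as the sum of local terms
```

Mapping `dic_i` spatially (rather than summing it) is exactly the visualization the abstract describes: observations or regions with outlying local DIC flag local misfit that the single global number hides.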