WorldWideScience

Sample records for hierarchical error diffusion

  1. Hierarchical Boltzmann simulations and model error estimation

    Science.gov (United States)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement successively improves the result toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept of such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of the stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be obtained by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  2. Renormalization of Hierarchically Interacting Isotropic Diffusions

    Science.gov (United States)

    den Hollander, F.; Swart, J. M.

    1998-10-01

    We study a renormalization transformation arising in an infinite system of interacting diffusions. The components of the system are labeled by the N-dimensional hierarchical lattice (N ≥ 2) and take values in the closure of a compact convex set D̄ ⊂ ℝ^d (d ≥ 1). Each component starts at some θ ∈ D and is subject to two motions: (1) an isotropic diffusion according to a local diffusion rate g: D̄ → [0, ∞) chosen from an appropriate class; (2) a linear drift toward an average of the surrounding components weighted according to their hierarchical distance. In the local mean-field limit N → ∞, block averages of diffusions within a hierarchical distance k, on an appropriate time scale, are expected to perform a diffusion with local diffusion rate F^(k) g, where F^(k) g = (F_{c_k} ∘ … ∘ F_{c_1}) g is the k-th iterate of renormalization transformations F_c (c > 0) applied to g. Here the c_k measure the strength of the interaction at hierarchical distance k. We identify F_c and study its orbit (F^(k) g)_{k ≥ 0}. We show that there exists a "fixed shape" g* such that lim_{k→∞} σ_k F^(k) g = g* for all g, where the σ_k are normalizing constants. In terms of the infinite system, this property means that there is complete universal behavior on large space-time scales. Our results extend earlier work for d = 1 and D̄ = [0, 1], resp. [0, ∞). The renormalization transformation F_c is defined in terms of the ergodic measure of a d-dimensional diffusion. In d = 1 this diffusion allows a Yamada-Watanabe-type coupling, its ergodic measure is reversible, and the renormalization transformation F_c is given by an explicit formula. All this breaks down in d ≥ 2, which complicates the analysis considerably and forces us to use new methods. Part of our results depend on a certain martingale problem being well-posed.

  3. Hierarchical error representation in medial prefrontal cortex.

    Science.gov (United States)

    Zarr, Noah; Brown, Joshua W

    2016-01-01

    The medial prefrontal cortex (mPFC) is reliably activated by both performance and prediction errors. Error signals have typically been treated as a scalar, and it is unknown to what extent multiple error signals may co-exist within mPFC. Previous studies have shown that lateral frontal cortex (LFC) is arranged in a hierarchy of abstraction, such that more abstract concepts and rules are represented in more anterior cortical regions. Given the close interaction between lateral and medial prefrontal cortex, we explored the hypothesis that mPFC would be organized along a similar rostro-caudal gradient of abstraction, such that more abstract prediction errors are represented further anterior and more concrete errors further posterior. We show that multiple prediction error signals can be found in mPFC, and furthermore, these are arranged in a rostro-caudal gradient of abstraction which parallels that found in LFC. We used a task that requires a three-level hierarchy of rules to be followed, in which the rules changed without warning at each level of the hierarchy. Task feedback indicated which level of the rule hierarchy changed and led to corresponding prediction error signals in mPFC. Moreover, each identified region of mPFC was preferentially functionally connected to correspondingly anterior regions of LFC. These results suggest the presence of a parallel structure between lateral and medial prefrontal cortex, with the medial regions monitoring and evaluating performance based on rules maintained in the corresponding lateral regions.

  4. Hierarchical prediction errors in midbrain and septum during social learning

    Science.gov (United States)

    Mathys, Christoph; Weber, Lilian A. E.; Kasper, Lars; Mauer, Jan; Stephan, Klaas E.

    2017-01-01

    Social learning is fundamental to human interactions, yet its computational and physiological mechanisms are not well understood. One prominent open question concerns the role of neuromodulatory transmitters. We combined fMRI, computational modelling and genetics to address this question in two separate samples (N = 35, N = 47). Participants played a game requiring inference on the intentions of an adviser whose motivation to help or mislead changed over time. Our analyses suggest that hierarchically structured belief updates about current advice validity and the adviser’s trustworthiness, respectively, depend on different neuromodulatory systems. Low-level prediction errors (PEs) about advice accuracy not only activated regions known to support ‘theory of mind’, but also the dopaminergic midbrain. Furthermore, PE responses in ventral striatum were influenced by the Met/Val polymorphism of the Catechol-O-Methyltransferase (COMT) gene. By contrast, high-level PEs (‘expected uncertainty’) about the adviser’s fidelity activated the cholinergic septum. These findings, replicated in both samples, have important implications: they suggest that social learning rests on hierarchically related PEs encoded by midbrain and septum activity, respectively, in the same manner as other forms of learning under volatility. Furthermore, these hierarchical PEs may be broadcast by dopaminergic and cholinergic projections to induce plasticity specifically in cortical areas known to represent beliefs about others. PMID:28119508

  5. Color extended visual cryptography using error diffusion.

    Science.gov (United States)

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or grayscale VC schemes; however, they cannot be applied directly to color shares because of their different color structures. Some methods for color visual cryptography are not satisfactory, producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
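The scalar error diffusion that such halftoning-based VC schemes build on can be sketched as follows; this is a minimal Floyd-Steinberg binarization for illustration, not the paper's VIP-synchronized color method:

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image in [0, 1] with Floyd-Steinberg error diffusion."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to the unprocessed neighbors.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

# A constant 30%-gray patch halftones to a binary pattern with ~30% white pixels.
halftone = floyd_steinberg(np.full((32, 32), 0.3))
print(halftone.mean())  # close to 0.3: the mean gray level is preserved
```

Because the quantization error is carried forward rather than discarded, the average intensity of any region of the halftone tracks the original, which is what makes error diffusion attractive for generating shares pleasant to the eye.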

  6. Hierarchical set of models to estimate soil thermal diffusivity

    Science.gov (United States)

    Arkhangelskaya, Tatiana; Lukyashchenko, Ksenia

    2016-04-01

    Soil thermal properties significantly affect the land-atmosphere heat exchange rates. Intra-soil heat fluxes depend both on temperature gradients and soil thermal conductivity. Soil temperature changes due to energy fluxes are determined by soil specific heat. Thermal diffusivity is equal to thermal conductivity divided by volumetric specific heat and reflects both the soil's ability to transfer heat and its ability to change temperature when heat is supplied or withdrawn. The higher the soil thermal diffusivity, the thicker the soil/ground layer in which diurnal and seasonal temperature fluctuations are registered and the smaller the temperature fluctuations at the soil surface. Thermal diffusivity vs. moisture dependencies for loams, sands and clays of the East European Plain were obtained using the unsteady-state method. Thermal diffusivity of different soils differed greatly, and for a given soil it could vary by 2, 3 or even 5 times depending on soil moisture. The shapes of thermal diffusivity vs. moisture dependencies were different: peak curves were typical for sandy soils and sigmoid curves were typical for loamy and especially for compacted soils. The lowest thermal diffusivities and the smallest range of their variability with soil moisture were obtained for clays with high humus content. A hierarchical set of models will be presented, allowing an estimate of soil thermal diffusivity from available data on soil texture, moisture, bulk density and organic carbon. When developing these models the first step was to parameterize the experimental thermal diffusivity vs. moisture dependencies with a 4-parameter function; the next step was to obtain regression formulas to estimate the function parameters from available data on basic soil properties; the last step was to evaluate the accuracy of the suggested models using independent data on soil thermal diffusivity. The simplest models were based on soil bulk density and organic carbon data and provided different
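The defining relation the abstract relies on (thermal diffusivity equals thermal conductivity divided by volumetric specific heat) is a one-line computation; the soil property values below are illustrative assumptions, not data from the study:

```python
def thermal_diffusivity(conductivity, volumetric_heat_capacity):
    """kappa = lambda / C_v, in m^2/s: conductivity in W/(m K) divided by
    volumetric heat capacity in J/(m^3 K)."""
    return conductivity / volumetric_heat_capacity

# Hypothetical values for a moist loam: conductivity ~1.2 W/(m K),
# volumetric heat capacity ~2.5e6 J/(m^3 K).
kappa = thermal_diffusivity(1.2, 2.5e6)
print(kappa)  # 4.8e-07 m^2/s, a typical order of magnitude for soils
```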

  7. Bayesian hierarchical error model for analysis of gene expression data

    National Research Council Canada - National Science Library

    Cho, HyungJun; Lee, Jae K

    2004-01-01

    .... Moreover, the same gene often shows quite heterogeneous error variability under different biological and experimental conditions, which must be estimated separately for evaluating the statistical...

  8. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
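The symmetric Gauss-Seidel iteration used to solve the global error problem approximately can be sketched as below; this is a generic dense-matrix version for illustration, not the authors' finite element implementation. The abstract notes that a few sweeps already give a usable approximation; here many sweeps are run simply to verify convergence:

```python
import numpy as np

def sym_gauss_seidel(A, b, x0, sweeps=300):
    """Symmetric Gauss-Seidel: each sweep is a forward pass followed by a
    backward pass over the unknowns of A x = b."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):            # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):  # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# 1-D Poisson matrix: symmetric positive definite, so the iteration converges.
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = sym_gauss_seidel(A, b, np.zeros(n))
residual = np.linalg.norm(A @ x - b)
print(residual)  # essentially zero after many sweeps
```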

  9. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    Science.gov (United States)

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  10. Data with hierarchical structure: impact of intraclass correlation and sample size on type-I error.

    Science.gov (United States)

    Musca, Serban C; Kamiejski, Rodolphe; Nugier, Armelle; Méot, Alain; Er-Rafiy, Abdelatif; Brauer, Markus

    2011-01-01

    Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition, and others to the control condition (i.e., the widely used "groups nested under treatment" experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure.
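The inflation described above is easy to reproduce. The sketch below (with arbitrarily chosen cluster counts and an illustrative ICC, not the paper's exact simulation settings) draws null data with intraclass correlation and applies a two-sample test that wrongly treats all observations as independent:

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_type1_rate(n_sims=2000, clusters=5, m=20, icc=0.3):
    """Simulate null data with intraclass correlation `icc` and test it with a
    two-sample z/t test that ignores the cluster structure."""
    tau2, sigma2 = icc, 1.0 - icc      # between- and within-cluster variance
    n = clusters * m
    rejections = 0
    for _ in range(n_sims):
        # Two conditions with clustered observations and no true effect:
        # a shared cluster effect plus independent within-cluster noise.
        a = (rng.normal(0.0, np.sqrt(tau2), clusters).repeat(m)
             + rng.normal(0.0, np.sqrt(sigma2), n))
        b = (rng.normal(0.0, np.sqrt(tau2), clusters).repeat(m)
             + rng.normal(0.0, np.sqrt(sigma2), n))
        # Naive two-sample test statistic treating all n observations as i.i.d.
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        rejections += abs(a.mean() - b.mean()) / se > 1.96   # nominal 5% test
    return rejections / n_sims

rate = naive_type1_rate()
print(rate)  # far above the nominal 0.05
```

With 20 observations per cluster and an ICC of 0.3, the design effect 1 + (m - 1) * icc is about 6.7, so the naive standard error underestimates the true one by a factor of roughly 2.6, which is why the rejection rate balloons.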

  11. Data with hierarchical structure: impact of intraclass correlation and sample size on Type-I error

    Directory of Open Access Journals (Sweden)

    Serban C Musca

    2011-04-01

    Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition, and others to the control condition (i.e., the widely used "groups nested under treatment" experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure.

  12. Classification errors in contingency tables analyzed with hierarchical log-linear models. Technical report No. 20

    Energy Technology Data Exchange (ETDEWEB)

    Korn, E L

    1978-08-01

    This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belongs to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.
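The preservation of independence in an I x J table can be illustrated numerically: if rows and columns are misclassified independently of each other, an independent (rank-one) table remains independent. The probabilities below are invented for illustration:

```python
import numpy as np

# True cell probabilities under independence: P = r c^T (a rank-one table).
r = np.array([0.2, 0.5, 0.3])          # row marginals
c = np.array([0.6, 0.4])               # column marginals
P = np.outer(r, c)

# Misclassification matrices (each row sums to 1): A[i, k] is the probability
# of recording row category k when the true row is i; B likewise for columns.
A = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
B = np.array([[0.85, 0.15],
              [0.20, 0.80]])

# Observed cell probabilities after misclassifying both dimensions.
P_obs = A.T @ P @ B

# Independence is preserved: A.T @ (r c^T) @ B = (A.T r)(B.T c)^T stays rank
# one, so the observed table equals the outer product of its own marginals.
independent = np.outer(P_obs.sum(axis=1), P_obs.sum(axis=0))
deviation = np.abs(P_obs - independent).max()
print(deviation)  # ~0: the observed table is still an independent table
```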

  13. Hierarchical Bayesian modeling of the space - time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Osei, F.B.; Duker, Alfred A.; Stein, A.

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  14. Hierarchical Bayesian modeling of the space-time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Duker, Alfred A.; Stein, Alfred

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  15. Improved spectral vector error diffusion by dot gain compensation

    Science.gov (United States)

    Nyström, Daniel; Norberg, Ole

    2013-02-01

    Spectral Vector Error Diffusion, sVED, is an interesting approach to achieve spectral color reproduction, i.e. reproducing the spectral reflectance of an original, creating a reproduction that will match under any illumination. For each pixel in the spectral image, the colorant combination producing the spectrum closest to the target spectrum is selected, and the spectral error is diffused to surrounding pixels using an error distribution filter. However, since the colorant separation and halftoning are performed in a single step in sVED, compensation for dot gain cannot be made for each color channel independently, as in a conventional workflow where colorant separation and halftoning are performed sequentially. In this study, we modify the sVED routine to compensate for the dot gain, applying the Yule-Nielsen n-factor to modify the target spectra, i.e. performing the computations in (1/n)-space. A global n-factor, optimal for each print resolution, reduces the spectral reproduction errors by approximately a factor of 4, while an n-factor that is individually optimized for each target spectrum reduces the spectral reproduction error to 7% of that for the unmodified prints. However, the improvements when using global n-values are still not sufficient for the method to be of any real use in practice, and to individually optimize the n-values for each target is not feasible in a real workflow. The results illustrate the necessity of properly accounting for the dot gain in the printing process, and show that further development is needed in order to make Spectral Vector Error Diffusion a realistic alternative for spectral color reproduction.
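The (1/n)-space computation referred to above is the Yule-Nielsen correction to the area-weighted spectral mixing rule. A minimal sketch with invented reflectance values (not the study's measured spectra):

```python
import numpy as np

def yule_nielsen_mix(areas, reflectances, n):
    """Predict halftone reflectance in (1/n)-space: R^(1/n) of the print is
    the area-weighted average of the colorants' R^(1/n) (Yule-Nielsen rule)."""
    areas = np.asarray(areas)
    R = np.asarray(reflectances)       # rows: colorants, columns: wavelengths
    return (areas @ R ** (1.0 / n)) ** n

# Toy case: 60% bare paper, 40% ink coverage, three wavelength bands.
paper = np.array([0.9, 0.9, 0.9])
ink = np.array([0.1, 0.2, 0.6])
linear = yule_nielsen_mix([0.6, 0.4], [paper, ink], n=1)     # Murray-Davies
with_gain = yule_nielsen_mix([0.6, 0.4], [paper, ink], n=2)  # optical dot gain
print(linear[0], with_gain[0])  # n > 1 predicts a darker (lower) reflectance
```

Setting n = 1 recovers the purely linear (Murray-Davies) mixing; n > 1 accounts for light scattering in the substrate, which is why the corrected targets better match the physically printed, dot-gained output.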

  16. A Bayesian hierarchical diffusion model decomposition of performance in Approach-Avoidance Tasks.

    Science.gov (United States)

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach-Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight into latent psychological processes of interest.

  17. Evaluation of digital halftones image by vector error diffusion

    Science.gov (United States)

    Kouzaki, Masahiro; Itoh, Tetsuya; Kawaguchi, Takayuki; Tsumura, Norimichi; Haneishi, Hideaki; Miyake, Yoichi

    1998-12-01

    The vector error diffusion (VED) method is applied to produce digital halftone images on a 600 dpi electrophotographic printer. The objective image quality of the obtained images is evaluated and analyzed. As a result, it was clear that in the color reproduction of halftone images by the VED method there are large color differences between the target and printed colors, typically in the mid-tone colors. We attribute this to printer properties, including dot gain. It was also clear that the color noise of the VED method is larger than that of the conventional scalar error diffusion method in some patches. Notably, nonuniform patterns are generated by the VED method.

  18. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    Science.gov (United States)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties because different parametrizations lead to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the
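The computational property of autoregressive error models noted above (no covariance matrix inverse or determinant is ever needed) comes from whitening the residuals into independent innovations. A minimal AR(1) sketch under assumed parameters, not the paper's geophysical likelihood:

```python
import numpy as np

def ar1_log_likelihood(residuals, phi, sigma):
    """Gaussian log-likelihood of serially correlated residuals under an AR(1)
    error model. Whitening into innovations e[t] - phi * e[t-1] means no data
    covariance matrix is ever formed, inverted, or given a determinant."""
    e = np.asarray(residuals, dtype=float)
    v0 = sigma**2 / (1.0 - phi**2)     # stationary variance of the first sample
    ll = -0.5 * (np.log(2.0 * np.pi * v0) + e[0] ** 2 / v0)
    innov = e[1:] - phi * e[:-1]
    ll -= 0.5 * np.sum(np.log(2.0 * np.pi * sigma**2) + innov**2 / sigma**2)
    return ll

# Simulate AR(1) residuals with assumed parameters phi = 0.6, sigma = 0.1.
rng = np.random.default_rng(2)
n, phi, sigma = 500, 0.6, 0.1
e = np.empty(n)
e[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.normal(0.0, sigma)

ll_ar1 = ar1_log_likelihood(e, phi, sigma)
ll_iid = ar1_log_likelihood(e, 0.0, sigma)   # model that ignores correlation
print(ll_ar1, ll_iid)  # the AR(1) model scores substantially higher
```

In a sampling scheme, phi and sigma become unknowns sampled alongside the earth model parameters, which is what makes the error model hierarchical.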

  19. Hierarchical prediction errors in midbrain and basal forebrain during sensory learning.

    Science.gov (United States)

    Iglesias, Sandra; Mathys, Christoph; Brodersen, Kay H; Kasper, Lars; Piccirelli, Marco; den Ouden, Hanneke E M; Stephan, Klaas E

    2013-10-16

    In Bayesian brain theories, hierarchically related prediction errors (PEs) play a central role for predicting sensory inputs and inferring their underlying causes, e.g., the probabilistic structure of the environment and its volatility. Notably, PEs at different hierarchical levels may be encoded by different neuromodulatory transmitters. Here, we tested this possibility in computational fMRI studies of audio-visual learning. Using a hierarchical Bayesian model, we found that low-level PEs about visual stimulus outcome were reflected by widespread activity in visual and supramodal areas but also in the midbrain. In contrast, high-level PEs about stimulus probabilities were encoded by the basal forebrain. These findings were replicated in two groups of healthy volunteers. While our fMRI measures do not reveal the exact neuron types activated in midbrain and basal forebrain, they suggest a dichotomy between neuromodulatory systems, linking dopamine to low-level PEs about stimulus outcome and acetylcholine to more abstract PEs about stimulus probabilities.

  20. Edge-Directed Error Diffused Digital Halftoning: A Steerable Filter Approach

    Directory of Open Access Journals (Sweden)

    Pardeep Garg

    2009-09-01

    In this paper, edge-directed error-diffused digital halftoning in noisy media is analyzed. It is known that errors occur when transmitting data through a communication channel due to the addition of noise, generally additive white Gaussian noise (AWGN). The proposed work employs a steerable stochastic error diffusion (SSED) approach, a hybrid scheme that utilizes the advantages of a steerable filter for edge detection and a five-neighbor stochastic error diffusion (FNSED) approach for error diffusion. An analysis of different methods of edge detection and error diffusion in the presence of zero-mean AWGN with different values of variance has also been made. The results show that the proposed scheme produces halftones of better quality, even at large noise variance, compared to other approaches to edge detection and error diffusion.

  1. On the hierarchical risk-averse control problems for diffusion processes

    OpenAIRE

    Befekadu, Getachew K.; Veremyev, Alexander; Pasiliao, Eduardo L.

    2016-01-01

    In this paper, we consider a risk-averse control problem for diffusion processes, in which there is a partition of the admissible control strategy into two decision-making groups (namely, the leader and the follower) with different cost functionals and risk-averse satisfactions. Our approach, based on a hierarchical optimization framework, requires that a certain level of risk-averse satisfaction be achieved for the leader as a priority over that of the follower's risk-ave...

  2. The type I error rate for in vivo Comet assay data when the hierarchical structure is disregarded

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Kulahci, Murat

    The Comet assay is a sensitive technique for detection of DNA strand breaks. The experimental design of in vivo Comet assay studies is often hierarchically structured, which should be reflected in the statistical analysis. However, the hierarchical structure sometimes seems to be disregarded, and this imposes considerable impact on the type I error rate. This study aims to demonstrate the implications that result from disregarding the hierarchical structure, to improve the exposition of the statistical methodology, and to suitably account for the hierarchical structure of Comet assay data whenever present. Different combinations of the factor levels as they appear in a literature study give type I error rates up to 0.51 and for all combinations...

  3. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
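The generative model that HDDM estimates can be sketched with a direct Euler simulation in plain NumPy (independent of the HDDM package itself; the parameter values below are illustrative, not fitted):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ddm(v, a, z, t0, dt=1e-3, n_trials=500):
    """Simulate the drift-diffusion model: evidence starts at z * a, drifts at
    rate v with unit-variance Gaussian noise, and a response is made when it
    crosses boundary a (upper) or 0 (lower); t0 is the non-decision time."""
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + rng.normal(0.0, np.sqrt(dt))
            t += dt
        rts.append(t + t0)
        choices.append(1 if x >= a else 0)
    return np.array(rts), np.array(choices)

# Illustrative parameters: drift 1.0, boundary separation 2.0, unbiased start.
rts, choices = simulate_ddm(v=1.0, a=2.0, z=0.5, t0=0.3)
print(choices.mean())  # near the analytic upper-boundary probability ~0.88
print(rts.mean())
```

Hierarchical estimation as in HDDM then treats each subject's (v, a, z, t0) as draws from group-level distributions, which is what allows meaningful parameter recovery from sparse per-subject data.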

  4. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs

  5. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  6. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
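    The statistical part of such an error estimate can be illustrated with a deliberately simple sketch: a constant-in-space log normal coefficient a makes the 1D model problem -(a u')' = 1, u(0) = u(1) = 0 exactly solvable as u(x) = x(1-x)/(2a), so plain Monte Carlo sampling with a computable confidence half-width can be checked against the exact mean. All names and parameters are our own; this shows only the sampling error, not the finite element discretization error discussed in the talk.

```python
import math
import random
import statistics

def sample_observable(sigma, rng):
    """One Monte Carlo sample: draw a lognormal diffusion coefficient a and
    return the observable Q = u(1/2) = 1/(8a) of the exact solution
    u(x) = x(1-x)/(2a) of -(a u')' = 1, u(0) = u(1) = 0."""
    a = math.exp(rng.gauss(0.0, sigma))
    return 1.0 / (8.0 * a)

rng = random.Random(42)
sigma = 0.5
samples = [sample_observable(sigma, rng) for _ in range(20_000)]
mc_mean = statistics.fmean(samples)
# Computable statistical error: 95% confidence half-width of the sample mean.
stat_err = 1.96 * statistics.stdev(samples) / math.sqrt(len(samples))
exact = math.exp(sigma ** 2 / 2) / 8.0  # E[1/(8a)] for lognormal a
```

For a spatially varying coefficient the exact solution is unavailable and the finite element bias enters on top of this statistical error, which is exactly the interplay the talk addresses.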

  7. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    Science.gov (United States)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any computational method in particular, but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
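    The 1/sqrt(N) scaling behind such error expressions can be checked numerically for a generic Poisson counting process (the paper's expression is specific to ion conductivity; the rates and names below are arbitrary illustrations).

```python
import math
import random
import statistics

def count_events(rate, t_total, rng):
    """Count Poisson events in [0, t_total] by summing exponential waiting
    times, as a kinetic Monte Carlo run would count diffusion hops."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > t_total:
            return n
        n += 1

rng = random.Random(7)
rate, t_total = 5.0, 40.0            # expected count N = rate * t_total = 200
counts = [count_events(rate, t_total, rng) for _ in range(3000)]
mean_n = statistics.fmean(counts)
rel_error = statistics.stdev(counts) / mean_n   # empirical relative error
predicted = 1.0 / math.sqrt(rate * t_total)     # 1/sqrt(N) for a Poisson count
```

A simulation that observes only N diffusion events therefore carries an irreducible relative statistical error of about 1/sqrt(N) in any quantity proportional to the event count.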

  8. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Full Text Available Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.
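    The role of a random error term on travel time can be sketched with a hypothetical binary logit: averaging the choice probability over a perception error flattens it relative to treating travel time as deterministic. The coefficients are invented for illustration; this is not the paper's Gibbs sampler.

```python
import math
import random

def p_transit(t, beta0=4.0, beta1=-0.3):
    """Binary logit with deterministic travel time t (invented coefficients)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * t)))

def p_transit_perceived(t, sd, rng, n=20_000):
    """Average the logit probability over a normal perception error on t,
    mimicking the extra random error term on the travel time variable."""
    return sum(p_transit(t + rng.gauss(0.0, sd)) for _ in range(n)) / n

rng = random.Random(1)
t = 20.0                      # minutes; deterministic utility 4 - 6 = -2
p_det = p_transit(t)
p_err = p_transit_perceived(t, sd=5.0, rng=rng)
```

Because the logistic function is convex in this region, ignoring the perception error biases the predicted transit share downward; estimating the error term's variance from choice data is what the paper's hierarchical samplers do.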

  9. Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging.

    Science.gov (United States)

    Van, Anh T; Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C

    2017-04-01

    To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation of the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed, but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase-error-induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed, the inconsistent signal phase modulation across shots has to be resolved before shot combination is performed.

  10. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We firstly design the class feature matrices, after extracting the image patches according to their statistics characteristics, to classify the error-diffused halftone images. Then, the spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate with an added benefit of robustness in tackling noise.
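    The halftones such classifiers operate on are produced by error diffusion; a standard example is Floyd-Steinberg, sketched here on a flat gray patch (the specific filters used to generate the paper's image classes may differ).

```python
def floyd_steinberg(img):
    """Floyd-Steinberg error diffusion: threshold each pixel to 0/255 and push
    the quantization error onto the unvisited neighbours (7/16 right,
    3/16 down-left, 5/16 down, 1/16 down-right)."""
    px = [[float(v) for v in row] for row in img]   # working copy
    h, w = len(px), len(px[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 255 if px[y][x] >= 128 else 0
            err = px[y][x] - new
            out[y][x] = new
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out

gray = [[100] * 32 for _ in range(32)]       # flat mid-gray test patch
half = floyd_steinberg(gray)
mean = sum(map(sum, half)) / (32 * 32)       # halftoning preserves mean tone
```

Different diffusion filters (Jarvis, Stucki, etc.) leave different statistical signatures in the dot patterns, which is what makes classifying the originating algorithm from the halftone feasible.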

  11. Bayesian inversion of microtremor array dispersion data with hierarchical trans-dimensional earth and autoregressive error models

    Science.gov (United States)

    Molnar, S.; Dettmer, J.; Steininger, G.; Dosso, S. E.; Cassidy, J. F.

    2013-12-01

    This paper applies hierarchical, trans-dimensional Bayesian models for earth and residual-error parametrizations to the inversion of microtremor array dispersion data for shear-wave velocity (Vs) structure. The earth is parametrized in terms of flat-lying, homogeneous layers and residual errors are parametrized with a first-order autoregressive data-error model. The inversion accounts for the limited knowledge of the optimal earth and residual error model parametrization (e.g. the number of layers in the Vs profile) in the resulting Vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the index) are considered in the results. In addition, serial residual-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate residual-error statistics, and have no requirement for computing the inverse or determinant of a covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensions. 
The autoregressive process is restricted to first order and
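    A minimal sketch of the first-order autoregressive residual-error idea: a single parameter phi captures serial correlation, and it can be recovered from the lag-1 autocorrelation of a simulated error sequence. The generator and estimator below are generic illustrations, not the paper's hierarchical sampler.

```python
import random
import statistics

def ar1_errors(n, phi, sd, rng):
    """First-order autoregressive residual errors e[t] = phi*e[t-1] + w[t],
    with w[t] ~ N(0, sd^2) and a draw from the stationary law as start."""
    e = [rng.gauss(0.0, sd / (1.0 - phi * phi) ** 0.5)]
    for _ in range(n - 1):
        e.append(phi * e[-1] + rng.gauss(0.0, sd))
    return e

def lag1_autocorr(e):
    """Sample lag-1 autocorrelation, which estimates phi for an AR(1) series."""
    m = statistics.fmean(e)
    num = sum((a - m) * (b - m) for a, b in zip(e, e[1:]))
    den = sum((a - m) ** 2 for a in e)
    return num / den

rng = random.Random(3)
e = ar1_errors(50_000, phi=0.6, sd=1.0, rng=rng)
phi_hat = lag1_autocorr(e)
```

The appeal noted in the abstract is visible here: the AR(1) form models correlated residuals with one extra parameter and never requires forming or inverting a full data covariance matrix.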

  12. Two-dimensional finite element neutron diffusion analysis using hierarchic shape functions

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, D.C.

    1997-04-01

    Recent advances have been made in the use of p-type finite element method (FEM) for structural and fluid dynamics problems that hold promise for reactor physics problems. These advances include using hierarchic shape functions, element-by-element iterative solvers and more powerful mapping techniques. Use of the hierarchic shape functions allows greater flexibility and efficiency in implementing energy-dependent flux expansions and incorporating localized refinement of the solution space. The irregular matrices generated by the p-type FEM can be solved efficiently using element-by-element conjugate gradient iterative solvers. These solvers do not require storage of either the global or local stiffness matrices and can be highly vectorized. Mapping techniques based on blending function interpolation allow exact representation of curved boundaries using coarse element grids. These features were implemented in a developmental two-dimensional neutron diffusion program based on the use of hierarchic shape functions (FEM2DH). Several aspects in the effective use of p-type analysis were explored. Two choices of elemental preconditioning were examined--the proper selection of the polynomial shape functions and the proper number of functions to use. Of the five shape function polynomials tested, the integral Legendre functions were the most effective. The serendipity set of functions is preferable over the full tensor product set. Two global preconditioners were also examined--simple diagonal and incomplete Cholesky. The full effectiveness of the finite element methodology was demonstrated on a two-region, two-group cylindrical problem but solved in the x-y coordinate space, using a non-structured element grid. The exact, analytic eigenvalue solution was achieved with FEM2DH using various combinations of element grids and flux expansions.
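    The hierarchic property of the integral Legendre shape functions mentioned above can be illustrated in 1D: beyond the two linear nodal modes, each added function is an integrated Legendre polynomial that vanishes at both element endpoints, so raising the polynomial order p adds functions without altering the existing ones. A minimal sketch (naming is our own):

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def hierarchic_shape(j, x):
    """1D hierarchic shape functions on [-1, 1]: two linear nodal modes plus
    integrated Legendre modes (P_j - P_{j-2})/(2j - 1) that vanish at x = +-1,
    so increasing the order p only adds functions."""
    if j == 0:
        return 0.5 * (1.0 - x)
    if j == 1:
        return 0.5 * (1.0 + x)
    return (legendre(j, x) - legendre(j - 2, x)) / (2 * j - 1)
```

Because the endpoint values of all higher modes are zero, inter-element continuity is carried entirely by the two nodal modes, which is what makes localized p-refinement straightforward.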

  13. Optimization of a Segmented Filter with a New Error Diffusion Approach

    Institute of Scientific and Technical Information of China (English)

    Ayman Al Falou; Marwa El Bouz

    2003-01-01

    The segmented filters, based on spectral cutting, proved their efficiency for the multi-correlation. In this article we propose an optimisation of this cutting according to a new error diffusion method.

  14. A posteriori error estimates of constrained optimal control problem governed by convection diffusion equations

    Institute of Scientific and Technical Information of China (English)

    Ningning YAN; Zhaojie ZHOU

    2008-01-01

    In this paper, we study a posteriori error estimates of the edge stabilization Galerkin method for the constrained optimal control problem governed by convection-dominated diffusion equations. The residual-type a posteriori error estimators yield both upper and lower bounds for control u measured in L2-norm and for state y and costate p measured in energy norm. Two numerical examples are presented to illustrate the effectiveness of the error estimators provided in this paper.

  15. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of error is proportional to N_h^{-1/2}, which are optimal asymptotics. The methodology is verified with numerical experiments.

  16. An attempt to lower sources of systematic measurement error using Hierarchical Generalized Linear Modeling (HGLM).

    Science.gov (United States)

    Sideridis, George D; Tsaousis, Ioannis; Katsis, Athanasios

    2014-01-01

    The purpose of the present studies was to test the effects of systematic sources of measurement error on the parameter estimates of scales using the Rasch model. Studies 1 and 2 tested the effects of mood and affectivity. Study 3 evaluated the effects of fatigue. Last, studies 4 and 5 tested the effects of motivation on a number of parameters of the Rasch model (e.g., ability estimates). Results indicated that (a) the parameters of interest and the psychometric properties of the scales were substantially distorted in the presence of all systematic sources of error, and, (b) the use of HGLM provides a way of adjusting the parameter estimates in the presence of these sources of error. It is concluded that validity in measurement requires a thorough evaluation of potential sources of error and appropriate adjustments based on each occasion.

  17. Bayesian Hierarchical Model Characterization of Model Error in Ocean Data Assimilation and Forecasts

    Science.gov (United States)

    2013-09-30


  18. Color Extended Visual Cryptography Using Error Diffusion for High Visual Quality Shares

    Directory of Open Access Journals (Sweden)

    Lavanya Bandamneni

    2012-06-01

    Full Text Available Color visual cryptography alone is not sufficient for providing meaningful shares with high visual quality. This paper introduces a color visual cryptography encryption method that produces meaningful color shares with high visual quality via visual information pixel (VIP) synchronization and error diffusion. VIPs synchronize the positions of pixels that carry visual information of the original images across the color channels, so as to keep the original pixel values the same before and after encryption. Error diffusion is used to generate shares pleasant to the human eye. This method provides better results compared to previous techniques.

  19. Local error estimates for adaptive simulation of the reaction-diffusion master equation via operator splitting

    Science.gov (United States)

    Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.
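    The step-doubling idea behind such a local error estimate can be sketched on a deterministic reaction-diffusion analogue (the paper treats the stochastic RDME): take one Lie-split step of size dt and compare it with two steps of size dt/2; for a first-order splitting the difference shrinks roughly fourfold when dt is halved, so it serves as a computable local error indicator for timestep adaptation. All functions and parameters below are illustrative.

```python
def diffuse(u, d, dt, dx):
    """Explicit Euler step for diffusion with zero-flux boundaries."""
    n, r = len(u), d * dt / dx ** 2
    return [u[i] + r * ((u[i + 1] if i + 1 < n else u[i])
                        - 2 * u[i]
                        + (u[i - 1] if i > 0 else u[i])) for i in range(n)]

def react(u, k, dt):
    """Explicit Euler step for a logistic reaction u' = k*u*(1 - u)."""
    return [x + k * dt * x * (1.0 - x) for x in u]

def lie_step(u, d, k, dt, dx):
    """First-order Lie splitting: diffusion over dt, then reaction over dt."""
    return react(diffuse(u, d, dt, dx), k, dt)

def local_error(u, d, k, dt, dx):
    """Step-doubling estimate: one dt step vs. two dt/2 steps."""
    big = lie_step(u, d, k, dt, dx)
    small = lie_step(lie_step(u, d, k, dt / 2, dx), d, k, dt / 2, dx)
    return max(abs(a - b) for a, b in zip(big, small))

u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(30)]
e_big = local_error(u0, 1.0, 0.5, 0.002, 0.1)
e_small = local_error(u0, 1.0, 0.5, 0.001, 0.1)
```

An adaptive solver would accept the step when this estimate is below a tolerance and otherwise shrink dt, which is the strategy the paper develops for the stochastic setting.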

  20. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    Science.gov (United States)

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735

  1. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    textabstractThe authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate

  2. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    textabstractThe authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate effects

  3. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring wide-aperture laser beam diameter were modeled in order to build the measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; it is impossible to measure such beams with other methods based on a slit, pinhole, knife edge or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam forming system verification. Since no standard for wide-aperture flat-top beams is available, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as a model of the beam. From theoretical evaluations it was found that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. A 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing parameter. It was shown that an error below 1% is attainable by a suitable choice of the expression parameters, based on commercially available components of the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.
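    The 90%-of-power diameter criterion can be made concrete with a small numerical sketch. The super-Lorentz profile is assumed here to be I(r) = 1/(1 + (r/w)^p) (a modelling assumption; the record does not give the exact normalization), and the diameter is found by bisection on the numerically integrated encircled power.

```python
import math

def encircled_power(p, w, R, n=4000):
    """Midpoint-rule integral of 2*pi*r*I(r) from 0 to R for the assumed
    super-Lorentz profile I(r) = 1/(1 + (r/w)**p)."""
    h = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += 2.0 * math.pi * r / (1.0 + (r / w) ** p) * h
    return total

def d90(p, w=1.0, r_max=5.0):
    """90%-of-power beam diameter, found by bisection on encircled power."""
    p_tot = encircled_power(p, w, r_max)
    lo, hi = 0.0, r_max
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if encircled_power(p, w, mid) < 0.9 * p_tot:
            lo = mid
        else:
            hi = mid
    return lo + hi          # diameter = 2 * radius

d12 = d90(12)   # sharper edge, order 12
d6 = d90(6)     # heavier tails, order 6
```

The lower-order profile carries more power in its tails, so its 90%-power diameter comes out larger than that of the near-flat-top 12th-order profile, illustrating why the shape parameter matters for the diameter criterion.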

  4. Consistent robust a posteriori error majorants for approximate solutions of diffusion-reaction equations

    Science.gov (United States)

    Korneev, V. G.

    2016-11-01

    Efficiency of the error control of numerical solutions of partial differential equations entirely depends on two factors: the accuracy of an a posteriori error majorant and the computational cost of its evaluation for some test function/vector-function, plus the cost of the latter. In this paper, consistency of an a posteriori bound means that it is of the same order as the respective unimprovable a priori bound; it is therefore the basic characteristic related to the first factor. The paper is dedicated to elliptic diffusion-reaction equations. We present a guaranteed robust a posteriori error majorant effective at any nonnegative constant reaction coefficient (r.c.). For a wide range of finite element solutions on quasiuniform meshes the majorant is consistent. For big values of the r.c. the majorant coincides with the majorant of Aubin (1972), which, as is known, for relatively small r.c. (< ch⁻²) is inconsistent and loses its meaning as the r.c. approaches zero. Our majorant also improves some other majorants derived for the Poisson and reaction-diffusion equations.

  5. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.

    2014-05-30

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, a vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  6. Numerical analysis of magnetic field diffusion in ferromagnetic laminations by minimization of constitutive error

    Energy Technology Data Exchange (ETDEWEB)

    Fresa, R. [Consorzio CREATE, DIIIE, University of Salerno, I-84084 Fisciano (Italy)]; Serpico, C. [Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742 (United States); Department of Electrical Engineering, University of Naples ''Federico II'', I-80152 Napoli (Italy)]; Visone, C. [Department of Electrical Engineering, University of Naples ''Federico II'', I-80152 Napoli (Italy)]

    2000-05-01

    In this article, the diffusion of electromagnetic fields into a ferromagnetic lamination is numerically studied by means of an error-based numerical method. This technique has been developed so far only for the case of nonhysteretic constitutive relations. The generalization to the hysteretic case requires a modification of the technique in order to take into account the evolution of the ''magnetization state'' of the media. Numerical computations obtained by using this approach are reported and discussed. (c) 2000 American Institute of Physics.

  7. Drifting through basic subprocesses of reading: A hierarchical diffusion model analysis of age effects on visual word recognition

    Directory of Open Access Journals (Sweden)

    Eva Froehlich

    2016-11-01

    Full Text Available Reading is one of the most popular leisure activities and it is routinely performed by most individuals even in old age. Successful reading enables older people to master and actively participate in everyday life and maintain functional independence. Yet, reading comprises a multitude of subprocesses and it is undoubtedly one of the most complex accomplishments of the human brain. Not surprisingly, findings of age-related effects on word recognition and reading have been partly contradictory and are often confined to only one of four central reading subprocesses, i.e., sublexical, orthographic, phonological and lexico-semantic processing. The aim of the present study was therefore to systematically investigate the impact of age on each of these subprocesses. A total of 1,807 participants (young, N = 384; old, N = 1,423) performed four decision tasks specifically designed to tap one of the subprocesses. To account for the behavioral heterogeneity in older adults, this subsample was split into high and low performing readers. Data were analyzed using a hierarchical diffusion modelling approach which provides more information than standard response times/accuracy analyses. Taking into account incorrect and correct response times, their distributions and accuracy data, hierarchical diffusion modelling allowed us to differentiate between age-related changes in decision threshold, non-decision time and the speed of information uptake. We observed longer non-decision times for older adults and a more conservative decision threshold. More importantly, high-performing older readers outperformed younger adults at the speed of information uptake in orthographic and lexico-semantic processing whereas a general age-disadvantage was observed at the sublexical and phonological levels. Low-performing older readers were slowest in information uptake in all four subprocesses.
Discussing these results in terms of computational models of word recognition, we propose

  8. Drifting through Basic Subprocesses of Reading: A Hierarchical Diffusion Model Analysis of Age Effects on Visual Word Recognition

    Science.gov (United States)

    Froehlich, Eva; Liebig, Johanna; Ziegler, Johannes C.; Braun, Mario; Lindenberger, Ulman; Heekeren, Hauke R.; Jacobs, Arthur M.

    2016-01-01

    Reading is one of the most popular leisure activities and it is routinely performed by most individuals even in old age. Successful reading enables older people to master and actively participate in everyday life and maintain functional independence. Yet, reading comprises a multitude of subprocesses and it is undoubtedly one of the most complex accomplishments of the human brain. Not surprisingly, findings of age-related effects on word recognition and reading have been partly contradictory and are often confined to only one of four central reading subprocesses, i.e., sublexical, orthographic, phonological and lexico-semantic processing. The aim of the present study was therefore to systematically investigate the impact of age on each of these subprocesses. A total of 1,807 participants (young, N = 384; old, N = 1,423) performed four decision tasks specifically designed to tap one of the subprocesses. To account for the behavioral heterogeneity in older adults, this subsample was split into high and low performing readers. Data were analyzed using a hierarchical diffusion modeling approach, which provides more information than standard response time/accuracy analyses. Taking into account incorrect and correct response times, their distributions and accuracy data, hierarchical diffusion modeling allowed us to differentiate between age-related changes in decision threshold, non-decision time and the speed of information uptake. We observed longer non-decision times for older adults and a more conservative decision threshold. More importantly, high-performing older readers outperformed younger adults at the speed of information uptake in orthographic and lexico-semantic processing, whereas a general age-disadvantage was observed at the sublexical and phonological levels. Low-performing older readers were slowest in information uptake in all four subprocesses. Discussing these results in terms of computational models of word recognition, we propose age

  9. Improved optical properties of silica/UV-cured polymer composite films made of hollow silica nanoparticles with a hierarchical structure for light diffuser film applications.

    Science.gov (United States)

    Suthabanditpong, W; Takai, C; Fuji, M; Buntem, R; Shirai, T

    2016-06-28

    This study successfully improved the optical properties of silica/UV-cured polymer composite films made of hollow silica nanoparticles having a hierarchical structure. The particles were synthesized by an inorganic particle method, which involves two steps of sol-gel silica coating around the template and acid dissolution removal of the template. The pH of the acid was varied to achieve different hierarchical structures of the particles. The morphologies and surface properties of the obtained particles were characterized before dispersing in a UV-curable acrylate monomer solution to prepare dispersions for fabricating light diffuser films. The optical properties and the light diffusing ability of the fabricated films were studied. The results revealed that the increased pH of the acid provides the particles with a thinner shell, a larger hollow interior and a higher specific surface area. Moreover, the films with these particles exhibit a better light diffusing ability and a higher diffuse transmittance value when compared to those without particles. Therefore, the composite films can be used as light diffuser films, which is an essential part of optical diffusers in the back-light unit of LCDs. In addition, utilizing the hierarchical particles probably reduces the number of back-light units in the LCDs leading to energy-savings and subsequently lightweight LCDs.

  10. Novel method for converting digital Fresnel hologram to phase-only hologram based on bidirectional error diffusion.

    Science.gov (United States)

    Tsang, P W M; Poon, T-C

    2013-10-01

    We report a novel and fast method for converting a digital, complex Fresnel hologram into a phase-only hologram. Briefly, the pixels in the complex hologram are scanned sequentially in a row by row manner. The odd and even rows are scanned from opposite directions, constituting a bidirectional error diffusion process. The magnitude of each visited pixel is forced to be a constant value, while preserving the exact phase value. The resulting error is diffused to the neighboring pixels that have not been visited before. The resulting novel phase-only hologram is called the bidirectional error diffusion (BERD) hologram. The reconstructed image from the BERD hologram exhibits high fidelity as compared with those obtained with the original complex hologram.
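    The scanning scheme described above can be sketched as follows; the diffusion weights are borrowed from Floyd-Steinberg as an assumption, since the abstract does not specify the kernel.

```python
import cmath

def berd(holo, amp=1.0):
    """Bidirectional error diffusion: serpentine scan, force each pixel to
    magnitude `amp` while keeping its exact phase, and diffuse the complex
    error onto unvisited neighbours (Floyd-Steinberg-style weights assumed)."""
    h = [row[:] for row in holo]
    rows, cols = len(h), len(h[0])
    out = [[0j] * cols for _ in range(rows)]
    for y in range(rows):
        step = 1 if y % 2 == 0 else -1            # alternate scan direction
        xs = range(cols) if step == 1 else range(cols - 1, -1, -1)
        for x in xs:
            v = h[y][x]
            q = amp * cmath.exp(1j * cmath.phase(v))   # keep phase, fix magnitude
            out[y][x] = q
            err = v - q
            if 0 <= x + step < cols:
                h[y][x + step] += err * 7 / 16         # next pixel in scan order
            if y + 1 < rows:
                if 0 <= x - step < cols:
                    h[y + 1][x - step] += err * 3 / 16
                h[y + 1][x] += err * 5 / 16
                if 0 <= x + step < cols:
                    h[y + 1][x + step] += err * 1 / 16
    return out

# Hypothetical 8x8 complex hologram used only to exercise the conversion.
holo = [[complex(0.3 * (x + 1), 0.2 * (y - 1)) for x in range(8)] for y in range(8)]
phase_only = berd(holo)    # every output pixel has magnitude `amp`
```

Mirroring the neighbour weights with the scan direction is what makes the diffusion bidirectional: the error always flows toward pixels that have not yet been quantized.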

  11. Hierarchical Bass model: a product diffusion model considering a diversity of sensitivity to fashion

    Science.gov (United States)

    Tashiro, Tohru

    2016-11-01

    We propose a new product diffusion model that incorporates the number of adopters or advertisements a non-adopter has encountered before adopting the product, where (non-)adopters are people who do (not) possess it. Through this effect, which the Bass model does not consider, we can depict a diversity of sensitivity to fashion. As an application, we fit the model to iPod and iPhone unit sales data and obtain better agreement than the Bass model for the iPod data. We also present a new method to estimate the number of advertisements in a society from the fitted parameters of the Bass model and of this new model.
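    For context, the baseline Bass dynamics that this model extends can be simulated in a few lines of discrete time; the parameter values in the usage note are illustrative, not the paper's iPod/iPhone fits:

```python
def bass_adoptions(m, p, q, periods):
    """Cumulative adopters under the classical Bass model, stepped in
    discrete time. m is the market potential, p the innovation
    (advertising) coefficient and q the imitation coefficient."""
    N = 0.0
    history = []
    for _ in range(periods):
        dN = (p + q * N / m) * (m - N)  # adoption hazard x remaining pool
        N += dN
        history.append(N)
    return history
```

For example, `bass_adoptions(1e6, 0.03, 0.38, 20)` traces the familiar S-shaped cumulative adoption curve toward the market potential.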

  12. Mesoporous zeolite single crystal catalysts: Diffusion and catalysis in hierarchical zeolites

    DEFF Research Database (Denmark)

    Christensen, Christina Hviid; Johannsen, Kim; Toernqvist, Eric

    2007-01-01

    During the last years, several new routes to produce zeolites with controlled mesoporosity have appeared. Moreover, an improved catalytic performance of the resulting mesoporous zeolites over conventional zeolites has been demonstrated in several reactions. In most cases, the mesoporous zeolites exhibit higher catalytic activity, but in some cases improved selectivity and longer catalyst lifetime have also been reported. The beneficial effects of introducing mesopores into zeolites have in most instances been attributed to improved mass transport to and from the active sites located in the zeolite micropores. Here, we briefly discuss the most important ways of introducing mesopores into zeolites and, for the first time, we show experimentally that the presence of mesopores dramatically increases the rate of diffusion in zeolite catalysts. This is done by studying the elution of iso...

  13. Fast conversion of digital Fresnel hologram to phase-only hologram based on localized error diffusion and redistribution.

    Science.gov (United States)

    Tsang, P W M; Jiao, A S M; Poon, T-C

    2014-03-10

    Past research has demonstrated that a digital, complex Fresnel hologram can be converted into a phase-only hologram with the bidirectional error diffusion (BERD) algorithm. However, the recursive nature of the error diffusion process makes the conversion lengthy, and its duration increases monotonically with hologram size. In this paper, we propose a method to overcome this problem. Briefly, each row of a hologram is partitioned into short non-overlapping segments, and a localized error diffusion algorithm is applied to convert the pixels in each segment into phase-only values. Subsequently, the error signal is redistributed with low-pass filtering. As the operation on each segment is independent of the others, the conversion can be conducted at high speed on a graphics processing unit. Obtaining the hologram produced by the proposed method, known as the Localized Error Diffusion and Redistribution (LERDR) hologram, is over two orders of magnitude faster than BERD for a 2048×2048 hologram, making it possible to generate quality phase-only holograms at video rate.
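    The segment-wise conversion plus low-pass redistribution can be sketched for a single hologram row. The segment length and the 5-tap box filter are illustrative assumptions; the paper's exact redistribution filter is not reproduced here:

```python
import numpy as np

def lerd_row(row, seg_len=64, magnitude=1.0):
    """One row of localized error diffusion with redistribution.
    Each non-overlapping segment is converted to phase-only values
    independently of the others (which is what makes the scheme
    GPU-friendly); the error left over at the end of each segment is
    then smoothed with a small moving-average filter."""
    row = np.asarray(row, dtype=np.complex128)
    out = np.empty_like(row)
    leftover = np.zeros(len(row))
    for s in range(0, len(row), seg_len):
        err = 0j
        end = min(s + seg_len, len(row))
        for i in range(s, end):
            v = row[i] + err                    # carry error inside segment
            out[i] = magnitude * np.exp(1j * np.angle(v))
            err = v - out[i]
        leftover[end - 1] = abs(err)            # residual of this segment
    redistributed = np.convolve(leftover, np.ones(5) / 5, mode="same")
    return out, redistributed
```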

  14. Hierarchical multi-innovation identification methods for multivariable equation-error-like systems

    Institute of Scientific and Technical Information of China (English)

    丁锋; 王艳娇

    2014-01-01

    According to the hierarchical identification principle, this paper presents hierarchical stochastic gradient algorithms and hierarchical gradient-based iterative algorithms, as well as hierarchical least squares algorithms and hierarchical least squares-based iterative algorithms, for multivariable equation-error-like systems and multivariable equation-error ARMA-like systems, and further derives hierarchical multi-innovation gradient algorithms and hierarchical multi-innovation least squares algorithms. To reduce the computational burden, the paper derives filtering-based hierarchical identification algorithms and filtering-based hierarchical multi-innovation identification algorithms for multivariable equation-error ARMA-like systems using the filtering technique. Finally, the computational efficiency and the computational steps of some typical identification algorithms are discussed.

  15. Numerical solutions and error estimations for the space fractional diffusion equation with variable coefficients via Fibonacci collocation method.

    Science.gov (United States)

    Bahşı, Ayşe Kurt; Yalçınbaş, Salih

    2016-01-01

    In this study, the Fibonacci collocation method, based on the Fibonacci polynomials, is presented to solve the space fractional diffusion equation with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; using this expansion and the properties of the fractional derivative, the equation is reduced to a set of linear algebraic equations. Also, an error estimation algorithm based on the residual functions is presented for this method, and the approximate solutions are improved by applying it. If the exact solution of the problem is not known, the absolute error function of the problem can be approximately computed by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more efficient than direct numerical solutions. Numerical examples, figures and tables with comparisons are presented to show the efficiency and usability of the proposed method.
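    The Fibonacci polynomials underlying the method satisfy the standard recurrence F1(x) = 1, F2(x) = x, F_k(x) = x·F_{k-1}(x) + F_{k-2}(x). A minimal sketch that builds this basis; evaluating it (and its Caputo derivatives) at collocation points is what yields the linear system mentioned above:

```python
def fibonacci_polys(n):
    """Coefficient lists (low-to-high degree) of the first n Fibonacci
    polynomials: F1(x)=1, F2(x)=x, F_k(x) = x*F_{k-1}(x) + F_{k-2}(x)."""
    polys = [[1], [0, 1]]
    for _ in range(2, n):
        prev, prev2 = polys[-1], polys[-2]
        new = [0] + prev                  # multiply F_{k-1} by x
        for i, c in enumerate(prev2):     # add F_{k-2}
            new[i] += c
        polys.append(new)
    return polys[:n]
```

A quick sanity check of the basis: evaluating each polynomial at x = 1 reproduces the Fibonacci numbers.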

  16. Hierarchical Statistical 3D ' Atomistic' Simulation of Decanano MOSFETs: Drift-Diffusion, Hydrodynamic and Quantum Mechanical Approaches

    Science.gov (United States)

    Asenov, Asen; Brown, A. R.; Slavcheva, G.; Davies, J. H.

    2000-01-01

    When MOSFETs are scaled to deep submicron dimensions, the discreteness and randomness of the dopant charges in the channel region introduce significant fluctuations in the device characteristics. This effect, predicted 20 years ago, has been confirmed experimentally and in simulation studies. The impact of the fluctuations on the functionality, yield, and reliability of the corresponding systems shifts the paradigm of numerical device simulation: it becomes insufficient to simulate only one device representing one macroscopic design in a continuous charge approximation. An ensemble of macroscopically identical but microscopically different devices has to be characterized by simulating statistically significant samples. The aim of the numerical simulations shifts from predicting the characteristics of a single device with continuous doping towards estimating the mean values and standard deviations of basic design parameters such as threshold voltage, subthreshold slope, transconductance, drive current, etc. for the whole ensemble of 'atomistically' different devices in the system. It has to be pointed out that even the mean values obtained from 'atomistic' simulations are not identical to the values obtained from continuous doping simulations. In this paper we present a hierarchical approach to the 'atomistic' simulation of aggressively scaled decanano MOSFETs. A full-scale 3D drift-diffusion 'atomistic' simulation approach is first described and used for verification of the more economical, but also more restricted, options. To reduce the processor time and memory requirements at high drain voltage, we have developed a self-consistent option based on a thin-slab solution of the current continuity equation only in the channel region. This is coupled to the solution of Poisson's equation in the whole simulation domain within the Gummel iteration cycles. The accuracy of this approach is investigated in comparison with the full self-consistent solution.
At low drain

  17. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    Science.gov (United States)

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose: To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods: The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results: The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion: The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  18. Nonlinear calibration transfer based on hierarchical Bayesian models and Lagrange Multipliers: Error bounds of estimates via Monte Carlo - Markov Chain sampling.

    Science.gov (United States)

    Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris

    2017-01-25

    The calibration of analytical systems is time-consuming, and the effort for daily calibration routines should therefore be minimized while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and the desired analyte concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients of the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Monte Carlo Markov chain sampling. The latter provides realistic estimates for coefficients and predictions, together with accurate error bounds, by simulating known measurement errors and system fluctuations. Performance criteria for validation and for the optimal selection of a reduced set of calibration samples were developed and lead to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
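    A toy version of the idea can be sketched with a plain Metropolis sampler: historical calibrations supply a normal prior on the curve coefficients, so a handful of fresh standards suffices. The two-parameter curve y = a/(1 + b·x) is an illustrative Stern-Volmer-like stand-in, not the paper's sensor equation, and the Lagrange multipliers step is omitted:

```python
import math, random

def calibration_posterior(x, y, prior_mu, prior_sd, noise_sd,
                          n_iter=3000, step=0.05):
    """Metropolis sampler for the coefficients (a, b) of the assumed
    calibration curve y = a / (1 + b*x). Historical calibration runs
    supply the normal prior (prior_mu, prior_sd), so only a few fresh
    standards (x, y) are needed: the essence of calibration transfer."""
    def log_post(theta):
        a, b = theta
        lp = sum(-0.5 * ((t - m) / s) ** 2          # Gaussian prior
                 for t, m, s in zip(theta, prior_mu, prior_sd))
        for xi, yi in zip(x, y):                     # Gaussian likelihood
            lp += -0.5 * ((yi - a / (1.0 + b * xi)) / noise_sd) ** 2
        return lp
    theta = list(prior_mu)
    chain = []
    for _ in range(n_iter):
        prop = [t + random.gauss(0.0, step) for t in theta]
        if math.log(random.random()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(tuple(theta))
    return chain
```

The chain's spread then provides the error bounds on coefficients and predictions that the abstract refers to.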

  19. On progress of the solution of the stationary 2-dimensional neutron diffusion equation: a polynomial approximation method with error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ceolin, C., E-mail: celina.ceolin@gmail.com [Universidade Federal de Santa Maria (UFSM), Frederico Westphalen, RS (Brazil). Centro de Educacao Superior Norte; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T., E-mail: celina.ceolin@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica

    2015-07-01

    Recently, the stationary neutron diffusion equation in heterogeneous rectangular geometry was solved by expanding the scalar fluxes in polynomials of the spatial variables (x, y), considering the two-group energy model. The focus of the present discussion is an error analysis of that solution. More specifically, we show how the spatial subdomain segmentation is related to the degree of the polynomial and the Lipschitz constant. This relation allows the 2-D neutron diffusion problem to be solved with second degree polynomials in each subdomain. This solution is exact at the knots where the Lipschitz cone is centered. Moreover, the solution has an analytical representation in each subdomain, with supremum and infimum functions that show the convergence of the solution. We illustrate the analysis with a selection of numerical case studies. (author)

  20. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter proving two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general...
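    The core accept/reject rule of ABC-MCMC can be sketched generically (flat prior, symmetric random walk): a proposal is accepted only when a summary of data simulated under it lands close to the observed summary. This shows the algorithm family, not the authors' SDE-specific implementation (which also handles correlated measurement error):

```python
import random

def abc_mcmc(obs_summary, simulate, n_iter=2000, eps=0.3,
             step=0.2, theta0=0.0):
    """Minimal ABC-MCMC for a scalar parameter. With a flat prior and
    a symmetric Gaussian proposal, the Metropolis ratio reduces to the
    distance check on the simulated summary statistic."""
    theta = theta0
    chain = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        if abs(simulate(prop) - obs_summary) < eps:
            theta = prop            # accept: simulation matches the data
        chain.append(theta)
    return chain
```

Starting `theta0` from a pilot estimate avoids the well-known risk of an ABC chain getting stuck in regions where no simulation ever matches.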

  1. Quantifying equation-of-state and opacity errors using integrated supersonic diffusive radiation flow experiments on the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Guymer, T. M., E-mail: Thomas.Guymer@awe.co.uk; Moore, A. S.; Morton, J.; Allan, S.; Bazin, N.; Benstead, J.; Bentley, C.; Comley, A. J.; Garbett, W.; Reed, L.; Stevenson, R. M. [AWE Plc., Aldermaston, Reading RG7 4PR (United Kingdom); Kline, J. L.; Cowan, J.; Flippo, K.; Hamilton, C.; Lanier, N. E.; Mussack, K.; Obrey, K.; Schmidt, D. W.; Taccetti, J. M. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); and others

    2015-04-15

    A well diagnosed campaign of supersonic, diffusive radiation flow experiments has been fielded on the National Ignition Facility. These experiments have used the accurate measurements of delivered laser energy and foam density to enable an investigation into SESAME's tabulated equation-of-state values and CASSANDRA's predicted opacity values for the low-density C{sub 8}H{sub 7}Cl foam used throughout the campaign. We report that the results from initial simulations under-predicted the arrival time of the radiation wave through the foam by ≈22%. A simulation study was conducted that artificially scaled the equation-of-state and opacity with the intended aim of quantifying the systematic offsets in both CASSANDRA and SESAME. Two separate hypotheses which describe these errors have been tested using the entire ensemble of data, with one being supported by these data.

  2. Discretization error analysis and adaptive meshing algorithms for fluorescence diffuse optical tomography in the presence of measurement noise.

    Science.gov (United States)

    Zhou, Lu; Yazici, Birsen

    2011-04-01

    Quantitatively accurate fluorescence diffuse optical tomographic (FDOT) image reconstruction is a computationally demanding problem that requires repeated numerical solutions of two coupled partial differential equations and an associated inverse problem. Recently, adaptive finite element methods have been explored to reduce the computation requirements of the FDOT image reconstruction. However, existing approaches ignore the ubiquitous presence of noise in boundary measurements. In this paper, we analyze the effect of finite element discretization on the FDOT forward and inverse problems in the presence of measurement noise and develop novel adaptive meshing algorithms for FDOT that take into account noise statistics. We formulate the FDOT inverse problem as an optimization problem in the maximum a posteriori framework to estimate the fluorophore concentration in a bounded domain. We use the mean-square-error (MSE) between the exact solution and the discretized solution as a figure of merit to evaluate the image reconstruction accuracy, and derive an upper bound on the MSE which depends upon the forward and inverse problem discretization parameters, noise statistics, a priori information of fluorophore concentration, source and detector geometry, as well as background optical properties. Next, we use this error bound to develop adaptive meshing algorithms for the FDOT forward and inverse problems to reduce the MSE due to discretization in the reconstructed images. Finally, we present a set of numerical simulations to illustrate the practical advantages of our adaptive meshing algorithms for FDOT image reconstruction.

  3. Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems; Methodes de Galerkine discontinues et analyse d'erreur a posteriori pour les problemes de diffusion heterogene

    Energy Technology Data Exchange (ETDEWEB)

    Stephansen, A.F

    2007-12-15

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  4. Gross error detection and analysis by hierarchical classification of mountainous LIDAR data

    Institute of Scientific and Technical Information of China (English)

    李芸; 杨志强; 杨博

    2012-01-01

    Gross error detection is one of the important data processing steps for mountainous LIDAR point cloud data. By analysing the spatial distribution of the gross errors, the outliers in the original LIDAR point cloud can be divided into extreme outliers, outlier clusters and isolated points. On this basis, a hierarchical gross error detection scheme for mountainous LIDAR point cloud data is proposed and verified on experimental data. The results show that the method can effectively remove gross errors from the original mountainous LIDAR point cloud and, to a certain extent, improves the effect of point cloud pre-processing.
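    The hierarchy (extreme elevation outliers first, then cluster and isolated points) can be sketched with simple thresholds. The percentile band, search radius and neighbour count below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def hierarchical_gross_errors(xyz, z_band=(1, 99), radius=2.0, min_nbrs=3):
    """Flag suspect LIDAR returns in two passes: (1) extreme elevation
    outliers outside a robust percentile band, (2) cluster/isolated
    points with too few surviving neighbours in the horizontal plane.
    Returns a boolean mask, True = keep. O(n^2) sketch."""
    z = xyz[:, 2]
    keep = np.ones(len(xyz), dtype=bool)
    lo, hi = np.percentile(z, z_band)
    keep &= (z >= lo) & (z <= hi)            # pass 1: extreme outliers
    pts = xyz[keep]
    idx = np.flatnonzero(keep)
    for k, p in zip(idx, pts):               # pass 2: sparse points
        d = np.linalg.norm(pts[:, :2] - p[:2], axis=1)
        if np.count_nonzero(d < radius) - 1 < min_nbrs:  # exclude self
            keep[k] = False
    return keep
```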

  5. Hierarchical photocatalysts.

    Science.gov (United States)

    Li, Xin; Yu, Jiaguo; Jaroniec, Mietek

    2016-05-01

    As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. Especially, the different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating method, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities in designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells.

  6. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I" 2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  7. Reference optical phantoms for diffuse optical spectroscopy. Part 1--Error analysis of a time resolved transmittance characterization method.

    Science.gov (United States)

    Bouchard, Jean-Pierre; Veilleux, Israël; Jedidi, Rym; Noiseux, Isabelle; Fortin, Michel; Mermut, Ozzy

    2010-05-24

    Development, production quality control and calibration of optical tissue-mimicking phantoms require a convenient and robust characterization method with known absolute accuracy. We present a solid phantom characterization technique based on time-resolved transmittance measurement of light through a relatively small phantom sample. The small size of the sample enables characterization of every material batch produced in routine phantom production. Time-resolved transmittance data are pre-processed to correct for dark noise, sample thickness and the instrument response function. Pre-processed data are then compared to a forward model based on the radiative transfer equation solved through Monte Carlo simulations, accurately taking into account the finite geometry of the sample. The computational burden of the Monte Carlo technique was alleviated by building a lookup table of pre-computed results and using interpolation to obtain modeled transmittance traces at intermediate values of the optical properties. Near-perfect fit residuals are obtained with a fit window using all data above 1% of the maximum value of the time-resolved transmittance trace. The absolute accuracy of the method is estimated through a thorough error analysis which takes into account the following contributions: measurement noise, system repeatability, instrument response function stability, sample thickness variation, refractive index inaccuracy, time-correlated single photon counting time-base inaccuracy, and forward model inaccuracy. Two-sigma absolute error estimates of 0.01 cm(-1) (11.3%) and 0.67 cm(-1) (6.8%) are obtained for the absorption coefficient and the reduced scattering coefficient, respectively.

  8. Volumetric apparatus for hydrogen adsorption and diffusion measurements: sources of systematic error and impact of their experimental resolutions.

    Science.gov (United States)

    Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni; Agostino, Raffaele Giuseppe

    2013-10-01

    The development of a volumetric apparatus (also known as a Sieverts' apparatus) for accurate and reliable hydrogen adsorption measurement is presented. The instrument minimizes the sources of systematic error, which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and the thermodynamic properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we indicate as f-PcT, in order to deal with these aspects. The results are presented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a four-order-of-magnitude pressure range (from 1 kPa to 8 MPa) and at temperatures between 77 K and 470 K. The acquired results are in good agreement with values reported in the literature.
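    The mass balance a Sieverts-type instrument evaluates at each dose step is simple: moles dosed from the reference volume minus moles left in the gas phase at equilibrium equals moles adsorbed. A textbook sketch under ideal-gas (or user-supplied compressibility) assumptions, omitting the f-PcT instrument's temperature-uniformity and calibration corrections:

```python
R = 8.314  # J/(mol K), universal gas constant

def adsorbed_moles(p_dose, v_dose, p_eq, v_total, T, z=1.0):
    """Moles taken up by the sample in one dose step of a volumetric
    (Sieverts') measurement. v_total is the dosing volume plus the free
    volume of the sample cell (skeletal volume already subtracted);
    z is the gas compressibility factor (1.0 = ideal gas)."""
    n_dosed = p_dose * v_dose / (z * R * T)   # moles before expansion
    n_gas = p_eq * v_total / (z * R * T)      # moles left at equilibrium
    return n_dosed - n_gas                     # difference = adsorbed
```

With an inert (non-adsorbing) sample, the equilibrium pressure is exactly the expansion pressure and the balance returns zero, which is a standard blank check for the apparatus.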

  9. A neural signature of hierarchical reinforcement learning.

    Science.gov (United States)

    Ribas-Fernandes, José J F; Solway, Alec; Diuk, Carlos; McGuire, Joseph T; Barto, Andrew G; Niv, Yael; Botvinick, Matthew M

    2011-07-28

    Human behavior displays hierarchical structure: simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those in hierarchical reinforcement learning (HRL), a machine-learning framework that extends reinforcement-learning mechanisms into hierarchical domains. To test this, we leveraged a distinctive prediction arising from HRL. In ordinary reinforcement learning, reward prediction errors are computed when there is an unanticipated change in the prospects for accomplishing overall task goals. HRL entails that prediction errors should also occur in relation to task subgoals. In three neuroimaging studies we observed neural responses consistent with such subgoal-related reward prediction errors, within structures previously implicated in reinforcement learning. The results reported support the relevance of HRL to the neural processes underlying hierarchical behavior.

  10. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  11. Color error diffusion halftoning method based on image tone and the human visual system

    Institute of Scientific and Technical Information of China (English)

    易尧华; 于晓庆

    2009-01-01

    In color error diffusion halftoning, the design of the error diffusion filter for the different color channels directly affects the quality of the color halftone image. This paper studies tone-based error diffusion together with the human visual system (HVS), optimizing the filter coefficients and the threshold by applying luminance and chrominance HVS models, and thereby obtains a color error diffusion halftoning method based on image tone and the HVS. The results show that this method can effectively reduce artifacts in color halftone images and significantly improve the accuracy of color rendition.

  12. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme of different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model allowing for modeling most hierarchical networks. We use methods based on linear programming to design the hierarchical networks. Thus, a brief introduction to the various linear programming based methods is included. The thesis is thus suitable as a foundation for study of design of hierarchical networks. The major contribution of the thesis consists of seven papers which are included in the appendix. The papers address hierarchical network design and/or ring network...

  13. Hierarchical Multiagent Reinforcement Learning

    Science.gov (United States)

    2004-01-25

    In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multiagent tasks. We introduce a hierarchical multiagent reinforcement learning (RL) framework and propose a hierarchical multiagent RL algorithm called Cooperative HRL.

  14. Hierarchical Bayes Ensemble Kalman Filtering

    CERN Document Server

    Tsyrulnikov, Michael

    2015-01-01

    Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix ${\bf B}$. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using an ad-hoc regularization, we adopt the idea by Myrseth and Omre (2010) and explicitly admit that the ${\bf B}$ matrix is unknown and random and estimate it along with the state (${\bf x}$) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components ${\bf P}$ and ${\bf Q}$ of the ${\bf B}$ matrix into the extended control vector $({\bf x},{\bf P},{\bf Q})$. Similarly, we break the traditional backgrou...

  15. Statistical theory of hierarchical avalanche ensemble

    OpenAIRE

    Olemskoi, Alexander I.

    1999-01-01

    The statistical ensemble of avalanche intensities is considered to investigate diffusion in ultrametric space of hierarchically subordinated avalanches. The stationary intensity distribution and the steady-state current are obtained. The critical avalanche intensity needed to initiate the global avalanche formation is calculated depending on noise intensity. The large time asymptotic for the probability of the global avalanche appearance is derived.

  16. Hierarchical Parallel Evaluation of a Hamming Code

    Directory of Open Access Journals (Sweden)

    Shmuel T. Klein

    2017-04-01

    Full Text Available The Hamming code is a well-known error correction code and can correct a single error in an input vector of size n bits by adding log n parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in log n layers. All the processors perform similar simple tasks, and need only a few bytes of internal memory.
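The single-error correction the record parallelizes can be seen in the smallest instance, Hamming(7,4): 4 data bits, 3 parity checks, and a syndrome that directly names the flipped bit. A sequential sketch (the record's contribution is the hierarchical parallel evaluation, not this encoding itself):

```python
def hamming74_encode(d):
    """Encode 4 data bits as 7 code bits, with parity bits at the
    power-of-two positions 1, 2 and 4 (1-based)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Return (corrected codeword, error position). The syndrome, read
    as a binary number, is the 1-based index of the flipped bit; 0 means
    no single-bit error was detected."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1
    return c, pos
```

Each syndrome bit is an independent parity sum, which is why the checks map naturally onto the layered processor tree described in the record.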

  17. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping...... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks....... The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme...

  18. Improving broadcast channel rate using hierarchical modulation

    CERN Document Server

    Meric, Hugo; Arnal, Fabrice; Lesthievent, Guy; Boucheret, Marie-Laure

    2011-01-01

    We investigate the design of a broadcast system where the aim is to maximise the throughput. This task is usually challenging due to the channel variability. Forty years ago, Cover introduced and compared two schemes: time sharing and superposition coding. The second scheme was proved to be optimal for some channels. Modern satellite communications systems such as DVB-SH and DVB-S2 mainly rely on time sharing strategy to optimize throughput. They consider hierarchical modulation, a practical implementation of superposition coding, but only for unequal error protection or backward compatibility purposes. We propose in this article to combine time sharing and hierarchical modulation together and show how this scheme can improve the performance in terms of available rate. We present the gain on a simple channel modeling the broadcasting area of a satellite. Our work is applied to the DVB-SH standard, which considers hierarchical modulation as an optional feature.

  19. Onboard hierarchical network

    Science.gov (United States)

    Tunesi, Luca; Armbruster, Philippe

    2004-02-01

developing a part of the system. Only when all the units are delivered to the system integrator is it possible to test the complete system. Consequently, this normally happens at the final development stage, and it is then often costly to face serious compatibility problems. Pre-integration would be a possible way of anticipating problems during the integration phase. In this case, a scheme allowing the interconnection of unit models (simulators, breadboards and flight-representative hardware) must be defined. For this purpose intranets and the Internet can be of significant help. As a consequence of these well-identified needs, a new concept has been formulated by the Agency and will be described extensively in this paper. On-board hierarchical networks have to be seen as an integrated infrastructure able to support not only software-level functions but also hardware-oriented diagnostic tools. As a complement to presently developed SpaceWire networks, a lower-level bus must be selected. It must be reliable, flexible, easy to implement, and it should have a strong error control and management scheme in order to ensure an appropriate availability figure. Of course, the adoption of an industrial standard bus is advisable because of the existence of development tools, devices and experience. Therefore, the use of a standard bus provides the possibility of evaluating and potentially using commercial systems, with a significant reduction of non-recurrent costs. As a consequence, ESA has recently set up a working group with the objective of evaluating and, if needed, customising the Controller Area Network (CAN) bus (http://groups.yahoo.com/group/CAN_Space/). On this basis, it has been decided to consider the use of the CAN bus for payload systems, and steps are being issued for its on-board implementation in space. As far as the lowest hierarchical level is concerned, a JTAG-like interface appears to be adequate, but this selection is still subject to investigations.

  20. Semiparametric Quantile Modelling of Hierarchical Data

    Institute of Scientific and Technical Information of China (English)

    Mao Zai TIAN; Man Lai TANG; Ping Shing CHAN

    2009-01-01

    The classic hierarchical linear model formulation provides considerable flexibility for modelling the random effects structure and a powerful tool for analyzing nested data that arise in various areas such as biology, economics and education. However, it assumes the within-group errors to be independently and identically distributed (i.i.d.) and models at all levels to be linear. Most importantly, traditional hierarchical models (just like other ordinary mean regression methods) cannot characterize the entire conditional distribution of a dependent variable given a set of covariates and fail to yield robust estimators. In this article, we relax the aforementioned assumptions and develop so-called Hierarchical Semiparametric Quantile Regression Models, in which the within-group errors may be heteroscedastic and models at some levels are allowed to be nonparametric. We present the ideas with a 2-level model. The level-1 model is specified as a nonparametric model, whereas the level-2 model is set as a parametric model. Under the proposed semiparametric setting, the vector of partial derivatives of the nonparametric function in level 1 becomes the response variable vector in level 2. The proposed method allows us to model the fixed effects in the innermost level (i.e., level 2) as a function of the covariates instead of a constant effect. We outline some mild regularity conditions required for convergence and asymptotic normality of our estimators. We illustrate our methodology with a real hierarchical data set from a laboratory study and some simulation studies.
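Quantile regression, on which the record builds, replaces squared error with the check (pinball) loss; a minimal sketch of that loss and its defining property (a generic illustration, not the paper's hierarchical estimator):

```python
import numpy as np

def pinball_loss(y, yhat, tau):
    """Check (pinball) loss minimized in quantile regression: a residual
    u = y - yhat is weighted tau when positive and (1 - tau) when
    negative, so the minimizing constant is the tau-th sample quantile."""
    u = np.asarray(y, float) - yhat
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

# The empirical tau-quantile minimizes the loss over constant predictors:
y = np.random.default_rng(1).normal(size=2001)
grid = np.linspace(-2.0, 2.0, 401)
best = grid[np.argmin([pinball_loss(y, c, 0.9) for c in grid])]
```

Fitting one model per tau characterizes the whole conditional distribution, which is exactly what the record argues mean-regression hierarchical models cannot do.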

  1. Dynamic Organization of Hierarchical Memories.

    Science.gov (United States)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2016-01-01

    In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential to understand how a neural system responds to inputs. Indeed, structured spontaneous neural activity without external inputs is known to exist, and its relationship with evoked activities is discussed. How categorical representation is embedded into the spontaneous and evoked activities therefore has to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization": neural activity without input wanders globally over the state space, including all memories. Then, with the increase of input strength, the diffuse representation of a higher category exhibits transitions to focused ones specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space including a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy during the transient process, as well as in the spontaneous activity.

  2. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them, 3D FE model of hybrid composites...... with nanoengineered matrix, fiber bundle model of UD composites with hierarchically clustered fibers and 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them......, the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels and using the nanoreinforcement effects. The main future directions...

  3. Hierarchical auxetic mechanical metamaterials.

    Science.gov (United States)

    Gatt, Ruben; Mizzi, Luke; Azzopardi, Joseph I; Azzopardi, Keith M; Attard, Daphne; Casha, Aaron; Briffa, Joseph; Grima, Joseph N

    2015-02-11

    Auxetic mechanical metamaterials are engineered systems that exhibit the unusual macroscopic property of a negative Poisson's ratio due to sub-unit structure rather than chemical composition. Although their unique behaviour makes them superior to conventional materials in many practical applications, they are limited in availability. Here, we propose a new class of hierarchical auxetics based on the rotating rigid units mechanism. These systems retain the enhanced properties from having a negative Poisson's ratio with the added benefits of being a hierarchical system. Using simulations on typical hierarchical multi-level rotating squares, we show that, through design, one can control the extent of auxeticity, degree of aperture and size of the different pores in the system. This makes the system more versatile than similar non-hierarchical ones, making them promising candidates for industrial and biomedical applications, such as stents and skin grafts.

  4. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-12-05

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
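The storage reduction rests on one core operation: blocks describing well-separated interactions have low numerical rank and can be replaced by truncated-SVD factors. A minimal sketch of compressing one such admissible block (illustrative only; real H-matrix codes use a cluster tree and cheaper factorizations than the SVD):

```python
import numpy as np

def lowrank_block(A, tol=1e-10):
    """Compress a matrix block by truncated SVD, keeping singular values
    above tol relative to the largest. Returns U, V with A ~= U @ V.T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))
    return U[:, :k] * s[:k], Vt[:k].T

# Kernel block for well-separated point sets: log|x - y| is smooth over
# [0,1] x [3,4], so the 200x200 block compresses far below full rank.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 200)
A = np.log(np.abs(x[:, None] - y[None, :]))
U, V = lowrank_block(A)
```

Storing U and V costs O((m+n)k) instead of O(mn); applying this recursively over a block partition is what brings the overall cost from cubic toward almost linear.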

  5. Hierarchical Auxetic Mechanical Metamaterials

    Science.gov (United States)

    Gatt, Ruben; Mizzi, Luke; Azzopardi, Joseph I.; Azzopardi, Keith M.; Attard, Daphne; Casha, Aaron; Briffa, Joseph; Grima, Joseph N.

    2015-02-01

    Auxetic mechanical metamaterials are engineered systems that exhibit the unusual macroscopic property of a negative Poisson's ratio due to sub-unit structure rather than chemical composition. Although their unique behaviour makes them superior to conventional materials in many practical applications, they are limited in availability. Here, we propose a new class of hierarchical auxetics based on the rotating rigid units mechanism. These systems retain the enhanced properties from having a negative Poisson's ratio with the added benefits of being a hierarchical system. Using simulations on typical hierarchical multi-level rotating squares, we show that, through design, one can control the extent of auxeticity, degree of aperture and size of the different pores in the system. This makes the system more versatile than similar non-hierarchical ones, making them promising candidates for industrial and biomedical applications, such as stents and skin grafts.

  6. Applied Bayesian Hierarchical Methods

    CERN Document Server

    Congdon, Peter D

    2010-01-01

    Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.

  7. Programming with Hierarchical Maps

    DEFF Research Database (Denmark)

    Ørbæk, Peter

    This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe its most prominent features, argue for its usefulness and briefly describe some of the software prototypes implemented using the technology.

  8. Catalysis with hierarchical zeolites

    DEFF Research Database (Denmark)

    Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten

    2011-01-01

    Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments emphasizing different aspects of this research...... topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance but this particular facet has not yet been reviewed in detail. Thus, the present paper summaries and categorizes the catalytic studies utilizing hierarchical...... zeolites that have been reported hitherto. Prototypical examples from some of the different categories of catalytic reactions that have been studied using hierarchical zeolite catalysts are highlighted. This clearly illustrates the different ways that improved performance can be achieved with this family...

  9. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.
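A standard diagnostic behind such analyses is the variance inflation factor, VIF_j = 1/(1 - R_j^2), computed by regressing each predictor on the others; a plain-NumPy sketch (a generic illustration, not the authors' top-down procedure for HLMs):

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of a design matrix.
    Values near 1 indicate independent predictors; values well above
    ~10 are a conventional red flag for multicollinearity."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(Z)), Z])   # intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

In a two-level model one would apply this first to the Level-2 (contextual) design matrix and then to Level-1, mirroring the top-down order the study recommends.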

  10. Self-healing diffusion quantum Monte Carlo algorithms: methods for direct reduction of the fermion sign error in electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Reboredo, F A; Hood, R Q; Kent, P C

    2009-01-06

    We develop a formalism and present an algorithm for optimization of the trial wave-function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation, to (i) project-out a multi-determinant expansion of the fixed node ground state wave function and (ii) to define a cost function that relates the interacting-ground-state-fixed-node and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure and (b) we argue that the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards the ones of the exact many-body ground state in a simulated annealing-like process. Based on these principles, we propose a method to improve both single determinant and multi-determinant expansions of the trial wave function. The method can be generalized to other wave function forms such as pfaffians. We test the method in a model system where benchmark configuration interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converges to the exact solutions for the model. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted in a previous publication [Phys. Rev. B 77 245110 (2008)]. Tests of the method are

  11. GSW-type hierarchical identity-based fully homomorphic encryption scheme from learning with errors

    Institute of Scientific and Technical Information of China (English)

    戴晓明; 张薇; 郑志恒; 李镇林

    2016-01-01

    To address the functional limitation that traditional identity-based encryption (IBE) schemes cannot compute directly on ciphertexts, a new IBE scheme is proposed. Using the homomorphic conversion mechanism proposed by Gentry et al., combined with the hierarchical IBE scheme constructed by Agrawal et al., a hierarchical IBE scheme with fully homomorphic properties is built. Compared with the fully homomorphic encryption (GSW) scheme of Gentry et al. (GENTRY C, SAHAI A, WATERS B. Homomorphic encryption from learning with errors: conceptually-simpler, asymptotically-faster, attribute-based. CRYPTO 2013: Proceedings of the 33rd Annual Cryptology Conference on Advances in Cryptology. Berlin: Springer, 2013: 75-92) and the fully homomorphic IBE (CM) scheme of Clear et al. (CLEAR M, MCGOLDRICK C. Bootstrappable identity-based fully homomorphic encryption. CANS 2014: Proceedings of the 13th International Conference on Cryptology and Network Security. Berlin: Springer, 2014: 1-19), the proposed construction is more natural, reduces the space complexity from cubic to quadratic, and is more efficient. In the current cloud-computing setting, it helps move fully homomorphic encryption based on learning with errors (LWE) from theory toward practice. Performance analysis and a proof in the random oracle model show that the proposed scheme achieves IND-ID-CPA security (indistinguishability under chosen-plaintext attack for identity-based encryption).

  12. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  13. Neutrosophic Hierarchical Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    Rıdvan Şahin

    2014-03-01

    Full Text Available Interval neutrosophic set (INS is a generalization of interval valued intuitionistic fuzzy set (IVIFS, whose membership and non-membership values of elements consist of fuzzy ranges, while single valued neutrosophic set (SVNS is regarded as an extension of intuitionistic fuzzy set (IFS. In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs respectively. Based on the traditional hierarchical clustering procedure, the single valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. Then we extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples in order to show the effectiveness and availability of the developed clustering algorithms.
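The "traditional hierarchical clustering procedure" the record extends merges the closest clusters bottom-up; a naive single-linkage sketch over Euclidean points (the paper replaces this distance with neutrosophic distance measures and aggregation operators):

```python
import numpy as np

def single_linkage(points, k):
    """Naive agglomerative clustering: start with singletons and
    repeatedly merge the two clusters whose minimum pairwise distance
    is smallest, until k clusters remain. O(n^3)-ish, for illustration."""
    pts = np.asarray(points, float)
    clusters = [[i] for i in range(len(pts))]
    while len(clusters) > k:
        best = (0, 1, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(pts[i] - pts[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters
```

Swapping `np.linalg.norm` for a distance between SVNSs (and the mean for a neutrosophic aggregation operator) gives the shape of the algorithm the record defines.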

  14. Hierarchical Porous Structures

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-07

    Materials Design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low density materials, such as aerogels or hydrogels, more recently the idea of bicontinuous structures has come into play. This review will cover some of the methods and applications for generating both porous, and hierarchically porous, structures.

  15. Background Error Correlation Modeling with Diffusion Operators

    Science.gov (United States)

    2013-01-01

    functions defined on the orthogonal curvilinear grid of the Navy Coastal Ocean Model (NCOM) [28] set up in Monterey Bay (Fig. 4). The number N... From H2 = [1 1; 1 -1], the HMs with order N = 2^n, n = 1, 2, ... can be easily constructed. HMs with N = 12, 20 were constructed "manually" more than a century

  16. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    the Falkland war between Argentina and the United Kingdom over two British overseas territories took ten weeks, with many days not involving air... Janes/Display/1343368 Kleijnen, J. P., Sanchez, S. M., Lucas, T. W., & Cioppa, T. M. (2005). State-of-the-art review: A user's guide to the brave

  17. A hierarchical Bayes error correction model to explain dynamic effects

    NARCIS (Netherlands)

    D. Fok (Dennis); C. Horváth (Csilla); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2004-01-01

    For promotional planning and market segmentation it is important to understand the short-run and long-run effects of the marketing mix on category and brand sales. In this paper we put forward a sales response model to explain the differences in short-run and long-run effects of promotions

  18. Experiments in Error Propagation within Hierarchal Combat Models

    Science.gov (United States)

    2015-09-01

    and variances of Blue MTTK, Red MTTK, and P[Blue Wins] by Experimental Design are statistically different (Wackerly, Mendenhall III and Schaeffer 2008). Although the data is not normally distributed, the t-test is robust to non-normality (Wackerly, Mendenhall III and Schaeffer 2008). There is...this is handled by transforming the predicted values with a natural logarithm (Wackerly, Mendenhall III and Schaeffer 2008). The model considers

  19. A Hierarchical Bayes Ensemble Kalman Filter

    Science.gov (United States)

    Tsyrulnikov, Michael; Rakitko, Alexander

    2017-01-01

    A new ensemble filter that allows for the uncertainty in the prior distribution is proposed and tested. The filter relies on the conditional Gaussian distribution of the state given the model-error and predictability-error covariance matrices. The latter are treated as random matrices and updated in a hierarchical Bayes scheme along with the state. The (hyper)prior distribution of the covariance matrices is assumed to be inverse Wishart. The new Hierarchical Bayes Ensemble Filter (HBEF) assimilates ensemble members as generalized observations and allows ordinary observations to influence the covariances. The actual probability distribution of the ensemble members is allowed to be different from the true one. An approximation that leads to a practicable analysis algorithm is proposed. The new filter is studied in numerical experiments with a doubly stochastic one-variable model of "truth". The model permits the assessment of the variance of the truth and the true filtering error variance at each time instance. The HBEF is shown to outperform the EnKF and the HEnKF by Myrseth and Omre (2010) in a wide range of filtering regimes in terms of performance of its primary and secondary filters.
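The EnKF baseline that the HBEF is compared against estimates the background covariance from the ensemble itself; a toy stochastic EnKF analysis step, assuming a linear observation operator (a generic sketch of the baseline, not the HBEF's hierarchical update of the covariances):

```python
import numpy as np

def enkf_update(ens, y, H, r, rng):
    """Stochastic EnKF analysis step.
    ens : (n_state, n_ens) forecast ensemble
    y   : (n_obs,) observation vector
    H   : (n_obs, n_state) linear observation operator
    r   : observation-error variance (assumed diagonal, scalar here)
    The background covariance B is the sample covariance of the
    ensemble -- the quantity the hierarchical filters treat as random."""
    n = ens.shape[1]
    X = ens - ens.mean(axis=1, keepdims=True)
    B = X @ X.T / (n - 1)                     # sample covariance
    S = H @ B @ H.T + r * np.eye(H.shape[0])  # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)            # Kalman gain
    perturbed = y[:, None] + rng.normal(0.0, np.sqrt(r), (H.shape[0], n))
    return ens + K @ (perturbed - H @ ens)
```

With a small ensemble the sample B is noisy, which is precisely the failure mode the hierarchical Bayes treatment of (P, Q) is designed to address.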

  20. Information diffusion on adaptive network

    Institute of Scientific and Technical Information of China (English)

    Hu Ke; Tang Yi

    2008-01-01

    Based on the adaptive network, the feedback mechanism and the interplay between the network topology and the diffusive process of information are studied. The results reveal that the adaptation of network topology can drive systems into scale-free ones with assortative or disassortative degree correlations, and hierarchical clustering. Meanwhile, the process of information diffusion is greatly sped up by the adaptive changes of network topology.

  1. Collaborative Hierarchical Sparse Modeling

    CERN Document Server

    Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina C

    2010-01-01

    Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is done by solving an l_1-regularized linear regression problem, usually called Lasso. In this work we first combine the sparsity-inducing property of the Lasso model, at the individual feature level, with the block-sparsity property of the group Lasso model, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the hierarchical Lasso, which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower one. Signals then share the same active groups, or classes, but not necessarily the same active set. This is very well suited for applications such as source separation. An efficient optimization procedure, which guarantees convergence to the global opt...
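The hierarchical Lasso penalty described here, lam1*||x||_1 + lam2*sum_g ||x_g||_2, has a proximal operator that composes the two shrinkages: elementwise soft-thresholding followed by group-wise shrinkage (a standard result for sparse-group penalties; this is a generic sketch, not the paper's collaborative extension):

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding: prox of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hierarchical_lasso_prox(x, groups, lam1, lam2):
    """Prox of lam1*||x||_1 + lam2*sum_g ||x_g||_2: soft-threshold each
    entry, then shrink each group vector toward zero by its l2 norm.
    Groups whose thresholded norm falls below lam2 are zeroed entirely,
    giving sparsity both across and within groups."""
    z = soft(np.asarray(x, float), lam1)
    out = np.zeros_like(z)
    for g in groups:
        ng = np.linalg.norm(z[g])
        if ng > lam2:
            out[g] = (1.0 - lam2 / ng) * z[g]
    return out
```

Plugging this prox into any proximal-gradient solver (e.g. ISTA/FISTA) yields the hierarchical Lasso encoder the record describes.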

  2. Hierarchical manifold learning.

    Science.gov (United States)

    Bhatia, Kanwal K; Rao, Anil; Price, Anthony N; Wolz, Robin; Hajnal, Jo; Rueckert, Daniel

    2012-01-01

    We present a novel method of hierarchical manifold learning which aims to automatically discover regional variations within images. This involves constructing manifolds in a hierarchy of image patches of increasing granularity, while ensuring consistency between hierarchy levels. We demonstrate its utility in two very different settings: (1) to learn the regional correlations in motion within a sequence of time-resolved images of the thoracic cavity; (2) to find discriminative regions of 3D brain images in the classification of neurodegenerative disease.

  3. Hierarchically Structured Electrospun Fibers

    Directory of Open Access Journals (Sweden)

    Nicole E. Zander

    2013-01-01

    Full Text Available Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.

  4. HDS: Hierarchical Data System

    Science.gov (United States)

    Pearce, Dave; Walter, Anton; Lupton, W. F.; Warren-Smith, Rodney F.; Lawden, Mike; McIlwrath, Brian; Peden, J. C. M.; Jenness, Tim; Draper, Peter W.

    2015-02-01

    The Hierarchical Data System (HDS) is a file-based hierarchical data system designed for the storage of a wide variety of information. It is particularly suited to the storage of large multi-dimensional arrays (with their ancillary data) where efficient access is needed. It is a key component of the Starlink software collection (ascl:1110.012) and is used by the Starlink N-Dimensional Data Format (NDF) library (ascl:1411.023). HDS organizes data into hierarchies, broadly similar to the directory structure of a hierarchical filing system, but contained within a single HDS container file. The structures stored in these files are self-describing and flexible; HDS supports modification and extension of structures previously created, as well as functions such as deletion, copying, and renaming. All information stored in HDS files is portable between the machines on which HDS is implemented. Thus, there are no format conversion problems when moving between machines. HDS can write files in a private binary format (version 4), or be layered on top of HDF5 (version 5).

  5. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem encountered in home videos. We propose a hierarchical key-frame summarization algorithm in which a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing, where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video in increasing detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream, and propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
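The coarse-to-fine idea can be caricatured in a few lines. The sketch below is a simplified agglomerative stand-in for the paper's pairwise K-means: only temporally adjacent clusters may merge (the consecutiveness constraint), the cluster count halves at each coarser level, and each cluster is represented by its temporally middle key-frame. All names and the exact merge rule are this sketch's assumptions, not the authors' algorithm.

```python
def summarize(features, levels):
    """features: per-key-frame feature vectors in temporal order.
    Returns one summary (list of representative frame indices) per level,
    halving the number of key-frames at each coarser level."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    summaries = [list(range(len(features)))]        # finest level: all key-frames
    clusters = [[i] for i in range(len(features))]  # clusters of frame indices
    for _ in range(levels - 1):
        target = max(1, len(clusters) // 2)
        while len(clusters) > target:
            # Centroid of each cluster in feature space.
            cents = [[sum(features[i][d] for i in c) / len(c)
                      for d in range(len(features[0]))] for c in clusters]
            # Temporal constraint: only adjacent clusters are merge candidates.
            j = min(range(len(clusters) - 1),
                    key=lambda k: dist(cents[k], cents[k + 1]))
            clusters[j] = clusters[j] + clusters.pop(j + 1)
        summaries.append([c[len(c) // 2] for c in clusters])
    return summaries
```

With six key-frames whose color features form three well-separated pairs, the coarser level keeps one representative per pair.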

  6. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  7. Medication Errors

    Science.gov (United States)

    ... Proprietary Names (PDF - 146KB) Draft Guidance for Industry: Best Practices in Developing Proprietary Names for Drugs (PDF - 279KB) ... or (301) 796-3400 druginfo@fda.hhs.gov Human Drug ... in Medication Errors Resources for You Agency for Healthcare Research and Quality: ...

  8. A Hierarchical Framework for Facial Age Estimation

    Directory of Open Access Journals (Sweden)

    Yuyu Liang

    2014-01-01

Full Text Available Age estimation is a complex problem of multiclass classification or regression. To address the problems of the uneven distribution of age databases and the neglect of ordinal information, this paper presents a hierarchical age estimation system comprising age-group and specific-age estimation. In our system, two novel classifiers, sequence k-nearest neighbor (SKNN) and ranking-KNN, are introduced to predict age group and age value, respectively. Notably, ranking-KNN utilizes the ordinal information between samples in the estimation process rather than regarding samples as separate individuals. Tested on the FG-NET database, our system achieves a mean absolute error (MAE) of 4.97 for age estimation.
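The ordinal intuition can be shown with a toy estimator: instead of a majority vote over discrete age labels, exploit the fact that ages are ordered by averaging the k nearest neighbours' ages into a real-valued estimate. This is a hypothetical stand-in for the paper's ranking-KNN, not its actual formulation.

```python
def ranking_knn(train, query, k=3):
    """train: list of (feature_vector, age) pairs.
    Returns a real-valued age estimate for the query feature vector."""
    # Sort training samples by squared Euclidean distance to the query.
    by_dist = sorted(train, key=lambda fa: sum((x - q) ** 2
                                               for x, q in zip(fa[0], query)))
    # Ordinal step: ages of the k nearest neighbours are comparable
    # quantities, so averaging them is meaningful (unlike class votes).
    ages = [age for _, age in by_dist[:k]]
    return sum(ages) / k
```

A classification-style KNN would have to pick one of the neighbours' labels; the ordinal average can land between them, which is what drives MAE down on ordered targets.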

  9. Non-Trivial Feature Derivation for Intensifying Feature Detection Using LIDAR Datasets Through Allometric Aggregation Data Analysis Applying Diffused Hierarchical Clustering for Discriminating Agricultural Land Cover in Portions of Northern Mindanao, Philippines

    Science.gov (United States)

    Villar, Ricardo G.; Pelayo, Jigg L.; Mozo, Ray Mari N.; Salig, James B., Jr.; Bantugan, Jojemar

    2016-06-01

Building on results produced by the Central Mindanao University Phil-LiDAR 2.B.11 Image Processing Component, this paper applies Light Detection and Ranging (LiDAR)-derived products to quality land-cover classification, using data-analysis principles to minimize common problems in image classification: misclassification of objects and the non-distinguishable interpretation of pixelated features, which confuse class objects because of their closely related spectral resemblance; unbalanced saturation of RGB information is a challenge at the same time. Only low-density LiDAR point cloud data (2 pts/m2) are exploited in this research, yielding essential derived information such as textures and matrices (number of returns, intensity textures, nDSM, etc.) for the selection of characteristics. A novel approach takes advantage of object-based image analysis and the principle of allometric relations between two or more observables, which are aggregated for each acquired dataset to establish a proportionality function for data partitioning. To separate two or more data sets into distinct regions of a feature space of distributions, non-trivial computations for fitting distributions were employed to formulate the ideal hyperplane. With the distribution computations achieved, allometric relations were evaluated and matched with the necessary rotation, scaling, and transformation techniques to find applicable border conditions. A customized hybrid feature was thus developed and embedded in every object-class feature for use as a classifier, with a hierarchical clustering strategy employed for cross-examining and filtering features. These features are boosted using machine learning algorithms as trainable sets of information for more competent feature detection. The product classification in this

  10. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard;

    2012-01-01

Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.

  11. Context updates are hierarchical

    Directory of Open Access Journals (Sweden)

    Anton Karl Ingason

    2016-10-01

    Full Text Available This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.

  12. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies that wish to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques have been reported. This paper considers three hierarchical models combining four data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. The first component of each model clusters the data into churner and non-churner groups and also filters out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to churner and non-churner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.

  13. Micromechanical design of hierarchical composites using global load sharing theory

    Science.gov (United States)

    Rajan, V. P.; Curtin, W. A.

    2016-05-01

Hierarchical composites, embodied by natural materials ranging from bone to bamboo, may offer combinations of material properties inaccessible to conventional composites. Using global load sharing (GLS) theory, a well-established micromechanics model for composites, we develop accurate numerical and analytical predictions for the strength and toughness of hierarchical composites with arbitrary fiber geometries, fiber strengths, interface properties, and number of hierarchical levels, N. The model demonstrates that two key material properties at each hierarchical level, a characteristic strength and a characteristic fiber length, control the scalings of composite properties. One crucial finding is that short- and long-fiber composites behave radically differently. Long-fiber composites are significantly stronger than short-fiber composites, by a factor of 2N or more; they are also significantly tougher because their fiber breaks are bridged by smaller-scale fibers that dissipate additional energy. Indeed, an "infinite" fiber length appears to be optimal in hierarchical composites. However, at the highest level of the composite, long fibers localize on planes of pre-existing damage, and thus short fibers must be employed instead to achieve notch sensitivity and damage tolerance. We conclude by providing simple guidelines for microstructural design of hierarchical composites, including the selection of N, the fiber lengths, the ratio of length scales at successive hierarchical levels, the fiber volume fractions, and the desired properties of the smallest-scale reinforcement. Our model enables superior hierarchical composites to be designed in a rational way, without resorting either to numerical simulation or trial-and-error-based experimentation.

  14. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

This review article explains medication errors: their definition, the scope of the medication error problem, types of medication errors, their common causes, and the monitoring, consequences, prevention, and management of medication errors, presented clearly with tables that are easy to understand.

  16. Hierarchical characterization procedures for dimensional metrology

    Science.gov (United States)

    MacKinnon, David; Beraldin, Jean-Angelo; Cournoyer, Luc; Carrier, Benjamin

    2011-03-01

    We present a series of dimensional metrology procedures for evaluating the geometrical performance of a 3D imaging system that have either been designed or modified from existing procedures to ensure, where possible, statistical traceability of each characteristic value from the certified reference surface to the certifying laboratory. Because there are currently no internationally-accepted standards for characterizing 3D imaging systems, these procedures have been designed to avoid using characteristic values provided by the vendors of 3D imaging systems. For this paper, we focus only on characteristics related to geometric surface properties, dividing them into surface form precision and surface fit trueness. These characteristics have been selected to be familiar to operators of 3D imaging systems that use Geometrical Dimensioning and Tolerancing (GD&T). The procedures for generating characteristic values would form the basis of either a volumetric or application-specific analysis of the characteristic profile of a 3D imaging system. We use a hierarchical approach in which each procedure builds on either certified reference values or previously-generated characteristic values. Starting from one of three classes of surface forms, we demonstrate how procedures for quantifying for flatness, roundness, angularity, diameter error, angle error, sphere-spacing error, and unidirectional and bidirectional plane-spacing error are built upon each other. We demonstrate how these procedures can be used as part of a process for characterizing the geometrical performance of a 3D imaging system.
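One of the listed surface-form characteristics, flatness, has a standard textbook computation: fit a least-squares plane z = a·x + b·y + c to the measured points and report the spread of the residuals. The sketch below shows only that single characteristic value, without the statistical-traceability machinery the paper builds around it.

```python
def flatness(points):
    """Flatness of a measured planar patch: the peak-to-valley spread of
    residuals about the least-squares plane z = a*x + b*y + c."""
    # Accumulate the normal-equation sums for the least-squares fit.
    Sxx = Sxy = Sx = Syy = Sy = Sxz = Syz = Sz = 0.0
    n = len(points)
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Sx += x
        Syy += y * y; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    rhs = [Sxz, Syz, Sz]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    # Solve the 3x3 system by Cramer's rule (fine at this size).
    D = det3(A)
    coef = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        coef.append(det3(Ai) / D)
    a, b, c = coef
    res = [z - (a * x + b * y + c) for x, y, z in points]
    return max(res) - min(res)
```

Points lying exactly on a plane give zero flatness; a single raised point widens the residual spread.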

  17. Hierarchical partial order ranking.

    Science.gov (United States)

    Carlsen, Lars

    2008-09-01

Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters are to be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. This study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritization of polluted sites is given.

  18. Trees and Hierarchical Structures

    CERN Document Server

    Haeseler, Arndt

    1990-01-01

The "raison d'etre" of hierarchical clustering theory stems from one basic phenomenon: the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to classify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping clusters such that any two objects in one cluster appear to be more related to each other than they are to objects outside this cluster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable classification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.

  19. Hierarchical Affinity Propagation

    CERN Document Server

    Givoni, Inmar; Frey, Brendan J

    2012-01-01

    Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...

  20. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  1. Cosmic error and the statistics of large scale structure

    CERN Document Server

    Szapudi, I; Szapudi, Istvan; Colombi, Stephane

    1995-01-01

We examine the errors on counts in cells extracted from galaxy surveys. The measurement error, related to the finite number of sampling cells, is disentangled from the "cosmic error", due to the finiteness of the survey. Using the hierarchical model and assuming locally Poisson behavior, we identified three contributions to the cosmic error: The finite volume effect is proportional to the average of the two-point correlation function over the whole survey; it accounts for possible fluctuations of the density field at scales larger than the sample size. The edge effect is related to the geometry of the survey; it accounts for the fact that objects near the boundary carry less statistical weight than those further away from it. The discreteness effect is due to the fact that the underlying smooth random field is sampled with a finite number of objects; this is the "shot noise" error. Measurements of errors in artificial hierarchical samples showed excellent agreement with our predictions. The probability dist...
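The discreteness ("shot noise") term has a simple numerical illustration: for an unclustered Poisson-like sample, the variance of counts in cells equals their mean, so any measured excess variance would be attributed to clustering. This toy (a sketch only; the paper also models finite-volume and edge effects) bins uniform random points into cells and compares variance to mean:

```python
import random

def counts_in_cells(xs, n_cells):
    """Bin 1-D positions in [0, 1) into n_cells equal cells."""
    counts = [0] * n_cells
    for x in xs:
        counts[min(int(x * n_cells), n_cells - 1)] += 1
    return counts

random.seed(0)
sample = [random.random() for _ in range(2000)]   # unclustered points
counts = counts_in_cells(sample, 50)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# For an unclustered sample, var/mean is close to 1 (pure shot noise).
```

A clustered sample would push var/mean above 1, which is exactly the signal counts-in-cells statistics are after.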

  2. How hierarchical is language use?

    Science.gov (United States)

    Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.

    2012-01-01

    It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157

  3. How hierarchical is language use?

    Science.gov (United States)

    Frank, Stefan L; Bod, Rens; Christiansen, Morten H

    2012-11-22

    It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science.

  4. Associative Hierarchical Random Fields.

    Science.gov (United States)

    Ladický, L'ubor; Russell, Chris; Kohli, Pushmeet; Torr, Philip H S

    2014-06-01

This paper makes two contributions: the first is the proposal of a new model, the associative hierarchical random field (AHRF), and a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem for variables that might correspond to either pixels or segments such as super-pixels. It is well known that the generation of super-pixel segmentations is not unique. This has motivated many researchers to use multiple super-pixel segmentations for problems such as semantic segmentation or single view reconstruction. These super-pixels have not yet been combined in a principled manner; this is a difficult problem, as they may overlap, or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed both as a detailed segmentation at the pixel level, or at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results.

  5. A novel load balancing method for hierarchical federation simulation system

    Science.gov (United States)

    Bin, Xiao; Xiao, Tian-yuan

    2013-07-01

In contrast with a single-HLA-federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load over several RTIs. However, in the hierarchical framework the RTI is still the center of message exchange, and thus the performance bottleneck, of the federation; the data explosion in a large-scale HLA federation may overload the RTI, degrading federation performance or even causing fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue length prediction, a load control policy, and a controller. The method improves the use of federate node resources and the performance of the HLA simulation system by balancing load across the RTIG and federates. Finally, experimental results are presented to demonstrate the method's efficient control.

  6. Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus

    CERN Document Server

    Jelonek, M

    2006-01-01

The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected with modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of modeling hierarchical linear equations and estimation based on MPlus software. I present my own model to illustrate the impact of different factors on school acceptance level.

  7. Hierarchical Bass model

    Science.gov (United States)

    Tashiro, Tohru

    2014-03-01

We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and obtain better agreement than with the Bass model.
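For context, the classic Bass model that this record extends says adopters N grow at rate (p + q·N/m)(m − N), with innovation coefficient p, imitation coefficient q, and market size m. The sketch below discretizes only that baseline with a forward-Euler step; the paper's memory-of-encounters extension is not reproduced here.

```python
def bass(p, q, m, steps, dt=1.0):
    """Forward-Euler integration of the classic Bass diffusion model.
    Returns the cumulative-adopter trajectory [N_0, N_1, ..., N_steps]."""
    N = 0.0
    path = [N]
    for _ in range(steps):
        # dN/dt = (p + q*N/m) * (m - N): innovation plus imitation,
        # acting on the remaining pool of non-adopters.
        N += dt * (p + q * N / m) * (m - N)
        path.append(N)
    return path
```

With textbook-style values (p ≈ 0.03, q ≈ 0.38) the trajectory is the familiar S-curve saturating at m.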

  8. Hierarchical Bass model

    CERN Document Server

    Tashiro, Tohru

    2013-01-01

We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and obtain better agreement than with the Bass model.

  9. Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus

    OpenAIRE

    Jelonek, Magdalena

    2006-01-01

    The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences it is used to analyze many problems such as educational, organizational or market dilemma. This paper introduces the logic of m...

  10. Multiple comparisons in genetic association studies: a hierarchical modeling approach.

    Science.gov (United States)

    Yi, Nengjun; Xu, Shizhong; Lou, Xiang-Yang; Mallick, Himel

    2014-02-01

    Multiple comparisons or multiple testing has been viewed as a thorny issue in genetic association studies aiming to detect disease-associated genetic variants from a large number of genotyped variants. We alleviate the problem of multiple comparisons by proposing a hierarchical modeling approach that is fundamentally different from the existing methods. The proposed hierarchical models simultaneously fit as many variables as possible and shrink unimportant effects towards zero. Thus, the hierarchical models yield more efficient estimates of parameters than the traditional methods that analyze genetic variants separately, and also coherently address the multiple comparisons problem due to largely reducing the effective number of genetic effects and the number of statistically "significant" effects. We develop a method for computing the effective number of genetic effects in hierarchical generalized linear models, and propose a new adjustment for multiple comparisons, the hierarchical Bonferroni correction, based on the effective number of genetic effects. Our approach not only increases the power to detect disease-associated variants but also controls the Type I error. We illustrate and evaluate our method with real and simulated data sets from genetic association studies. The method has been implemented in our freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
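The adjustment idea reduces to dividing the significance level by an *effective* number of effects rather than the raw test count. How M_eff is computed inside hierarchical GLMs is the paper's contribution (implemented in their BhGLM package); in the sketch below it is simply passed in as a number, which is this sketch's simplifying assumption.

```python
def hierarchical_bonferroni(p_values, m_eff, alpha=0.05):
    """Bonferroni-style correction using an effective number of effects:
    declare significant any p-value below alpha / m_eff."""
    threshold = alpha / m_eff
    return [i for i, p in enumerate(p_values) if p < threshold]
```

Because shrinkage makes m_eff smaller than the raw number of genetic variants, the threshold is less punishing than classical Bonferroni, which is where the power gain comes from.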

  11. Hierarchical fringe tracking

    CERN Document Server

    Petrov, Romain G; Boskri, Abdelkarim; Folcher, Jean-Pierre; Lagarde, Stephane; Bresson, Yves; Benkhaldoum, Zouhair; Lazrek, Mohamed; Rakshit, Suvendu

    2014-01-01

The limiting magnitude is a key issue for optical interferometry. Pairwise fringe trackers based on the integrated optics concepts used, for example, in GRAVITY seem limited to about K=10.5 with the 8m Unit Telescopes of the VLTI, and there is a general "common sense" statement that the efficiency of fringe tracking, and hence the sensitivity of optical interferometry, must decrease as the number of apertures increases, at least in the near infrared, where we are still limited by detector readout noise. Here we present a Hierarchical Fringe Tracking (HFT) concept with sensitivity at least equal to that of a two-aperture fringe tracker. HFT is based on the combination of the apertures in pairs, then in pairs of pairs, then in pairs of groups. The key HFT module is a device that behaves like a spatial filter for two telescopes (2TSF) and transmits all or most of the flux of a cophased pair in a single-mode beam. We give an example of such an achromatic 2TSF, based on very broadband dispersed fringes analyzed by g...

  12. Hierarchical Reverberation Mapping

    CERN Document Server

    Brewer, Brendon J

    2013-01-01

Reverberation mapping (RM) is an important technique in studies of active galactic nuclei (AGN). The key idea of RM is to measure the time lag $\tau$ between variations in the continuum emission from the accretion disc and the subsequent response of the broad line region (BLR). The measurement of $\tau$ is typically used to estimate the physical size of the BLR and is combined with other measurements to estimate the black hole mass $M_{\rm BH}$. A major difficulty with RM campaigns is the large amount of data needed to measure $\tau$. Recently, Fine et al (2012) introduced a new approach to RM where the BLR light curve is sparsely sampled, but this is counteracted by observing a large sample of AGN, rather than a single system. The results are combined to infer properties of the sample of AGN. In this letter we implement this method using a hierarchical Bayesian model and contrast this with the results from the previous stacked cross-correlation technique. We find that our inferences are more precise and allow fo...

  13. Hierarchical materials: Background and perspectives

    DEFF Research Database (Denmark)

    2016-01-01

    Hierarchical design draws inspiration from analysis of biological materials and has opened new possibilities for enhancing performance and enabling new functionalities and extraordinary properties. With the development of nanotechnology, the necessary technological requirements for the manufactur...

  14. Hierarchical clustering for graph visualization

    CERN Document Server

    Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi

    2012-01-01

    This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.

  15. Direct hierarchical assembly of nanoparticles

    Science.gov (United States)

    Xu, Ting; Zhao, Yue; Thorkelsson, Kari

    2014-07-22

    The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.

  16. Functional annotation of hierarchical modularity.

    Directory of Open Access Journals (Sweden)

    Kanchana Padmanabhan

    Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often arises as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our

  17. Hierarchical architecture of active knits

    Science.gov (United States)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2013-12-01

    Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions is displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm.

  18. A new approach for modeling generalization gradients: A case for Hierarchical Models

    Directory of Open Access Journals (Sweden)

    Koen eVanbrabant

    2015-05-01

    Full Text Available A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions that are imposed by repeated measures analysis-of-variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a simulation study we demonstrate the dominance of hierarchical models over rANOVA. In addition, we show the lack of efficiency of Mauchly's sphericity test in sample sizes typical for generalization research, and confirm how violations of sphericity increase the probability of type I errors. A worked example of a hierarchical model is provided, with a specific emphasis on the interpretation of parameters relevant for generalization research.
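
One concrete payoff of the hierarchical approach is partial pooling: each participant's estimate borrows strength from the group. A minimal empirical-Bayes sketch of this shrinkage on simulated data (the group sizes, variances and variable names are invented for illustration; the paper itself fits full hierarchical models rather than this two-stage shortcut):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generalization-gradient scores: several participants (groups),
# each measured several times with noise around a participant-specific mean.
n_groups, n_per = 8, 5
true_means = rng.normal(0.0, 2.0, n_groups)            # between-participant spread
y = true_means[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))

group_means = y.mean(axis=1)
grand_mean = y.mean()

# Variance components: within-group noise and between-group variance.
sigma2_w = y.var(axis=1, ddof=1).mean()
sigma2_b = max(group_means.var(ddof=1) - sigma2_w / n_per, 1e-9)

# Pooling weight: how much each participant's own mean is trusted
# relative to the grand mean.
w = sigma2_b / (sigma2_b + sigma2_w / n_per)
shrunk = w * group_means + (1 - w) * grand_mean        # partially pooled estimates
```

Every shrunk estimate lies between the participant's raw mean and the grand mean, which is exactly the behavior rANOVA's cell means cannot express.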

  19. A new approach for modeling generalization gradients: a case for hierarchical models.

    Science.gov (United States)

    Vanbrabant, Koen; Boddez, Yannick; Verduyn, Philippe; Mestdagh, Merijn; Hermans, Dirk; Raes, Filip

    2015-01-01

    A case is made for the use of hierarchical models in the analysis of generalization gradients. Hierarchical models overcome several restrictions that are imposed by repeated measures analysis-of-variance (rANOVA), the default statistical method in current generalization research. More specifically, hierarchical models allow the inclusion of continuous independent variables and overcome problematic assumptions such as sphericity. We focus on how generalization research can benefit from this added flexibility. In a simulation study we demonstrate the dominance of hierarchical models over rANOVA. In addition, we show the lack of efficiency of Mauchly's sphericity test in sample sizes typical for generalization research, and confirm how violations of sphericity increase the probability of type I errors. A worked example of a hierarchical model is provided, with a specific emphasis on the interpretation of parameters relevant for generalization research.

  20. Advanced hierarchical distance sampling

    Science.gov (United States)

    Royle, Andy

    2016-01-01

    In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.

  1. Spontaneous prediction error generation in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Yuichi Yamashita

    Full Text Available Goal-directed human behavior is enabled by hierarchically organized neural systems that process executive commands associated with higher brain areas in response to sensory and motor signals from lower brain areas. Psychiatric diseases and psychotic conditions are postulated to involve disturbances in these hierarchical network interactions, but the mechanism by which aberrant disease signals are generated in networks, and a systems-level framework linking disease signals to specific psychiatric symptoms, remain undetermined. In this study, we show that neural networks containing schizophrenia-like deficits can spontaneously generate uncompensated error signals with properties that explain psychiatric disease symptoms, including fictive perception, altered sense of self, and unpredictable behavior. To distinguish dysfunction at the behavioral versus network level, we monitored the interactive behavior of a humanoid robot driven by the network. Mild perturbations in network connectivity resulted in the spontaneous appearance of uncompensated prediction errors and altered interactions within the network without external changes in behavior, correlating with the fictive sensations and agency experienced by episodic disease patients. In contrast, more severe deficits produced unstable network dynamics, resulting in overt changes in behavior similar to those observed in chronic disease patients. These findings demonstrate that prediction error disequilibrium may represent an intrinsic property of schizophrenic brain networks reporting the severity and variability of disease symptoms. Moreover, these results support a systems-level model for psychiatric disease that features the spontaneous generation of maladaptive signals in hierarchical neural networks.

  2. Generic Approach for Hierarchical Modulation Performance Analysis: Application to DVB-SH

    CERN Document Server

    Méric, Hugo; Amiot-Bazile, Caroline; Arnal, Fabrice; Boucheret, Marie-Laure

    2011-01-01

    Broadcasting systems have to deal with channel diversity in order to offer the best rate to the users. Hierarchical modulation is a practical solution for providing several rates as a function of the channel quality. Unfortunately, the performance evaluation of such modulations requires time-consuming simulations. We propose in this paper a novel approach based on the channel capacity that avoids these simulations. The method allows the study of the performance, in terms of spectral efficiency, of hierarchical as well as classical modulations combined with error-correcting codes. Our method is applied to the DVB-SH standard, which considers hierarchical modulation as an optional feature.

  3. Hierarchical topic modeling with nested hierarchical Dirichlet process

    Institute of Scientific and Technical Information of China (English)

    Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN

    2009-01-01

    This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed to be fixed, while the number of topics on each level is unknown a priori and is to be inferred from the data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and topic distribution of each document. Our theoretical analysis and experiment results show that this model can produce a more compact hierarchical topic structure and capture more fine-grained topic relationships compared to the hierarchical latent Dirichlet allocation model.
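
The nonparametric ingredient here, letting the data determine the number of topics per level, rests on Dirichlet-process machinery. A minimal Chinese restaurant process sketch shows how cluster (topic) counts grow with the data rather than being fixed in advance; the nHDP itself nests such processes along tree paths, which this toy code does not attempt.

```python
import random

def crp(n_customers, alpha, seed=0):
    """Chinese restaurant process: customer i joins existing table k with
    probability count[k]/(i + alpha), or opens a new table with
    probability alpha/(i + alpha)."""
    rnd = random.Random(seed)
    tables = []                     # tables[k] = number of customers at table k
    seating = []                    # table index assigned to each customer
    for i in range(n_customers):
        r = rnd.uniform(0.0, i + alpha)
        acc = 0.0
        for k, cnt in enumerate(tables):
            acc += cnt
            if r < acc:
                tables[k] += 1
                seating.append(k)
                break
        else:
            tables.append(1)        # open a new table
            seating.append(len(tables) - 1)
    return seating, tables

seats, tables = crp(100, alpha=2.0)   # number of tables is data-driven
```

Larger alpha yields more tables on average, which is the knob the nHDP tunes (per level) during posterior inference.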

  4. When mechanism matters: Bayesian forecasting using models of ecological diffusion

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Russell, Robin E.; Walsh, Daniel P.; Powell, James A.

    2017-01-01

    Ecological diffusion is a theory that can be used to understand and forecast spatio-temporal processes such as dispersal, invasion, and the spread of disease. Hierarchical Bayesian modelling provides a framework to make statistical inference and probabilistic forecasts, using mechanistic ecological models. To illustrate, we show how hierarchical Bayesian models of ecological diffusion can be implemented for large data sets that are distributed densely across space and time. The hierarchical Bayesian approach is used to understand and forecast the growth and geographic spread in the prevalence of chronic wasting disease in white-tailed deer (Odocoileus virginianus). We compare statistical inference and forecasts from our hierarchical Bayesian model to phenomenological regression-based methods that are commonly used to analyse spatial occurrence data. The mechanistic statistical model based on ecological diffusion led to important ecological insights, obviated a commonly ignored type of collinearity, and was the most accurate method for forecasting.
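
The mechanistic core of such a model is the ecological diffusion PDE. A minimal explicit finite-difference sketch of one diffusion-with-growth step on a grid (periodic boundaries, invented grid size and rates; the paper embeds this process in a hierarchical Bayesian model rather than running it deterministically):

```python
import numpy as np

def diffusion_step(u, D, r, dt, dx):
    """One explicit Euler step of du/dt = D * laplacian(u) + r * u on a
    grid with periodic boundaries (stable for dt * D / dx**2 <= 0.25)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return u + dt * (D * lap + r * u)

# Point introduction of disease prevalence spreading over a 40x40 grid.
u = np.zeros((40, 40))
u[20, 20] = 1.0
for _ in range(50):
    u = diffusion_step(u, D=1.0, r=0.0, dt=0.1, dx=1.0)
```

With growth rate r = 0 the step conserves total prevalence while flattening the peak; a hierarchical Bayesian treatment would place priors on D and r and propagate their uncertainty into the forecast.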

  5. Hierarchical Bayes Ensemble Kalman Filter for geophysical data assimilation

    Science.gov (United States)

    Tsyrulnikov, Michael; Rakitko, Alexander

    2016-04-01

    In the Ensemble Kalman Filter (EnKF), the forecast error covariance matrix B is estimated from a sample (ensemble), which inevitably implies a degree of uncertainty. This uncertainty is especially large in high dimensions, where the affordable ensemble size is orders of magnitude smaller than the dimensionality of the system. Common remedies include ad hoc devices like variance inflation and covariance localization. The goal of this study is to optimally account for the inherent uncertainty of the B matrix in the EnKF. Following the idea of Myrseth and Omre (2010), we explicitly admit that the B matrix is unknown and random and estimate it along with the state (x) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components P and Q of the B matrix in the extended control vector (x,P,Q). Similarly, we break the traditional forecast ensemble into a predictability-error related ensemble and a model-error related ensemble. The reason for separating model errors from predictability errors is the fundamental difference between the two sources of error. Model errors are external (i.e. they do not depend on the filter's performance), whereas predictability errors are internal to a filter (i.e. they are determined by the filter's behavior). At the analysis step, we specify inverse-Wishart-based priors for the random matrices P and Q and a conditionally Gaussian prior for the state x. Then, we update the prior distribution of (x,P,Q) using both observation and ensemble data, so that ensemble members are used as generalized observations and ordinary observations are allowed to influence the covariances. We show that for linear dynamics and linear observation operators, conditional Gaussianity of the state is preserved in the course of filtering. 
At the forecast
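
The role of the inverse-Wishart prior can be illustrated with the standard conjugate update for a covariance matrix estimated from an ensemble. This is a simplified zero-mean sketch with invented dimensions and prior parameters, not the paper's full (x,P,Q) scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 40                                  # state dimension, ensemble size

# Inverse-Wishart prior IW(Psi, nu) on the covariance B; its mean is Psi/(nu-d-1).
nu = d + 4
prior_mean = np.eye(d)
Psi = prior_mean * (nu - d - 1)

# Zero-mean ensemble perturbations (invented), treated as draws from N(0, B).
X = rng.multivariate_normal(np.zeros(d), 2.0 * np.eye(d), size=n)
S = X.T @ X                                   # scatter matrix

# Conjugate update: B | X ~ IW(Psi + S, nu + n). Its posterior mean blends the
# prior covariance with the raw sample covariance, regularizing the latter.
post_mean = (Psi + S) / (nu + n - d - 1)
sample_cov = S / n
```

The posterior mean stays symmetric positive definite and sits between the prior covariance and the noisy sample covariance, which is the regularizing effect that variance inflation and localization try to mimic heuristically.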

  6. Hierarchical modeling and its numerical implementation for layered thin elastic structures

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jin-Rae [Hongik University, Sejong (Korea, Republic of)

    2017-05-15

    Thin elastic structures such as beam- and plate-like structures and laminates are characterized by their small thickness, which leads to classical plate and laminate theories in which the displacement fields through the thickness are assumed to be linear or higher-order polynomials. These classical theories are either insufficient to represent the complex stress variation through the thickness or may encounter the accuracy-computational cost dilemma. In order to overcome the inherent problems of classical theories, the concept of hierarchical modeling has emerged. In hierarchical modeling, hierarchical models with different model levels are selected and combined within a structure domain, in order to make the modeling error distributed as uniformly as possible throughout the problem domain. The purpose of the current study is to explore the potential of hierarchical modeling for the effective numerical analysis of layered structures such as laminated composites. To this end, the hierarchical models are constructed and the hierarchical modeling is implemented by selectively adjusting the level of the hierarchical models. In addition, the major characteristics of the hierarchical models are investigated through numerical experiments.

  7. Diffusion MRI

    Science.gov (United States)

    Fukuyama, Hidenao

    Recent advances in magnetic resonance imaging are described, with particular emphasis on diffusion sequences. We have recently applied the diffusion sequence to functional brain imaging and obtained appropriate results. Beyond the neuroscience fields, diffusion-weighted images have improved the accuracy of clinical diagnosis based on magnetic resonance images in stroke as well as in inflammation.

  8. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  9. Deliberate change without hierarchical influence?

    DEFF Research Database (Denmark)

    Nørskov, Sladjana; Kesting, Peter; Ulhøi, John Parm

    2017-01-01

    Purpose: This paper starts from the observation that deliberate change is strongly associated with formal structures and top-down influence. Hierarchical configurations have been used to structure processes, overcome resistance and get things done. But is deliberate change also possible without formal...... reveals that deliberate change is indeed achievable in a non-hierarchical collaborative OSS community context. However, it presupposes the presence and active involvement of informal change agents. The paper identifies and specifies four key drivers for change agents’ influence. Originality/value: The findings contribute to organisational analysis by providing a deeper understanding of the importance of leadership in making deliberate change possible in non-hierarchical settings. It points to the importance of “change-by-conviction”, essentially based on voluntary behaviour. This can open the door

  10. Static Correctness of Hierarchical Procedures

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff

    1990-01-01

    A system of hierarchical, fully recursive types in a truly imperative language allows program fragments written for small types to be reused for all larger types. To exploit this property to enable type-safe hierarchical procedures, it is necessary to impose a static requirement on procedure calls....... We introduce an example language and prove the existence of a sound requirement which preserves static correctness while allowing hierarchical procedures. This requirement is further shown to be optimal, in the sense that it imposes as few restrictions as possible. This establishes the theoretical...... basis for a general type hierarchy with static type checking, which enables first-order polymorphism combined with multiple inheritance and specialization in a language with assignments. We extend the results to include opaque types. An opaque version of a type is different from the original but has...

  11. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  12. Structural integrity of hierarchical composites

    Directory of Open Access Journals (Sweden)

    Marco Paggi

    2012-01-01

    Full Text Available Interface mechanical problems are of paramount importance in engineering and materials science. Traditionally, due to the complexity of modelling their mechanical behaviour, interfaces are often treated as defects and their features are not explored. In this study, a different approach is illustrated, where the interfaces play an active role in the design of innovative hierarchical composites and are fundamental for their structural integrity. Numerical examples regarding cutting tools made of hierarchical cellular polycrystalline materials are proposed, showing that tailoring of interface properties at the different scales is the way to achieve superior mechanical responses that cannot be obtained using standard materials.

  13. Finite Population Correction for Two-Level Hierarchical Linear Models.

    Science.gov (United States)

    Lai, Mark H C; Kwok, Oi-Man; Hsiao, Yu-Yu; Cao, Qian

    2017-03-16

    The research literature has paid little attention to the issue of finite population at a higher level in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in 2-level hierarchical linear models. When the finite population at Level-2 is incorrectly assumed as being infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring finite population correction is illustrated by using both a real data example and a simulation study with a random intercept model and a random slope model. Simulation results indicated that the bias in the unadjusted fixed-effect standard errors was substantial when the Level-2 sample size exceeded 10% of the Level-2 population size; the bias increased with a larger intraclass correlation, a larger number of clusters, and a larger average cluster size. We also found that the proposed adjustment produced unbiased standard errors, particularly when the number of clusters was at least 30 and the average cluster size was at least 10. We encourage researchers to consider the characteristics of the target population for their studies and adjust for finite population when appropriate. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
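
The basic idea of the adjustment can be sketched with the classic finite population correction factor sqrt(1 - n/N) applied to a standard error. The numbers below are invented, and the paper's correction for two-level models is more involved than this scalar factor:

```python
import math

def fpc_adjusted_se(se, n, N):
    """Finite population correction for a standard error: when the n sampled
    Level-2 units are a sizable fraction of the N units in the population,
    the usual infinite-population SE overstates the uncertainty."""
    return se * math.sqrt(1.0 - n / N)

# Example: 50 of 200 Level-2 clusters sampled (hypothetical numbers).
se_inf = 0.40
se_adj = fpc_adjusted_se(se_inf, 50, 200)   # 0.40 * sqrt(0.75)
```

The correction matters exactly in the regime the simulation study identifies: it is negligible when n/N is below a few percent and substantial once the sample exceeds roughly 10% of the population.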

  14. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description.

    Science.gov (United States)

    Shetty, Anil N; Chiang, Sharon; Maletic-Savatic, Mirjana; Kasprian, Gregor; Vannucci, Marina; Lee, Wesley

    2014-01-01

    In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal-Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain.
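
The diagonalization step described above is a small linear-algebra computation. The sketch below extracts the eigenvalues of a diffusion tensor and derives two standard rotation-invariant indices, mean diffusivity (MD) and fractional anisotropy (FA); the example tensor values are invented but of realistic brain-tissue magnitude.

```python
import numpy as np

def tensor_metrics(D):
    """Diagonalize a 3x3 symmetric diffusion tensor; return its eigenvalues
    (sorted descending), mean diffusivity (MD) and fractional anisotropy (FA)."""
    evals = np.linalg.eigvalsh(D)[::-1]        # l1 >= l2 >= l3
    md = evals.mean()
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    return evals, md, fa

# Hypothetical tensor for a fiber-like voxel: strong diffusion along one axis.
D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])          # mm^2/s
evals, md, fa = tensor_metrics(D)
```

For an isotropic tensor FA is 0; for a strongly fiber-like tensor such as this one FA approaches 1, which is why FA maps highlight white-matter tracts.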

  15. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description

    Science.gov (United States)

    SHETTY, ANIL N.; CHIANG, SHARON; MALETIC-SAVATIC, MIRJANA; KASPRIAN, GREGOR; VANNUCCI, MARINA; LEE, WESLEY

    2016-01-01

    In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal–Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain. PMID:27441031

  16. Sensory Hierarchical Organization and Reading.

    Science.gov (United States)

    Skapof, Jerome

    The purpose of this study was to judge the viability of an operational approach aimed at assessing response styles in reading using the hypothesis of sensory hierarchical organization. A sample of 103 middle-class children from a New York City public school, between the ages of five and seven, took part in a three-phase experiment. Phase one…

  17. Memory Stacking in Hierarchical Networks.

    Science.gov (United States)

    Westö, Johan; May, Patrick J C; Tiitinen, Hannu

    2016-02-01

    Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.

  18. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  19. Evaluation of Machine Translation Errors in English and Iraqi Arabic

    Science.gov (United States)

    2010-05-01

    are made. Llitjós, Carbonell & Lavie (2005) created a hierarchical taxonomy of errors for use in refining rules of transfer-based MT systems...Translation. Proceedings of MT Summit XII. Llitjós, A., Carbonell, J., and Lavie, A. (2005). A framework for interactive and automatic refinement of

  20. ECoS, a framework for modelling hierarchical spatial systems.

    Science.gov (United States)

    Harris, John R W; Gorley, Ray N

    2003-10-01

    A general framework for modelling hierarchical spatial systems has been developed and implemented as the ECoS3 software package. The structure of this framework is described, and illustrated with representative examples. It allows the set-up and integration of sets of advection-diffusion equations representing multiple constituents interacting in a spatial context. Multiple spaces can be defined, with zero, one or two dimensions, and can be nested and linked through constituent transfers. Model structure is generally object-oriented and hierarchical, reflecting the natural relations within its real-world analogue. Velocities, dispersions and inter-constituent transfers, together with additional functions, are defined as properties of the constituents to which they apply. The resulting modular structure of ECoS models facilitates cut-and-paste model development, and template model components have been developed for the assembly of a range of estuarine water quality models. Published examples of applications to the geochemical dynamics of estuaries are listed.

  1. Hierarchical Approach in Clustering to Euclidean Traveling Salesman Problem

    Science.gov (United States)

    Fajar, Abdulah; Herman, Nanna Suryana; Abu, Nur Azman; Shahib, Sahrin

There has been growing interest in studying combinatorial optimization problems via clustering strategies, with special emphasis on the traveling salesman problem (TSP). TSP arises naturally as a subproblem in many transportation, manufacturing and logistics applications, and has attracted much attention from mathematicians and computer scientists. A clustering approach decomposes the TSP into subgraphs that form clusters, reducing the problem to a set of smaller problems. The impact of a hierarchical approach is investigated in order to produce a better clustering strategy suited to the Euclidean TSP. The clustering strategy for the Euclidean TSP consists of two main steps: clustering and tour construction. The significance of this research is that the clustering approach yields solutions with error of less than 10% compared to the best known solutions (TSPLIB), and that a hierarchical clustering algorithm is improved to fit this Euclidean TSP solution method.
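The two-step strategy named in the abstract (clustering, then tour construction) can be sketched as follows. This is a generic cluster-then-tour illustration, not the authors' algorithm; the k-means grouping, nearest-neighbour tours, and parameter choices are all assumptions:

```python
import math, random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D tuples; stands in for the clustering step."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[j].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return groups

def nn_tour(points, start):
    """Greedy nearest-neighbour tour within one cluster."""
    tour, rest = [start], set(points) - {start}
    while rest:
        nxt = min(rest, key=lambda p: math.dist(tour[-1], p))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

def clustered_tour(points, k=3):
    """Tour construction: solve each cluster, chaining clusters together."""
    tour = []
    for group in kmeans(points, k):
        if group:
            start = tour[-1] if tour else group[0]
            # enter each cluster at the point nearest the tour so far
            entry = min(group, key=lambda p: math.dist(start, p))
            tour.extend(nn_tour(group, entry))
    return tour

rng = random.Random(1)
pts = [(rng.random() * 100, rng.random() * 100) for _ in range(30)]
tour = clustered_tour(pts, k=3)
```

The decomposition bounds the work of tour construction to one cluster at a time, which is the size-reduction benefit the abstract describes; a hierarchical variant would recurse on clusters that are still too large.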

  2. Optimization of Hierarchical Modulation for Use of Scalable Media

    Directory of Open Access Journals (Sweden)

    Heneghan Conor

    2010-01-01

Full Text Available This paper studies hierarchical modulation, a transmission strategy for scalable multimedia over frequency-selective fading channels, with the aim of improving perceptible quality. An optimization strategy for hierarchical modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. A similar optimization can be used in the multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, and demonstrate savings in SNR and improvements in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.
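The basic mechanism of hierarchical modulation is easy to sketch: HP bits select the constellation quadrant and LP bits select the point inside it, with a spacing parameter trading HP robustness against LP robustness. The mapper below is an illustrative 16-QAM example, not the paper's optimizer; `alpha = 2` reproduces uniform 16-QAM, and larger values protect the HP stream at the LP stream's expense:

```python
# Illustrative hierarchical 16-QAM mapper (not the paper's optimization).
# Two HP bits pick the quadrant; two LP bits pick the point inside it.
def map_hier_16qam(hp, lp, alpha=2.0):
    def pam(bits, big, small):          # one axis: HP level plus LP offset
        outer = big if bits[0] else -big
        inner = small if bits[1] else -small
        return outer + inner
    i = pam((hp[0], lp[0]), alpha, 1.0)
    q = pam((hp[1], lp[1]), alpha, 1.0)
    return i, q

# Enumerate the full constellation.
points = {map_hier_16qam((b0, b1), (b2, b3))
          for b0 in (0, 1) for b1 in (0, 1)
          for b2 in (0, 1) for b3 in (0, 1)}
```

Because `alpha` exceeds the LP spacing, the sign of each axis (and hence the quadrant) depends only on the HP bits, which is why the HP stream survives lower SNR than the LP stream.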

  3. Numerical simulation of strained Si/SiGe devices: the hierarchical approach

    Science.gov (United States)

    Meinerzhagen, B.; Jungemann, C.; Neinhüs, B.; Bartels, M.

    2004-03-01

Performance predictions for 25 nm strained Si CMOS devices, based on full-band Monte Carlo (FBMC) device simulations and in good agreement with the most recent experimental trends, are presented. The FBMC simulator itself is part of a hierarchical device simulation system which allows time-efficient hierarchical hydrodynamic (HD) device simulations of modern SiGe HBTs. As demonstrated below, the accuracy of such a hydrodynamic-based dc, ac, transient, and noise analysis is comparable to that of FBMC device simulations. In addition, the new hierarchical numerical noise simulation method is experimentally verified on a modern rf-CMOS technology from Philips Research. The MC-enhanced simulation accuracy of the hierarchical hydrodynamic and drift-diffusion (DD) models can also be exploited for mixed-mode circuit simulations, as shown by typical power sweep simulations of an industrial rf power amplifier.

  4. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Clinical time series prediction: towards a hierarchical dynamical system framework

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive

  6. Hierarchical Prisoner's Dilemma in Hierarchical Public-Goods Game

    CERN Document Server

    Fujimoto, Yuma; Kaneko, Kunihiko

    2016-01-01

The dilemma in cooperation is one of the major concerns in game theory. In a public-goods game, each individual pays a cost for cooperation, or to prevent defection, and receives a reward from the collected cost in a group. Thus, defection is beneficial for each individual, while cooperation is beneficial for the group. Now, groups (say, countries) consisting of individual players also play games. To study such a multi-level game, we introduce a hierarchical public-goods (HPG) game in which two groups compete for finite resources by utilizing costs collected from individuals in each group. Analyzing this HPG game, we found a hierarchical prisoner's dilemma, in which groups choose the defection policy (say, armaments) as a Nash strategy to optimize each group's benefit, while cooperation optimizes the total benefit. On the other hand, for each individual within a group, refusing to pay the cost (say, tax) is a Nash strategy, which turns out to be a cooperation policy for the group, thus leading to a hierarchical d...

  7. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone have turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  8. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  9. Hierarchical structure of biological systems

    Science.gov (United States)

    Alcocer-Cuarón, Carlos; Rivera, Ana L; Castaño, Victor M

    2014-01-01

A general theory of biological systems, based on a few fundamental propositions, allows a generalization of both the Wiener and Bertalanffy approaches to theoretical biology. Here, a biological system is defined as a set of self-organized, differentiated elements that interact pair-wise through various networks and media, isolated from other sets by boundaries. Their relation to other systems can be described as a closed loop in a steady state, which leads to a hierarchical structure and functioning of the biological system. Our hierarchical thermodynamical approach can be applied to biological systems of varying sizes through some general principles, based on the exchange of energy, information and/or mass from and within the systems. PMID:24145961

  10. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.

  11. Intuitionistic fuzzy hierarchical clustering algorithms

    Institute of Scientific and Technical Information of China (English)

    Xu Zeshui

    2009-01-01

Intuitionistic fuzzy set (IFS) is a set of 2-tuple arguments, each of which is characterized by a membership degree and a nonmembership degree. The generalized form of IFS is the interval-valued intuitionistic fuzzy set (IVIFS), whose components are intervals rather than exact numbers. IFSs and IVIFSs have been found to be very useful for describing vagueness and uncertainty. However, it seems that little attention has been focused on the clustering analysis of IFSs and IVIFSs. An intuitionistic fuzzy hierarchical algorithm is introduced for clustering IFSs, which is based on the traditional hierarchical clustering procedure, the intuitionistic fuzzy aggregation operator, and the basic distance measures between IFSs: the Hamming distance, the normalized Hamming distance, the weighted Hamming distance, the Euclidean distance, the normalized Euclidean distance, and the weighted Euclidean distance. Subsequently, the algorithm is extended for clustering IVIFSs. Finally, the algorithm and its extended form are applied to the classification of building materials and enterprises, respectively.
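A minimal sketch of such a procedure, assuming the two-term normalized Hamming distance and plain arithmetic averaging as the aggregation operator (the paper's operators may differ):

```python
def ifs_hamming(a, b):
    """Normalized Hamming distance between two IFSs given as lists of
    (membership, nonmembership) pairs (one common two-term variant)."""
    n = len(a)
    return sum(abs(ma - mb) + abs(na - nb)
               for (ma, na), (mb, nb) in zip(a, b)) / (2 * n)

def ifs_average(cluster):
    """Merge a cluster of IFSs by element-wise arithmetic averaging
    (a simple stand-in for the intuitionistic fuzzy aggregation operator)."""
    k = len(cluster)
    return [(sum(x[i][0] for x in cluster) / k,
             sum(x[i][1] for x in cluster) / k)
            for i in range(len(cluster[0]))]

def agglomerate(ifss, target):
    """Agglomerative clustering: repeatedly merge the closest pair of
    cluster representatives until `target` clusters remain."""
    clusters = [[x] for x in ifss]
    reps = list(ifss)
    while len(clusters) > target:
        pairs = [(ifs_hamming(reps[i], reps[j]), i, j)
                 for i in range(len(reps)) for j in range(i + 1, len(reps))]
        _, i, j = min(pairs)
        clusters[i] += clusters[j]
        reps[i] = ifs_average(clusters[i])
        del clusters[j], reps[j]
    return clusters

# Toy "building materials" data: two high-membership and two low-membership IFSs.
data = [[(0.9, 0.05), (0.8, 0.1)], [(0.85, 0.1), (0.75, 0.2)],
        [(0.2, 0.7), (0.1, 0.8)], [(0.25, 0.6), (0.15, 0.8)]]
clusters = agglomerate(data, target=2)
```

On this toy data the two high-membership IFSs end up in one cluster and the two low-membership IFSs in the other, mirroring the classification use described in the abstract.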

  12. Hierarchical Formation of Galactic Clusters

    CERN Document Server

    Elmegreen, B G

    2006-01-01

Young stellar groupings and clusters have hierarchical patterns ranging from flocculent spiral arms and star complexes on the largest scale to OB associations, OB subgroups, small loose groups, clusters and cluster subclumps on the smallest scales. There is no obvious transition in morphology at the cluster boundary, suggesting that clusters are only the inner parts of the hierarchy, where stars have had enough time to mix. The power-law cluster mass function follows from this hierarchical structure: n(M_cl) ∝ M_cl^{-b} for b ~ 2. This value of b is independently required by the observation that the summed IMFs from many clusters in a galaxy approximately equal the IMF of each cluster.

  13. Hierarchical Cont-Bouchaud model

    CERN Document Server

    Paluch, Robert; Holyst, Janusz A

    2015-01-01

We extend the well-known Cont-Bouchaud model to include a hierarchical topology of agents' interactions. The influence of hierarchy on system dynamics is investigated via two models. The first is based on a multi-level, nested Erdos-Renyi random graph, with individual decisions by agents following Potts dynamics. This approach does not lead to a broad return distribution outside a parameter regime close to the original Cont-Bouchaud model. In the second model we introduce a limited hierarchical Erdos-Renyi graph, where merging of clusters at a level h+1 involves only clusters that have merged at the previous level h, and we use the original Cont-Bouchaud agent dynamics on the resulting clusters. The second model leads to a heavy-tail distribution of cluster sizes and relative price changes in a wide range of connection densities, not only close to the percolation threshold.

  14. Hierarchical Clustering and Active Galaxies

    CERN Document Server

    Hatziminaoglou, E; Manrique, A

    2000-01-01

    The growth of Super Massive Black Holes and the parallel development of activity in galactic nuclei are implemented in an analytic code of hierarchical clustering. The evolution of the luminosity function of quasars and AGN will be computed with special attention paid to the connection between quasars and Seyfert galaxies. One of the major interests of the model is the parallel study of quasar formation and evolution and the History of Star Formation.

  15. Hybrid and hierarchical composite materials

    CERN Document Server

    Kim, Chang-Soo; Sano, Tomoko

    2015-01-01

    This book addresses a broad spectrum of areas in both hybrid materials and hierarchical composites, including recent development of processing technologies, structural designs, modern computer simulation techniques, and the relationships between the processing-structure-property-performance. Each topic is introduced at length with numerous  and detailed examples and over 150 illustrations.   In addition, the authors present a method of categorizing these materials, so that representative examples of all material classes are discussed.

  16. Treatment Protocols as Hierarchical Structures

    Science.gov (United States)

    Ben-Bassat, Moshe; Carlson, Richard W.; Puri, Vinod K.; Weil, Max Harry

    1978-01-01

We view a treatment protocol as a hierarchical structure of therapeutic modules. The lowest level of this structure consists of individual therapeutic actions. Combinations of individual actions define higher-level modules, which we call routines. Routines are designed to manage limited clinical problems, such as the routine for fluid loading to correct hypovolemia. Combinations of routines and additional actions, together with comments, questions, or precautions organized in a branching logic, in turn define the treatment protocol for a given disorder. Adoption of this modular approach may facilitate the formulation of treatment protocols, since the physician is not required to prepare complex flowcharts. This hierarchical approach also allows protocols to be updated and modified in a flexible manner. By use of such a standard format, individual components may be fitted together to create protocols for multiple disorders. The technique is suited for computer implementation. We believe that this hierarchical approach may facilitate standardization of patient care as well as aid in clinical teaching. A protocol for acute pancreatitis is used to illustrate this technique.
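The modular structure described above maps naturally onto a small composite data structure. The sketch below is illustrative only: the class names and the example protocol content are invented, and the clinical details are placeholders, not medical guidance:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """Lowest level: an individual therapeutic action."""
    name: str

@dataclass
class Routine:
    """A module composed of Actions and/or nested Routines."""
    name: str
    steps: list = field(default_factory=list)

    def flatten(self):
        """All individual actions, in protocol order, for execution or display."""
        out = []
        for s in self.steps:
            out += s.flatten() if isinstance(s, Routine) else [s]
        return out

# Hypothetical example: a routine reused inside a larger protocol.
fluid_loading = Routine("fluid loading", [
    Action("insert IV line"),
    Action("infuse crystalloid bolus"),
])
protocol = Routine("hypovolemia protocol", [
    fluid_loading,
    Action("reassess blood pressure"),
])
```

Because routines nest, updating `fluid_loading` updates every protocol that includes it, which is the flexible-maintenance property the abstract emphasizes.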

  17. Hierarchical Visual Analysis and Steering Framework for Astrophysical Simulations

    Institute of Scientific and Technical Information of China (English)

    肖健; 张加万; 原野; 周鑫; 纪丽; 孙济洲

    2015-01-01

A framework for accelerating modern long-running astrophysical simulations is presented, based on a hierarchical architecture in which computational steering of the high-resolution run is performed under the guidance of knowledge obtained in gradually refined ensemble analyses. Several visualization schemes for facilitating ensemble management, error analysis, parameter grouping and tuning are also integrated, owing to the pluggable modular design. The proposed approach is prototyped on the Flash code, and it can be extended by introducing user-defined visualizations for specific requirements. Two real-world simulations, i.e., stellar wind and supernova remnant, are carried out to verify the proposed approach.

  18. Comparison of the incremental and hierarchical methods for crystalline neon.

    Science.gov (United States)

    Nolan, S J; Bygrave, P J; Allan, N L; Manby, F R

    2010-02-24

    We present a critical comparison of the incremental and hierarchical methods for the evaluation of the static cohesive energy of crystalline neon. Both of these schemes make it possible to apply the methods of molecular electronic structure theory to crystalline solids, offering a systematically improvable alternative to density functional theory. Results from both methods are compared with previous theoretical and experimental studies of solid neon and potential sources of error are discussed. We explore the similarities of the two methods and demonstrate how they may be used in tandem to study crystalline solids.

  19. Vaneless diffusers

    Science.gov (United States)

    Senoo, Y.

The influence of vaneless diffusers on flow in centrifugal compressors, particularly on surge, is discussed. A vaneless diffuser can demonstrate stable operation over a wide flow range only if it is installed with a backward-leaning-blade impeller. The circumferential distortion of flow in the impeller disappears quickly in the vaneless diffuser, but the axial distortion of flow at the diffuser inlet does not decay easily. In large specific-speed compressors, flow out of the impeller is distorted axially. Pressure recovery of diffusers with distorted inlet flow is considerably improved by half guide vanes; the best vane height is about half the diffuser width. In small specific-speed compressors, flow out of the impeller is not much distorted and pressure recovery can be predicted with a one-dimensional flow analysis. Wall friction loss is significant in narrow diffusers. The large pressure drop at a small flow rate can cause a positive gradient in the pressure-flow rate characteristic curve, which may cause surging.

  20. HIDEN: Hierarchical decomposition of regulatory networks

    Directory of Open Access Journals (Sweden)

    Gülsoy Günhan

    2012-09-01

Full Text Available Abstract Background Transcription factors regulate numerous cellular processes by controlling the rate of production of each gene. The regulatory relations are modeled using transcriptional regulatory networks. Recent studies have shown that such networks have an underlying hierarchical organization. We consider the problem of discovering the underlying hierarchy in transcriptional regulatory networks. Results We first transform this problem into a mixed integer programming problem. We then use existing tools to solve the resulting problem. For larger networks this strategy does not work, due to the rapid increase in running time and space usage, so we use a divide-and-conquer strategy for such networks. We use our method to analyze the transcriptional regulatory networks of E. coli, H. sapiens and S. cerevisiae. Conclusions Our experiments demonstrate that: (i) our method gives statistically better results than three existing state-of-the-art methods; (ii) our method is robust against errors in the data; and (iii) our method's performance is not affected by the different topologies in the data.

  1. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which, depending on the error, can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  2. Music emotion detection using hierarchical sparse kernel machines.

    Science.gov (United States)

    Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching

    2014-01-01

For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether a music clip possesses the happiness emotion or not. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being the significance value of an emotion. The significance values of eight main emotional classes are utilized in this paper. To calculate the significance value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector. In the second level of the hierarchical system, we construct a 2-class relevance vector machine (RVM) with happiness as the target side and the other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is set on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system performs well in verifying whether a music clip reveals the happiness emotion.

  3. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...

  4. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  5. Incremental learning by message passing in hierarchical temporal memory.

    Science.gov (United States)

    Rehn, Erik M; Maltoni, Davide

    2014-08-01

    Hierarchical temporal memory (HTM) is a biologically inspired framework that can be used to learn invariant representations of patterns in a wide range of applications. Classical HTM learning is mainly unsupervised, and once training is completed, the network structure is frozen, thus making further training (i.e., incremental learning) quite critical. In this letter, we develop a novel technique for HTM (incremental) supervised learning based on gradient descent error minimization. We prove that error backpropagation can be naturally and elegantly implemented through native HTM message passing based on belief propagation. Our experimental results demonstrate that a two-stage training approach composed of unsupervised pretraining and supervised refinement is very effective (both accurate and efficient). This is in line with recent findings on other deep architectures.

  6. Hierarchical Control for Smart Grids

    DEFF Research Database (Denmark)

    Trangbæk, K; Bendtsen, Jan Dimon; Stoustrup, Jakob

    2011-01-01

This paper deals with hierarchical model predictive control (MPC) of smart grid systems. The design consists of a high-level MPC controller, a second level of so-called aggregators, which reduces the computational and communication-related load on the high-level control, and a lower level of autonomous consumers. The control system is tasked with balancing electric power production and consumption within the smart grid, and makes active use of the flexibility of a large number of power producing and/or power consuming units. The objective is to accommodate the load variation on the grid, arising...

  7. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if an error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.
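One simple way to combine the two error kinds, in the spirit of the discussion above, is a confidence interval from replicated measurements plus a worst-case bound on the constant systematic error, added linearly. This is a simplified illustration, not the report's exact methodology; the normal quantile 1.96 stands in for a Student-t factor:

```python
import math, statistics

def total_uncertainty(readings, systematic_bound, z=1.96):
    """Mean and a combined uncertainty: z * standard error of the mean
    (random part) plus a worst-case systematic bound, added linearly."""
    n = len(readings)
    mean = statistics.fmean(readings)
    sem = statistics.stdev(readings) / math.sqrt(n)   # random part
    return mean, z * sem + systematic_bound

# Hypothetical replicated readings with an assumed +/-0.05 systematic bound.
readings = [10.1, 9.9, 10.2, 10.0, 9.8]
mean, u = total_uncertainty(readings, systematic_bound=0.05)
```

The linear (rather than quadrature) combination reflects the view that an unknown constant bias cannot be averaged away by replication, so its bound is carried through undiminished.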

  8. Hierarchical Structures in Hypertext Learning Environments

    NARCIS (Netherlands)

    Bezdan, Eniko; Kester, Liesbeth; Kirschner, Paul A.

    2011-01-01

    Bezdan, E., Kester, L., & Kirschner, P. A. (2011, 9 September). Hierarchical Structures in Hypertext Learning Environments. Presentation for the visit of KU Leuven, Open University, Heerlen, The Netherlands.

  9. Coordinating sentence composition with error correction: A multilevel analysis

    Directory of Open Access Journals (Sweden)

    Van Waes, L.

    2011-01-01

Full Text Available Error analysis involves detecting and correcting discrepancies between the 'text produced so far' (TPSF) and the writer's mental representation of what the text should be. While many factors determine the choice of strategy, cognitive effort is a major contributor to this choice. This research shows how cognitive effort during error analysis affects strategy choice and success, as measured by a series of online text production measures. We hypothesize that error correction with speech recognition software differs from error correction with a keyboard for two reasons: speech produces auditory commands and, consequently, different error types. The study reported on here measured the effects of (1) mode of presentation (auditory or visual-tactile), (2) error span, whether the error spans more or fewer than two characters, and (3) lexicality, whether the text error comprises an existing word. A multilevel analysis was conducted to take into account the hierarchical nature of these data. For each variable (interference reaction time, preparation time, production time, immediacy of error correction, and accuracy of error correction), multilevel regression models are presented. As such, we take into account possible confounding person characteristics while testing the effect of the different conditions and error types at the sentence level. The results show that writers delay error correction more often when the TPSF is read out aloud first. The auditory property of speech seems to free resources for the primary task of writing, i.e. text production. Moreover, the results show that large errors in the TPSF require more cognitive effort, and are corrected with higher accuracy than small errors. The latter also holds for the correction of small errors that result in non-existing words.

  10. Comparison of hierarchical and six degrees-of-freedom marker sets in analyzing gait kinematics.

    Science.gov (United States)

    Schmitz, Anne; Buczek, Frank L; Bruening, Dustin; Rainbow, Michael J; Cooney, Kevin; Thelen, Darryl

    2016-01-01

    The objective of this study was to determine how marker spacing, noise, and joint translations affect joint angle calculations using both a hierarchical and a six degrees-of-freedom (6DoF) marker set. A simple two-segment model demonstrates that a hierarchical marker set produces biased joint rotation estimates when sagittal joint translations occur whereas a 6DoF marker set mitigates these bias errors with precision improving with increased marker spacing. These effects were evident in gait simulations where the 6DoF marker set was shown to be more accurate at tracking axial rotation angles at the hip, knee, and ankle.
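The two-segment demonstration can be reproduced numerically: translate the distal segment without rotating it, then compare the angle read by an inter-marker (6DoF-style) estimate against one that assumes the markers stay on a line through the joint centre. All numbers below are hypothetical:

```python
import math

# Sagittal-plane toy model: a distal segment hanging from a joint centre,
# carrying two markers. The segment translates 1 cm anteriorly, no rotation.
joint = (0.0, 0.0)
m_prox, m_dist = (0.0, -0.10), (0.0, -0.30)   # marker positions (m)

def translate(p, dx):
    return (p[0] + dx, p[1])

t_prox, t_dist = translate(m_prox, 0.01), translate(m_dist, 0.01)

# 6DoF-style estimate: orientation of the inter-marker vector,
# which pure translation leaves unchanged.
def seg_angle(a, b):
    return math.degrees(math.atan2(b[0] - a[0], -(b[1] - a[1])))

six_dof_bias = seg_angle(t_prox, t_dist) - seg_angle(m_prox, m_dist)

# Hierarchical-style estimate: orientation of the line from the assumed
# (fixed) joint centre to a marker, which reads translation as rotation.
def hier_angle(m):
    return math.degrees(math.atan2(m[0] - joint[0], -(m[1] - joint[1])))

hier_bias = hier_angle(t_dist) - hier_angle(m_dist)
```

Here the 6DoF bias is exactly zero while the hierarchical estimate reports roughly 1.9 degrees of spurious rotation; using the marker closer to the joint would roughly triple the bias, consistent with precision improving as marker spacing grows.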

  11. Hierarchical approaches to estimate energy expenditure using phone-based accelerometers.

    Science.gov (United States)

    Vathsangam, Harshvardhan; Schroeder, E Todd; Sukhatme, Gaurav S

    2014-07-01

    Physical inactivity is linked with an increased risk of cancer, heart disease, stroke, and diabetes. Walking is an easily available activity for reducing sedentary time. Objective methods that accurately assess energy expenditure from walking, normalized to an individual, would allow tailored interventions. Current techniques rely on normalization by weight scaling or on fitting a polynomial function of weight and speed. Using the example of steady-state treadmill walking, we present a set of algorithms that extend previous work to include an arbitrary number of anthropometric descriptors. We specifically focus on predicting energy expenditure from movement measured by mobile phone-based accelerometers. The models tested include nearest neighbor models, weight-scaled models, a set of hierarchical linear models, multivariate models, and speed-based approaches. These are compared for prediction accuracy as measured by the normalized average root mean-squared error across all participants. Nearest neighbor models showed the highest errors. Feature combinations corresponding to sedentary energy expenditure, sedentary heart rate, and sex alone resulted in errors that were higher than those of speed-based models and nearest-neighbor models. Size-based features such as BMI, weight, and height produced lower errors. Hierarchical models performed better than multivariate models when size-based features were used. We used the hierarchical linear model to determine the best individual feature for describing a person. Weight was the best individual descriptor, followed by height. We also tested the models for their ability to predict energy expenditure with limited training data. Hierarchical models outperformed personal models when only a small amount of training data was available. Speed-based models showed poor interpolation capability, whereas hierarchical models showed uniform interpolation capabilities across speeds.
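The comparison metric, normalized average root mean-squared error across participants, can be sketched as follows (one plausible reading: per-participant RMSE divided by that participant's mean measured expenditure, then averaged; the paper's exact normalization may differ):

```python
import numpy as np

def normalized_avg_rmse(y_true_by_subj, y_pred_by_subj):
    """Average, across participants, of the RMSE normalized by each
    participant's mean measured energy expenditure. An illustrative
    reading of the metric, not the paper's exact definition."""
    errs = []
    for y_true, y_pred in zip(y_true_by_subj, y_pred_by_subj):
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
        errs.append(rmse / y_true.mean())
    return float(np.mean(errs))
```

Normalizing per participant before averaging keeps heavier participants (with larger absolute expenditures) from dominating the comparison across models.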

  12. Diffuse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Kostorz, G. [Eidgenoessische Technische Hochschule, Angewandte Physik, Zurich (Switzerland)

    1996-12-31

    While Bragg scattering is characteristic for the average structure of crystals, static local deviations from the average lattice lead to diffuse elastic scattering around and between Bragg peaks. This scattering thus contains information on the occupation of lattice sites by different atomic species and on static local displacements, even in a macroscopically homogeneous crystalline sample. The various diffuse scattering effects, including those around the incident beam (small-angle scattering), are introduced and illustrated by typical results obtained for some Ni alloys. (author) 7 figs., 41 refs.

  13. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    Science.gov (United States)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  14. Discovering hierarchical structure in normal relational data

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Herlau, Tue; Mørup, Morten

    2014-01-01

    Hierarchical clustering is a widely used tool for structuring and visualizing complex data using similarity. Traditionally, hierarchical clustering is based on local heuristics that do not explicitly provide assessment of the statistical saliency of the extracted hierarchy. We propose a non-parametric...
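For contrast with the Bayesian approach the abstract proposes, the "local heuristics" of traditional hierarchical clustering can be sketched as a naive single-linkage agglomeration (illustrative only; 1D points for brevity):

```python
def single_linkage(points, k):
    """Naive agglomerative (single-linkage) clustering down to k
    clusters: repeatedly merge the two clusters whose closest members
    are nearest. This is the kind of local greedy heuristic that gives
    no assessment of statistical saliency."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # greedy merge, never revisited
    return clusters
```

Each merge is locally optimal and irreversible, which is precisely why such heuristics cannot say how statistically salient the resulting hierarchy is.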

  15. Discursive Hierarchical Patterning in Economics Cases

    Science.gov (United States)

    Lung, Jane

    2011-01-01

    This paper attempts to apply Lung's (2008) model of the discursive hierarchical patterning of cases to a closer and more specific study of Economics cases and proposes a model of the distinct discursive hierarchical patterning of the same. It examines a corpus of 150 Economics cases with a view to uncovering the patterns of discourse construction.…

  16. A Model of Hierarchical Key Assignment Scheme

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhigang; ZHAO Jing; XU Maozhi

    2006-01-01

    A model of hierarchical key assignment schemes, usable with any cryptographic algorithm, is proposed in this paper. In addition, we define the optimal dynamic control property of a hierarchical key assignment scheme and show that our scheme model satisfies this property.

  17. Relativistic diffusion.

    Science.gov (United States)

    Haba, Z

    2009-02-01

    We discuss relativistic diffusion in proper time in the approach of Schay (Ph.D. thesis, Princeton University, Princeton, NJ, 1961) and Dudley [Ark. Mat. 6, 241 (1965)]. We derive (Langevin) stochastic differential equations in various coordinates. We show that in some coordinates the stochastic differential equations become linear. We obtain momentum probability distribution in an explicit form. We discuss a relativistic particle diffusing in an external electromagnetic field. We solve the Langevin equations in the case of parallel electric and magnetic fields. We derive a kinetic equation for the evolution of the probability distribution. We discuss drag terms leading to an equilibrium distribution. The relativistic analog of the Ornstein-Uhlenbeck process is not unique. We show that if the drag comes from a diffusion approximation to the master equation then its form is strongly restricted. The drag leading to the Tsallis equilibrium distribution satisfies this restriction whereas the one of the Jüttner distribution does not. We show that any function of the relativistic energy can be the equilibrium distribution for a particle in a static electric field. A preliminary study of the time evolution with friction is presented. It is shown that the problem is equivalent to quantum mechanics of a particle moving on a hyperboloid with a potential determined by the drag. A relation to diffusions appearing in heavy ion collisions is briefly discussed.

  18. Probabilistic quantum error correction

    CERN Document Server

    Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
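The idea of codes that correct errors only with some probability can be illustrated on the simplest case, the 3-qubit bit-flip repetition code under independent bit-flip noise (a toy stand-in for the 5-, 7- and 9-qubit codes studied in the paper):

```python
def repetition_success(p):
    """Probability that majority-vote decoding of the 3-qubit bit-flip
    code succeeds under independent bit-flip noise with flip
    probability p: zero or one flips are corrected, two or three
    are not."""
    q = 1.0 - p
    return q ** 3 + 3 * p * q ** 2
```

For p = 0.1 the code succeeds with probability 0.972, already better than the unencoded 0.9; with two or more flips the syndrome points at the wrong correction, which is the regime of sub-unit success probability the paper studies.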

  19. Galaxy formation through hierarchical clustering

    Science.gov (United States)

    White, Simon D. M.; Frenk, Carlos S.

    1991-01-01

    Analytic methods for studying the formation of galaxies by gas condensation within massive dark halos are presented. The present scheme applies to cosmogonies where structure grows through hierarchical clustering of a mixture of gas and dissipationless dark matter. The simplest models consistent with the current understanding of N-body work on dissipationless clustering, and that of numerical and analytic work on gas evolution and cooling are adopted. Standard models for the evolution of the stellar population are also employed, and new models for the way star formation heats and enriches the surrounding gas are constructed. Detailed results are presented for a cold dark matter universe with Omega = 1 and H(0) = 50 km/s/Mpc, but the present methods are applicable to other models. The present luminosity functions contain significantly more faint galaxies than are observed.

  20. Groups possessing extensive hierarchical decompositions

    CERN Document Server

    Januszkiewicz, T; Leary, I J

    2009-01-01

    Kropholler's class of groups is the smallest class of groups which contains all finite groups and is closed under the following operator: whenever $G$ admits a finite-dimensional contractible $G$-CW-complex in which all stabilizer groups are in the class, then $G$ is itself in the class. Kropholler's class admits a hierarchical structure, i.e., a natural filtration indexed by the ordinals. For example, stage 0 of the hierarchy is the class of all finite groups, and stage 1 contains all groups of finite virtual cohomological dimension. We show that for each countable ordinal $\\alpha$, there is a countable group that is in Kropholler's class which does not appear until the $\\alpha+1$st stage of the hierarchy. Previously this was known only for $\\alpha= 0$, 1 and 2. The groups that we construct contain torsion. We also review the construction of a torsion-free group that lies in the third stage of the hierarchy.

  1. Quantum transport through hierarchical structures.

    Science.gov (United States)

    Boettcher, S; Varghese, C; Novotny, M A

    2011-04-01

    The transport of quantum electrons through hierarchical lattices is of interest because such lattices have some properties of both regular lattices and random systems. We calculate the electron transmission as a function of energy in the tight-binding approximation for two related Hanoi networks. HN3 is a Hanoi network with every site having three bonds. HN5 has additional bonds added to HN3 to make the average number of bonds per site equal to five. We present a renormalization group approach to solve the matrix equation involved in this quantum transport calculation. We observe band gaps in HN3, while no such band gaps are observed in linear networks or in HN5. We provide a detailed scaling analysis near the edges of these band gaps.

  2. Hierarchical networks of scientific journals

    CERN Document Server

    Palla, Gergely; Mones, Enys; Pollner, Péter; Vicsek, Tamás

    2015-01-01

    Scientific journals are the repositories of the gradually accumulating knowledge of mankind about the world surrounding us. Just as our knowledge is organised into classes ranging from major disciplines, subjects and fields to increasingly specific topics, journals can also be categorised into groups using various metrics. In addition to the set of topics characteristic for a journal, they can also be ranked regarding their relevance from the point of overall influence. One widespread measure is impact factor, but in the present paper we intend to reconstruct a much more detailed description by studying the hierarchical relations between the journals based on citation data. We use a measure related to the notion of m-reaching centrality and find a network which shows the level of influence of a journal from the point of the direction and efficiency with which information spreads through the network. We can also obtain an alternative network using a suitably modified nested hierarchy extraction method applied ...

  3. Adaptive Sampling in Hierarchical Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

    2007-07-09

    We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.

  4. Final Report of Optimization Algorithms for Hierarchical Problems, with Applications to Nanoporous Materials

    Energy Technology Data Exchange (ETDEWEB)

    Nash, Stephen G.

    2013-11-11

    The research focuses on the modeling and optimization of nanoporous materials. In the systems with hierarchical structure that we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research is to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research has two major thrusts. The first is hierarchical modeling: we develop and study hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyze the general hierarchical model, as well as the specific application models, to determine their properties, particularly those relevant to the hierarchical optimization algorithms. The second thrust is to develop, analyze, and implement a class of hierarchical optimization algorithms and apply them to the hierarchical models we have developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for

  5. Hierarchically Nanostructured Materials for Sustainable Environmental Applications

    Science.gov (United States)

    Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian

    2013-11-01

    This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.

  6. Hierarchical Identity-Based Lossy Trapdoor Functions

    CERN Document Server

    Escala, Alex; Libert, Benoit; Rafols, Carla

    2012-01-01

    Lossy trapdoor functions, introduced by Peikert and Waters (STOC'08), have received a lot of attention in recent years because of their wide range of applications in theoretical cryptography. The notion has recently been extended to the identity-based scenario by Bellare et al. (Eurocrypt'12). We provide one more step in this direction by considering the notion of hierarchical identity-based lossy trapdoor functions (HIB-LTDFs). Hierarchical identity-based cryptography generalizes identity-based cryptography in the sense that identities are organized in a hierarchical way; a parent identity has more power than its descendants, because it can generate valid secret keys for them. Hierarchical identity-based cryptography has proved very useful both for practical applications and for establishing theoretical relations with other cryptographic primitives. In order to realize HIB-LTDFs, we first build a weakly secure hierarchical predicate encryption scheme. This scheme, which may be of independent interest, is...

  7. Hierarchically nanostructured materials for sustainable environmental applications

    Science.gov (United States)

    Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian

    2013-01-01

    This review presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions, and multiple functionalities toward water remediation, biosensing, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing, and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology. PMID:24790946

  8. Hierarchically Nanostructured Materials for Sustainable Environmental Applications

    Directory of Open Access Journals (Sweden)

    Zheng eRen

    2013-11-01

    Full Text Available This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.

  9. Hierarchically Nanoporous Bioactive Glasses for High Efficiency Immobilization of Enzymes

    DEFF Research Database (Denmark)

    He, W.; Min, D.D.; Zhang, X.D.

    2014-01-01

    Bioactive glasses with hierarchical nanoporosity and structures have been heavily involved in immobilization of enzymes. Because of meticulous design and ingenious hierarchical nanostructuration of porosities from yeast cell biotemplates, hierarchically nanostructured porous bioactive glasses can...

  10. Mean-field analysis of phase transitions in the emergence of hierarchical society

    Science.gov (United States)

    Okubo, Tsuyoshi; Odagaki, Takashi

    2007-09-01

    Emergence of hierarchical society is analyzed by use of a simple agent-based model. We extend the mean-field model of Bonabeau [Physica A 217, 373 (1995)] to societies obeying complex diffusion rules where each individual selects a moving direction following their power rankings. We apply this mean-field analysis to the pacifist society model recently investigated by use of Monte Carlo simulation [Physica A 367, 435 (2006)]. We show analytically that the self-organization of hierarchies occurs in two steps as the individual density is increased and there are three phases: one egalitarian and two hierarchical states. We also highlight that the transition from the egalitarian phase to the first hierarchical phase is a continuous change in the order parameter and the second transition causes a discontinuous jump in the order parameter.

  11. Lattice-Symmetry-Driven Epitaxy of Hierarchical GaN Nanotripods

    KAUST Repository

    Wang, Ping

    2017-01-18

    Lattice-symmetry-driven epitaxy of hierarchical GaN nanotripods is demonstrated. The nanotripods emerge on top of hexagonal GaN nanowires, which are selectively grown on pillar-patterned GaN templates using molecular beam epitaxy. High-resolution transmission electron microscopy confirms that two kinds of lattice symmetry, wurtzite (wz) and zinc-blende (zb), coexist in the GaN nanotripods. Periodic transformation between wz and zb drives the epitaxy of the hierarchical nanotripods with N-polarity. The zb-GaN is formed by the poor diffusion of adatoms, and it can be suppressed by improving the ability of the Ga adatoms to migrate as the growth temperature is increased. This controllable epitaxy of hierarchical GaN nanotripods allows quantum dots to be located at the phase junctions of the nanotripods and nanowires, suggesting a new recipe for multichannel quantum devices.

  12. Hierarchical mutual information for the comparison of hierarchical community structures in complex networks

    CERN Document Server

    Perotti, Juan Ignacio; Caldarelli, Guido

    2015-01-01

    The quest for a quantitative characterization of community and modular structure of complex networks has produced a variety of methods and algorithms to classify different networks. However, it is not clear whether such methods provide consistent, robust and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the {\it hierarchical mutual information}, which is a generalization of the traditional mutual information and allows the comparison of hierarchical partitions and hierarchical community structures. The {\it normalized} version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here, the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies, and on the hierarchical ...
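The base quantity being generalized is the ordinary mutual information between two flat partitions, sketched here for label lists (the hierarchical version in the paper additionally recurses over the levels of the two hierarchies):

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Mutual information (in nats) between two flat partitions given
    as equal-length label lists. The paper's hierarchical mutual
    information generalizes this quantity to nested partitions."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    # Sum over the joint distribution of co-occurring labels.
    return sum(c / n * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())
```

Identical partitions give the partition entropy; independent partitions give zero, which is the behavior the normalized hierarchical variant is designed to mirror.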

  13. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    Directory of Open Access Journals (Sweden)

    Rasheda Arman Chowdhury

    Full Text Available Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detected against background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels, and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels, and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
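Detection accuracy via ROC analysis reduces, in its scalar summary, to the area under the curve: the probability that a randomly chosen active location scores above an inactive one. A minimal sketch of that summary statistic (illustrative, not the study's pipeline):

```python
def auc(scores_pos, scores_neg):
    """Empirical ROC AUC: the probability that a positive example
    scores above a negative one, counting ties as half. Equivalent to
    integrating the ROC curve over all thresholds."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means every truly active source outscores every inactive one; 0.5 is chance-level detection.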

  14. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...
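The practical consequence of channel imbalance and quadrature phase error is a residual image signal. A standard textbook expression for the resulting image-rejection ratio is sketched below (this formula is background material, not taken from the record above):

```python
import math

def image_rejection_db(gain_imbalance, phase_err_deg):
    """Image-rejection ratio (dB) of a quadrature system with amplitude
    ratio g between the I and Q channels and a quadrature phase error
    phi. Classic expression:
        IRR = (1 + 2 g cos(phi) + g^2) / (1 - 2 g cos(phi) + g^2)."""
    g = gain_imbalance
    phi = math.radians(phase_err_deg)
    return 10 * math.log10((1 + 2 * g * math.cos(phi) + g * g) /
                           (1 - 2 * g * math.cos(phi) + g * g))
```

A 1° quadrature phase error alone already limits image rejection to roughly 41 dB, which is why wideband radar receivers typically need the digital corrections the abstract refers to.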

  15. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  16. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  17. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
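One widely used remedy for the finite-realisation bias is the Hartlap et al. (2007) factor, which debiases the inverse of a covariance matrix estimated from simulations. It is sketched here as background; the review surveys this family of corrections, and exact prescriptions vary:

```python
def hartlap_factor(n_sims, n_data):
    """Multiplicative debiasing factor for the inverse of a covariance
    matrix estimated from n_sims independent realisations of an
    n_data-dimensional data vector (Hartlap et al. 2007). The naive
    inverse of a Wishart-distributed estimate is biased high; scaling
    it by this factor removes the bias."""
    assert n_sims > n_data + 2, "need n_sims > n_data + 2 for a finite correction"
    return (n_sims - n_data - 2) / (n_sims - 1)
```

The factor approaches 1 as the number of realisations grows, and the requirement n_sims > n_data + 2 makes explicit why huge datasets demand correspondingly large simulation suites.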

  18. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  19. Generic Approach for Hierarchical Modulation Performance Analysis: Application to DVB-SH and DVB-S2

    CERN Document Server

    Méric, Hugo; Amiot-Bazile, Caroline; Arnal, Fabrice; Boucheret, Marie-Laure

    2011-01-01

    Broadcasting systems have to deal with channel variability in order to offer the best rate to each user. Hierarchical modulation is a practical solution for providing different rates to receivers as a function of their channel quality. Unfortunately, the performance evaluation of such modulations requires time-consuming simulations. We propose in this paper a novel approach based on the channel capacity that avoids these simulations. The method allows us to study the performance of hierarchical as well as classical modulations combined with error-correcting codes. We also compare hierarchical modulation with a time-sharing strategy in terms of achievable rates and unavailability. Our approach is applied to the DVB-SH and DVB-S2 standards, which both consider hierarchical modulation as an optional feature.
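A hierarchical constellation can be sketched as a non-uniform 16-QAM in which the high-priority (HP) bits select the quadrant and the low-priority (LP) bits select the point within it. The parameterization below (a single stretch factor alpha) is a common convention assumed for illustration, not taken from the DVB standards themselves:

```python
def hierarchical_16qam(alpha):
    """Non-uniform 16-QAM for hierarchical modulation: two HP bits pick
    the quadrant, two LP bits pick the point inside it. alpha >= 1
    stretches the quadrants apart, protecting the HP stream."""
    pts = []
    for sq in (+1, -1):              # HP bit, I axis (quadrant sign)
        for si in (+1, -1):          # HP bit, Q axis (quadrant sign)
            for lq in (+1, -1):      # LP bit, I axis
                for li in (+1, -1):  # LP bit, Q axis
                    pts.append(complex(sq * (alpha + lq), si * (alpha + li)))
    return pts
```

With alpha = 2 this reduces to uniform 16-QAM (levels ±1, ±3); larger alpha pushes the quadrants apart, improving the HP stream's robustness at the expense of the LP stream, which is the trade-off the capacity-based analysis evaluates.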

  20. A Hierarchically Micro-Meso-Macroporous Zeolite CaA for Methanol Conversion to Dimethyl Ether

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2016-11-01

    Full Text Available A hierarchical zeolite CaA with microporous, mesoporous and macroporous structure was hydrothermally synthesized by a "Bond-Blocking" method using organo-functionalized mesoporous silica (MS) as the silica source. Characterization by XRD, SEM/TEM and N2 adsorption/desorption showed that the prepared material had a well-crystalline zeolite Linde Type A (LTA) topological structure, microspherical particle morphology, and a hierarchically intracrystalline micro-meso-macroporous structure. With the Bond-Blocking principle, the external surface area and macro-mesoporosity of the hierarchical zeolite CaA can be adjusted by varying the degree of organo-functionalization of the mesoporous silica surface; similarly, the distribution of the micro-meso-macroporous structure in the zeolite CaA can be controlled purposely. Compared with conventional microporous zeolite CaA, the hierarchical zeolite CaA as a catalyst in the conversion of methanol to dimethyl ether (DME) exhibited complete DME selectivity and stable catalytic activity with high methanol conversion. These catalytic performances result clearly from the micro-meso-macroporous structure, which improves diffusion properties, favors access to the active surface and avoids secondary reactions (no hydrocarbon products were detected after 3 h of reaction).

  1. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology referred to as the Refractive Error Study in Children (RESC) were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  2. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  3. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized differently:
    Universal vs. individual
    Human-related vs. system-related
    Perceptive vs. cognitive errors
      1. Descriptive
      2. Interpretative
      3. Decision-related
    Perceptive errors
      1. False positive
      2. False negative (non-identification, erroneous identification)
    Cognitive errors
      Knowledge-based
      Psychological

  4. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20 % of reports. Fortunately, most of them are minor-degree errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation percentages rise in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  5. Hierarchically structured, nitrogen-doped carbon membranes

    KAUST Repository

    Wang, Hong

    2017-08-03

    The present invention is a structure, method of making and method of use for a novel macroscopic hierarchically structured, nitrogen-doped, nano-porous carbon membrane (HNDCM) with an asymmetric and hierarchical pore architecture that can be produced by a large-scale approach. The unique HNDCM holds great promise as a component in separation and advanced carbon devices because it could offer unconventional fluidic transport phenomena on the nanoscale. Overall, the invention set forth herein covers hierarchically structured, nitrogen-doped carbon membranes and methods of making and using such membranes.

  6. A Model for Slicing JAVA Programs Hierarchically

    Institute of Scientific and Technical Information of China (English)

    Bi-Xin Li; Xiao-Cong Fan; Jun Pang; Jian-Jun Zhao

    2004-01-01

    Program slicing can be effectively used to debug, test, analyze, understand and maintain object-oriented software. In this paper, a new slicing model is proposed to slice Java programs based on their inherent hierarchical feature. The main idea of hierarchical slicing is to slice programs in a stepwise way, from the package level, to the class level, the method level, and finally the statement level. The stepwise slicing algorithm and the related graph-reachability algorithms are presented, and the architecture of the Java program Analyzing Tool (JATO), based on the hierarchical slicing model, is provided; applications and a small case study are also discussed.
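    The stepwise idea — slice coarsely first, then restrict finer-grained analysis to what survived the coarser slice — can be sketched as plain graph reachability. This is a hypothetical toy (invented program elements and dependence edges; JATO's actual algorithms are richer):

```python
from collections import deque

# Toy dependence graphs at three granularities. Edges point from an
# element to the elements it depends on; a backward slice is the set of
# nodes reachable from the slicing criterion. (Hypothetical program.)
DEPS = {
    "package": {"app": ["util"], "util": []},
    "class":   {"app.Main": ["util.Text"], "util.Text": []},
    "method":  {"app.Main.run": ["util.Text.trim"], "util.Text.trim": []},
}

def reachable(graph, start):
    """Plain BFS reachability over a dependence graph."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def hierarchical_slice(criterion):
    """Slice stepwise: package level first, then restrict the class- and
    method-level graphs to elements inside the coarser slice."""
    pkg_slice = reachable(DEPS["package"], criterion["package"])
    cls_graph = {c: [d for d in ds if d.rsplit(".", 1)[0] in pkg_slice]
                 for c, ds in DEPS["class"].items()
                 if c.rsplit(".", 1)[0] in pkg_slice}
    cls_slice = reachable(cls_graph, criterion["class"])
    mth_graph = {m: [d for d in ds if d.rsplit(".", 1)[0] in cls_slice]
                 for m, ds in DEPS["method"].items()
                 if m.rsplit(".", 1)[0] in cls_slice}
    return reachable(mth_graph, criterion["method"])

slice_result = hierarchical_slice(
    {"package": "app", "class": "app.Main", "method": "app.Main.run"})
```

    The payoff of the stepwise order is that the expensive fine-grained graphs are only ever built for the parts of the program the coarse slice retained.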

  7. Hierarchical analysis of acceptable use policies

    Directory of Open Access Journals (Sweden)

    P. A. Laughton

    2008-01-01

    Full Text Available Acceptable use policies (AUPs) are vital tools for organizations to protect themselves and their employees from misuse of the computer facilities provided. A well-structured, thorough AUP is essential for any organization. It is impossible for an effective AUP to deal with every clause and remain readable. For this reason, some sections of an AUP carry more weight than others, denoting importance. The methodology used to develop the hierarchical analysis is a literature review, in which various sources were consulted. This hierarchical approach to AUP analysis attempts to highlight the important sections and clauses dealt with in an AUP. The emphasis of the hierarchical analysis is to prioritize the objectives of an AUP.

  8. Hierarchical modeling and analysis for spatial data

    CERN Document Server

    Banerjee, Sudipto; Gelfand, Alan E

    2003-01-01

    Among the many uses of hierarchical modeling, its application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and data analysis.

  9. Social Influence on Information Technology Adoption and Sustained Use in Healthcare: A Hierarchical Bayesian Learning Method Analysis

    Science.gov (United States)

    Hao, Haijing

    2013-01-01

    Information technology adoption and diffusion is currently a significant challenge in the healthcare delivery setting. This thesis includes three papers that explore social influence on information technology adoption and sustained use in the healthcare delivery environment using conventional regression models and novel hierarchical Bayesian…

  10. An accessible method for implementing hierarchical models with spatio-temporal abundance data

    Science.gov (United States)

    Ross, Beth E.; Hooten, Melvin B.; Koons, David N.

    2012-01-01

    A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many challenges must nevertheless be overcome to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the necessary quantitative background needed to implement them, or because of the computational demands of using Markov Chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.
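    One ingredient mentioned above — accounting for excess zeros in count data — can be illustrated in miniature. The case study uses INLA in R; the following is an unrelated minimal Python sketch that fits a zero-inflated Poisson to simulated counts by grid-search maximum likelihood (parameter names `true_pi0`, `true_lam` and the grid are our own assumptions):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)

# Simulated counts with excess zeros: with probability true_pi0 a site
# yields a structural zero, otherwise a Poisson(true_lam) count.
true_pi0, true_lam = 0.3, 4.0
n = 5000
structural = rng.random(n) < true_pi0
counts = np.where(structural, 0, rng.poisson(true_lam, size=n))
log_fact = np.array([lgamma(v + 1.0) for v in counts])  # log(y!) precomputed

def zip_loglik(pi0, lam):
    """Zero-inflated Poisson log-likelihood of `counts`."""
    ll_zero = np.log(pi0 + (1.0 - pi0) * np.exp(-lam))   # y = 0 cases
    ll_pos = np.log(1.0 - pi0) - lam + counts * np.log(lam) - log_fact
    return float(np.where(counts == 0, ll_zero, ll_pos).sum())

# Crude grid-search MLE; a real analysis would use EM, MCMC or INLA,
# and would add the spatial/temporal structure discussed in the abstract.
grid_pi = np.linspace(0.01, 0.9, 90)
grid_lam = np.linspace(0.5, 8.0, 76)
_, pi_hat, lam_hat = max((zip_loglik(p, l), p, l)
                         for p in grid_pi for l in grid_lam)
```

    A plain Poisson fit would be biased here because the structural zeros deflate the apparent mean; the mixture likelihood separates the two sources.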

  11. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Full Text Available Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the compression standards MPEG-4 Visual and H.264. A new algorithm has been offered based on the analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm combines the classic adaptive rood pattern search (ARPS) and hierarchical search MP (hierarchical search, or mean pyramid). All motion estimation algorithms have been implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were: speed, peak signal-to-noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at comparable error and deviation. The peak signal-to-noise ratio in different video sequences shows results both better and worse than those of known algorithms, so it requires further investigation. Practical Relevance. Application of this algorithm in MPEG-4 and H.264 codecs instead of the standard one can significantly reduce compression time. This feature makes it possible to recommend the algorithm for telecommunication systems for multimedia data storing, transmission and processing.
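    The coarse-to-fine idea behind such an algorithm can be sketched as follows. This is a simplified illustration, not the paper's HARPS: it combines a mean pyramid with a greedy rood (+) pattern refinement, and omits ARPS's adaptive, prediction-based initial pattern. All names are ours.

```python
import numpy as np

def mean_pyramid(img, levels):
    """Mean pyramid: each level halves resolution by 2x2 averaging."""
    pyr = [img]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        pyr.append(a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def sad(ref, cur, y, x, dy, dx, bs):
    """Sum of absolute differences for one candidate displacement."""
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
        return np.inf
    return float(np.abs(cur[y:y + bs, x:x + bs] - ref[yy:yy + bs, xx:xx + bs]).sum())

def hier_rood_search(ref, cur, y, x, bs=8, levels=3):
    """Coarse-to-fine search: estimate the vector on the coarsest level,
    then double it and refine with a small rood (+) pattern per level."""
    rp, cp = mean_pyramid(ref, levels), mean_pyramid(cur, levels)
    dy = dx = 0
    for lvl in range(levels - 1, -1, -1):
        if lvl < levels - 1:
            dy, dx = dy * 2, dx * 2           # project to the finer level
        s = 2 ** lvl
        by, bx, b = y // s, x // s, max(bs // s, 2)
        improved = True
        while improved:                       # greedy rood-pattern descent
            improved = False
            best = sad(rp[lvl], cp[lvl], by, bx, dy, dx, b)
            for ddy, ddx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                c = sad(rp[lvl], cp[lvl], by, bx, dy + ddy, dx + ddx, b)
                if c < best:
                    best, dy, dx, improved = c, dy + ddy, dx + ddx, True
    return dy, dx

# Synthetic pair: the current frame is the reference shifted by (5, -3),
# so the block's true motion vector is (-5, 3).
yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / 200.0)
cur = np.roll(ref, shift=(5, -3), axis=(0, 1))
mv = hier_rood_search(ref, cur, y=24, x=24)
```

    The speed advantage comes from the pyramid: most of the search happens on images a quarter and a sixteenth of the original size, and each finer level only refines locally.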

  12. Simulation and Measurement of the Transmission Distortions of the Digital Television DVB-T/H Part 2: Hierarchical Modulation Performance

    Directory of Open Access Journals (Sweden)

    R. Stukavec

    2010-09-01

    Full Text Available The paper deals with the second part of the results of the Czech Science Foundation research project aimed at the simulation and measurement of the transmission distortions of digital terrestrial television according to the DVB-T/H standards. In this part, the hierarchical modulation performance characteristics and their simulation and laboratory measurement are presented. The paper deals with the hierarchically oriented COFDM modulator for digital terrestrial television transmission under the DVB-T/H standards and the possible utilization of this technique in real broadcasting scenarios – fixed, portable and mobile digital TV, all in one TV channel. The impact of hierarchical modulation on the Modulation Error Rate from I/Q constellations and on the Bit Error Rate before and after Viterbi decoding in DVB-T/H signal decoding is evaluated and discussed.

  13. Image meshing via hierarchical optimization

    Institute of Scientific and Technical Information of China (English)

    Hao XIE; Ruo-feng TONG

    2016-01-01

    Vector graphic, as a kind of geometric representation of raster images, has many advantages, e.g., definition independence and editing facility. A popular way to convert raster images into vector graphics is image meshing, the aim of which is to find a mesh to represent an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm solving the problem from coarser levels to finer ones, providing initialization for each level with its coarser ascent. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes the problem much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly. Then an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.

  15. Hierarchical prediction and context adaptive coding for lossless color image compression.

    Science.gov (United States)

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
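    Why a hierarchical scan can use a *lower* neighbour is worth making concrete. The following is our own simplification, not the paper's exact scheme or context model: even rows are coded first with left-neighbour prediction, so by the time an odd row is decoded, the even rows above and below it are already available and their average can serve as the predictor. The transform is exactly reversible:

```python
import numpy as np

def hier_predict(img):
    """Hierarchical prediction sketch: even rows are coded first with
    left-neighbour prediction; odd rows are then predicted from the
    average of the even rows directly above and below, both of which
    the decoder already has. Returns the two residual planes."""
    img = img.astype(np.int64)
    even, odd = img[0::2].copy(), img[1::2]
    res_even = even.copy()
    res_even[:, 1:] -= even[:, :-1]                 # left-neighbour residual
    n_odd = odd.shape[0]
    upper, lower = even[:n_odd], even[1:n_odd + 1]
    if lower.shape[0] < n_odd:                      # last odd row: no row below
        lower = np.vstack([lower, upper[-1:]])
    res_odd = odd - (upper + lower) // 2
    return res_even, res_odd

def hier_reconstruct(res_even, res_odd, shape):
    even = np.cumsum(res_even, axis=1)              # invert left prediction
    n_odd = res_odd.shape[0]
    upper, lower = even[:n_odd], even[1:n_odd + 1]
    if lower.shape[0] < n_odd:
        lower = np.vstack([lower, upper[-1:]])
    out = np.empty(shape, dtype=np.int64)
    out[0::2], out[1::2] = even, res_odd + (upper + lower) // 2
    return out

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(37, 21))
res_even, res_odd = hier_predict(img)
restored = hier_reconstruct(res_even, res_odd, img.shape)
```

    On natural images the two-sided predictor concentrates the odd-row residuals near zero, which is what the per-context arithmetic coder then exploits; on the random test image above only the reversibility matters.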

  16. Music Emotion Detection Using Hierarchical Sparse Kernel Machines

    Directory of Open Access Journals (Sweden)

    Yu-Hao Chin

    2014-01-01

    Full Text Available For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines, with which we intend to verify whether a music clip possesses the happiness emotion or not. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being a significance value of an emotion. The significance values of eight main emotional classes are utilized in this paper. To calculate the significance value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated, and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a single 2-class relevance vector machine (RVM) with happiness as the target side and the other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system performs well in verifying whether a music clip reveals the happiness emotion.

  17. Cobalt silicate hierarchical hollow spheres for lithium-ion batteries

    Science.gov (United States)

    Yang, Jun; Guo, Yuanyuan; Zhang, Yufei; Sun, Chencheng; Yan, Qingyu; Dong, Xiaochen

    2016-09-01

    In this paper, the synthesis of novel cobalt silicate hierarchical hollow spheres via a facile hydrothermal method is presented. With its unique hollow structure, the Co2SiO4 provides a large surface area, which can shorten the lithium-ion diffusion length and effectively accommodate the volumetric variation during the lithiation/de-lithiation process. Serving as an anode material in lithium-ion battery applications, the Co2SiO4 electrode demonstrates a high reversible specific capacity (first-cycle charge capacity of 948.6 mAh g-1 at 100 mA g-1), good cycling durability (specific capacity of 791.4 mAh g-1 after 100 cycles at 100 mA g-1), and good rate capability (specific capacity of 349.4 mAh g-1 at 10 A g-1). The results indicate that the cobalt silicate hierarchical hollow spheres hold potential for application in energy storage electrodes.

  18. MnO2-modified hierarchical graphene fiber electrochemical supercapacitor

    Science.gov (United States)

    Chen, Qing; Meng, Yuning; Hu, Chuangang; Zhao, Yang; Shao, Huibo; Chen, Nan; Qu, Liangti

    2014-02-01

    A novel hybrid fiber, consisting of MnO2-modified graphene sheets on a graphene fiber, has been fabricated by direct deposition of MnO2 onto the graphene network surrounding the graphene fiber (MnO2/G/GF). In this hierarchical structure, the graphene fiber with a sheath of 3D graphene network is coated with MnO2 nanoflowers. The 3D graphene on the graphene fiber (G/GF) serves as a highly conductive backbone with a high surface area for the deposition of nanostructured MnO2, which provides high accessibility for electrolyte ions and shortened diffusion paths. An all-solid-state flexible supercapacitor based on the MnO2/G/GF hybrid fiber structure has been developed on the basis of the intrinsic mechanical flexibility of the GF and the unique hierarchical structure. By combining the electric double-layer capacitance of the graphene network with the pseudocapacitance of the MnO2 nanostructures, the all-solid-state fiber supercapacitor shows much enhanced electrochemical capacitive behavior with robust tolerance to mechanical deformation, promising for being woven into textiles for wearable electronics.

  19. Hereditary Diffuse Gastric Cancer

    Science.gov (United States)

    Approved by the Cancer.Net Editorial Board, 11/2015. What is hereditary diffuse gastric cancer? Hereditary diffuse gastric cancer (HDGC) is an inherited ...

  20. An Automatic Hierarchical Delay Analysis Tool

    Institute of Scientific and Technical Information of China (English)

    Farid Mheir-El-Saadi; Bozena Kaminska

    1994-01-01

    The performance analysis of VLSI integrated circuits (ICs) with flat tools is slow and sometimes even impossible to complete. Some hierarchical tools have been developed to speed up the analysis of these large ICs. However, these hierarchical tools suffer from poor interaction with the CAD database and poorly automated operations. We introduce a general hierarchical framework for performance analysis to solve these problems. Circuit analysis is automatic under the proposed framework. Information that has been automatically abstracted in the hierarchy is kept in database properties along with the topological information. A limited software implementation of the framework, PREDICT, has also been developed to analyze delay performance. Experimental results show that hierarchical analysis CPU time and memory requirements are low if heuristics are used during the abstraction process.

  1. Packaging glass with hierarchically nanostructured surface

    KAUST Repository

    He, Jr-Hau

    2017-08-03

    An optical device includes an active region and packaging glass located on top of the active region. A top surface of the packaging glass includes hierarchical nanostructures comprised of honeycombed nanowalls (HNWs) and nanorod (NR) structures extending from the HNWs.

  2. Generation of hierarchically correlated multivariate symbolic sequences

    CERN Document Server

    Tumminello, Mi; Mantegna, R N

    2008-01-01

    We introduce an algorithm to generate multivariate series of symbols from a finite alphabet with a given hierarchical structure of similarities. The target hierarchical structure of similarities is arbitrary, for instance the one obtained by some hierarchical clustering procedure as applied to an empirical matrix of Hamming distances. The algorithm can be interpreted as the finite alphabet equivalent of the recently introduced hierarchically nested factor model (M. Tumminello et al. EPL 78 (3) 30006 (2007)). The algorithm is based on a generating mechanism that is different from the one used in the mutation rate approach. We apply the proposed methodology for investigating the relationship between the bootstrap value associated with a node of a phylogeny and the probability of finding that node in the true phylogeny.
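    The generating mechanism — a hierarchy of factors from which each series copies its symbols — can be sketched in a two-level toy version. The structure (one root factor, per-group factors, idiosyncratic noise) follows the hierarchically nested factor idea the abstract cites, but the probabilities `p_group`/`p_root` and all names are our own assumptions:

```python
import numpy as np

def gen_hier_symbols(n_groups=3, per_group=4, length=2000,
                     alphabet=4, p_group=0.5, p_root=0.2, seed=3):
    """Two-level toy hierarchy of symbolic factors: at each time step a
    series copies its group factor with prob p_group, the root factor
    with prob p_root, and otherwise draws an independent symbol, so
    same-group series agree more often than cross-group series."""
    rng = np.random.default_rng(seed)
    root = rng.integers(0, alphabet, size=length)
    groups = rng.integers(0, alphabet, size=(n_groups, length))
    series = np.empty((n_groups * per_group, length), dtype=int)
    labels = []
    for g in range(n_groups):
        for _ in range(per_group):
            u = rng.random(length)
            noise = rng.integers(0, alphabet, size=length)
            series[len(labels)] = np.where(
                u < p_group, groups[g],
                np.where(u < p_group + p_root, root, noise))
            labels.append(g)
    return series, np.array(labels)

series, labels = gen_hier_symbols()

def mean_similarity(pairs):
    """Average fraction of agreeing positions (1 - normalized Hamming)."""
    return float(np.mean([np.mean(series[i] == series[j]) for i, j in pairs]))

n = series.shape[0]
all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
within = mean_similarity([p for p in all_pairs if labels[p[0]] == labels[p[1]]])
between = mean_similarity([p for p in all_pairs if labels[p[0]] != labels[p[1]]])
```

    Feeding the resulting Hamming-distance matrix to a hierarchical clustering procedure should recover the planted group structure, which is the kind of validation experiment the abstract describes for phylogenies.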

  3. Hierarchical modularity in human brain functional networks

    CERN Document Server

    Meunier, D; Fornito, A; Ersche, K D; Bullmore, E T; 10.3389/neuro.11.037.2009

    2010-01-01

    The idea that complex systems have a hierarchical modular organization originates in the early 1960s and has recently attracted fresh support from quantitative studies of large scale, real-life networks. Here we investigate the hierarchical modular (or "modules-within-modules") decomposition of human brain functional networks, measured using functional magnetic resonance imaging (fMRI) in 18 healthy volunteers under no-task or resting conditions. We used a customized template to extract networks with more than 1800 regional nodes, and we applied a fast algorithm to identify nested modular structure at several hierarchical levels. We used mutual information, 0 < I < 1, to estimate the similarity of community structure of networks in different subjects, and to identify the individual network that is most representative of the group. Results show that human brain functional networks have a hierarchical modular organization with a fair degree of similarity between subjects, I=0.63. The largest 5 modules at ...

  4. HIERARCHICAL ORGANIZATION OF INFORMATION, IN RELATIONAL DATABASES

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2008-05-01

    Full Text Available In this paper I present different types of representation of hierarchical information inside a relational database. I will also compare them to find the best organization for specific scenarios.
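    One common representation — the adjacency list, in which each row stores its parent's key and a recursive query walks the tree — can be shown with the standard library's sqlite3. The schema and data are illustrative; the paper's specific representations are not reproduced here:

```python
import sqlite3

# Adjacency-list representation of a hierarchy in a relational table:
# each row stores its parent's id; a recursive CTE walks the tree.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES category(id),
        name TEXT NOT NULL
    );
    INSERT INTO category VALUES
        (1, NULL, 'root'),
        (2, 1, 'books'),
        (3, 1, 'music'),
        (4, 2, 'fiction'),
        (5, 4, 'sci-fi');
""")

rows = conn.execute("""
    WITH RECURSIVE tree(id, name, depth) AS (
        SELECT id, name, 0 FROM category WHERE parent_id IS NULL
        UNION ALL
        SELECT c.id, c.name, t.depth + 1
        FROM category c JOIN tree t ON c.parent_id = t.id
    )
    SELECT name, depth FROM tree ORDER BY depth, name
""").fetchall()
```

    Alternatives such as nested sets or materialized paths trade cheap subtree reads for more expensive inserts, which is the kind of scenario-dependent comparison the paper is about.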

  5. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    The hierarchical network problem is the problem of finding the least-cost network, with nodes divided into groups, edges connecting nodes within each group, and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks, where hierarchies exist. Hierarchical networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
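    The simulated-annealing approach can be illustrated on a toy two-level design problem. This is our own simplified cost model, not the paper's (hub opening cost, access links from every node to its hub, and a complete backbone among hubs; no demand routing or construction sub-algorithm):

```python
import math, random

# Toy two-level network: every node connects to a hub (access level) and
# the hubs form a complete backbone (upper level). A solution is
# 'assign', mapping each node to its hub; the hub set is the set of
# distinct targets. All parameters below are illustrative.
random.seed(4)
N, HUB_COST = 12, 3.0
pts = [(random.random(), random.random()) for _ in range(N)]

def dist(a, b):
    return math.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])

def cost(assign):
    hubs = sorted(set(assign))
    access = sum(dist(i, assign[i]) for i in range(N))
    backbone = sum(dist(h, k) for h in hubs for k in hubs if h < k)
    return HUB_COST * len(hubs) + access + backbone

def anneal(steps=20000, t0=1.0, alpha=0.9995):
    assign = [0] * N                       # start: node 0 is the only hub
    cur = best = cost(assign)
    best_assign, t = assign[:], t0
    for _ in range(steps):
        cand = assign[:]
        cand[random.randrange(N)] = random.randrange(N)   # move one node
        c = cost(cand)
        # Metropolis rule: always accept improvements, sometimes accept
        # worse solutions while the temperature t is still high.
        if c < cur or random.random() < math.exp((cur - c) / t):
            assign, cur = cand, c
            if c < best:
                best, best_assign = c, cand[:]
        t *= alpha                         # geometric cooling schedule
    return best, best_assign

initial_cost = cost([0] * N)
best_cost, best_assign = anneal()
```

    The occasional uphill moves are what let the search escape poor hub configurations before the temperature drops and the solution freezes.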

  6. When to Use Hierarchical Linear Modeling

    National Research Council Canada - National Science Library

    Veronika Huta

    2014-01-01

    Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis...

  7. An introduction to hierarchical linear modeling

    National Research Council Canada - National Science Library

    Woltman, Heather; Feldstain, Andrea; MacKay, J. Christine; Rocchi, Meredith

    2012-01-01

    This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis...

  8. Conservation Laws in the Hierarchical Model

    NARCIS (Netherlands)

    Beijeren, H. van; Gallavotti, G.; Knops, H.

    1974-01-01

    An exposition of the renormalization-group equations for the hierarchical model is given. Attention is drawn to some properties of the spin distribution functions which are conserved under the action of the renormalization group.

  9. Hierarchical DSE for multi-ASIP platforms

    DEFF Research Database (Denmark)

    Micconi, Laura; Corvino, Rosilde; Gangadharan, Deepak;

    2013-01-01

    This work proposes a hierarchical Design Space Exploration (DSE) for the design of multi-processor platforms targeted to specific applications with strict timing and area constraints. In particular, it considers platforms integrating multiple Application Specific Instruction Set Processors (ASIPs)...

  10. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    KAUST Repository

    Bryant, C. M.

    2015-01-01

    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.

  11. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent errors in inpatients’ medical prescriptions. Methods: A survey of prescription errors was performed on inpatients’ medical prescriptions from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, involving the healthcare team as a whole. Among the 16 types of errors detected, the most frequent were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); wrong transcriptions to the information system (45 cases, 12.4%); duplicate drugs (30 cases, 8.3%); doses higher than recommended (24 cases, 6.6%); and prescriptions indicating an allergy without specifying it (29 cases, 8.0%). Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his or her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analysis before the preparation, dispensing and administration of drugs to inpatients. This study showed that the pharmacist improves inpatient safety and the success of prescribed therapy.

  12. Hierarchical organization versus self-organization

    OpenAIRE

    Busseniers, Evo

    2014-01-01

In this paper we try to define the difference between hierarchical organization and self-organization. Organization is defined as a structure with a function, so we can define the difference between hierarchical organization and self-organization both on the structure and on the function. In the next two chapters these two definitions are given. For the structure we will use some existing definitions in graph theory; for the function we will use existing theory on (self-)organization. In the t...

  13. Hierarchical decision making for flood risk reduction

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    2013-01-01

… In current practice, structures are often optimized individually without considering the benefits of having a hierarchy of protection structures. It is here argued that the joint consideration of hierarchically integrated protection structures is beneficial. A hierarchical decision model is utilized to analyze and compare the benefit of large upstream protection structures and local downstream protection structures in regard to epistemic uncertainty parameters. Results suggest that epistemic uncertainty influences the outcome of the decision model and that, depending on the magnitude of epistemic uncertainty…

  14. Hierarchical self-organization of tectonic plates

    OpenAIRE

    2010-01-01

The Earth's surface is subdivided into eight large tectonic plates and many smaller ones. We reconstruct the plate tessellation history and demonstrate that both large and small plates display two distinct hierarchical patterns, described by different power-law size-relationships. While small plates display little organisational change through time, the structure of the large plates oscillates between minimum and maximum hierarchical tessellations. The organization of large plates rapidly chan...

  15. Angelic Hierarchical Planning: Optimal and Online Algorithms

    Science.gov (United States)

    2008-12-06

Bhaskara Marthi, Stuart J. Russell, and Jason Wolfe (Electrical Engineering and Computer …). Recoverable excerpts from the record: “…restrict our attention to plans in I∗(Act, s0). Definition 2 (Parr and Russell, 1998): A plan ah∗ is hierarchically optimal iff ah∗ = argmin a∈I∗(Act,s0):T…”. Cited references include Murdock, Dan Wu, and Fusun Yaman, “SHOP2: An HTN planning system,” JAIR, 20:379–404, 2003, and Ronald Parr and Stuart Russell, “Reinforcement Learning with…”.

  16. Hierarchical Needs, Income Comparisons and Happiness Levels

    OpenAIRE

    Drakopoulos, Stavros

    2011-01-01

    The cornerstone of the hierarchical approach is that there are some basic human needs which must be satisfied before non-basic needs come into the picture. The hierarchical structure of needs implies that the satisfaction of primary needs provides substantial increases to individual happiness compared to the subsequent satisfaction of secondary needs. This idea can be combined with the concept of comparison income which means that individuals compare rewards with individuals with similar char...

  17. A Bisimulation-based Hierarchical Framework for Software Development Models

    Directory of Open Access Journals (Sweden)

    Ping Liang

    2013-08-01

Full Text Available Software development models have matured since the emergence of software engineering: the waterfall model, V-model, spiral model, etc. To ensure the successful implementation of these models, various metrics for software products and development processes have been developed alongside them, such as CMMI, software metrics, and process re-engineering. The quality of software products and processes can thus be kept as consistent as possible, and the abstract integrity of a software product can be achieved. In reality, however, the maintenance cost of software products remains high, and even rises as software evolves, owing to inconsistencies introduced by changes and to inherent errors in the products. It is better to build a robust software product that can sustain as many changes as possible. Therefore, this paper proposes a process-algebra-based hierarchical framework that extracts an abstract equivalent of the deliverable at the end of each phase of a software product from its software development models. The process algebra equivalent of the deliverable is developed hierarchically along with the development of the software product, applying bisimulation to test-run the deliverables of phases so as to guarantee the consistency and integrity of the software development and product in a simple mathematical way. An algorithm is also given to carry out the assessment of the phase deliverable in process algebra.

  18. Anterior insula coordinates hierarchical processing of tactile mismatch responses

    Science.gov (United States)

    Allen, Micah; Fardo, Francesca; Dietz, Martin J.; Hillebrandt, Hauke; Friston, Karl J.; Rees, Geraint; Roepstorff, Andreas

    2016-01-01

    The body underlies our sense of self, emotion, and agency. Signals arising from the skin convey warmth, social touch, and the physical characteristics of external stimuli. Surprising or unexpected tactile sensations can herald events of motivational salience, including imminent threats (e.g., an insect bite) and hedonic rewards (e.g., a caressing touch). Awareness of such events is thought to depend upon the hierarchical integration of body-related mismatch responses by the anterior insula. To investigate this possibility, we measured brain activity using functional magnetic resonance imaging, while healthy participants performed a roving tactile oddball task. Mass-univariate analysis demonstrated robust activations in limbic, somatosensory, and prefrontal cortical areas previously implicated in tactile deviancy, body awareness, and cognitive control. Dynamic Causal Modelling revealed that unexpected stimuli increased the strength of forward connections along a caudal to rostral hierarchy—projecting from thalamic and somatosensory regions towards insula, cingulate and prefrontal cortices. Within this ascending flow of sensory information, the AIC was the only region to show increased backwards connectivity to the somatosensory cortex, augmenting a reciprocal exchange of neuronal signals. Further, participants who rated stimulus changes as easier to detect showed stronger modulation of descending PFC to AIC connections by deviance. These results suggest that the AIC coordinates hierarchical processing of tactile prediction error. They are interpreted in support of an embodied predictive coding model where AIC mediated body awareness is involved in anchoring a global neuronal workspace. PMID:26584870

  19. Evaluating Hierarchical Structure in Music Annotations.

    Science.gov (United States)

    McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo

    2017-01-01

Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  20. Evaluating Hierarchical Structure in Music Annotations

    Directory of Open Access Journals (Sweden)

    Brian McFee

    2017-08-01

Full Text Available Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
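The holistic comparison of multilevel annotations described above can be illustrated with a toy pairwise scheme: for every pair of frames, record the deepest level at which they still share a segment label (their "meet"), then count how often two annotations order frame pairs the same way. This is a simplified sketch in the spirit of such a metric, not the definition from the article; the function names and the tiny example hierarchy are invented.

```python
# A hierarchy is a list of flat labelings, coarse to fine, one label per frame.
def meet_depth(hierarchy, i, j):
    """Deepest level at which frames i and j share a segment label (0 if none)."""
    depth = 0
    for level, labels in enumerate(hierarchy, start=1):
        if labels[i] == labels[j]:
            depth = level
    return depth

def pairwise_agreement(h1, h2):
    """Fraction of frame-pair orderings (by meet depth) on which h1 and h2 agree."""
    n = len(h1[0])
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    agree = total = 0
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            d1 = meet_depth(h1, *pairs[a]) - meet_depth(h1, *pairs[b])
            d2 = meet_depth(h2, *pairs[a]) - meet_depth(h2, *pairs[b])
            if d1 * d2 != 0:          # only pairs both annotations rank strictly
                total += 1
                agree += (d1 > 0) == (d2 > 0)
    return agree / total if total else 1.0

ref = [["A", "A", "B", "B"],          # coarse segmentation
       ["a", "b", "c", "c"]]          # fine segmentation
print(pairwise_agreement(ref, ref))   # identical annotations agree perfectly
```

Because agreement is computed over relative orderings of meet depths rather than over labels, two annotators using different label vocabularies can still be compared.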

  1. Hierarchical Nanoceramics for Industrial Process Sensors

    Energy Technology Data Exchange (ETDEWEB)

Ruud, James, A.; Brosnan, Kristen, H.; Striker, Todd; Ramaswamy, Vidya; Aceto, Steven, C.; Gao, Yan; Willson, Patrick, D.; Manoharan, Mohan; Armstrong, Eric, N.; Wachsman, Eric, D.; Kao, Chi-Chang

    2011-07-15

    This project developed a robust, tunable, hierarchical nanoceramics materials platform for industrial process sensors in harsh-environments. Control of material structure at multiple length scales from nano to macro increased the sensing response of the materials to combustion gases. These materials operated at relatively high temperatures, enabling detection close to the source of combustion. It is anticipated that these materials can form the basis for a new class of sensors enabling widespread use of efficient combustion processes with closed loop feedback control in the energy-intensive industries. The first phase of the project focused on materials selection and process development, leading to hierarchical nanoceramics that were evaluated for sensing performance. The second phase focused on optimizing the materials processes and microstructures, followed by validation of performance of a prototype sensor in a laboratory combustion environment. The objectives of this project were achieved by: (1) synthesizing and optimizing hierarchical nanostructures; (2) synthesizing and optimizing sensing nanomaterials; (3) integrating sensing functionality into hierarchical nanostructures; (4) demonstrating material performance in a sensing element; and (5) validating material performance in a simulated service environment. The project developed hierarchical nanoceramic electrodes for mixed potential zirconia gas sensors with increased surface area and demonstrated tailored electrocatalytic activity operable at high temperatures enabling detection of products of combustion such as NOx close to the source of combustion. Methods were developed for synthesis of hierarchical nanostructures with high, stable surface area, integrated catalytic functionality within the structures for gas sensing, and demonstrated materials performance in harsh lab and combustion gas environments.

  2. Three-dimensional h-adaptivity for the multigroup neutron diffusion equations

    KAUST Repository

    Wang, Yaqi

    2009-04-01

Adaptive mesh refinement (AMR) has been shown to allow solving partial differential equations to significantly higher accuracy at reduced numerical cost. This paper presents a state-of-the-art AMR algorithm applied to the multigroup neutron diffusion equation for reactor applications. In order to follow the physics closely, energy group-dependent meshes are employed. We present a novel algorithm for assembling the terms coupling shape functions from different meshes and show how it can be made efficient by deriving all meshes from a common coarse mesh by hierarchic refinement. Our methods are formulated using conforming finite elements of any order, for any number of energy groups. The spatial error distribution is assessed with a generalization of an error estimator originally derived for the Poisson equation. Our implementation of this algorithm is based on the widely used Open Source adaptive finite element library deal.II and is made available as part of this library's extensively documented tutorial. We illustrate our methods with results for 2-D and 3-D reactor simulations using 2 and 7 energy groups, and using conforming finite elements of polynomial degree up to 6. © 2008 Elsevier Ltd. All rights reserved.
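The refine-by-error-indicator loop that such AMR algorithms share can be sketched in one dimension. The indicator, the target function, and the refinement fraction below are illustrative stand-ins, not the estimator or physics from the paper; the sketch only shows how cells flagged by a local error indicator are hierarchically halved.

```python
import math

def error_indicator(f, a, b):
    """Crude per-cell indicator: deviation of f from its linear interpolant."""
    mid = 0.5 * (a + b)
    return abs(f(mid) - 0.5 * (f(a) + f(b))) * (b - a)

def refine(f, cells, fraction=0.3):
    """Split every cell whose indicator exceeds `fraction` of the maximum."""
    errs = [error_indicator(f, a, b) for a, b in cells]
    thresh = fraction * max(errs)
    new_cells = []
    for (a, b), e in zip(cells, errs):
        if e > thresh:
            m = 0.5 * (a + b)
            new_cells.extend([(a, m), (m, b)])  # hierarchic refinement: halve the cell
        else:
            new_cells.append((a, b))
    return new_cells

f = lambda x: math.exp(-50.0 * (x - 0.5) ** 2)     # sharply peaked model solution
cells = [(i / 8, (i + 1) / 8) for i in range(8)]   # common coarse mesh
for _ in range(4):
    cells = refine(f, cells)
# cells are now small near the peak at x = 0.5 and coarse elsewhere
```

The same loop, with an adjoint- or Kelly-type estimator in place of `error_indicator`, is the essence of h-adaptivity: resolution is spent only where the estimated error is large.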

  3. Spontaneous and Hierarchical Segmentation of Non-functional Events

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard

    2012-01-01

The dissertation, Spontaneous and Hierarchical Segmentation of Non-functional Events (SHSNE henceforth), explores and tests human perception of so-called non-functional events (i.e., events or action sequences that lack a necessary link between sub-actions and sequence goal), which typically … ritual behavior. Part 1 concludes with five primary theoretical hypotheses: I) non-functional events will increase the human event segmentation rate; II) transitions between events will increase the cognitive prediction-error signal independent of event type, but this signal will be chronically high …, consisting of four experiments in total, and four computer simulations. The first set of experiments shows that the event segmentation rate increases for human participants who observe non-functional events compared to functional events. Furthermore, it appears that context information does not have…

  4. A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model

    Directory of Open Access Journals (Sweden)

    Pedro Donoso

    2011-08-01

    Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
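The entropy-logit connection the abstract builds on can be demonstrated numerically: the discrete distribution that maximizes entropy subject to a mean-utility constraint has exactly the logit form p_i ∝ exp(λu_i), and the Lagrange multiplier λ can be recovered by bisection. This is a toy illustration of that equivalence, not the article's aggregate hierarchical estimator; the utilities and target mean are made up.

```python
import math

def logit_probs(u, lam):
    """Maximum-entropy distribution for multiplier lam: p_i ∝ exp(lam * u_i)."""
    w = [math.exp(lam * ui) for ui in u]
    s = sum(w)
    return [wi / s for wi in w]

def solve_lambda(u, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Bisect for the multiplier whose logit distribution matches the mean utility."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        mean = sum(p * ui for p, ui in zip(logit_probs(u, mid), u))
        if mean < target_mean:   # mean utility is increasing in lam
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

u = [1.0, 2.0, 4.0]          # alternative utilities (illustrative)
lam = solve_lambda(u, 3.0)   # match an observed mean utility of 3.0
p = logit_probs(u, lam)      # logit choice probabilities reproducing that mean
```

The multiplier plays the role of a parameter estimator here, which is the sense in which the article reads Lagrange multipliers of the entropy problem as model parameters.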

  5. Connectionist and diffusion models of reaction time.

    Science.gov (United States)

    Ratcliff, R; Van Zandt, T; McKoon, G

    1999-04-01

    Two connectionist frameworks, GRAIN (J. L. McClelland, 1993) and brain-state-in-a-box (J. A. Anderson, 1991), and R. Ratcliff's (1978) diffusion model were evaluated using data from a signal detection task. Dependent variables included response probabilities, reaction times for correct and error responses, and shapes of reaction-time distributions. The diffusion model accounted for all aspects of the data, including error reaction times that had previously been a problem for all response-time models. The connectionist models accounted for many aspects of the data adequately, but each failed to a greater or lesser degree in important ways except for one model that was similar to the diffusion model. The findings advance the development of the diffusion model and show that the long tradition of reaction-time research and theory is a fertile domain for development and testing of connectionist assumptions about how decisions are generated over time.
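A two-boundary diffusion process of the kind Ratcliff's model formalizes can be simulated as a discretized random walk: evidence accumulates with drift until it crosses the upper ("correct") or lower ("error") boundary, yielding both choice probabilities and response-time distributions. The parameter values below are illustrative, not fits from the article.

```python
import random

def simulate_trial(drift=0.2, boundary=1.0, dt=0.001, sigma=1.0, rng=random):
    """One decision: accumulate noisy evidence until a boundary is crossed.

    Returns (response, decision_time); response is True at the upper boundary.
    """
    x, t = 0.0, 0.0
    sd = sigma * dt ** 0.5           # per-step noise for the Euler discretization
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return x >= boundary, t

rng = random.Random(0)               # seeded for reproducibility
trials = [simulate_trial(rng=rng) for _ in range(500)]
p_correct = sum(r for r, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

Histograms of the simulated times separated by response reproduce the model's signature right-skewed distributions for both correct and error responses, which is the aspect of the data the abstract reports the diffusion model captured.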

  6. HIERARCHICAL OPTIMIZATION MODEL ON GEONETWORK

    Directory of Open Access Journals (Sweden)

    Z. Zha

    2012-07-01

Full Text Available In the existing construction experience of Spatial Data Infrastructure (SDI), GeoNetwork, as an integrated geographical-information solution, is an effective way of building an SDI. While GeoNetwork serves as an internet application, several shortcomings are exposed. The first is that the time consumed by data loading increases considerably with the growth of the metadata count; consequently, the efficiency of the query and search service becomes lower. Another problem is that stability and robustness are both undermined by the huge amount of metadata. The final flaw is that the requirements of multi-user concurrent access over massive data are not effectively satisfied on the internet. A novel approach, the Hierarchical Optimization Model (HOM), is presented in this paper to address GeoNetwork's inability to work with massive data. HOM optimizes GeoNetwork in several respects: internal procedures, external deployment strategies, etc. The model builds an efficient index for accessing huge volumes of metadata and supporting concurrent processes. In this way, services based on GeoNetwork can remain stable while serving massive metadata. As an experiment, we deployed more than 30 GeoNetwork nodes and harvested nearly 1.1 million metadata records. A comparison between the HOM-improved software and the original shows that the model makes indexing and retrieval quicker and keeps their speed stable as the metadata volume increases. The system services also remained stable under multi-user concurrent access; the experiment achieved good results and proved that our optimization model is efficient and reliable.

  7. Diffusion coefficient in photon diffusion theory

    NARCIS (Netherlands)

    Graaff, R; Ten Bosch, JJ

    2000-01-01

    The choice of the diffusion coefficient to be used in photon diffusion theory has been a subject of discussion in recent publications on tissue optics. We compared several diffusion coefficients with the apparent diffusion coefficient from the more fundamental transport theory, D-app. Application to

  9. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes, and their neural correlates, associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  10. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  11. HIERARCHICAL DEEP LEARNING ARCHITECTURE FOR 10K OBJECTS CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Atul Laxman Katole

    2015-08-01

Full Text Available The evolution of visual object recognition architectures based on the Convolutional Neural Network and Convolutional Deep Belief Network paradigms has revolutionized artificial vision science. These architectures extract and learn real-world hierarchical visual features using supervised and unsupervised learning approaches, respectively. Neither approach can yet realistically scale up to recognize a very large number of objects, as many as 10K. We propose a two-level hierarchical deep learning architecture, inspired by the divide-and-conquer principle, that decomposes the large-scale recognition architecture into root-level and leaf-level model architectures. Each of the root- and leaf-level models is trained exclusively to provide superior results than is possible with any one-level deep learning architecture prevalent today. The proposed architecture classifies objects in two steps. In the first step, the root-level model classifies the object into a high-level category. In the second step, the leaf-level recognition model for the recognized high-level category is selected among all the leaf models; this leaf-level model is presented with the same input object image and classifies it into a specific category. We also propose a blend of leaf-level models trained with either supervised or unsupervised learning approaches; unsupervised learning is suitable whenever labelled data is scarce for specific leaf-level models. The training of the leaf-level models is currently in progress: we have trained 25 of the total 47 leaf-level models so far, with a best-case top-5 error rate of 3.2% on the validation data set for particular leaf models. We also demonstrate that the validation error of the leaf-level models saturates towards the above-mentioned accuracy as the number of epochs is increased beyond sixty. The top-5 error rate for the entire two-level architecture needs to be computed in conjunction with
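The two-step root/leaf classification scheme can be sketched with stand-in "models". In the article each model is a trained deep network; here they are plain functions keyed by category, and the categories, features, and labels are invented purely to show the routing.

```python
def make_root_model():
    # Hypothetical coarse classifier: routes an input to a high-level category.
    return lambda x: "animal" if x["legs"] > 0 else "vehicle"

def make_leaf_models():
    # One specialist model per high-level category.
    return {
        "animal": lambda x: "dog" if x["legs"] == 4 else "bird",
        "vehicle": lambda x: "car" if x["wheels"] == 4 else "bike",
    }

def hierarchical_classify(x, root, leaves):
    coarse = root(x)                      # step 1: root-level category
    return coarse, leaves[coarse](x)      # step 2: category-specific leaf model

root, leaves = make_root_model(), make_leaf_models()
print(hierarchical_classify({"legs": 4, "wheels": 0}, root, leaves))  # ('animal', 'dog')
```

The design pay-off the abstract describes is that each leaf model only ever has to discriminate within its own category, so every individual model faces a much smaller problem than one flat 10K-way classifier would.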

  12. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

Full Text Available “Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human and that people make mistakes all the time. However, what counts is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  13. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correcting provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correcting, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in a Chinese language classroom, and it may also have a wider implication for other languages.

  14. Quantifying and reducing uncertainties in estimated soil CO2 fluxes with hierarchical data-model integration

    Science.gov (United States)

    Ogle, Kiona; Ryan, Edmund; Dijkstra, Feike A.; Pendall, Elise

    2016-12-01

    Nonsteady state chambers are often employed to measure soil CO2 fluxes. CO2 concentrations (C) in the headspace are sampled at different times (t), and fluxes (f) are calculated from regressions of C versus t based on a limited number of observations. Variability in the data can lead to poor fits and unreliable f estimates; groups with too few observations or poor fits are often discarded, resulting in "missing" f values. We solve these problems by fitting linear (steady state) and nonlinear (nonsteady state, diffusion based) models of C versus t, within a hierarchical Bayesian framework. Data are from the Prairie Heating and CO2 Enrichment study that manipulated atmospheric CO2, temperature, soil moisture, and vegetation. CO2 was collected from static chambers biweekly during five growing seasons, resulting in >12,000 samples and >3100 groups and associated fluxes. We compare f estimates based on nonhierarchical and hierarchical Bayesian (B versus HB) versions of the linear and diffusion-based (L versus D) models, resulting in four different models (BL, BD, HBL, and HBD). Three models fit the data exceptionally well (R2 ≥ 0.98), but the BD model was inferior (R2 = 0.87). The nonhierarchical models (BL and BD) produced highly uncertain f estimates (wide 95% credible intervals), whereas the hierarchical models (HBL and HBD) produced very precise estimates. Of the hierarchical versions, the linear model (HBL) underestimated f by 33% relative to the nonsteady state model (HBD). The hierarchical models offer improvements upon traditional nonhierarchical approaches to estimating f, and we provide example code for the models.
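The core estimation problem, fitting a slope (flux) to each chamber's C-versus-t samples and then borrowing strength across groups, can be sketched as follows. The fixed shrinkage weight is an illustrative stand-in for the full hierarchical Bayesian inference of the article, and the chamber data are invented.

```python
def ols_slope(ts, cs):
    """Least-squares slope of concentration C versus time t (the flux estimate f)."""
    n = len(ts)
    tbar, cbar = sum(ts) / n, sum(cs) / n
    sxx = sum((t - tbar) ** 2 for t in ts)
    sxy = sum((t - tbar) * (c - cbar) for t, c in zip(ts, cs))
    return sxy / sxx

def shrink(slopes, weight=0.5):
    """Partial pooling: pull each group slope toward the across-group mean."""
    grand = sum(slopes) / len(slopes)
    return [weight * s + (1 - weight) * grand for s in slopes]

groups = [  # (times in minutes, CO2 concentrations in ppm) per chamber
    ([0, 10, 20, 30], [400, 420, 441, 459]),
    ([0, 10, 20, 30], [400, 430, 455, 490]),
    ([0, 10, 20, 30], [400, 405, 450, 445]),   # noisy group
]
raw = [ols_slope(ts, cs) for ts, cs in groups]  # independent per-group fits
pooled = shrink(raw)                            # hierarchical-style estimates
```

In a real hierarchical model the shrinkage weight is not fixed but determined by the data (the ratio of within-group to between-group variance), which is how the HBL/HBD models in the study obtain precise flux estimates even for groups with few or noisy observations.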

  15. Hierarchical linear regression models for conditional quantiles

    Institute of Scientific and Technical Information of China (English)

    TIAN Maozai; CHEN Gemai

    2006-01-01

The quantile regression has several useful features and is therefore gradually developing into a comprehensive approach to the statistical analysis of linear and nonlinear response models, but it cannot deal effectively with data that have a hierarchical structure. In practice, the existence of such data hierarchies is neither accidental nor ignorable; it is a common phenomenon. To ignore this hierarchical data structure risks overlooking the importance of group effects, and may also render invalid many of the traditional statistical analysis techniques used for studying data relationships. On the other hand, hierarchical models take a hierarchical data structure into account and have many applications in statistics, ranging from overdispersion to constructing min-max estimators. However, hierarchical models are essentially mean regression; therefore, they cannot be used to characterize the entire conditional distribution of a dependent variable given high-dimensional covariates. Furthermore, the estimated coefficient vector (marginal effects) is sensitive to an outlier observation on the dependent variable. In this article, a new approach, based on the Gauss-Seidel iteration and taking full advantage of both quantile regression and hierarchical models, is developed. On the theoretical front, we also consider the asymptotic properties of the new method, obtaining simple conditions for n^{1/2}-convergence and asymptotic normality. We also illustrate the use of the technique with real educational data, which are hierarchical, and show how the results can be explained.
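The building block of quantile regression is the check (pinball) loss ρ_τ, whose minimizer over a constant is the τ-th sample quantile; this is what makes the fit robust to outliers on the dependent variable. A minimal illustration (data and quantile levels are made up, and the brute-force search over sample points stands in for the linear-programming or Gauss-Seidel machinery of a real fit):

```python
def check_loss(tau, residual):
    """Pinball loss: weight tau for positive residuals, (1 - tau) for negative."""
    return residual * (tau - (1.0 if residual < 0 else 0.0))

def quantile_fit(tau, ys):
    """Constant q minimizing total check loss; the minimum lies at a data point."""
    return min(ys, key=lambda q: sum(check_loss(tau, y - q) for y in ys))

ys = [1.0, 2.0, 3.0, 4.0, 100.0]      # note the outlier
median = quantile_fit(0.5, ys)        # robust to the outlier
q75 = quantile_fit(0.75, ys)          # upper conditional quantile
```

A mean-regression fit of the same data would be dragged far upward by the value 100.0, whereas the median fit is not, which is the contrast the article exploits when it replaces the mean regression inside hierarchical models with quantile regression.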

  16. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  17. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  18. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  19. Quantitative law of diffusion induced fracture

    Institute of Scientific and Technical Information of China (English)

    H-J Lei; H-L Wang; B Liu; C-A Wang

    2016-01-01

    Through dimensional analysis, an almost analytical model is developed for the maximum diffusion induced stress (DIS) and the critical temperature (or concentration) difference at which cracks begin to initiate during the diffusion process. It interestingly predicts that the spacing of diffusion-induced cracks is constant, independent of the thickness of the specimen and the temperature difference. These conclusions are validated by our thermal shock experiments on alumina plates. Furthermore, the proposed model can interpret the observed hierarchical crack patterns for high temperature-jump cases, and a three-stage relation between the residual strength and the temperature difference. The prediction for crack spacing can guide biomimetic thermal-shock-failure-proof design, in which hard platelets smaller than the predicted constant diffusion-induced crack spacing are embedded in a soft matrix, and, therefore, no fracture will happen. This may guide the design of thermal protection systems and lithium-ion batteries. Finally, we present the maximum normalized DIS for various geometries and boundary conditions as single-variable curves for the stress-independent diffusion process and two-variable contour plots for the stress-dependent diffusion process, which provide engineers and materials scientists a simple and easy way to quickly evaluate the reliability of related materials and devices.

  20. Many Tests of Significance: New Methods for Controlling Type I Errors

    Science.gov (United States)

    Keselman, H. J.; Miller, Charles W.; Holland, Burt

    2011-01-01

    There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise…

  1. Bayesian modeling of measurement error in predictor variables using item response theory

    NARCIS (Netherlands)

    Fox, Jean-Paul; Glas, Cees A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between t

  2. Self-assembled biomimetic superhydrophobic hierarchical arrays.

    Science.gov (United States)

    Yang, Hongta; Dou, Xuan; Fang, Yin; Jiang, Peng

    2013-09-01

    Here, we report a simple and inexpensive bottom-up technology for fabricating superhydrophobic coatings with hierarchical micro-/nano-structures, which are inspired by the binary periodic structure found on the superhydrophobic compound eyes of some insects (e.g., mosquitoes and moths). Binary colloidal arrays consisting of exemplary large (4 and 30 μm) and small (300 nm) silica spheres are first assembled by a scalable Langmuir-Blodgett (LB) technology in a layer-by-layer manner. After surface modification with fluorosilanes, the self-assembled hierarchical particle arrays become superhydrophobic with an apparent water contact angle (CA) larger than 150°. The throughput of the resulting superhydrophobic coatings with hierarchical structures can be significantly improved by templating the binary periodic structures of the LB-assembled colloidal arrays into UV-curable fluoropolymers by a soft lithography approach. Superhydrophobic perfluoroether acrylate hierarchical arrays with large CAs and small CA hysteresis can be faithfully replicated onto various substrates. Both experiments and theoretical calculations based on the Cassie's dewetting model demonstrate the importance of the hierarchical structure in achieving the final superhydrophobic surface states. Copyright © 2013 Elsevier Inc. All rights reserved.
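
The Cassie dewetting model invoked in the abstract relates the apparent contact angle on a composite solid/air surface to the solid fraction under the drop. A small sketch of that relation follows; the Young's angle and solid fraction used are illustrative assumptions, not the paper's measured values.

```python
import math

# Cassie-Baxter relation: the apparent contact angle theta* on a composite
# (solid/air) surface satisfies
#     cos(theta*) = f_s * (cos(theta_Y) + 1) - 1,
# where f_s is the fraction of the drop base touching solid and theta_Y is
# Young's angle on the flat material. Numbers below are illustrative.

def cassie_apparent_angle(theta_young_deg, solid_fraction):
    cos_star = solid_fraction * (math.cos(math.radians(theta_young_deg)) + 1) - 1
    return math.degrees(math.acos(cos_star))

# A flat fluorinated surface (~110 deg) combined with a small solid
# fraction from hierarchical roughness pushes the apparent angle past
# the 150 deg superhydrophobic threshold.
print(round(cassie_apparent_angle(110.0, 0.15), 1))
```

This is why the hierarchical micro-/nano-structure matters: the small spheres reduce the effective solid fraction well below what the large spheres alone provide.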

  3. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete-event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete-event model as a real-time system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models can be applied on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete-event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to observe the robot's timing: from spot measurements of the transport and transmission times of goods, graphs are obtained showing the average time for the transport activity, using the parameter sets of the finished products.

  4. Hierarchical models and chaotic spin glasses

    Science.gov (United States)

    Berker, A. Nihat; McKay, Susan R.

    1984-09-01

    Renormalization-group studies in position space have led to the discovery of hierarchical models which are exactly solvable, exhibiting nonclassical critical behavior at finite temperature. Position-space renormalization-group approximations that had been widely and successfully used are in fact alternatively applicable as exact solutions of hierarchical models, this realizability guaranteeing important physical requirements. For example, a hierarchized version of the Sierpiński gasket is presented, corresponding to a renormalization-group approximation which has quantitatively yielded the multicritical phase diagrams of submonolayers on graphite. Hierarchical models are now being studied directly as a testing ground for new concepts. For example, with the introduction of frustration, chaotic renormalization-group trajectories were obtained for the first time. Thus, strong and weak correlations are randomly intermingled at successive length scales, and a new microscopic picture and mechanism for a spin glass emerges. An upper critical dimension occurs via a boundary crisis mechanism in cluster-hierarchical variants developed to have well-behaved susceptibilities.

  5. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  6. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  7. Error resilient image transmission based on virtual SPIHT

    Science.gov (United States)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors: a single bit error can potentially lead to decoder derailment. In this paper, we integrate new error-resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC, and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across subbands is preserved, high source coding efficiency can be achieved. The scheme is essentially tree-based coding, so error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve the compression efficiency; otherwise we increase the number of trees to guarantee error resilience. EREC is also adopted to enhance the error resilience of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of the set partitioning trees is introduced: decoding of any sub-tree halts when a violation of the self-constraint relationship occurs in that tree, so the bits impacted by error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme achieves substantial benefits in error resilience.
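
The key structural idea, confining channel-error propagation to one independently encoded tree, can be demonstrated with a toy packetizer. This sketch mimics only the partition-and-confine structure (with a CRC standing in for the scheme's self-constraint check), not SPIHT or EREC themselves; the coefficient values are made up.

```python
import struct
import zlib

# Toy error-confinement demo: coefficients are split into independent
# "trees", each encoded as its own checksummed packet, so a channel error
# corrupts only one tree instead of derailing the whole stream.

def encode_tree(coeffs):
    payload = struct.pack(f"{len(coeffs)}h", *coeffs)   # 16-bit coefficients
    return struct.pack("I", zlib.crc32(payload)) + payload

def decode_tree(packet, n):
    crc = struct.unpack("I", packet[:4])[0]
    payload = packet[4:]
    if zlib.crc32(payload) != crc:
        return None                                     # drop only this tree
    return list(struct.unpack(f"{n}h", payload))

trees = [[5, -3, 2, 0], [7, 1, -2, 4], [0, 0, 6, -1]]
packets = [encode_tree(t) for t in trees]

corrupted = bytearray(packets[1])
corrupted[6] ^= 0xFF                                    # single-byte error
packets[1] = bytes(corrupted)

decoded = [decode_tree(p, 4) for p in packets]
print(decoded)   # trees 0 and 2 intact; tree 1 detected as corrupt
```

In the real scheme the corrupt tree is not discarded outright: decoding halts at the violation and the surviving low bit-layers plus inter-tree interpolation recover an approximation.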

  8. Diffusion archeology for diffusion progression history reconstruction.

    Science.gov (United States)

    Sefer, Emre; Kingsford, Carl

    2016-11-01

    Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring - perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than the existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.

  9. Biased trapping issue on weighted hierarchical networks

    Indian Academy of Sciences (India)

    Meifeng Dai; Jie Liu; Feng Zhu

    2014-10-01

    In this paper, we present trapping issues of weight-dependent walks on weighted hierarchical networks which are based on the classic scale-free hierarchical networks. Assuming that edge weight is used as local information by a random walker, we introduce a biased walk: at each step, a walker chooses one of its neighbours with a probability proportional to the weight of the connecting edge. We focus on a particular case with the immobile trap positioned at the hub node, which has the largest degree in the weighted hierarchical networks. Using a method based on generating functions, we determine explicitly the mean first-passage time (MFPT) for the trapping issue. Let the parameter a (0 < a < 1) be the weight factor. We show that the efficiency of the trapping process depends on a: the smaller the value of a, the more efficient the trapping process.
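
The weight-proportional step rule and the trap at the hub can be illustrated with a small Monte Carlo estimate of the MFPT. The graph and weights below are illustrative, not the paper's hierarchical networks, and the simulation stands in for the paper's exact generating-function calculation.

```python
import random

# Monte Carlo sketch of the trapping problem: a walker on a weighted graph
# steps to a neighbour with probability proportional to the edge weight;
# we estimate the mean first-passage time (MFPT) to a trap at the hub.

def mfpt_to_trap(weights, trap, start, trials, rng):
    total = 0
    for _ in range(trials):
        node, steps = start, 0
        while node != trap:
            nbrs = list(weights[node])
            node = rng.choices(nbrs,
                               weights=[weights[node][v] for v in nbrs])[0]
            steps += 1
        total += steps
    return total / trials

# Node 0 is the hub/trap; edges into the hub carry the largest weights,
# biasing the walk hubward and shortening the MFPT.
weights = {
    0: {1: 3.0, 2: 3.0, 3: 3.0},
    1: {0: 3.0, 2: 1.0},
    2: {0: 3.0, 1: 1.0, 3: 1.0},
    3: {0: 3.0, 2: 1.0},
}
est = mfpt_to_trap(weights, trap=0, start=3, trials=20_000,
                   rng=random.Random(1))
print(round(est, 2))   # analytic MFPT for this toy graph is 25/18 ~ 1.39
```

Scaling down the hubward weights (the analogue of varying the weight factor a) lengthens the estimated MFPT, matching the qualitative conclusion of the abstract.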

  10. Incentive Mechanisms for Hierarchical Spectrum Markets

    CERN Document Server

    Iosifidis, George; Alpcan, Tansu; Koutsopoulos, Iordanis

    2011-01-01

    We study spectrum allocation mechanisms in hierarchical multi-layer markets, which are expected to proliferate in the near future based on the current spectrum policy reform proposals. We consider a setting where a state agency sells spectrum to Primary Operators (POs), who in turn resell it to Secondary Operators (SOs) through auctions. We show that these hierarchical markets do not result in the socially efficient spectrum allocation sought by the agency, due to lack of coordination among the entities in different layers and the inherently selfish revenue-maximizing strategy of the POs. In order to reconcile these opposing objectives, we propose an incentive mechanism which aligns the strategy and actions of the POs with the objective of the agency, and thus leads to system performance improvement in terms of social welfare. This pricing-based mechanism constitutes a method for hierarchical market regulation and requires feedback provision from the SOs. A basic component of the proposed incentive…

  11. Hierarchical self-organization of tectonic plates

    CERN Document Server

    Morra, Gabriele; Müller, R Dietmar

    2010-01-01

    The Earth's surface is subdivided into eight large tectonic plates and many smaller ones. We reconstruct the plate tessellation history and demonstrate that both large and small plates display two distinct hierarchical patterns, described by different power-law size-relationships. While small plates display little organisational change through time, the structure of the large plates oscillates between minimum and maximum hierarchical tessellations. The organization of large plates rapidly changes from a weak hierarchy at 120-100 million years ago (Ma) towards a strong hierarchy, which peaked at 65-50 Ma, subsequently relaxing back towards a minimum hierarchical structure. We suggest that this fluctuation reflects an alternation between top- and bottom-driven plate tectonics, revealing a previously undiscovered tectonic cyclicity at a timescale of 100 million years.

  12. Towards a sustainable manufacture of hierarchical zeolites.

    Science.gov (United States)

    Verboekend, Danny; Pérez-Ramírez, Javier

    2014-03-01

    Hierarchical zeolites have been established as a superior type of aluminosilicate catalysts compared to their conventional (purely microporous) counterparts. An impressive array of bottom-up and top-down approaches has been developed during the last decade to design and subsequently exploit these exciting materials catalytically. However, the sustainability of the developed synthetic methods has rarely been addressed. This paper highlights important criteria to ensure the ecological and economic viability of the manufacture of hierarchical zeolites. Moreover, by using base leaching as a promising case study, we verify a variety of approaches to increase reactor productivity, recycle waste streams, prevent the combustion of organic compounds, and minimize separation efforts. By reducing their synthetic footprint, hierarchical zeolites are positioned as an integral part of sustainable chemistry. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Classification using Hierarchical Naive Bayes models

    DEFF Research Database (Denmark)

    Langseth, Helge; Dyhre Nielsen, Thomas

    2006-01-01

    Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing, sets of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice) it can reduce classification accuracy due to "information double-counting" and interaction omission. In this paper we focus on a relatively new set of models, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models…

  14. Hierarchical Neural Network Structures for Phoneme Recognition

    CERN Document Server

    Vasquez, Daniel; Minker, Wolfgang

    2013-01-01

    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task, where a Hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayered Perceptron. Additionally, the output of the first level serves as the second level's input. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information. The computational time could be reduced by 57% whilst keeping the system accuracy comparable to the baseline hierarchical approach.

  15. Universal hierarchical behavior of citation networks

    CERN Document Server

    Mones, Enys; Vicsek, Tamás

    2014-01-01

    Many of the essential features of the evolution of scientific research are imprinted in the structure of citation networks. Connections in these networks imply information about the transfer of knowledge among papers, or in other words, edges describe the impact of papers on other publications. This inherent meaning of the edges implies that citation networks can exhibit hierarchical features, which are typical of networks based on decision-making. In this paper, we investigate the hierarchical structure of citation networks consisting of papers in the same field. We find that the majority of the networks follow a universal trend towards a highly hierarchical state, and the various fields display differences only concerning (i) their phase in life (distance from the "birth" of a field) or (ii) the characteristic time according to which they are approaching the stationary state. We also show by a simple argument that the alterations in the behavior are related to and can be understood by the degree of specialization…

  16. Static and dynamic friction of hierarchical surfaces

    Science.gov (United States)

    Costagliola, Gianluca; Bosia, Federico; Pugno, Nicola M.

    2016-12-01

    Hierarchical structures are very common in nature, but only recently have they been systematically studied in materials science, in order to understand the specific effects they can have on the mechanical properties of various systems. Structural hierarchy provides a way to tune and optimize macroscopic mechanical properties starting from simple base constituents and new materials are nowadays designed exploiting this possibility. This can be true also in the field of tribology. In this paper we study the effect of hierarchical patterned surfaces on the static and dynamic friction coefficients of an elastic material. Our results are obtained by means of numerical simulations using a one-dimensional spring-block model, which has previously been used to investigate various aspects of friction. Despite the simplicity of the model, we highlight some possible mechanisms that explain how hierarchical structures can significantly modify the friction coefficients of a material, providing a means to achieve tunability.
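
A drastically simplified, quasi-static variant of a one-dimensional spring-block model can show the stick-slip distinction between static and dynamic friction that the paper studies. This is only an illustrative sketch, far simpler than the paper's model; the stiffness, thresholds, and dynamic-to-static ratio are all assumptions.

```python
# Quasi-static 1D spring-block sketch: each surface block is dragged
# through a spring of stiffness k; it sticks until the spring force
# exceeds its static threshold, then slips and its force drops to a
# dynamic (kinetic) level. All numbers are illustrative assumptions.

def force_trace(thresholds, k, v_dt, steps, dynamic_ratio=0.6):
    stretch = [0.0] * len(thresholds)      # spring stretch per block
    trace = []
    for _ in range(steps):
        for i in range(len(thresholds)):
            stretch[i] += v_dt             # driver advances uniformly
            if k * stretch[i] > thresholds[i]:
                stretch[i] = dynamic_ratio * thresholds[i] / k   # slip
        trace.append(k * sum(stretch))     # total tangential force
    return trace

# Ten identical blocks: the force ramps to a static peak (~10 here), then
# collapses to the lower dynamic level (~6) when all blocks slip together.
trace = force_trace([1.0] * 10, k=1.0, v_dt=0.01, steps=200)
print(round(max(trace), 1), round(min(trace[99:110]), 1))
```

The peak of the trace plays the role of the macroscopic static friction force and the post-slip level the dynamic one; heterogeneous thresholds (as surface patterning induces) desynchronize the slips and reshape both.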

  17. Hierarchical porous carbon toward effective cathode in advanced zinc-cerium redox flow battery

    Institute of Scientific and Technical Information of China (English)

    谢志鹏; 杨斌; 蔡定建; 杨亮

    2014-01-01

    Advanced zinc-cerium redox flow battery (ZCRFB) is a large-scale energy storage system which plays a significant role in the application of new energy sources. A superior cathode with high activity and fast ion diffusion requires a hierarchical porous structure, which was synthesized in this work by a method in which both a hard template and a soft template were used. The structure and performance of the cathode prepared here were characterized and evaluated by a variety of techniques such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), cyclic voltammetry (CV), linear sweep voltammetry (LSV), and chronoamperometry (CA). There were mainly three pore sizes within the hierarchical porous carbon: 2 μm, 80 nm, and 10 nm. The charge capacity of the cell using hierarchical porous carbon (HPC) as the positive electrode was markedly larger than that using carbon felt; the former was 665.5 mAh with a coulombic efficiency of 89.0% and an energy efficiency of 79.0%, whereas the latter was 611.1 mAh with a coulombic efficiency of 81.5% and an energy efficiency of 68.6%. In addition, the performance of the ZCRFB using HPC as the positive electrode showed good stability over 50 cycles. These results showed that the hierarchical porous carbon is superior to carbon felt for application in ZCRFBs.
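
The reported efficiency figures can be cross-checked under the usual decomposition energy efficiency = coulombic efficiency × voltage efficiency. The voltage efficiencies below are inferred from that assumed relation, not values reported in the record.

```python
# Consistency check of the reported efficiencies, assuming the standard
# decomposition: energy efficiency = coulombic x voltage efficiency.
# The implied voltage efficiencies are inferences, not reported values.

for label, ce, ee in [("HPC", 0.890, 0.790), ("carbon felt", 0.815, 0.686)]:
    ve = ee / ce
    print(f"{label}: implied voltage efficiency = {ve:.1%}")
```

Under this assumption the HPC cell improves not only the coulombic but also the implied voltage efficiency, consistent with faster kinetics on the hierarchical electrode.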

  18. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care.

  19. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
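
The core of interval-based error propagation is carrying each measured quantity as a [lo, hi] pair whose arithmetic yields guaranteed enclosures. The paper uses the INTLAB toolbox for MATLAB; the tiny class below is only an illustrative Python stand-in, and the measurement values are made up.

```python
# Minimal interval-arithmetic sketch of automatic error propagation: each
# measured quantity is a [lo, hi] interval, and arithmetic returns bounds
# guaranteed to contain the true result (ignoring rounding of endpoints).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Endpoint products cover all sign combinations.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo:.4g}, {self.hi:.4g}]"

# Example: R = V / I with V = 10 +/- 0.1 and I = 2 +/- 0.05; division is
# sketched as multiplication by the reciprocal interval of I.
V = Interval(9.9, 10.1)
I_inv = Interval(1 / 2.05, 1 / 1.95)    # reciprocal of I = [1.95, 2.05]
R = V * I_inv
print(R)                                # all R consistent with the bounds
```

Compared with first-order error propagation, the interval bounds need no derivative bookkeeping, which is the "much less effort" claim of the abstract.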

  20. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  1. Loss Function Based Ranking in Two-Stage, Hierarchical Models

    Science.gov (United States)

    Lin, Rongheng; Louis, Thomas A.; Paddock, Susan M.; Ridgeway, Greg

    2009-01-01

    Performance evaluation of health services providers is burgeoning. Similarly, analyzing spatially related health information, ranking teachers and schools, and identifying differentially expressed genes are increasing in prevalence and importance. Goals include valid and efficient ranking of units for profiling and league tables, identification of excellent and poor performers, the most differentially expressed genes, and determining "exceedances" (how many and which unit-specific true parameters exceed a threshold). These data and inferential goals require a hierarchical Bayesian model that accounts for nesting relations and identifies both population values and random effects for unit-specific parameters. Furthermore, the Bayesian approach coupled with optimizing a loss function provides a framework for computing non-standard inferences such as ranks and histograms. Estimated ranks that minimize Squared Error Loss (SEL) between the true and estimated ranks have been investigated. The posterior mean ranks minimize SEL and are "general purpose," relevant to a broad spectrum of ranking goals. However, other loss functions, and optimizing ranks that are tuned to application-specific goals, require identification and evaluation. For example, when the goal is to identify the relatively good (e.g., in the upper 10%) or relatively poor performers, a loss function that penalizes classification errors produces estimates that minimize the error rate. We construct loss functions that address this and other goals, developing a unified framework that facilitates generating candidate estimates, comparing approaches, and producing data-analytic performance summaries. We compare performance for a fully parametric hierarchical model with Gaussian sampling distribution under Gaussian and mixture-of-Gaussians prior distributions. We illustrate the approaches via analysis of standardized mortality ratio data from the United States Renal Data System. Results show that SEL
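
The SEL-optimal ranking the abstract describes is easy to sketch from posterior draws: rank the units within each draw, average those ranks, and then rank the averages. The data below are simulated, not the renal-registry data of the paper.

```python
import numpy as np

# Sketch of squared-error-loss (SEL) ranking: the estimate minimizing SEL
# between true and estimated ranks is built from the posterior mean of
# each unit's rank. Units and posterior draws here are simulated.

rng = np.random.default_rng(7)
true_theta = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # unit-specific truths
draws = true_theta + rng.normal(scale=0.7, size=(4000, 5))  # mock posterior

# Rank the units within each draw (1 = smallest), then average over draws.
per_draw_ranks = draws.argsort(axis=1).argsort(axis=1) + 1
posterior_mean_ranks = per_draw_ranks.mean(axis=0)

# Integer SEL-optimal ranks: rank the posterior mean ranks themselves.
sel_ranks = posterior_mean_ranks.argsort().argsort() + 1
print(posterior_mean_ranks.round(2), sel_ranks)
```

Replacing the rank-based loss with one that penalizes above/below-threshold misclassification yields the "upper 10%" style estimates the abstract contrasts with SEL.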

  2. Hierarchical control of electron-transfer

    DEFF Research Database (Denmark)

    Westerhoff, Hans V.; Jensen, Peter Ruhdal; Egger, Louis;

    1997-01-01

    In this chapter the role of electron transfer in determining the behaviour of the ATP-synthesising enzyme in E. coli is analysed. It is concluded that the latter enzyme lacks control because of special properties of the electron transfer components. These properties range from the absence of a strong back pressure by the protonmotive force on the rate of electron transfer, to hierarchical regulation of the expression of the genes that encode the electron transfer proteins as a response to changes in the bioenergetic properties of the cell. The discussion uses Hierarchical Control Analysis.

  3. Genetic Algorithm for Hierarchical Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sajid Hussain

    2007-09-01

    Full Text Available Large scale wireless sensor networks (WSNs can be used for various pervasive and ubiquitous applications such as security, health-care, industry automation, agriculture, environment and habitat monitoring. As hierarchical clusters can reduce the energy consumption requirements for WSNs, we investigate intelligent techniques for cluster formation and management. A genetic algorithm (GA is used to create energy efficient clusters for data dissemination in wireless sensor networks. The simulation results show that the proposed intelligent hierarchical clustering technique can extend the network lifetime for different network deployment environments.
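
A toy genetic algorithm in the spirit of the abstract can be sketched in a few lines: a chromosome is a bit-vector marking which nodes act as cluster heads, and fitness trades total node-to-head distance against a per-head energy cost. The node layout, cost weights, and GA settings are all illustrative assumptions, not the paper's configuration.

```python
import random

# Toy GA for cluster-head selection in a sensor field: chromosome = bits
# marking heads; fitness = total Manhattan distance of every node to its
# nearest head, plus an energy cost per head. All numbers illustrative.

def fitness(bits, nodes, head_cost=2.0):
    heads = [nodes[i] for i, b in enumerate(bits) if b]
    if not heads:
        return float("inf")              # a network with no heads is invalid
    dist = sum(min(abs(x - hx) + abs(y - hy) for hx, hy in heads)
               for x, y in nodes)
    return dist + head_cost * len(heads)

def evolve(nodes, pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    n = len(nodes)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda b: fitness(b, nodes))
        elite = pop[: pop_size // 2]     # elitism: keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)  # one-point crossover of two elites
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:       # occasional bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda b: fitness(b, nodes))

nodes = [(x, y) for x in range(0, 12, 3) for y in range(0, 12, 3)]
best = evolve(nodes)
print(sum(best), fitness(best, nodes))
```

In a real WSN the fitness would use radio energy models rather than plain distance, but the chromosome encoding and selection/crossover/mutation loop carry over directly.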

  4. DC Hierarchical Control System for Microgrid Applications

    OpenAIRE

    Lu, Xiaonan; Sun, Kai; Guerrero, Josep M.; Huang, Lipei

    2013-01-01

    In order to enhance the DC-side performance of an AC-DC hybrid microgrid, a DC hierarchical control system is proposed in this paper. To meet the requirement of DC load sharing between the parallel power interfaces, the droop method is adopted. Meanwhile, DC voltage secondary control is employed to restore the deviation in the DC bus voltage. The hierarchical control system is composed of two levels. DC voltage and AC current controllers are implemented in the primary control level.

  5. Hierarchical social networks and information flow

    Science.gov (United States)

    López, Luis; F. F. Mendes, Jose; Sanjuán, Miguel A. F.

    2002-12-01

    Using a simple model for information flow on social networks, we show that the traditional hierarchical topologies frequently used by companies and organizations are poorly designed in terms of efficiency. Moreover, we prove that this type of structure is the result of the individual aim of monopolizing as much information as possible within the network. As information is an appropriate measurement of centrality, we conclude that this kind of topology is so attractive for leaders because the global influence each actor has within the network is completely determined by the hierarchical level occupied.

  6. Analyzing security protocols in hierarchical networks

    DEFF Research Database (Denmark)

    Zhang, Ye; Nielson, Hanne Riis

    2006-01-01

    Validating security protocols is a well-known hard problem even in a simple setting of a single global network. But a real network often consists of, besides the public-accessed part, several sub-networks and thereby forms a hierarchical structure. In this paper we first present a process calculus...... capturing the characteristics of hierarchical networks and describe the behavior of protocols on such networks. We then develop a static analysis to automate the validation. Finally we demonstrate how the technique can benefit the protocol development and the design of network systems by presenting a series...

  7. Hierarchic Models of Turbulence, Superfluidity and Superconductivity

    CERN Document Server

    Kaivarainen, A

    2000-01-01

    New models of Turbulence, Superfluidity and Superconductivity, based on new Hierarchic theory, general for liquids and solids (physics/0102086), have been proposed. CONTENTS: 1 Turbulence. General description; 2 Mesoscopic mechanism of turbulence; 3 Superfluidity. General description; 4 Mesoscopic scenario of fluidity; 5 Superfluidity as a hierarchic self-organization process; 6 Superfluidity in 3He; 7 Superconductivity: General properties of metals and semiconductors; Plasma oscillations; Cyclotron resonance; Electroconductivity; 8. Microscopic theory of superconductivity (BCS); 9. Mesoscopic scenario of superconductivity: Interpretation of experimental data in the framework of mesoscopic model of superconductivity.

  8. Hierarchical Analysis of the Omega Ontology

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, Cliff A.; Paulson, Patrick R.

    2009-12-01

    Initial delivery for mathematical analysis of the Omega Ontology. We provide an analysis of the hierarchical structure of a version of the Omega Ontology currently in use within the US Government. After providing an initial statistical analysis of the distribution of all link types in the ontology, we then provide a detailed order theoretical analysis of each of the four main hierarchical links present. This order theoretical analysis includes the distribution of components and their properties, their parent/child and multiple inheritance structure, and the distribution of their vertical ranks.

  9. Error Threshold for Spatially Resolved Evolution in the Quasispecies Model

    Energy Technology Data Exchange (ETDEWEB)

    Altmeyer, S.; McCaskill, J. S.

    2001-06-18

The error threshold for quasispecies in 1, 2, 3, and ∞ dimensions is investigated by stochastic simulation and analytically. The results show a monotonic decrease in the maximal sustainable error probability with decreasing diffusion coefficient, independently of the spatial dimension. It is thereby established that physical interactions between sequences are necessary in order for spatial effects to enhance the stabilization of biological information. The analytically tractable behavior in an ∞-dimensional (simplex) space provides a good guide to the spatial dependence of the error threshold in lower-dimensional Euclidean space.
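For the single-peak fitness landscape commonly used in quasispecies studies, the error-threshold phenomenon can be reproduced with a few lines of iteration (an illustrative mean-field sketch, not the paper's spatial simulation; back-mutation to the master sequence is neglected):

```python
def master_fraction(A, u, L, steps=2000):
    # Single-peak landscape: master sequence fitness A, all mutants fitness 1.
    # q = (1-u)^L is the probability of an error-free copy of the genome.
    q = (1.0 - u) ** L
    x = 0.5
    for _ in range(steps):
        mean_fitness = A * x + (1.0 - x)
        x = A * q * x / mean_fitness        # replicator-mutator update
    return x

def error_threshold(A, L):
    # Analytic estimate: the master survives while A * (1-u)^L > 1,
    # i.e. up to u_c = 1 - A^(-1/L).
    return 1.0 - A ** (-1.0 / L)
```

Below the threshold the master fraction settles at (Aq - 1)/(A - 1); above it, the biological information is lost, which is the transition the paper studies as a function of diffusion.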

  10. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
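The distinctions the record draws (SD vs. SE vs. CI) are easy to make concrete with a small helper (our own naming; the 95% CI uses the large-sample 1.96 normal approximation rather than a t quantile):

```python
import math
import statistics

def error_bars(sample, kind="se"):
    # Returns (mean, half-width) for a chosen error-bar type:
    #   "sd"   descriptive spread of the data
    #   "se"   standard error of the mean = sd / sqrt(n)
    #   "ci95" ~95% confidence interval, 1.96 * se (large-sample approx.)
    n = len(sample)
    m = statistics.mean(sample)
    sd = statistics.stdev(sample)           # n-1 denominator
    if kind == "sd":
        return m, sd
    se = sd / math.sqrt(n)
    return (m, se) if kind == "se" else (m, 1.96 * se)
```

Note that the SD bar does not shrink as n grows while the SE and CI bars do, which is exactly why figure legends must state which quantity is plotted.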

  11. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  12. Combining Adaptive Coding and Modulation with Hierarchical Modulation in Satcom Systems

    CERN Document Server

    Meric, Hugo; Arnal, Fabrice; Lesthievent, Guy; Boucheret, Marie-Laure

    2011-01-01

    We investigate the design of a broadcast system in order to maximise the throughput. This task is usually challenging due to the channel variability. Forty years ago, Cover introduced and compared two schemes: time sharing and superposition coding. Even if the second scheme was proved to be optimal for some channels, modern satellite communications systems such as DVB-SH and DVB-S2 mainly rely on time sharing strategy to optimize the throughput. They consider hierarchical modulation, a practical implementation of superposition coding, but only for unequal error protection or backward compatibility purposes. We propose in this article to combine time sharing and hierarchical modulation together and show how this scheme can improve the performance in terms of available rate. We introduce the hierarchical 16-APSK to boost the performance of the DVB-S2 standard. We also evaluate various strategies to group the receivers in pairs when using hierarchical modulation. Finally, we show in a realistic use case based on...

  13. Hierarchical Linear Models for Energy Prediction using Inertial Sensors: A Comparative Study for Treadmill Walking.

    Science.gov (United States)

    Vathsangam, Harshvardhan; Emken, B Adar; Schroeder, E Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S

    2013-12-01

Walking is a commonly available activity to maintain a healthy lifestyle. Accurately tracking and measuring calories expended during walking can improve user feedback and intervention measures. Inertial sensors are a promising measurement tool to achieve this purpose. An important aspect in mapping inertial sensor data to energy expenditure is the question of normalizing across physiological parameters. Common approaches such as weight scaling require validation for each new population. An alternative is to use a hierarchical approach to model subject-specific parameters at one level and cross-subject parameters connected by physiological variables at a higher level. In this paper, we evaluate an inertial sensor-based hierarchical model to measure energy expenditure across a target population. We first determine the optimal movement and physiological feature set to represent the data. Periodicity-based features are more accurate (p < 0.05) than other feature sets. We then compare the hierarchical model with a subject-specific regression model and weight-exponent-scaled models. Subject-specific models perform significantly better (p < 0.05) than weight-exponent-scaled models at all exponent scales, whereas the hierarchical model performed worse than both. However, using an informed prior from the hierarchical model produces errors similar to those of a subject-specific model trained with large amounts of data (p < 0.05). Hierarchical modeling is thus a promising technique for generalized energy expenditure prediction across a target population in a clinical setting.
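The "informed prior" idea in this record can be illustrated with a one-parameter regression shrunk toward a population value (a deliberately minimal sketch; the paper's actual model is multivariate and hierarchical, and `lam` here is a hypothetical prior-precision knob):

```python
def informed_prior_fit(x, y, w_prior, lam):
    # One-parameter model y ~ w*x with Gaussian prior w ~ N(w_prior, 1/lam):
    # the MAP estimate is a precision-weighted blend of data and prior.
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return (sxy + lam * w_prior) / (sxx + lam)
```

With abundant data the estimate matches ordinary least squares; with a single noisy observation, the population prior pulls the estimate back toward a plausible value, which is the mechanism behind the record's finding that an informed prior rivals a fully trained subject-specific model.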

  14. Mechanisms of hierarchical reinforcement learning in corticostriatal circuits 1: computational analysis.

    Science.gov (United States)

    Frank, Michael J; Badre, David

    2012-03-01

    Growing evidence suggests that the prefrontal cortex (PFC) is organized hierarchically, with more anterior regions having increasingly abstract representations. How does this organization support hierarchical cognitive control and the rapid discovery of abstract action rules? We present computational models at different levels of description. A neural circuit model simulates interacting corticostriatal circuits organized hierarchically. In each circuit, the basal ganglia gate frontal actions, with some striatal units gating the inputs to PFC and others gating the outputs to influence response selection. Learning at all of these levels is accomplished via dopaminergic reward prediction error signals in each corticostriatal circuit. This functionality allows the system to exhibit conditional if-then hypothesis testing and to learn rapidly in environments with hierarchical structure. We also develop a hybrid Bayesian-reinforcement learning mixture of experts (MoE) model, which can estimate the most likely hypothesis state of individual participants based on their observed sequence of choices and rewards. This model yields accurate probabilistic estimates about which hypotheses are attended by manipulating attentional states in the generative neural model and recovering them with the MoE model. This 2-pronged modeling approach leads to multiple quantitative predictions that are tested with functional magnetic resonance imaging in the companion paper.
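The dopaminergic reward-prediction-error rule at the heart of the model can be sketched with a two-armed bandit (a toy stand-in for the corticostriatal circuitry; the epsilon-greedy gate and all parameter values are our choices, not the paper's):

```python
import random

def train_bandit(p_reward, alpha=0.1, eps=0.1, trials=2000, seed=0):
    # Action values learned from dopamine-like reward prediction errors:
    # delta = r - Q[a];  Q[a] += alpha * delta.
    rng = random.Random(seed)
    q = [0.0] * len(p_reward)
    for _ in range(trials):
        if rng.random() < eps:                      # occasionally explore
            a = rng.randrange(len(q))
        else:                                       # greedy "gating" choice
            a = max(range(len(q)), key=lambda i: q[i])
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        q[a] += alpha * (r - q[a])                  # prediction-error update
    return q
```

In the paper this same error signal trains gating at every level of the corticostriatal hierarchy, not just a flat action-value table as here.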

  15. Error-Free Software

    Science.gov (United States)

    1989-01-01

001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production-quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultra-reliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  16. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove it.

  17. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.

  18. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  19. Bayesian Hierarchical Model Characterization of Model Error in Ocean Data Assimilation and Forecasts

    Science.gov (United States)

    2012-09-28

potential benefits from allowing switching between different process models in this setting. This will be greatly facilitated by the emulator approach... of sea surface height (SSH), SST, and phytoplankton (chlorophyll) data from 1998, 1999, 2000, and 2001. We then used remotely sensed SeaWiFS ocean...

  20. Measurement errors with low-cost citizen science radiometers

    OpenAIRE

    Bardají, Raúl; Piera, Jaume

    2016-01-01

    The KdUINO is a Do-It-Yourself buoy with low-cost radiometers that measure a parameter related to water transparency, the diffuse attenuation coefficient integrated into all the photosynthetically active radiation. In this contribution, we analyze the measurement errors of a novel low-cost multispectral radiometer that is used with the KdUINO. Peer Reviewed

  1. Susceptibility of biallelic haplotype and genotype frequencies to genotyping error.

    Science.gov (United States)

    Moskvina, Valentina; Schmidt, Karl Michael

    2006-12-01

With the availability of fast genotyping methods and genomic databases, the search for statistical association of single nucleotide polymorphisms with a complex trait has become an important methodology in medical genetics. However, even fairly rare errors occurring during the genotyping process can lead to spurious association results and a decrease in statistical power. We develop a systematic approach to study how genotyping errors change the genotype distribution in a sample. The general M-marker case is reduced to that of a single-marker locus by recognizing the underlying tensor-product structure of the error matrix. Both the method and the general conclusions apply to the general error model; we give detailed results for allele-based errors whose size depends both on the marker locus and on the allele present. Multiple errors are treated in terms of the associated diffusion process on the space of genotype distributions. We find that certain genotype and haplotype distributions remain unchanged under genotyping errors, and that genotyping errors generally render the distribution more similar to the stable one. In case-control association studies, this will lead to a loss of statistical power for nondifferential genotyping errors and an increase in type I error for differential genotyping errors. Moreover, we show that allele-based genotyping errors do not disturb Hardy-Weinberg equilibrium in the genotype distribution. In this setting we also identify maximally affected distributions. As these correspond to situations with rare alleles and marker loci in high linkage disequilibrium, careful checking for genotyping errors is advisable when significant association based on such alleles/haplotypes is observed in association studies.
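The claim that allele-based errors preserve Hardy-Weinberg equilibrium can be checked numerically for a single biallelic marker (a sketch of the single-marker case to which the paper's tensor-product argument reduces; the error model here is the simplest symmetric allele mis-call with rate e, our own simplification):

```python
from itertools import product

def hwe(p):
    # Hardy-Weinberg genotype distribution, indexed by B-allele count:
    # [P(AA), P(AB), P(BB)] for allele-A frequency p.
    q = 1.0 - p
    return [p * p, 2 * p * q, q * q]

def apply_allele_error(dist, e):
    # Each of the two alleles is independently mis-called with prob e;
    # the 3x3 genotype transition is induced by the 2x2 allele error matrix.
    allele_dists = {0: [(0, 1 - e), (1, e)],     # true allele A
                    1: [(1, 1 - e), (0, e)]}     # true allele B
    genotypes = {0: (0, 0), 1: (0, 1), 2: (1, 1)}
    out = [0.0, 0.0, 0.0]
    for g, pg in enumerate(dist):
        a1, a2 = genotypes[g]
        for (o1, w1), (o2, w2) in product(allele_dists[a1], allele_dists[a2]):
            out[o1 + o2] += pg * w1 * w2
    return out
```

Starting from HWE at allele frequency p, the observed distribution is again HWE at the error-shifted frequency p(1-e) + (1-p)e, consistent with the record's statement.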

  2. Hierarchical machining materials and their performance

    DEFF Research Database (Denmark)

    Sidorenko, Daria; Loginov, Pavel; Levashov, Evgeny

    2016-01-01

    as nanoparticles in the binder, or polycrystalline, aggregate-like reinforcements, also at several scale levels). Such materials can ensure better productivity, efficiency, and lower costs of drilling, cutting, grinding, and other technological processes. This article reviews the main groups of hierarchical...

  3. Hierarchical Optimization of Material and Structure

    DEFF Research Database (Denmark)

    Rodrigues, Helder C.; Guedes, Jose M.; Bendsøe, Martin P.

    2002-01-01

    This paper describes a hierarchical computational procedure for optimizing material distribution as well as the local material properties of mechanical elements. The local properties are designed using a topology design approach, leading to single scale microstructures, which may be restricted...... in various ways, based on design and manufacturing criteria. Implementation issues are also discussed and computational results illustrate the nature of the procedure....

  4. Hierarchical structure of nanofibers by bubbfil spinning

    Directory of Open Access Journals (Sweden)

    Liu Chang

    2015-01-01

    Full Text Available A polymer bubble is easy to be broken under a small external force, various different fragments are formed, which can be produced to different morphologies of products including nanofibers and plate-like strip. Polyvinyl-alcohol/honey solution is used in the experiment to show hierarchical structure by the bubbfil spinning.

  5. Sharing the proceeds from a hierarchical venture

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moreno-Ternero, Juan D.; Tvede, Mich;

    2017-01-01

    We consider the problem of distributing the proceeds generated from a joint venture in which the participating agents are hierarchically organized. We introduce and characterize a family of allocation rules where revenue ‘bubbles up’ in the hierarchy. The family is flexible enough to accommodate...

  6. Metal oxide nanostructures with hierarchical morphology

    Science.gov (United States)

    Ren, Zhifeng; Lao, Jing Yu; Banerjee, Debasish

    2007-11-13

    The present invention relates generally to metal oxide materials with varied symmetrical nanostructure morphologies. In particular, the present invention provides metal oxide materials comprising one or more metallic oxides with three-dimensionally ordered nanostructural morphologies, including hierarchical morphologies. The present invention also provides methods for producing such metal oxide materials.

  7. Hierarchical Scaling in Systems of Natural Cities

    CERN Document Server

    Chen, Yanguang

    2016-01-01

    Hierarchies can be modeled by a set of exponential functions, from which we can derive a set of power laws indicative of scaling. These scaling laws are followed by many natural and social phenomena such as cities, earthquakes, and rivers. This paper is devoted to revealing the scaling patterns in systems of natural cities by reconstructing the hierarchy with cascade structure. The cities of America, Britain, France, and Germany are taken as examples to make empirical analyses. The hierarchical scaling relations can be well fitted to the data points within the scaling ranges of the size and area of the natural cities. The size-number and area-number scaling exponents are close to 1, and the allometric scaling exponent is slightly less than 1. The results suggest that natural cities follow hierarchical scaling laws and hierarchical conservation law. Zipf's law proved to be one of the indications of the hierarchical scaling, and the primate law of city-size distribution represents a local pattern and can be mer...
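The cascade construction behind hierarchical scaling (two exponential laws whose combination yields a power law) can be sketched directly with synthetic data; the function names and parameters are ours:

```python
import math

def hierarchy(levels, n=2, r=2.0, s1=1000.0):
    # Top-down cascade: each level has n times more cities, each r times
    # smaller, i.e. a pair of exponential laws in the level index.
    return [(n ** m, s1 / r ** m) for m in range(levels)]

def scaling_exponent(pairs):
    # Least-squares slope of log(count) vs log(size); the hierarchical
    # scaling exponent is D = ln(n) / ln(r).
    xs = [math.log(s) for _, s in pairs]
    ys = [math.log(c) for c, _ in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return -slope
```

With n = r the exponent is 1, matching the record's observation that the size-number scaling exponents of natural cities are close to 1.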

  8. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level improves event recognition performance, and jointly integrating the three levels of contexts through our hierarchical model achieves the best performance.

  9. Managing Clustered Data Using Hierarchical Linear Modeling

    Science.gov (United States)

    Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.

    2012-01-01

    Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…

  10. Strategic games on a hierarchical network model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

Among complex network models, the hierarchical network model is the one closest to real networks such as the world trade web, metabolic networks, the WWW, and actor networks. It not only has a power-law degree distribution, but also grows through preferential attachment, showing the scale-free property. In this paper, we study the evolution of cooperation on a hierarchical network model, adopting the prisoner's dilemma (PD) game and the snowdrift game (SG) as metaphors for the interplay between connected nodes. The BA model provides a unifying framework for the emergence of cooperation. Interestingly, however, we found that on the hierarchical model there is no sign of cooperation for the PD game, while the frequency of cooperation decreases as the common benefit decreases for the SG. By comparing the scaling clustering coefficient properties of the hierarchical network model with those of the BA model, we found that the former amplifies the effect of hubs. Considering the different performances of the PD game and SG on complex networks, we also found that common benefit leads to cooperation in the evolution. Thus our study may shed light on the emergence of cooperation in both natural and social environments.
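One round of the PD dynamics described above can be sketched with the weak prisoner's dilemma (T = b, R = 1, P = S = 0) and best-neighbour imitation, a common convention in network-game studies though not necessarily this paper's exact payoff matrix or update rule:

```python
def play_round(adj, strategies, b):
    # Weak prisoner's dilemma payoffs: T=b>1, R=1, P=S=0.
    # Each node accumulates payoff against all of its neighbours.
    payoff = {}
    for v in adj:
        p = 0.0
        for u in adj[v]:
            if strategies[u] == 'C':
                p += 1.0 if strategies[v] == 'C' else b
        payoff[v] = p
    return payoff

def imitate_best(adj, strategies, payoff):
    # Synchronous update: each node copies its best-scoring neighbour
    # (keeping its own strategy on ties, since the node itself is listed first).
    new = {}
    for v in adj:
        best = max([v] + list(adj[v]), key=lambda u: payoff[u])
        new[v] = strategies[best]
    return new
```

On a tiny path graph a single defector next to cooperators outscores them and is imitated, illustrating the mechanism by which defection can spread under PD payoffs.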

  11. Endogenous Effort Norms in Hierarchical Firms

    NARCIS (Netherlands)

    J. Tichem (Jan)

    2013-01-01

This paper studies how a three-layer hierarchical firm (principal-supervisor-agent) optimally creates effort norms for its employees. The key assumption is that effort norms are affected by the example of superiors. In equilibrium, norms are eroded as one moves down...

  12. Complex Evaluation of Hierarchically-Network Systems

    CERN Document Server

    Polishchuk, Dmytro; Yadzhak, Mykhailo

    2016-01-01

Methods of complex evaluation based on local, forecasting, aggregated, and interactive evaluation of the state, function quality, and interaction of a complex system's objects on all hierarchical levels are proposed. Examples of analysis of the structural elements of a railway transport system are used to illustrate the efficiency of the proposed approach.

  13. A Hierarchical Grouping of Great Educators

    Science.gov (United States)

    Barker, Donald G.

    1977-01-01

    Great educators of history were categorized on the basis of their: aims of education, fundamental ideas, and educational theories. They were classed by Ward's method of hierarchical analysis into six groupings: Socrates, Ausonius, Jerome, Abelard; Quintilian, Origen, Melanchthon, Ascham, Loyola; Alciun, Comenius; Vittorino, Basedow, Pestalozzi,…

  14. Ultrafast Hierarchical OTDM/WDM Network

    Directory of Open Access Journals (Sweden)

    Hideyuki Sotobayashi

    2003-12-01

Full Text Available An ultrafast hierarchical OTDM/WDM network is proposed for the future core network. We review its enabling technologies: C- and L-wavelength-band generation, OTDM-WDM mutual multiplexing format conversions, and ultrafast OTDM wavelength-band conversions.

  15. Hierarchical fuzzy identification of MR damper

    Science.gov (United States)

    Wang, Hao; Hu, Haiyan

    2009-07-01

Magneto-rheological (MR) dampers have recently found many successful applications in civil engineering and numerous areas of mechanical engineering. When an MR damper is to be used for vibration suppression, an inevitable problem is to determine the input voltage that yields the desired restoring force determined by the control law. This is the so-called inverse problem of MR dampers and is always an obstacle in the application of MR dampers to vibration control. It is extremely difficult to obtain the inverse model of an MR damper because MR dampers are highly nonlinear and hysteretic. When identifying the inverse model of an MR damper with a simple fuzzy system, the curse of dimensionality may arise: identification takes much more time, and the inverse model may not even be identifiable. This paper presents a two-layer hierarchical fuzzy system, that is, a two-layer hierarchical ANFIS, to deal with the curse of dimensionality in the fuzzy identification of MR dampers and to identify the inverse model. Data used for training the model are generated from numerical simulation of nonlinear differential equations. The numerical simulation proves that the proposed hierarchical fuzzy system can model the inverse behavior of the MR damper much more quickly than a simple fuzzy system, without any reduction in identification precision. Such a hierarchical ANFIS is well suited to complicated systems and can also be used for system identification and control of such systems.

  16. Managing Clustered Data Using Hierarchical Linear Modeling

    Science.gov (United States)

    Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.

    2012-01-01

    Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…

  17. Equivalence Checking of Hierarchical Combinational Circuits

    DEFF Research Database (Denmark)

    Williams, Poul Frederick; Hulgaard, Henrik; Andersen, Henrik Reif

    1999-01-01

    This paper presents a method for verifying that two hierarchical combinational circuits implement the same Boolean functions. The key new feature of the method is its ability to exploit the modularity of circuits to reuse results obtained from one part of the circuits in other parts. We demonstrate...... our method on large adder and multiplier circuits....
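For small circuits, the underlying question of whether two combinational circuits implement the same Boolean function can be checked by brute-force enumeration (the paper's method, which exploits circuit modularity, scales far beyond this; the full-adder example is ours):

```python
from itertools import product

def equivalent(f, g, n_inputs):
    # Brute-force check that two combinational circuits implement the same
    # Boolean function by enumerating all 2^n input vectors.
    return all(f(*bits) == g(*bits) for bits in product([0, 1], repeat=n_inputs))

def sum1(a, b, cin):
    return a ^ b ^ cin           # XOR-gate implementation of a full adder's sum

def sum2(a, b, cin):
    return (a + b + cin) % 2     # arithmetic implementation of the same function
```

Enumeration is exponential in the input count, which is precisely why the paper's reuse of results across circuit modules matters for large adders and multipliers.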

  18. Synthesis of novel hierarchical ZSM-5 monoliths and their application in trichloroethylene removal

    Institute of Scientific and Technical Information of China (English)

    João Pires; Ana C.Fernandes; Divakar Duraiswami

    2014-01-01

A self-supporting ZSM-5 monolith with hierarchical porosity was prepared using polyurethane foam (PUF) as a structural template and a hydrothermal synthesis procedure. The synthesized monolith was characterized and investigated for the adsorption and catalytic oxidation of trichloroethylene (TCE). Adsorption of TCE was studied gravimetrically and oxidation of TCE was studied using a vapor-phase down-flow reactor. The monolithic ZSM-5 displayed good sorption properties and completely oxidized TCE. Conversion levels of 50% and 90% were achieved at temperatures reduced by ~50 °C compared with the conversion temperatures obtained from the powder counterparts. Besides its activity towards TCE adsorption and oxidation, the monolith was stable and enhanced diffusion, greatly reducing pressure drops owing to its hierarchical porous nature.

  19. Rapid fabrication of hierarchically structured supramolecular nanocomposite thin films in one minute

    Science.gov (United States)

    Xu, Ting; Kao, Joseph

    2016-11-08

    Functional nanocomposites containing nanoparticles of different chemical compositions may exhibit new properties to meet demands for advanced technology. It is imperative to simultaneously achieve hierarchical structural control and to develop rapid, scalable fabrication to minimize degradation of nanoparticle properties and for compatibility with nanomanufacturing. The assembly kinetics of supramolecular nanocomposite in thin films is governed by the energetic cost arising from defects, the chain mobility, and the activation energy for inter-domain diffusion. By optimizing only one parameter, the solvent fraction in the film, the assembly kinetics can be precisely tailored to produce hierarchically structured thin films of supramolecular nanocomposites in approximately one minute. Moreover, the strong wavelength dependent optical anisotropy in the nanocomposite highlights their potential applications for light manipulation and information transmission. The present invention opens a new avenue in designing manufacture-friendly continuous processing for the fabrication of functional nanocomposite thin films.

  20. Fusing heterogeneous data for the calibration of molecular dynamics force fields using hierarchical Bayesian models.

    Science.gov (United States)

    Wu, Stephen; Angelikopoulos, Panagiotis; Tauriello, Gerardo; Papadimitriou, Costas; Koumoutsakos, Petros

    2016-12-28

    We propose a hierarchical Bayesian framework to systematically integrate heterogeneous data for the calibration of force fields in Molecular Dynamics (MD) simulations. Our approach enables the fusion of diverse experimental data sets of the physico-chemical properties of a system at different thermodynamic conditions. We demonstrate the value of this framework for the robust calibration of MD force-fields for water using experimental data of its diffusivity, radial distribution function, and density. In order to address the high computational cost associated with the hierarchical Bayesian models, we develop a novel surrogate model based on the empirical interpolation method. Further computational savings are achieved by implementing a highly parallel transitional Markov chain Monte Carlo technique. The present method bypasses possible subjective weightings of the experimental data in identifying MD force-field parameters.
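The precision-weighted fusion at the core of such hierarchical Bayesian models can be sketched for the conjugate Gaussian case (an illustrative simplification; the paper calibrates MD force fields with surrogate models and transitional MCMC, far richer machinery than this):

```python
def fuse_datasets(datasets, tau2):
    # Gaussian hierarchical model: each dataset mean theta_i ~ N(mu, tau2),
    # observations y ~ N(theta_i, s2_i).  The hyper-mean mu is estimated by
    # precision-weighting the dataset means; tau2 controls how strongly
    # heterogeneous datasets are allowed to disagree.
    num = den = 0.0
    for ys, s2 in datasets:
        n = len(ys)
        ybar = sum(ys) / n
        var_of_mean = s2 / n + tau2     # marginal variance of ybar around mu
        num += ybar / var_of_mean
        den += 1.0 / var_of_mean
    return num / den
```

Larger datasets and tighter measurements receive larger weights automatically, which is how the hierarchical framework avoids the subjective data weightings mentioned in the record.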

  1. Facile fabrication of hierarchical SnO(2) microspheres film on transparent FTO glass.

    Science.gov (United States)

    Wang, Yu-Fen; Lei, Bing-Xin; Hou, Yuan-Fang; Zhao, Wen-Xia; Liang, Chao-Lun; Su, Cheng-Yong; Kuang, Dai-Bin

    2010-02-15

Hierarchical SnO(2) microspheres consisting of nanosheets were successfully prepared on fluorine-doped tin oxide (FTO) glass substrates via a facile hydrothermal synthesis process. The as-prepared novel microsphere films were characterized in detail by X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM), and UV-vis diffuse reflectance spectroscopy. Moreover, SnO(2) nanoparticles 30-80 nm in size covering the surface of the nanosheets/microspheres were also obtained by optimizing the hydrothermal reaction temperature, time, or volume ratio of acetylacetone/H(2)O. The detailed investigations disclose that experimental parameters such as acetylacetone, NH(4)F, and the seed layer play important roles in determining the morphology of the hierarchical SnO(2) microspheres on the FTO glass. A formation process for the SnO(2) microspheres is also proposed based on observations of time-dependent samples.

  2. Hierarchically structured photonic crystals for integrated chemical separation and colorimetric detection.

    Science.gov (United States)

    Fu, Qianqian; Zhu, Biting; Ge, Jianping

    2017-02-16

    A SiO2 colloidal photonic crystal film with a hierarchical porous structure is fabricated to demonstrate, for the first time, integrated separation and colorimetric detection of chemical species. This new photonic-crystal-based thin-layer chromatography (TLC) process requires no dyeing, developing, or UV irradiation, unlike traditional TLC. The assembly of mesoporous SiO2 particles via a supersaturation-induced precipitation process forms uniform, hierarchical photonic crystals with micron-scale cracks and mesopores, which accelerate the diffusion of developers and intensify adsorption/desorption between the analytes and the silica for efficient separation. Meanwhile, chemical substances infiltrating the voids of the photonic crystals increase the refractive index and produce a large structural-color contrast with the unloaded part, so that sample spots can be recognized directly with the naked eye before and after separation.

  3. Construction of hierarchically porous metal-organic frameworks through linker labilization

    Science.gov (United States)

    Yuan, Shuai; Zou, Lanfang; Qin, Jun-Sheng; Li, Jialuo; Huang, Lan; Feng, Liang; Wang, Xuan; Bosch, Mathieu; Alsalme, Ali; Cagin, Tahir; Zhou, Hong-Cai

    2017-05-01

    A major goal of metal-organic framework (MOF) research is the expansion of pore size and volume. Although many approaches have been attempted to increase the pore size of MOF materials, it is still a challenge to construct MOFs with precisely customized pore apertures for specific applications. Herein, we present a new method, namely linker labilization, to increase MOF porosity and pore size, giving rise to hierarchical-pore architectures. Microporous MOFs with robust metal nodes and pro-labile linkers were initially synthesized. Mesopores were subsequently created as crystal defects through the splitting of a pro-labile linker and the removal of the linker fragments by acid treatment. We demonstrate that the linker labilization method can create controllable hierarchical porous structures in stable MOFs, which facilitates the diffusion and adsorption of guest molecules and improves the performance of MOFs in adsorption and catalysis.

  4. Rapid fabrication of hierarchically structured supramolecular nanocomposite thin films in one minute

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Ting; Kao, Joseph

    2016-11-08

    Functional nanocomposites containing nanoparticles of different chemical compositions may exhibit new properties to meet demands for advanced technology. It is imperative to simultaneously achieve hierarchical structural control and to develop rapid, scalable fabrication, both to minimize degradation of nanoparticle properties and for compatibility with nanomanufacturing. The assembly kinetics of supramolecular nanocomposites in thin films are governed by the energetic cost arising from defects, the chain mobility, and the activation energy for inter-domain diffusion. By optimizing only one parameter, the solvent fraction in the film, the assembly kinetics can be precisely tailored to produce hierarchically structured thin films of supramolecular nanocomposites in approximately one minute. Moreover, the strong wavelength-dependent optical anisotropy in the nanocomposites highlights their potential applications in light manipulation and information transmission. The present invention opens a new avenue in designing manufacture-friendly continuous processing for the fabrication of functional nanocomposite thin films.

  5. NHRPA: a novel hierarchical routing protocol algorithm for wireless sensor networks

    Institute of Scientific and Technical Information of China (English)

    CHENG Hong-bing; YANG Geng; HU Su-jun

    2008-01-01

    Considering the severe resource constraints and security threats of wireless sensor networks (WSN), this article proposes a novel hierarchical routing protocol algorithm. The proposed algorithm can adopt a suitable routing technique for each node according to its distance to the base station, the density of the node distribution, and the residual energy of the nodes. The proposed algorithm is compared with simple directed diffusion routing, cluster-based routing mechanisms, and a simple hierarchical routing protocol algorithm through comprehensive analysis and simulation, in terms of energy usage, packet latency, and security in the presence of node-compromise attacks. The results show that the proposed routing protocol algorithm is more efficient for wireless sensor networks.
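
    The adaptive idea can be caricatured as a per-node strategy selector (a hypothetical sketch: the thresholds and strategy names are invented for illustration, not taken from the protocol):

```python
# Choose a routing strategy per node from its distance to the base station,
# local node density, and residual energy, echoing the adaptive selection
# described above. All thresholds are illustrative, not from the paper.

def choose_strategy(dist, density, energy):
    if dist < 50:                      # near the sink: transmit directly
        return "direct"
    if energy > 0.5 and density > 10:  # strong, well-connected node: lead a cluster
        return "cluster-head"
    return "multi-hop"                 # otherwise relay through neighbours

nodes = [(20, 5, 0.9), (120, 15, 0.8), (200, 3, 0.2)]
print([choose_strategy(*n) for n in nodes])
# -> ['direct', 'cluster-head', 'multi-hop']
```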

  6. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children' s Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)

    2011-03-15

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  7. Generic hierarchical engine for mask data preparation

    Science.gov (United States)

    Kalus, Christian K.; Roessl, Wolfgang; Schnitker, Uwe; Simecek, Michal

    2002-07-01

    Electronic layouts are usually flattened on their path from the hierarchical source downstream to the wafer. Mask data preparation has long been identified as a severe bottleneck. Data volumes are not only doubling every year along the ITRS roadmap; with the advent of optical proximity correction and phase-shifting masks, they are escalating to unmanageable levels. Hierarchical treatment is one of the most powerful means of keeping memory and CPU consumption within reasonable bounds, yet only recently has this technique received wider attention. Mask data preparation is the most critical area calling for a sound infrastructure to reduce the handling problem. Other applications, such as large-area simulation and manufacturing rule checking (MRC), are also gaining more and more attention; all of them would profit from a generic engine capable of efficiently treating hierarchical data. In this paper we present a generic engine for hierarchical treatment which solves the major problem: steady transitions along cell borders. Several alternatives exist for walking through the hierarchy tree, and to date they have not been thoroughly investigated. One is a bottom-up approach that treats cells starting with the most elementary ones; the other is a top-down approach, which lends itself to creating a new hierarchy tree. In addition, since layouts vary widely in their variety, degree of hierarchy, and quality, a generic engine has to make intelligent decisions when exploding the hierarchy tree. Several applications will be shown, in particular how far the limits can be pushed with the current hierarchical engine.

  8. Hierarchical organisation in perception of orientation.

    Science.gov (United States)

    Spinelli, D; Antonucci, G; Daini, R; Martelli, M L; Zoccolotti, P

    1999-01-01

    According to Rock [1990, in The Legacy of Solomon Asch (Hillsdale, NJ: Lawrence Erlbaum Associates)], hierarchical organisation of perception describes cases in which the perceived orientation of an object is affected by the immediately surrounding elements in the visual field. Various experiments were performed to study the hierarchical organisation of orientation perception. In most of them the rod-and-frame illusion (RFI: a change of the apparent vertical measured on a central rod surrounded by a tilted frame) was measured in the presence/absence of a second, inner frame. The first three experiments showed that, when the inner frame is vertical, the direction and size of the illusion are consistent with expectations based on the hierarchical-organisation hypothesis. An analysis of published and unpublished data collected on a large number of subjects showed that orientational hierarchical effects are independent of the absolute size of the RFI. In experiments 4 to 7 we examined the perceptual conditions of the inner stimulus (enclosure, orientation, and presence of luminance borders) critical for obtaining a hierarchical organisation effect. Although an inner vertical square was effective in reducing the illusion (experiment 3), an inner circle enclosing the rod was ineffective (experiment 4). This indicates that a definite orientation is necessary to modulate the illusion. However, orientational information provided by a vertical or horizontal rectangle presented near the rod, but not enclosing it, did not modulate the RFI (experiment 5). This suggests that the presence of a figure with oriented contours enclosing the rod is critical. In experiments 6 and 7 we studied whether the presence of luminance borders is important or whether the inner upright square might also be effective if made of subjective contours. When the subjective contour figure was salient and the observers perceived it clearly, its effectiveness in modulating the RFI was comparable to that observed with

  9. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety

    DEFF Research Database (Denmark)

    Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.;

    2007-01-01

    and unique hierarchical linear models allow assessments of the complex relationships among outcome measures of longitudinal data sets. These HLM applications suggest that chronic cigarette smoking modulates the temporal dynamics of brain structural and cognitive changes in alcoholics during prolonged......Background: Hierarchical linear modeling (HLM) can reveal complex relationships between longitudinal outcome measures and their covariates under proper consideration of potentially unequal error variances. We demonstrate the application of HLM to the study of magnetic resonance imaging (MRI...... time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables. Results: Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict...
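
    The two-level structure of an HLM can be sketched in plain NumPy (a toy two-stage fit, not a full mixed-model estimation; the covariate, sample sizes, and effect values are invented to mirror the "smoking modulates baseline and change" idea):

```python
import numpy as np

# Level 1: each subject's outcome is modeled over time.
# Level 2: subject-specific intercepts are related to a covariate
# (here a synthetic "smoker" indicator). All numbers are illustrative.

rng = np.random.default_rng(3)
n_subj, times = 30, np.arange(4.0)
smoker = rng.integers(0, 2, n_subj)
# True generative model: intercept depends on smoking; common slope over time.
intercepts = 10.0 - 2.0 * smoker + 0.3 * rng.standard_normal(n_subj)
y = intercepts[:, None] + 0.5 * times + 0.2 * rng.standard_normal((n_subj, len(times)))

# Level 1: per-subject OLS of y on time -> (intercept_i, slope_i).
X = np.column_stack([np.ones_like(times), times])
coef = np.linalg.lstsq(X, y.T, rcond=None)[0]    # shape (2, n_subj)
b0, b1 = coef

# Level 2: regress the subject intercepts on the smoking covariate.
Z = np.column_stack([np.ones(n_subj), smoker])
gamma = np.linalg.lstsq(Z, b0, rcond=None)[0]
print(round(float(gamma[1]), 1))  # recovers the smoking effect, close to -2.0
```

    A full HLM would estimate both levels jointly with per-level error variances; the two-stage fit above only illustrates the hierarchy of the model.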

  10. Enhancement of Unequal Error Protection Properties of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Inbar Fijalkow

    2007-12-01

    Full Text Available It has been widely recognized in the literature that irregular low-density parity-check (LDPC) codes naturally exhibit an unequal error protection (UEP) behavior. In this paper, we propose a general method to emphasize and control the UEP properties of LDPC codes. The method is based on a hierarchical optimization of the bit-node irregularity profile for each sensitivity class within the codeword, maximizing the average bit-node degree while guaranteeing a minimum degree as high as possible. We show that this optimization strategy is efficient, since the codes we optimize show better UEP capabilities than codes optimized for the additive white Gaussian noise channel.
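
    The hierarchical allocation can be caricatured as a greedy assignment (a toy sketch, not the authors' density-evolution optimization; class sizes and degrees are invented):

```python
# Classes are processed from most to least sensitive; each takes the highest
# bit-node degrees still available from a fixed degree multiset (the overall
# edge budget), so more sensitive classes get a larger average degree.

def allocate_degrees(class_sizes, degree_pool):
    """class_sizes: bits per sensitivity class, most sensitive first.
    degree_pool: one bit-node degree per bit (the irregularity profile)."""
    pool = sorted(degree_pool, reverse=True)
    profiles, start = [], 0
    for n in class_sizes:
        profiles.append(pool[start:start + n])
        start += n
    return profiles

profiles = allocate_degrees([3, 4, 5], [8, 8, 6, 4, 3, 3, 3, 2, 2, 2, 2, 2])
avg = [sum(p) / len(p) for p in profiles]
print(avg)  # average bit-node degree decreases with sensitivity class
```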

  12. Finite-difference schemes for anisotropic diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Es, Bram van, E-mail: es@cwi.nl [Centrum Wiskunde and Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM (Netherlands); Koren, Barry [Eindhoven University of Technology (Netherlands); Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM (Netherlands)

    2014-09-01

    In fusion plasmas diffusion tensors are extremely anisotropic due to the high temperature and large magnetic field strength. This causes diffusion, heat conduction, and viscous momentum loss to be effectively aligned with the magnetic field lines. This alignment leads to different values for the respective diffusive coefficients in the magnetic field direction and in the perpendicular direction, to the extent that heat diffusion coefficients can be up to 10{sup 12} times larger in the parallel direction than in the perpendicular direction. This anisotropy puts stringent requirements on the numerical methods used to approximate the MHD equations, since any misalignment of the grid may cause the perpendicular diffusion to be polluted by the numerical error in approximating the parallel diffusion. Currently the common approach is to apply magnetic-field-aligned coordinates, an approach that automatically takes care of the directionality of the diffusive coefficients. This approach runs into problems at X-points and at points where there is magnetic reconnection, since these cause local non-alignment. It is therefore useful to consider numerical schemes that are tolerant to misalignment of the grid with the magnetic field lines, both to improve existing methods and to help open the possibility of applying regular non-aligned grids. To investigate this, in this paper several discretization schemes are developed and applied to the anisotropic heat diffusion equation on a non-aligned grid.
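
    A minimal non-aligned test case of the kind described can be sketched with a standard central-difference discretization of div(D grad T) (not one of the paper's schemes; the grid size, anisotropy ratio, and rotation angle are illustrative):

```python
import numpy as np

# Explicit central-difference step for T_t = div(D grad T) with a constant
# anisotropic tensor D whose principal axis is rotated by angle `a` relative
# to the grid (a non-aligned configuration). D_par >> D_perp mimics strong
# parallel heat conduction. Periodic boundaries via np.roll.

def step(T, D, h, dt):
    dxx = (np.roll(T, -1, 1) - 2 * T + np.roll(T, 1, 1)) / h**2
    dyy = (np.roll(T, -1, 0) - 2 * T + np.roll(T, 1, 0)) / h**2
    dxy = (np.roll(np.roll(T, -1, 0), -1, 1) - np.roll(np.roll(T, -1, 0), 1, 1)
           - np.roll(np.roll(T, 1, 0), -1, 1) + np.roll(np.roll(T, 1, 0), 1, 1)) / (4 * h**2)
    return T + dt * (D[0, 0] * dxx + 2 * D[0, 1] * dxy + D[1, 1] * dyy)

a, D_par, D_perp = np.pi / 6, 1e3, 1.0
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
D = R @ np.diag([D_par, D_perp]) @ R.T     # rotated diffusion tensor

n, h = 64, 1.0 / 64
T = np.zeros((n, n)); T[n // 2, n // 2] = 1.0
dt = 0.1 * h**2 / (4 * D_par)              # conservative explicit stability bound
for _ in range(200):
    T = step(T, D, h, dt)
print(T.sum())  # the periodic scheme conserves total heat
```

    With D_par/D_perp = 10^3 the cross-derivative term dominates on a non-aligned grid, which is exactly where discretization error can pollute the small perpendicular diffusion.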

  13. A Multi-layer, Hierarchical Information Management System for the Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Du, Pengwei; Paulson, Patrick R.; Greitzer, Frank L.; Guo, Xinxin; Hadley, Mark D.

    2011-10-10

    This paper presents the modeling approach, methodologies, and initial results of setting up a multi-layer, hierarchical information management system (IMS) for the smart grid. The IMS allows its users to analyze the data collected by multiple control and communication networks to characterize the states of the smart grid. Abnormal, corrupted, or erroneous measurement data and outliers are detected and analyzed to identify whether they are caused by random equipment failures, unintentional human errors, or deliberate tampering attempts. Data collected from different information networks are cross-checked for data integrity based on redundancy, dependency, correlation, or cross-correlations, which reveal the interdependency between data sets. A hierarchically structured reasoning mechanism is used to rank the possible causes of an event, helping system operators respond proactively or providing mitigation recommendations to remove or neutralize threats. The model performs satisfactorily in identifying the cause of an event and significantly reduces the need to process the myriad data collected.

  14. Statistical mechanical analysis of a hierarchical random code ensemble in signal processing

    Energy Technology Data Exchange (ETDEWEB)

    Obuchi, Tomoyuki [Department of Earth and Space Science, Faculty of Science, Osaka University, Toyonaka 560-0043 (Japan); Takahashi, Kazutaka [Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551 (Japan); Takeda, Koujin, E-mail: takeda@sp.dis.titech.ac.jp [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)

    2011-02-25

    We study a random code ensemble with a hierarchical structure, which is closely related to the generalized random energy model with discrete energy values. Based on this correspondence, we analyze the hierarchical random code ensemble using the replica method in two situations: lossy data compression and channel coding. In both situations, the exponents of the large-deviation analysis characterizing the performance of the ensemble, namely the distortion rate of lossy data compression and the error exponent of channel coding in Gallager's formalism, are accessible through a generating function of the generalized random energy model. We argue that the transitions of those exponents observed in preceding work can be interpreted as phase transitions with respect to the replica number. We also show that replica symmetry breaking plays an essential role in these transitions.

  15. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01

    (Only OCR fragments of the report survive.) Recoverable content includes a contents listing (graphical data analysis; general statistics and confidence intervals; goodness-of-fit test) and a table of MTTF per system technology: CMUA PDP-10 (ECL, parity), 44 hrs; Cm* LSI-11 (NMOS, diagnostics), 800-1600 hrs. A log summary records 18445 total entries over a 1542-hr span beginning 17-Feb-79.

  16. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments, and comparisons with similar machines using classic approaches complement the descriptions.
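
    The MEE idea is usually implemented via a Parzen-window estimate of Renyi's quadratic entropy of the errors; a minimal sketch (kernel width and data are illustrative):

```python
import numpy as np

def information_potential(errors, sigma=0.5):
    """Parzen-window estimate of the 'information potential' of the errors;
    maximizing it minimizes Renyi's quadratic entropy of the error variable."""
    d = errors[:, None] - errors[None, :]
    return float(np.mean(np.exp(-d**2 / (4 * sigma**2))))

rng = np.random.default_rng(0)
concentrated = 0.01 * rng.standard_normal(50)   # tightly clustered errors
spread = np.linspace(-2.0, 2.0, 50)             # widely spread errors
print(information_potential(concentrated) > information_potential(spread))
```

    A classifier trained under MEE adjusts its weights to increase the information potential of its errors, concentrating the error distribution rather than merely shrinking its variance as MSE does.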

  17. A general strategy to determine the congruence between a hierarchical and a non-hierarchical classification

    Directory of Open Access Journals (Sweden)

    Marín Ignacio

    2007-11-01

    Full Text Available Abstract Background Classification procedures are widely used in phylogenetic inference, the analysis of expression profiles, the study of biological networks, etc. Many algorithms have been proposed to establish the similarity between two different classifications of the same elements. However, methods to determine significant coincidences between hierarchical and non-hierarchical partitions are still poorly developed, despite the fact that the search for such coincidences is implicit in many analyses of massive data. Results We describe a novel strategy to compare a hierarchical and a dichotomic non-hierarchical classification of elements, in order to find clusters in a hierarchical tree in which elements of a given "flat" partition are overrepresented. The key improvement of our strategy with respect to previous methods is the use of permutation analyses of ranked clusters to determine whether regions of the dendrograms present a significant enrichment. We show that this method is more sensitive than previously developed strategies and how it can be applied to several real cases, including microarray and interactome data. In particular, we use it to compare a hierarchical representation of the yeast mitochondrial interactome and a catalogue of known mitochondrial protein complexes, demonstrating a high level of congruence between those two classifications. We also discuss extensions of this method to other cases which are conceptually related. Conclusion Our method is highly sensitive and outperforms previously described strategies. A PERL script that implements it is available at http://www.uv.es/~genomica/treetracker.
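
    The core permutation idea can be sketched for a single cluster (a toy version, not the authors' PERL implementation; the element counts are invented):

```python
import random

def enrichment_pvalue(cluster, flat_labels, elements, n_perm=10_000, seed=1):
    """One-sided permutation p-value for overrepresentation of flat-class
    members (label 1) inside one cluster of the hierarchical tree."""
    rng = random.Random(seed)
    labels = [flat_labels[e] for e in elements]
    index = {e: i for i, e in enumerate(elements)}
    cluster_idx = [index[e] for e in cluster]
    observed = sum(flat_labels[e] for e in cluster)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(labels)  # shuffle the flat partition over all elements
        if sum(labels[i] for i in cluster_idx) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

elements = list(range(40))
flat = {e: 1 if e < 10 else 0 for e in elements}       # 10 "class A" elements
p = enrichment_pvalue(set(range(8)), flat, elements)   # cluster of 8 class-A members
print(p < 0.01)  # the cluster is significantly enriched
```

    The published method additionally ranks clusters across the whole dendrogram and corrects for the many nested tests; the sketch only shows the per-cluster permutation step.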

  18. A transformation approach to modelling multi-modal diffusions

    DEFF Research Database (Denmark)

    Forman, Julie Lyng; Sørensen, Michael

    2014-01-01

    when the diffusion is observed with additional measurement error. The new approach is applied to molecular dynamics data in the form of a reaction coordinate of the small Trp-zipper protein, from which the folding and unfolding rates of the protein are estimated. Because the diffusion coefficient...... is state-dependent, the new models provide a better fit to this type of protein folding data than the previous models with a constant diffusion coefficient, particularly when the effect of errors with a short time-scale is taken into account....

  19. Partitioning,Automation and Error Recovery in the Control and Monitoring System of an LHC Experiment

    Institute of Scientific and Technical Information of China (English)

    C.Gaspar

    2001-01-01

    The Joint Controls Project (JCOP) is a collaboration between CERN and the four LHC experiments to find and implement common solutions for their control and monitoring systems. As part of this project, an Architecture Working Group was set up to study the requirements and devise an architectural model that would suit the four experiments. Many issues were studied by this working group: alarm handling, access control, hierarchical control, etc. This paper reports on the specific issue of hierarchical control, in particular partitioning, automation, and error recovery.

  20. Superior electrode performance of mesoporous hollow TiO2 microspheres through efficient hierarchical nanostructures

    Science.gov (United States)

    Zhang, Feng; Zhang, Yu; Song, Shuyan; Zhang, Hongjie

    2011-10-01

    Mesoporous hollow TiO2 microspheres with controlled size and hierarchical nanostructures are designed via a process employing in situ template-assisted and hydrothermal methods. The results show that the hollow microspheres, composed of mesoporous nanospheres, possess a very stable reversible capacity of 184 mAh g-1 at 0.25C and exhibit extremely high power capability, 122 mAh g-1 at the high rate of 10C. The superior high-rate and high-capacity performance of the sample is attributed to its efficient hierarchical nanostructure. The hollow structure shortens the diffusion length for lithium ions in the microspheres. The large mesoporous channels between the mesoporous nanospheres provide an easily accessed system that facilitates electrolyte transport and lithium-ion diffusion within the electrode materials. The electrolyte, flooding the mesoporous channels, also leads to a high electrolyte/electrode contact area, facilitating transport of lithium ions across the electrolyte/electrode interface. The small mesopores in the mesoporous nanospheres allow the electrolyte and lithium ions to diffuse further into the interior of the electrode materials and further increase the electrolyte/electrode contact area. The small nanoparticles also ensure high reversible capacity.

  1. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality that is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, lack of a sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software and integrated CTC-specific display features is required. Many workstations include computer-aided detection software designed to decrease errors of detection by detecting and displaying polyp candidates to the reader for evaluation. There are several pitfalls that may result in false-negative and false-positive reader interpretations. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  2. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  3. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  4. On the geostatistical characterization of hierarchical media

    Science.gov (United States)

    Neuman, Shlomo P.; Riva, Monica; Guadagnini, Alberto

    2008-02-01

    The subsurface consists of porous and fractured materials exhibiting a hierarchical geologic structure, which gives rise to systematic and random spatial and directional variations in hydraulic and transport properties on a multiplicity of scales. Traditional geostatistical moment analysis allows one to infer the spatial covariance structure of such hierarchical, multiscale geologic materials on the basis of numerous measurements on a given support scale across a domain or "window" of a given length scale. The resultant sample variogram often appears to fit a stationary variogram model with constant variance (sill) and integral (spatial correlation) scale. In fact, some authors, who recognize that hierarchical sedimentary architecture and associated log hydraulic conductivity fields tend to be nonstationary, nevertheless associate them with stationary "exponential-like" transition probabilities and variograms, respectively, the latter being a consequence of the former. We propose that (1) the apparent ability of stationary spatial statistics to characterize the covariance structure of nonstationary hierarchical media is an artifact stemming from the finite size of the windows within which geologic and hydrologic variables are ubiquitously sampled, and (2) the artifact is eliminated upon characterizing the covariance structure of such media with the aid of truncated power variograms, which represent stationary random fields obtained upon sampling a nonstationary fractal over finite windows. To support our opinion, we note that truncated power variograms arise formally when a hierarchical medium is sampled jointly across all geologic categories and scales within a window; cite direct evidence that geostatistical parameters (variance and integral scale) inferred on the basis of traditional variograms vary systematically with support and window scales; demonstrate the ability of truncated power models to capture these variations in terms of a few scaling parameters
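
    The window-size artifact is easy to reproduce with a toy nonstationary field (a plain random walk, H = 1/2, standing in for the hierarchical media discussed; the window lengths are illustrative):

```python
import numpy as np

# A random walk (Brownian motion) is nonstationary, yet inside any finite
# window it yields a well-behaved sample variance (an "apparent sill").
# That sill grows with the window length, so the inferred stationary
# parameters are artifacts of the sampling window, as argued above.

rng = np.random.default_rng(7)
walk = np.cumsum(rng.standard_normal(200_000))

def apparent_sill(series, window):
    sills = [np.var(series[s:s + window])
             for s in range(0, len(series) - window + 1, window)]
    return float(np.mean(sills))

small, large = apparent_sill(walk, 1_000), apparent_sill(walk, 16_000)
print(large > 4 * small)  # sill scales roughly linearly with window length
```

    For Brownian motion the within-window variance grows like the window length itself, so the "sill" and "integral scale" one would fit are properties of the window, not of the medium.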

  5. In-situ preparation of Fe2O3 hierarchical arrays on stainless steel substrate for high efficient catalysis

    Science.gov (United States)

    Yang, Zeheng; Wang, Kun; Shao, Zongming; Tian, Yuan; Chen, Gongde; Wang, Kai; Chen, Zhangxian; Dou, Yan; Zhang, Weixin

    2017-02-01

    Hierarchical array catalysts with micro/nano structures on substrates not only possess high reactivity, owing to their large surface area and suitable interfaces, but also intensify mass transfer by shortening the diffusion paths of both reactants and products, giving high catalytic efficiency. Herein, we demonstrate the fabrication of Fe2O3 hierarchical arrays grown on stainless-steel substrates via in-situ hydrothermal chemical oxidation followed by heat treatment in an N2 atmosphere. As a Fenton-like catalyst, the Fe2O3 hierarchical arrays exhibit excellent catalytic activity and cycling performance for methylene blue (MB) dye degradation in aqueous solution in the presence of H2O2. The Fe2O3 catalyst, with its unique hierarchical structure and efficient transport channels, effectively activates H2O2 to generate a large quantity of •OH radicals and greatly promotes the reaction kinetics between MB and the •OH radicals. Immobilization of the hierarchical array catalysts on stainless steel prevents particle agglomeration and facilitates the recovery and reuse of the catalysts, which promises applications in wastewater remediation.

  6. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of the Robinson-Ursescu theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved, and in particular a positive answer to Li and Singer's conjecture is given under a weaker assumption than that required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  7. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  8. Feature Referenced Error Correction Apparatus.

    Science.gov (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  9. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    Our aim is to provide an organizational schema for systematic error and random error in estimating causal measures, clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
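    The organizational schema described in this abstract can be summarized as a small nested mapping (a sketch of the taxonomy only; the identifier names are ours, following the abstract's terminology):

```python
# Organizational schema for errors in causal inference, per the abstract:
# systematic error splits into structural and analytic error, and
# random error has four major sources.
error_schema = {
    "systematic_error": {
        "structural_error": {
            # Defined via counterfactual reasoning.
            "nonexchangeability_bias": ["confounding_bias", "selection_bias"],
            "measurement_bias": [],
        },
        # Small-sample estimator properties (vanishing asymptotically),
        # misspecified models, and inappropriate statistical methods.
        "analytic_error": [
            "small_sample_properties",
            "model_misspecification",
            "inappropriate_methods",
        ],
    },
    "random_error": [
        "nondeterministic_counterfactuals",
        "sampling_variability",
        "exposure_event_generation",
        "measurement_variability",
    ],
}

print(len(error_schema["random_error"]))  # → 4
```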

  10. A top-down approach for fabricating free-standing bio-carbon supercapacitor electrodes with a hierarchical structure

    OpenAIRE

    Yingzhi Li; Qinghua Zhang; Junxian Zhang; Lei Jin; Xin Zhao; Ting Xu

    2015-01-01

    Biomass has delicate hierarchical structures, which inspired us to develop a cost-effective route to prepare electrode materials with rational nanostructures for use in high-performance storage devices. Here, we demonstrate a novel top-down approach for fabricating bio-carbon materials with stable structures and excellent diffusion pathways; this approach is based on carbonization with controlled chemical activation. The developed free-standing bio-carbon electrode exhibits a high specific ca...

  11. Dendrimer-like hybrid particles with tunable hierarchical pores

    Science.gov (United States)

    Du, Xin; Li, Xiaoyu; Huang, Hongwei; He, Junhui; Zhang, Xueji

    2015-03-01

    Dendrimer-like silica particles with a center-radial dendritic framework and a synergistic hierarchical porosity have attracted much attention due to their unique open three-dimensional superstructures with high accessibility to the internal surface areas; however, the delicate regulation of the hierarchical porosity has been difficult to achieve up to now. Herein, a series of dendrimer-like amino-functionalized silica particles with tunable hierarchical pores (HPSNs-NH2) were successfully fabricated by carefully regulating and optimizing the various experimental parameters in the ethyl ether emulsion systems via a one-pot sol-gel reaction. Interestingly, the simple adjustment of the stirring rate or reaction temperature was found to be an easy and effective route to achieve the controllable regulation towards center-radial large pore sizes from ca. 37-267 (148 +/- 45) nm to ca. 8-119 (36 +/- 21) nm for HPSNs-NH2 with particle sizes of 300-700 nm and from ca. 9-157 (52 +/- 28) nm to ca. 8-105 (30 +/- 16) nm for HPSNs-NH2 with particle sizes of 100-320 nm. To the best of our knowledge, this is the first successful regulation towards center-radial large pore sizes in such large ranges. The formation of HPSNs-NH2 may be attributed to the complex cross-coupling of two processes: the dynamic diffusion of ethyl ether molecules and the self-assembly of partially hydrolyzed TEOS species and CTAB molecules at the dynamic ethyl ether-water interface of uniform small quasi-emulsion droplets. 
Thus, these results regarding the elaborate regulation of center-radial large pores and particle sizes not only help us better understand the complicated self-assembly at the dynamic oil-water interface, but also provide a unique and ideal platform as carriers or supports for adsorption, separation, catalysis, biomedicine, and sensors.

  12. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general, that survey indicated that corporate firewalls were often enforcing poorly written rule-sets containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  13. Beta systems error analysis

    Science.gov (United States)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  14. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.

  15. Converting Multi-Shell and Diffusion Spectrum Imaging to High Angular Resolution Diffusion Imaging.

    Science.gov (United States)

    Yeh, Fang-Cheng; Verstynen, Timothy D

    2016-01-01

    Multi-shell and diffusion spectrum imaging (DSI) are becoming increasingly popular methods of acquiring diffusion MRI data in a research context. However, single-shell acquisitions, such as diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI), still remain the most common acquisition schemes in practice. Here we tested whether multi-shell and DSI data have conversion flexibility to be interpolated into corresponding HARDI data. We acquired multi-shell and DSI data on both a phantom and in vivo human tissue and converted them to HARDI. The correlation and difference between their diffusion signals, anisotropy values, diffusivity measurements, fiber orientations, connectivity matrices, and network measures were examined. Our analysis result showed that the diffusion signals, anisotropy, diffusivity, and connectivity matrix of the HARDI converted from multi-shell and DSI were highly correlated with those of the HARDI acquired on the MR scanner, with correlation coefficients around 0.8~0.9. The average angular error between converted and original HARDI was 20.7° at voxels with signal-to-noise ratios greater than 5. The network topology measures had less than 2% difference, whereas the average nodal measures had a percentage difference around 4~7%. In general, multi-shell and DSI acquisitions can be converted to their corresponding single-shell HARDI with high fidelity. This supports multi-shell and DSI acquisitions over HARDI acquisition as the scheme of choice for diffusion acquisitions.

  16. Converting Multi-Shell and Diffusion Spectrum Imaging to High Angular Resolution Diffusion Imaging

    Science.gov (United States)

    Yeh, Fang-Cheng; Verstynen, Timothy D.

    2016-01-01

    Multi-shell and diffusion spectrum imaging (DSI) are becoming increasingly popular methods of acquiring diffusion MRI data in a research context. However, single-shell acquisitions, such as diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI), still remain the most common acquisition schemes in practice. Here we tested whether multi-shell and DSI data have conversion flexibility to be interpolated into corresponding HARDI data. We acquired multi-shell and DSI data on both a phantom and in vivo human tissue and converted them to HARDI. The correlation and difference between their diffusion signals, anisotropy values, diffusivity measurements, fiber orientations, connectivity matrices, and network measures were examined. Our analysis result showed that the diffusion signals, anisotropy, diffusivity, and connectivity matrix of the HARDI converted from multi-shell and DSI were highly correlated with those of the HARDI acquired on the MR scanner, with correlation coefficients around 0.8~0.9. The average angular error between converted and original HARDI was 20.7° at voxels with signal-to-noise ratios greater than 5. The network topology measures had less than 2% difference, whereas the average nodal measures had a percentage difference around 4~7%. In general, multi-shell and DSI acquisitions can be converted to their corresponding single-shell HARDI with high fidelity. This supports multi-shell and DSI acquisitions over HARDI acquisition as the scheme of choice for diffusion acquisitions. PMID:27683539

  17. Application of hierarchical matrices for partial inverse

    KAUST Repository

    Litvinenko, Alexander

    2013-11-26

    In this work we combine hierarchical matrix techniques (Hackbusch, 1999) and domain decomposition methods to obtain fast and efficient algorithms for the solution of multiscale problems. This combination results in the hierarchical domain decomposition (HDD) method, which can be applied to the solution of multiscale problems. Multiscale problems are problems that require the use of different length scales. Using only the finest scale is very expensive, if not impossible, in terms of computational time and memory. Domain decomposition methods decompose the complete problem into smaller systems of equations corresponding to boundary value problems in subdomains. Then fast solvers can be applied to each subdomain. Subproblems in subdomains are independent, much smaller, and require fewer computational resources than the initial problem.

  18. First-passage phenomena in hierarchical networks

    CERN Document Server

    Tavani, Flavia

    2016-01-01

    In this paper we study Markov processes and related first passage problems on a class of weighted, modular graphs which generalize the Dyson hierarchical model. In these networks, the coupling strength between two nodes depends on their distance and is modulated by a parameter $\sigma$. We find that, in the thermodynamic limit, ergodicity is lost and the "distant" nodes cannot be reached. Moreover, for finite-sized systems, there exists a threshold value for $\sigma$ such that, when $\sigma$ is relatively large, the inhomogeneity of the coupling pattern prevails and "distant" nodes are hardly reached. The same analysis is also carried out for generic hierarchical graphs, where interactions involve $p$-plets ($p>2$) of nodes; we find that ergodicity is still broken in the thermodynamic limit, but no threshold value for $\sigma$ is evidenced, ultimately due to the slow growth of the network diameter with the size.

  19. An Hierarchical Approach to Big Data

    CERN Document Server

    Allen, M G; Boch, T; Durand, D; Oberto, A; Merin, B; Stoehr, F; Genova, F; Pineau, F-X; Salgado, J

    2016-01-01

    The increasing volumes of astronomical data require practical methods for data exploration, access and visualisation. The Hierarchical Progressive Survey (HiPS) is a HEALPix-based scheme that enables a multi-resolution approach to astronomy data, from the individual pixels up to the whole sky. We highlight the decisions and approaches that have been taken to make this scheme a practical solution for managing large volumes of heterogeneous data. Early implementors of this system have formed a network of HiPS nodes, with some 250 diverse data sets currently available and multiple mirror implementations for important data sets. This hierarchical approach can be adapted to expose Big Data in different ways. We describe how the ease of implementation and local customisation of the Aladin Lite embeddable HiPS visualiser have been key to promoting collaboration on HiPS.

  20. Non-homogeneous fractal hierarchical weighted networks.

    Science.gov (United States)

    Dong, Yujuan; Dai, Meifeng; Ye, Dandan

    2015-01-01

    A model of fractal hierarchical structures that share the property of non-homogeneous weighted networks is introduced. These networks can be completely and analytically characterized in terms of the involved parameters, i.e., the size of the original graph N_k and the non-homogeneous weight scaling factors r_1, r_2, ..., r_M. We also study the average weighted shortest path (AWSP), the average degree and the average node strength on the non-homogeneous hierarchical weighted networks. Moreover, the AWSP is scrupulously calculated. We show that the AWSP depends on the number of copies and the sum of all non-homogeneous weight scaling factors in the infinite network order limit.
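    As an illustration of the average weighted shortest path (AWSP) statistic studied in this record, the sketch below computes it with Dijkstra's algorithm on a small toy weighted hierarchy (the graph, its weights, and the scaling factor r = 0.5 are invented for illustration, not the paper's construction):

```python
import heapq

def dijkstra(graph, src):
    """Shortest weighted distances from src to every node."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def awsp(graph):
    """Average weighted shortest path over all ordered node pairs."""
    nodes = list(graph)
    total, pairs = 0.0, 0
    for u in nodes:
        dist = dijkstra(graph, u)
        for v in nodes:
            if v != u:
                total += dist[v]
                pairs += 1
    return total / pairs

# A two-level toy hierarchy: a hub (0) joined to two sub-hubs (1, 2),
# each with two leaves; edge weights shrink by a factor r = 0.5 at the
# lower level, mimicking weight rescaling across hierarchy levels.
edges = [(0, 1, 1.0), (0, 2, 1.0),
         (1, 3, 0.5), (1, 4, 0.5),
         (2, 5, 0.5), (2, 6, 0.5)]
graph = {n: [] for n in range(7)}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

print(round(awsp(graph), 3))  # → 1.714
```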

  1. Noise enhances information transfer in hierarchical networks.

    Science.gov (United States)

    Czaplicka, Agnieszka; Holyst, Janusz A; Sloot, Peter M A

    2013-01-01

    We study the influence of noise on information transmission in the form of packages shipped between nodes of hierarchical networks. Numerical simulations are performed for artificial tree networks, scale-free Ravasz-Barabási networks, as well as for a real network formed by email addresses of former Enron employees. Two types of noise are considered. One is related to packet dynamics and is responsible for a random component of packet paths. The second originates from random changes in the initial network topology. We find that information transfer can be enhanced by the noise. The system achieves optimal performance when both kinds of noise are tuned to specific values, which corresponds to the stochastic resonance phenomenon. There is a non-trivial synergy between the two noise components. We also found that hierarchical networks built of nodes of various degrees are more efficient in information transfer than trees with a fixed branching factor.

  2. Design of Hierarchical Structures for Synchronized Deformations

    Science.gov (United States)

    Seifi, Hamed; Javan, Anooshe Rezaee; Ghaedizadeh, Arash; Shen, Jianhu; Xu, Shanqing; Xie, Yi Min

    2017-01-01

    In this paper we propose a general method for creating a new type of hierarchical structures at any level in both 2D and 3D. A simple rule based on a rotate-and-mirror procedure is introduced to achieve multi-level hierarchies. These new hierarchical structures have remarkably few degrees of freedom compared to existing designs by other methods. More importantly, these structures exhibit synchronized motions during opening or closure, resulting in uniform and easily-controllable deformations. Furthermore, a simple analytical formula is found which can be used to avoid collision of units of the structure during the closing process. The novel design concept is verified by mathematical analyses, computational simulations and physical experiments.

  3. Hierarchical model of vulnerabilities for emotional disorders.

    Science.gov (United States)

    Norton, Peter J; Mehta, Paras D

    2007-01-01

    Clark and Watson's (1991) tripartite model of anxiety and depression has had a dramatic impact on our understanding of the dispositional variables underlying emotional disorders. More recently, calls have been made to examine not simply the influence of negative affectivity (NA) but also mediating factors that might better explain how NA influences anxious and depressive syndromes (e.g. Taylor, 1998; Watson, 2005). Extending preliminary projects, this study evaluated two hierarchical models of NA, mediating factors of anxiety sensitivity and intolerance of uncertainty, and specific emotional manifestations. Data provided a very good fit to a model elaborated from preliminary studies, lending further support to hierarchical models of emotional vulnerabilities. Implications for classification and diagnosis are discussed.

  4. Hierarchical Self-organization of Complex Systems

    Institute of Scientific and Technical Information of China (English)

    CHAI Li-he; WEN Dong-sheng

    2004-01-01

    Research on organization and structure in complex systems is at the academic and industrial frontier of modern science. Though many theories have been tentatively proposed to analyze complex systems, we still lack a rigorous theory of them. Complex systems possess many degrees of freedom, which suggests they should exhibit all kinds of structures. However, complex systems often show similar patterns and structures. The question then arises why such similar structures appear in all kinds of complex systems. The paper outlines a theory of degree-of-freedom compression and establishes the existence of hierarchical self-organization for all complex systems. It is degree-of-freedom compression and hierarchical self-organization that are responsible for the similar patterns and structures observed in complex systems.

  5. Bayesian hierarchical modeling of drug stability data.

    Science.gov (United States)

    Chen, Jie; Zhong, Jinglin; Nie, Lei

    2008-06-15

    Stability data are commonly analyzed using linear fixed or random effect model. The linear fixed effect model does not take into account the batch-to-batch variation, whereas the random effect model may suffer from the unreliable shelf-life estimates due to small sample size. Moreover, both methods do not utilize any prior information that might have been available. In this article, we propose a Bayesian hierarchical approach to modeling drug stability data. Under this hierarchical structure, we first use Bayes factor to test the poolability of batches. Given the decision on poolability of batches, we then estimate the shelf-life that applies to all batches. The approach is illustrated with two example data sets and its performance is compared in simulation studies with that of the commonly used frequentist methods. (c) 2008 John Wiley & Sons, Ltd.

  6. Hierarchical State Machines as Modular Horn Clauses

    Directory of Open Access Journals (Sweden)

    Pierre-Loïc Garoche

    2016-07-01

    Full Text Available In model-based development, embedded systems are modeled using a mix of dataflow formalisms, which capture the flow of computation, and hierarchical state machines, which capture the modal behavior of the system. For safety analysis, existing approaches rely on a compilation scheme that transforms the original model (dataflow and state machines) into a pure dataflow formalism. Such compilation often results in the loss of important structural information that captures the modal behaviour of the system. In previous work we developed a compilation technique from a dataflow formalism into modular Horn clauses. In this paper, we present a novel technique that faithfully compiles hierarchical state machines into modular Horn clauses. Our compilation technique preserves the structural and modal behavior of the system, making the safety analysis of such models more tractable.

  7. Hierarchical community structure in complex (social) networks

    CERN Document Server

    Massaro, Emanuele

    2014-01-01

    The investigation of community structure in networks is a task of great importance in many disciplines, namely physics, sociology, biology and computer science, where systems are often represented as graphs. One of the challenges is to find local communities from a local viewpoint in a graph without global information, in order to reproduce the subjective hierarchical vision for each vertex. In this paper we present an improvement of an information dynamics algorithm in which the label propagation of nodes is based on the Markovian flow of information in the network under cognitive-inspired constraints (Massaro2012). In this framework we introduce two more complex heuristics that allow the algorithm to detect the multi-resolution hierarchical community structure of networks from a source vertex or from communities, adopting fixed values of the model's parameters. Experimental results show that the proposed methods are efficient and well-behaved in both real-world and synthetic networks.

  8. Object tracking with hierarchical multiview learning

    Science.gov (United States)

    Yang, Jun; Zhang, Shunli; Zhang, Li

    2016-09-01

    Building a robust appearance model is useful to improve tracking performance. We propose a hierarchical multiview learning framework to construct the appearance model, which has two layers for tracking. On the top layer, two different views of features, grayscale value and histogram of oriented gradients, are adopted for representation under the cotraining framework. On the bottom layer, for each view of each feature, three different random subspaces are generated to represent the appearance from multiple views. For each random view submodel, the least squares support vector machine is employed to improve the discriminability for concrete and efficient realization. These two layers are combined to construct the final appearance model for tracking. The proposed hierarchical model assembles two types of multiview learning strategies, in which the appearance can be described more accurately and robustly. Experimental results in the benchmark dataset demonstrate that the proposed method can achieve better performance than many existing state-of-the-art algorithms.

  9. Assembling hierarchical cluster solids with atomic precision.

    Science.gov (United States)

    Turkiewicz, Ari; Paley, Daniel W; Besara, Tiglet; Elbaz, Giselle; Pinkard, Andrew; Siegrist, Theo; Roy, Xavier

    2014-11-12

    Hierarchical solids created from the binary assembly of cobalt chalcogenide and iron oxide molecular clusters are reported. Six different molecular clusters based on the octahedral Co6E8 (E = Se or Te) and the expanded cubane Fe8O4 units are used as superatomic building blocks to construct these crystals. The formation of the solid is driven by the transfer of charge between complementary electron-donating and electron-accepting clusters in solution that crystallize as binary ionic compounds. The hierarchical structures are investigated by single-crystal X-ray diffraction, providing atomic and superatomic resolution. We report two different superstructures: a superatomic relative of the CsCl lattice type and an unusual packing arrangement based on the double-hexagonal close-packed lattice. Within these superstructures, we demonstrate various compositions and orientations of the clusters.

  10. Hierarchical Robot Control In A Multisensor Environment

    Science.gov (United States)

    Bhanu, Bir; Thune, Nils; Lee, Jih Kun; Thune, Mari

    1987-03-01

    Automatic recognition, inspection, manipulation and assembly of objects will be a common denominator in most of tomorrow's highly automated factories. These tasks will be handled by intelligent computer controlled robots with multisensor capabilities which contribute to desired flexibility and adaptability. The control of a robot in such a multisensor environment becomes of crucial importance as the complexity of the problem grows exponentially with the number of sensors, tasks, commands and objects. In this paper we present an approach which uses CAD (Computer-Aided Design) based geometric and functional models of objects together with action oriented neuroschemas to recognize and manipulate objects by a robot in a multisensor environment. The hierarchical robot control system is being implemented on a BBN Butterfly multi processor. Index terms: CAD, Hierarchical Control, Hypothesis Generation and Verification, Parallel Processing, Schemas

  11. The Non-Classical Boltzmann Equation, and Diffusion-Based Approximations to the Boltzmann Equation

    CERN Document Server

    Frank, Martin; Larsen, Edward W; Vasques, Richard

    2014-01-01

    We show that several diffusion-based approximations (classical diffusion or SP1, SP2, SP3) to the linear Boltzmann equation can (for an infinite, homogeneous medium) be represented exactly by a non-classical transport equation. As a consequence, we indicate a method to solve diffusion-based approximations to the Boltzmann equation via Monte Carlo, with only statistical errors - no truncation errors.
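    The abstract's point that diffusion-based approximations can be solved by Monte Carlo with only statistical error can be illustrated by a minimal, generic sketch (ours, not the paper's non-classical transport scheme): Brownian walkers with exact Gaussian increments sample the 1D diffusion equation, so the mean-square displacement estimate deviates from the exact value 2Dt only by statistical noise, with no truncation error.

```python
import random

def diffusion_msd(D, t, n_walkers, n_steps, seed=0):
    """Estimate the mean-square displacement of 1D Brownian walkers.

    Each walker takes n_steps Gaussian steps of variance 2*D*dt, the
    exact increment distribution for diffusion, so the estimate carries
    only statistical error (no truncation error from the time step).
    """
    rng = random.Random(seed)
    dt = t / n_steps
    sigma = (2.0 * D * dt) ** 0.5
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
        total += x * x
    return total / n_walkers

D, t = 0.5, 2.0
estimate = diffusion_msd(D, t, n_walkers=20000, n_steps=50)
print(estimate, "vs exact 2*D*t =", 2 * D * t)
```

    With 20000 walkers the statistical standard error of the estimate is about 0.02, so the printed value clusters tightly around the exact answer 2.0.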

  12. Phase-transient hierarchical turbulence as an energy correlation generator of blazar light curves

    CERN Document Server

    Honda, Mitsuru

    2008-01-01

    Hierarchical turbulent structure constituting a jet is considered to reproduce energy-dependent variability in blazars, particularly, the correlation between X- and gamma-ray light curves measured in the TeV blazar Markarian 421. The scale-invariant filaments are featured by the ordered magnetic fields that involve hydromagnetic fluctuations serving as electron scatterers for diffusive shock acceleration, and the spatial size scales are identified with the local maximum electron energies, which are reflected in the synchrotron spectral energy distribution (SED) above the near-infrared/optical break. The structural transition of filaments is found to be responsible for the observed change of spectral hysteresis.

  13. Hierarchical zeolites: Enhanced utilisation of microporous crystals in catalysis by advances in materials design

    DEFF Research Database (Denmark)

    Perez-Ramirez, Javier; Christensen, Claus H.; Egeblad, Kresten

    2008-01-01

    in these materials often imposes intracrystalline diffusion limitations, rendering low utilisation of the zeolite active volume in catalysed reactions. This critical review examines recent advances in the rapidly evolving area of zeolites with improved accessibility and molecular transport. Strategies to enhance...... the properties of the resulting materials and the catalytic function. We particularly dwell on the exciting field of hierarchical zeolites, which couple in a single material the catalytic power of micropores and the facilitated access and improved transport consequence of a complementary mesopore network...

  14. TRANSIMS and the hierarchical data format

    Energy Technology Data Exchange (ETDEWEB)

    Bush, B.W.

    1997-06-12

    The Hierarchical Data Format (HDF) is a general-purpose scientific data format developed at the National Center for Supercomputing Applications. It supports metadata, compression, and a variety of data structures (multidimensional arrays, raster images, tables). FORTRAN 77 and ANSI C programming interfaces are available for it, and a wide variety of visualization tools read HDF files. The author discusses the features of this file format and its possible uses in TRANSIMS.

  15. Modular, Hierarchical Learning By Artificial Neural Networks

    Science.gov (United States)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks that are more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  16. The Infinite Hierarchical Factor Regression Model

    CERN Document Server

    Rai, Piyush

    2009-01-01

    We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.

  17. Superhydrophobicity of Hierarchical and ZNO Nanowire Coatings

    Science.gov (United States)

    2014-01-01

    KOH (3 wt%), distilled water and isopropyl alcohol (10 vol%) at 95 °C for 50 min. Subsequently, a 10 nm ZnO seed layer was...ZnO have been widely used in sensors, piezo-nanogenerators, and solar cells. The hierarchical structures of ZnO nanowires grown on Si pyramid surfaces...exhibiting superhydrophobicity in this work will have promising applications in next-generation photovoltaic devices and solar cells

  18. Hierarchical mixture models for assessing fingerprint individuality

    OpenAIRE

    Dass, Sarat C.; Li, Mingfei

    2009-01-01

    The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individua...

  19. Experimental repetitive quantum error correction.

    Science.gov (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.

  20. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  1. Impact of spherical diffusion on labile trace metal speciation by electrochemical stripping techniques

    NARCIS (Netherlands)

    Pinheiro, J.P.; Domingos, R.F.

    2005-01-01

    The impact of the spherical diffusion contribution in labile trace metal speciation by stripping techniques was studied. It was shown that the relative error in the calculation of the stability constants caused by assuming linear diffusion varies with the efficiency of stirring, the diffusion coeffi

  2. High-throughput ab-initio dilute solute diffusion database

    Science.gov (United States)

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
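
    The quantities compiled in such a database follow the Arrhenius form, D = D0·exp(−Q/kBT). The sketch below is ours, with illustrative placeholder values (not entries from the database), and shows why the quoted 0.176 eV RMS barrier error matters: at 1000 K it corresponds to roughly a factor-of-8 uncertainty in D.

```python
import math

# Arrhenius form for dilute solute diffusivity: D = D0 * exp(-Q / (kB * T)).
# All numeric values below are illustrative placeholders, not database entries.
KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def diffusivity(d0_m2s: float, barrier_ev: float, temp_k: float) -> float:
    """Diffusivity in m^2/s from prefactor, activation barrier (eV), temperature (K)."""
    return d0_m2s * math.exp(-barrier_ev / (KB_EV * temp_k))

# A 0.176 eV barrier error (the RMS error quoted above) scales D by
# exp(0.176 / (kB*T)); at 1000 K that is nearly a factor of 8.
factor = math.exp(0.176 / (KB_EV * 1000.0))
```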

  3. Metal hierarchical patterning by direct nanoimprint lithography.

    Science.gov (United States)

    Radha, Boya; Lim, Su Hui; Saifullah, Mohammad S M; Kulkarni, Giridhar U

    2013-01-01

    Three-dimensional hierarchical patterning of metals is of paramount importance in diverse fields involving photonics, controlling surface wettability and wearable electronics. Conventionally, this type of structuring is tedious and usually involves layer-by-layer lithographic patterning. Here, we describe a simple process of direct nanoimprint lithography using palladium benzylthiolate, a versatile metal-organic ink, which not only leads to the formation of hierarchical patterns but also is amenable to layer-by-layer stacking of the metal over large areas. The key to achieving such multi-faceted patterning is hysteretic melting of ink, enabling its shaping. It undergoes transformation to metallic palladium under gentle thermal conditions without affecting the integrity of the hierarchical patterns on micro- as well as nanoscale. A metallic rice leaf structure showing anisotropic wetting behavior and woodpile-like structures were thus fabricated. Furthermore, this method is extendable for transferring imprinted structures to a flexible substrate to make them robust enough to sustain numerous bending cycles.

  4. Hierarchical unilamellar vesicles of controlled compositional heterogeneity.

    Directory of Open Access Journals (Sweden)

    Maik Hadorn

    Full Text Available Eukaryotic life contains hierarchical vesicular architectures (i.e. organelles that are crucial for material production and trafficking, information storage and access, as well as energy production. In order to perform specific tasks, these compartments differ among each other in their membrane composition and their internal cargo and also differ from the cell membrane and the cytosol. Man-made structures that reproduce this nested architecture not only offer a deeper understanding of the functionalities and evolution of organelle-bearing eukaryotic life but also allow the engineering of novel biomimetic technologies. Here, we show the newly developed vesicle-in-water-in-oil emulsion transfer preparation technique to result in giant unilamellar vesicles internally compartmentalized by unilamellar vesicles of different membrane composition and internal cargo, i.e. hierarchical unilamellar vesicles of controlled compositional heterogeneity. The compartmentalized giant unilamellar vesicles were subsequently isolated by a separation step exploiting the heterogeneity of the membrane composition and the encapsulated cargo. Due to the controlled, efficient, and technically straightforward character of the new preparation technique, this study allows the hierarchical fabrication of compartmentalized giant unilamellar vesicles of controlled compositional heterogeneity and will ease the development of eukaryotic cell mimics that resemble their natural templates as well as the fabrication of novel multi-agent drug delivery systems for combination therapies and complex artificial microreactors.

  5. A New Metrics for Hierarchical Clustering

    Institute of Scientific and Technical Information of China (English)

    YANGGuangwen; SHIShuming; WANGDingxing

    2003-01-01

    Hierarchical clustering is a popular method of performing unsupervised learning. Some metric must be used to determine the similarity between pairs of clusters in hierarchical clustering. Traditional similarity metrics either can deal with simple shapes (i.e. spherical shapes) only or are very sensitive to outliers (the chaining effect). The main contribution of this paper is to propose some potential-based similarity metrics (APES and AMAPES) between clusters in hierarchical clustering, inspired by the concepts of the electric potential and the gravitational potential in electromagnetics and astronomy. The main features of these metrics are: first, they have strong anti-jamming capability; second, they are capable of finding clusters of different shapes such as spherical, spiral, chain, circle, sigmoid, U shape or other complex irregular shapes; third, existing algorithms and research results for classical metrics can be adopted to deal with these new potential-based metrics with no or little modification. Experiments showed that the new metrics are superior to traditional ones. Different potential functions are compared, and the sensitivity to parameters is also analyzed in this paper.
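
    The flavor of a potential-based cluster similarity can be sketched as follows. This is our own gravitational-potential-style construction for illustration, not the exact APES/AMAPES definition from the paper: summing inverse distances over cross-cluster point pairs makes nearby clusters strongly similar while single distant outliers contribute little.

```python
# Gravitational-potential-style similarity between two 2D clusters:
# sum of 1/distance over all cross-cluster point pairs (illustrative sketch).
def potential_similarity(cluster_a, cluster_b):
    total = 0.0
    for (ax, ay) in cluster_a:
        for (bx, by) in cluster_b:
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
            total += 1.0 / d  # assumes no coincident points across clusters
    return total

# Adjacent clusters score much higher than the same clusters pulled apart.
near = potential_similarity([(0, 0), (0, 1)], [(1, 0), (1, 1)])
far = potential_similarity([(0, 0), (0, 1)], [(5, 0), (5, 1)])
```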

  6. A secure solution on hierarchical access control

    CERN Document Server

    Wei, Chuan-Sheng; Huang, Tone-Yau; Ong, Yao Lin

    2011-01-01

    Hierarchical access control is an important and traditional problem in information security. In 2001, Wu et al. proposed an elegant solution for hierarchical access control by the secure-filter. Jeng and Wang presented an improvement of Wu et al.'s method using the ECC cryptosystem. However, the secure-filter method is insecure in dynamic access control. Lie, Hsu and Tripathy, Paul pointed out some security leaks in the secure-filter and presented some improvements to eliminate these security flaws. In this paper, we revise the secure-filter in the Jeng-Wang method and propose another secure solution to the hierarchical access control problem. CA is a super security class (user) in our proposed method, and the secure-filter of $u_i$ in our solution is a polynomial of degree $n_i+1$ in $\mathbb{Z}_p^*$, $f_i(x)=(x-h_i)(x-a_1)...(x-a_{n_i})+L_{l_i}(K_i)$. Although the degree of our secure-filter is larger than in other solutions, our solution is secure and efficient in dynamic access control.
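
    The core mechanism of a polynomial secure-filter can be sketched as follows. This is a simplified illustration under stated assumptions: the prime, keys, and authorized values are toy numbers, and the paper's masking function $L_{l_i}$ is replaced by adding the raw key. Evaluating the filter at any authorized value recovers the hidden class key; any other input yields an unrelated residue.

```python
# Toy polynomial secure filter: f(x) = prod_a (x - a) + key (mod P).
# P, keys, and authorized values are illustrative placeholders only.
P = 2_147_483_647  # a Mersenne prime, chosen for illustration

def make_filter(authorized: list[int], hidden_key: int) -> list[int]:
    """Coefficients (low to high degree) of f(x) = prod (x - a) + key mod P."""
    coeffs = [1]
    for a in authorized:
        # multiply the current polynomial by (x - a) mod P
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] = (new[i + 1] + c) % P
            new[i] = (new[i] - a * c) % P
        coeffs = new
    coeffs[0] = (coeffs[0] + hidden_key) % P
    return coeffs

def evaluate(coeffs: list[int], x: int) -> int:
    """Horner evaluation of the filter polynomial mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

filt = make_filter([11, 42, 99], hidden_key=123456)
```

Any of the authorized values 11, 42, or 99 evaluates to 123456, while unauthorized inputs do not.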

  7. SORM applied to hierarchical parallel system

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2006-01-01

    The old hierarchical stochastic load combination model of Ferry Borges and Castanheta, and the corresponding problem of determining the distribution of the extreme random load effect, is the inspiration for this paper. The evaluation of the distribution function of the extreme value by use of a particular first order reliability method (FORM) was first described in a celebrated paper by Rackwitz and Fiessler more than a quarter of a century ago. The method has become known as the Rackwitz-Fiessler algorithm. The original RF-algorithm as applied to a hierarchical random variable model is recapitulated so that a simple but quite effective accuracy-improving calculation can be explained. A limit state curvature correction factor on the probability approximation is obtained from the final stop results of the RF-algorithm. This correction factor is based on Breitung's asymptotic formula for second...

  8. Anisotropic and Hierarchical Porosity in Multifunctional Ceramics

    Science.gov (United States)

    Lichtner, Aaron Zev

    The performance of multifunctional porous ceramics is often hindered by the seemingly contradictory effects of porosity on both mechanical and non-structural properties and yet a sufficient body of knowledge linking microstructure to these properties does not exist. Using a combination of tailored anisotropic and hierarchical materials, these disparate effects may be reconciled. In this project, a systematic investigation of the processing, characterization and properties of anisotropic and isotropic hierarchically porous ceramics was conducted. The system chosen was a composite ceramic intended as the cathode for a solid oxide fuel cell (SOFC). Comprehensive processing investigations led to the development of approaches to make hierarchical, anisotropic porous microstructures using directional freeze-casting of well dispersed slurries. The effect of all the important processing parameters was investigated. This resulted in an ability to tailor and control the important microstructural features including the scale of the microstructure, the macropore size and total porosity. Comparable isotropic porous ceramics were also processed using fugitive pore formers. A suite of characterization techniques including x-ray tomography and 3-D sectional scanning electron micrographs (FIB-SEM) was used to characterize and quantify the green and partially sintered microstructures. The effect of sintering temperature on the microstructure was quantified and discrete element simulations (DEM) were used to explain the experimental observations. Finally, the comprehensive mechanical properties, at room temperature, were investigated, experimentally and using DEM, for the different microstructures.

  9. Resilient 3D hierarchical architected metamaterials.

    Science.gov (United States)

    Meza, Lucas R; Zelhofer, Alex J; Clarke, Nigel; Mateos, Arturo J; Kochmann, Dennis M; Greer, Julia R

    2015-09-15

    Hierarchically designed structures with architectural features that span across multiple length scales are found in numerous hard biomaterials, like bone, wood, and glass sponge skeletons, as well as manmade structures, like the Eiffel Tower. It has been hypothesized that their mechanical robustness and damage tolerance stem from sophisticated ordering within the constituents, but the specific role of hierarchy remains to be fully described and understood. We apply the principles of hierarchical design to create structural metamaterials from three material systems: (i) polymer, (ii) hollow ceramic, and (iii) ceramic-polymer composites that are patterned into self-similar unit cells in a fractal-like geometry. In situ nanomechanical experiments revealed (i) a nearly theoretical scaling of structural strength and stiffness with relative density, which outperforms existing nonhierarchical nanolattices; (ii) recoverability, with hollow alumina samples recovering up to 98% of their original height after compression to ≥ 50% strain; (iii) suppression of brittle failure and structural instabilities in hollow ceramic hierarchical nanolattices; and (iv) a range of deformation mechanisms that can be tuned by changing the slenderness ratios of the beams. Additional levels of hierarchy beyond a second order did not increase the strength or stiffness, which suggests the existence of an optimal degree of hierarchy to amplify resilience. We developed a computational model that captures local stress distributions within the nanolattices under compression and explains some of the underlying deformation mechanisms as well as validates the measured effective stiffness to be interpreted as a metamaterial property.

  10. The Hourglass Effect in Hierarchical Dependency Networks

    CERN Document Server

    Sabrin, Kaeser M

    2016-01-01

    Many hierarchically modular systems are structured in a way that resembles a bow-tie or hourglass. This "hourglass effect" means that the system generates many outputs from many inputs through a relatively small number of intermediate modules that are critical for the operation of the entire system (the waist of the hourglass). We investigate the hourglass effect in general (not necessarily layered) hierarchical dependency networks. Our analysis focuses on the number of source-to-target dependency paths that traverse each vertex, and it identifies the core of a dependency network as the smallest set of vertices that collectively cover almost all dependency paths. We then examine if a given network exhibits the hourglass property or not, comparing its core size with a "flat" (i.e., non-hierarchical) network that preserves the source dependencies of each target in the original network. As a possible explanation for the hourglass effect, we propose the Reuse Preference (RP) model that captures the bias of new mo...

  11. Semantic Image Segmentation with Contextual Hierarchical Models.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

    Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is purely based on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).

  12. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Uncorrected unidosis carts show 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors arise when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: We conclude the need to revise unidosis carts and to adopt a computerized prescription system to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are revised before being sent to hospitalization units, the error diminishes to 0.3%.

  13. Prediction of discretization error using the error transport equation

    Science.gov (United States)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
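
    The two-grid benchmark the ETE predictions are compared against, Richardson extrapolation, can be sketched in a few lines. The numbers below are a toy scalar example under the assumption of a second-order method and a grid refinement ratio of 2, not results from the study.

```python
# Richardson extrapolation: estimate the fine-grid discretization error from
# two grid levels (assumed order-p method, refinement ratio r).
def richardson_error(f_coarse: float, f_fine: float, order: int = 2, r: float = 2.0) -> float:
    """Estimated discretization error of the fine-grid value f_fine."""
    return (f_coarse - f_fine) / (r ** order - 1.0)

# Toy example: a second-order scheme with true value 1.0 returns 1.04 on the
# coarse grid (h) and 1.01 on the fine grid (h/2); the estimate is ~0.01.
err = richardson_error(1.04, 1.01)
```

The appeal of the ETE approach described above is that it avoids the second grid this estimator requires.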

  14. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  15. Sentinel-2 diffuser on-ground calibration

    Science.gov (United States)

    Mazy, E.; Camus, F.; Chorvalli, V.; Domken, I.; Laborie, A.; Marcotte, S.; Stockman, Y.

    2013-10-01

    The Sentinel-2 multi-spectral instrument (MSI) will provide Earth imagery in the frame of the Global Monitoring for Environment and Security (GMES) initiative, which is a joint undertaking of the European Commission and the Agency. The MSI instrument, under Astrium SAS responsibility, is a push-broom spectro-imager with 13 spectral channels in the VNIR and SWIR. The instrument radiometric calibration is based on in-flight calibration with sunlight through a quasi-Lambertian diffuser. The diffuser covers the full pupil and the full field of view of the instrument. The on-ground calibration of the diffuser BRDF is mandatory to fulfil the in-flight performances. The diffuser is a 779 x 278 mm2 rectangular flat area in Zenith-A material. It is mounted on a motorised door in front of the instrument optical system entrance. The diffuser manufacturing and calibration are under the responsibility of the Centre Spatial de Liège (CSL). CSL has designed and built a completely remote-controlled BRDF test bench able to handle large diffusers in their mount. As the diffuser is calibrated directly in its mount with respect to a reference cube, the error budget is significantly improved. The BRDF calibration is performed directly in the MSI instrument spectral bands by using dedicated band-pass filters (VNIR and SWIR up to 2200 nm). Absolute accuracy is better than 0.5% in the VNIR spectral bands and 1% in the SWIR spectral bands. Performances were cross-checked with other laboratories. The first MSI diffuser flight model was calibrated in mid-2013 on the CSL BRDF measurement bench. The calibration of the diffuser consists mainly of thermal vacuum cycles, BRDF uniformity characterisation, and BRDF angular characterisation. The total amount of measurement for the first flight-model diffuser corresponds to more than 17500 BRDF acquisitions. Performance results are discussed in comparison with requirements.

  16. Hierarchical Direct Time Integration Method and Adaptive Procedure for Dynamic Analysis

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new hierarchical direct time integration method for structural dynamic analysis is developed by using Taylor series expansions in each time step. Very accurate results can be obtained by increasing the order of the Taylor series. Furthermore, the local error can be estimated by simply comparing the solutions obtained by the proposed method with the higher-order solutions. This local estimate is then used to develop an adaptive order-control technique. Numerical examples are given to illustrate the performance of the present method and its adaptive procedure.
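
    The order-comparison error estimate described above can be sketched on the scalar linear test equation y' = λy. This is our illustration, not the paper's structural-dynamics implementation: the order-p Taylor step is compared with the order-(p+1) step, and the difference closely tracks the true local error.

```python
import math

# Taylor-series time step for the linear test equation y' = lam * y
# (scalar sketch; the method in the paper targets structural dynamics).
def taylor_step(y: float, lam: float, h: float, order: int) -> float:
    """One step of size h using a Taylor expansion truncated at the given order."""
    return y * sum((lam * h) ** k / math.factorial(k) for k in range(order + 1))

# Adaptive order control: estimate the local error of the order-p step by
# comparing it with the order-(p+1) step.
y0, lam, h = 1.0, -1.0, 0.1
y_p = taylor_step(y0, lam, h, order=3)
est = abs(taylor_step(y0, lam, h, order=4) - y_p)
true_err = abs(math.exp(lam * h) - y_p)  # exact solution is exp(lam*h)
```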

  17. Hierarchical Least Squares Identification and Its Convergence for Large Scale Multivariable Systems

    Institute of Scientific and Technical Information of China (English)

    丁锋; 丁韬

    2002-01-01

    The recursive least squares identification algorithm (RLS) for large scale multivariable systems requires a large amount of calculations, therefore, the RLS algorithm is difficult to implement on a computer. The computational load of estimation algorithms can be reduced using the hierarchical least squares identification algorithm (HLS) for large scale multivariable systems. The convergence analysis using the Martingale Convergence Theorem indicates that the parameter estimation error (PEE) given by the HLS algorithm is uniformly bounded without a persistent excitation signal and that the PEE consistently converges to zero for the persistent excitation condition. The HLS algorithm has a much lower computational load than the RLS algorithm.
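
    The per-step update whose O(n²) cost motivates the hierarchical decomposition can be sketched as a minimal scalar-output RLS recursion. This is our own plain-Python toy, not the paper's HLS algorithm, shown so the cost structure of the full-parameter-vector update is visible.

```python
# Minimal recursive least squares (RLS) for y_t = phi_t . theta + noise.
# Each step costs O(n^2) in the parameter dimension n; the hierarchical (HLS)
# variant reduces this by splitting theta into sub-vectors.
def rls(phis, ys, n):
    theta = [0.0] * n
    P = [[(1e6 if i == j else 0.0) for j in range(n)] for i in range(n)]
    for phi, y in zip(phis, ys):
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
        err = y - sum(phi[i] * theta[i] for i in range(n))
        for i in range(n):
            theta[i] += Pphi[i] * err / denom      # gain * innovation
        for i in range(n):
            for j in range(n):
                P[i][j] -= Pphi[i] * Pphi[j] / denom  # covariance update
    return theta

# Noise-free data generated from theta = [2, -3] is recovered by the recursion.
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -3.0), ([1.0, 1.0], -1.0), ([2.0, 1.0], 1.0)]
theta = rls([p for p, _ in data], [y for _, y in data], 2)
```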

  18. Growth Mechanism of a Unique Hierarchical Vaterite Structure

    Science.gov (United States)

    Ma, Guobin; Xu, Yifei; Wang, Mu

    2013-03-01

    Calcium carbonate is one of the most significant minerals in nature as well as in biogenic sources. Calcium carbonate occurs naturally in three crystalline polymorphs, i.e., calcite, aragonite, and vaterite. Although the formation mechanisms of the material have attracted much research attention, the properties of the vaterite polymorph are not well known. Here we report the synthesis and formation mechanism of a unique hierarchical structure of vaterite. The material is grown by a controlled-diffusion method. The structure possesses a core and an outer part. The core is convex-lens-like and is formed by vaterite nanocrystals that have small misorientations. The outer part is separated into six garlic-clove-like segments. Each segment possesses piles of plate-like vaterite crystals, and the orientations of the plates change continuously from pile to pile. Based on real-time experimental results and the structural analysis, a growth mechanism is presented. Work supported by NSFC (Grant No. 51172104) and MOST of China (Grant No. 2101CB630705)

  19. Quantifying soil CO2 respiration measurement error across instruments

    Science.gov (United States)

    Creelman, C. A.; Nickerson, N. R.; Risk, D. A.

    2010-12-01

    A variety of instrumental methodologies have been developed in an attempt to accurately measure the rate of soil CO2 respiration. Among the most commonly used are the static and dynamic chamber systems. The degree to which these methods misread or perturb the soil CO2 signal, however, is poorly understood. One source of error in particular is the introduction of lateral diffusion due to the disturbance of the steady-state CO2 concentrations. The addition of soil collars to the chamber system attempts to address this perturbation, but may induce additional errors from the increased physical disturbance. Using a numerical 3D soil-atmosphere diffusion model, we are undertaking a comprehensive comparative study of existing static and dynamic chambers, as well as a solid-state CTFD probe. Specifically, we are examining the 3D diffusion errors associated with each method and opportunities for correction. In this study, the impact of collar length, chamber geometry, chamber mixing and diffusion parameters on the magnitude of lateral diffusion around the instrument are quantified in order to provide insight into obtaining more accurate soil respiration estimates. Results suggest that while each method can approximate the true flux rate under idealized conditions, the associated errors can be of a high magnitude and may vary substantially in their sensitivity to these parameters. In some cases, factors such as the collar length and chamber exchange rate used are coupled in their effect on accuracy. Due to the widespread use of these instruments, it is critical that the nature of their biases and inaccuracies be understood in order to inform future development, ensure the accuracy of current measurements and to facilitate inter-comparison between existing datasets.

  20. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates on the order of 10⁻³ to 10⁻⁴, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  1. Fuzzy/Kalman Hierarchical Horizontal Motion Control of Underactuated ROVs

    Directory of Open Access Journals (Sweden)

    Francesco M. Raimondi

    2010-09-01

    Full Text Available A new closed-loop fuzzy motion control system, including an on-line Kalman filter (KF), for the two-dimensional motion of an underactuated underwater Remotely Operated Vehicle (ROV) is presented. Since the sway force is unactuated, new continuous- and discrete-time models are developed using a polar transformation. A new hierarchical control architecture is developed, where the high-level fuzzy guidance controller generates the surge speed and the yaw rate needed to achieve the objective of planar motion, while the low-level controller gives the thruster surge force and the yaw control signals. The fuzzy controller ensures robustness with respect to uncertainties due to the marine environment, forward surge speed and saturation of the control signals. Also, Lyapunov stability of the motion errors is proved based on the properties of the fuzzy maps. If Inertial Measurement Unit (IMU) data is employed directly for the feedback, aleatory noises due to accelerometers and gyros degrade the performance of the motion control. These noises denote a kind of non-parametric uncertainty which perturbs the model of the ROV. Therefore, a KF is inserted in the feedback of the control system to compensate for the above uncertainties and estimate the feedback signals with more precision.
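
    The role of the Kalman filter in the feedback loop can be sketched with a one-dimensional constant-value model: noisy IMU-style measurements are blended with the prediction, pulling the estimate toward the underlying signal. All numbers are illustrative; the paper's filter operates on the full ROV state, not a scalar.

```python
# One-dimensional Kalman filter sketch for smoothing a noisy feedback signal
# (constant-value process model; q and r are illustrative variances).
def kalman_1d(zs, q=1e-4, r=0.5):
    """Filter measurements zs with process variance q and measurement variance r."""
    x, p = zs[0], 1.0
    out = [x]
    for z in zs[1:]:
        p += q                   # predict: state assumed constant, variance grows
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with the measurement residual
        p *= (1.0 - k)           # posterior variance shrinks
        out.append(x)
    return out

# A constant true signal of 1.0 corrupted by offsets is pulled toward 1.0.
est = kalman_1d([1.3, 0.8, 1.1, 0.9, 1.05, 0.95])
```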

  2. Hierarchical Real-time Network Traffic Classification Based on ECOC

    Directory of Open Access Journals (Sweden)

    Yaou Zhao

    2013-09-01

    Full Text Available Classification of network traffic is basic and essential for many network research and management tasks. With the rapid development of peer-to-peer (P2P) applications using dynamic port disguising techniques and encryption to avoid detection, port-based and simple payload-based network traffic classification methods have been diminished. An alternative method based on statistics and machine learning has attracted researchers' attention in recent years. However, most of the proposed algorithms were off-line and usually used a single classifier. In this paper a new hierarchical real-time model was proposed which comprised a three-tuple (source IP, destination IP and destination port) look-up table (TT-LUT) part and a layered milestone part. The TT-LUT was used to quickly classify short flows, which need not pass the layered milestone part, while milestones in the layered milestone part could classify the other flows in real time with real-time feature selection and statistics. Every milestone was an ECOC (Error-Correcting Output Codes) based model used to improve classification performance. Experiments showed that the proposed model can improve the real-time classification efficiency to 80%, and the multi-class classification accuracy encouragingly to 91.4%, on datasets captured from the backbone router of our campus over a week.
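
    The ECOC decoding step at each milestone can be sketched as nearest-codeword matching. The codebook and class names below are toy placeholders, not the paper's: each class is assigned a binary codeword, the binary classifiers emit one bit each, and Hamming-distance decoding corrects a few flipped bits.

```python
# Error-correcting output codes (ECOC) decoding sketch. Codewords and class
# names are illustrative placeholders, not from the paper.
CODEBOOK = {
    "web":  (0, 0, 0, 1, 1, 1),
    "p2p":  (1, 1, 0, 0, 0, 1),
    "mail": (1, 0, 1, 0, 1, 0),
}

def ecoc_decode(bits):
    """Return the class whose codeword is Hamming-closest to the classifier outputs."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], bits))

# One flipped bit in the "p2p" codeword still decodes to "p2p".
label = ecoc_decode((1, 1, 1, 0, 0, 1))
```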

  3. Hierarchical, Three-Dimensional Measurement System for Crime Scene Scanning.

    Science.gov (United States)

    Marcin, Adamczyk; Maciej, Sieniło; Robert, Sitnik; Adam, Woźniak

    2017-02-02

    We present a new generation of three-dimensional (3D) measuring systems, developed for the process of crime scene documentation. This measuring system facilitates the preparation of more insightful, complete, and objective documentation for crime scenes. Our system reflects the actual requirements for hierarchical documentation, and it consists of three independent 3D scanners: a laser scanner for overall measurements, a situational structured-light scanner for more minute measurements, and a detailed structured-light scanner for the most detailed parts of the scene. Each scanner has its own spatial resolution, of 2.0, 0.3, and 0.05 mm, respectively. The results of interviews we have conducted with technicians indicate that our developed 3D measuring system has significant potential to become a useful tool for forensic technicians. To ensure the maximum compatibility of our measuring system with the standards that regulate the documentation process, we have also performed a metrological validation and designated the maximum permissible length measurement error EMPE for each structured-light scanner. In this study, we present additional results regarding documentation processes conducted during crime scene inspections and a training session.

  4. Hierarchical segmentation-assisted multimodal registration for MR brain images.

    Science.gov (United States)

    Lu, Huanxiang; Beisteiner, Roland; Nolte, Lutz-Peter; Reyes, Mauricio

    2013-04-01

    Information theory-based metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, such metrics may lead to matching ambiguity for non-rigid registration. Moreover, maximization of MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This similarity metric, termed SPMI, enhances the registration accuracy by considering tissue classification probabilities as prior information, generated from an expectation maximization (EM) algorithm. Diffeomorphic demons is then adopted as the registration model and is optimized in a hierarchical framework (H-SPMI) based on different levels of anatomical structure as prior knowledge. The proposed method is evaluated using BrainWeb synthetic data and clinical fMRI images. Both qualitative and quantitative assessments were performed, as well as a sensitivity analysis with respect to segmentation error. Compared to pure intensity-based approaches which only maximize mutual information, we show that the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.

  5. On the unnecessary ubiquity of hierarchical linear modeling.

    Science.gov (United States)

    McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D

    2017-03-01

    In psychology and the behavioral sciences generally, the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete-outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider them in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make fewer assumptions; they are interpreted identically to single-level methods, with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed, including research questions where random effects are and are not required, cases where random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles using HLM that might have benefited from alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
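As a hedged illustration of one alternative the article discusses, cluster-robust (sandwich) standard errors can be attached to a plain single-level OLS fit in a few lines. The simulated data and variable names below are ours, not the article's.

```python
import numpy as np

def cluster_robust_se(X, y, groups):
    """OLS point estimates with cluster-robust (sandwich) standard errors,
    a single-level alternative to HLM for clustered data."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: sum of per-cluster score outer products.
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        score = X[groups == g].T @ resid[groups == g]
        meat += np.outer(score, score)
    V = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(20), 10)            # 20 clusters of 10 observations
cluster_effect = rng.normal(0.0, 1.0, 20)[groups]  # shared within-cluster shock
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + cluster_effect + rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
beta, se = cluster_robust_se(X, y, groups)
```

The point estimates are ordinary OLS; only the standard errors change, which is exactly the "interpreted identically to single-level methods" property the abstract highlights.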

  6. Hierarchical object parsing from structured noisy point clouds.

    Science.gov (United States)

    Barbu, Adrian

    2013-07-01

    Object parsing and segmentation from point clouds are challenging tasks because the relevant data is available only as thin structures along object boundaries or other features, and is corrupted by large amounts of noise. To handle this kind of data, flexible shape models are desired that can accurately follow the object boundaries. Popular models such as active shape and active appearance models (AAMs) lack the necessary flexibility for this task, while recent approaches such as the recursive compositional models make model simplifications to obtain computational guarantees. This paper investigates a hierarchical Bayesian model of shape and appearance in a generative setting. The input data is explained by an object parsing layer which is a deformation of a hidden principal component analysis (PCA) shape model with Gaussian prior. The paper also introduces a novel efficient inference algorithm that uses informed data-driven proposals to initialize local searches for the hidden variables. Applied to the problem of object parsing from structured point clouds such as edge detection images, the proposed approach obtains state-of-the-art parsing errors on two standard datasets without using any intensity information.

  7. Fuzzy/Kalman Hierarchical Horizontal Motion Control of Underactuated ROVs

    Directory of Open Access Journals (Sweden)

    Francesco M. Raimondi

    2010-06-01

    Full Text Available A new closed-loop fuzzy motion control system including an on-line Kalman filter (KF) for the two-dimensional motion of an underactuated, underwater Remotely Operated Vehicle (ROV) is presented. Since the sway force is unactuated, new continuous- and discrete-time models are developed using a polar transformation. A new hierarchical control architecture is developed, in which the high-level fuzzy guidance controller generates the surge speed and the yaw rate needed to achieve the objective of planar motion, while the low-level controller gives the thruster surge force and the yaw torque control signals. The fuzzy controller ensures robustness with respect to uncertainties due to the marine environment, forward surge speed, and saturation of the control signals. Lyapunov stability of the motion errors is also proved based on the properties of the fuzzy maps. If Inertial Measurement Unit (IMU) data is employed directly for feedback, aleatory noises due to accelerometers and gyros degrade the performance of the motion control. These noises denote a kind of non-parametric uncertainty which perturbs the model of the ROV. Therefore a KF is inserted in the feedback of the control system to compensate for the above uncertainties and estimate the feedback signals with more precision.

  8. Influence of Boundary Condition and Diffusion Coefficient on the Accuracy of Diffusion Theory in Steady-State Spatially Resolved Diffuse Reflectance of Biological Tissues

    Institute of Scientific and Technical Information of China (English)

    张连顺; 张春平; 王新宇; 祁胜文; 许棠; 田建国; 张光寅

    2002-01-01

    The applicability of diffusion theory for the determination of tissue optical properties from steady-state diffuse reflectance is investigated. Analytical expressions from diffusion theory using the two most commonly assumed boundary conditions at the air-tissue interface and the two definitions of the diffusion coefficient are compared with Monte Carlo simulations. The effects of the choice of boundary condition and diffusion coefficient on the accuracy of the derived optical parameters are quantified, and criteria for accurate curve-fitting algorithms are developed. It is shown that the error in deriving the optical coefficients is considerably smaller for the solution which uses the extrapolated boundary condition and the diffusion coefficient defined independently of the absorption coefficient, compared to the other three solutions.
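The two diffusion-coefficient definitions at issue can be compared numerically. In the sketch below the optical properties are hypothetical tissue-like values, not the paper's; the point is that for weakly absorbing tissue the two definitions give nearly identical effective attenuation, while the absorption-dependent definition changes with mu_a.

```python
import numpy as np

# Hypothetical tissue-like optical properties (units: 1/mm).
mu_a = 0.01          # absorption coefficient
mu_s_prime = 1.0     # reduced scattering coefficient

# Two common definitions of the diffusion coefficient:
D_with_abs = 1.0 / (3.0 * (mu_a + mu_s_prime))   # depends on absorption
D_without_abs = 1.0 / (3.0 * mu_s_prime)         # independent of absorption

# Effective attenuation coefficient under each definition:
mu_eff_1 = np.sqrt(mu_a / D_with_abs)
mu_eff_2 = np.sqrt(mu_a / D_without_abs)
```

Because mu_a is small relative to mu_s', the two mu_eff values differ by well under a percent here; the choice matters more as absorption grows, which is where the fitting-accuracy differences reported in the paper arise.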

  9. Diffusion on spatial network

    Science.gov (United States)

    Hui, Zi; Tang, Xiaoyue; Li, Wei; Greneche, Jean-Marc; Wang, Qiuping A.

    2015-04-01

    In this work, we study the problem of diffusing a product (idea, opinion, disease, etc.) among agents on a spatial network. The network is constructed by random addition of nodes on the plane. The probability for an existing node to be connected to the new one is inversely proportional to their spatial distance to the power α. The diffusion rate between two connected nodes is likewise inversely proportional to their spatial distance to the power β. Inspired by Fick's first law, we introduce the diffusion coefficient to measure the diffusion ability of the spatial network. Using both theoretical analysis and Monte Carlo simulation, we find that the diffusion coefficient always decreases with increasing α and β, and that the diffusion sub-coefficient follows a power law of the spatial distance with exponent equal to -α-β+2. Since both short-range and long-range diffusion exist, we apply the anomalous diffusion framework to the diffusion process and find that the slope index δ in anomalous diffusion is always smaller than 1; the diffusion process in our model is therefore sub-diffusion.

  10. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available Operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable: they are avoidable and preventable events. The consequences of surgical mistakes range from temporary injury in 60% of those affected, through permanent injury in 33%, to death in 7%. The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer preventable surgical injuries every year, a million of them even dying during or immediately after surgery. The UN body quantified the number of surgeries taking place every year globally at 234 million. It said surgeries had become common, with one in every 25 people undergoing one at any given time. 50% of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident, an incident rate that would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. With such a system, even though complete prevention may not be possible, we can reduce the error percentage [2]. To change the present concept towards the patient, we first have to replace the word patient with medical customer; then our outlook also changes, and we will be more careful towards our customers.

  11. Comparison of analytical error and sampling error for contaminated soil.

    Science.gov (United States)

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error become small and the analytical error becomes large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample all contribute to the extent of uncertainty.
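Assuming the error components are independent, relative errors combine in quadrature, and the quotient of total sampling error to analytical error the study reports can be sketched as below. The component names follow the theory of sampling, but the magnitudes are illustrative, not the study's values.

```python
import numpy as np

# Illustrative relative standard deviations (as fractions) for some
# sampling-error components from the theory of sampling.
sampling_components = {
    "fundamental": 0.30,
    "grouping_segregation": 0.20,
    "delimitation": 0.15,
    "extraction": 0.10,
    "preparation": 0.10,
}
analytical_rsd = 0.05   # relative standard deviation of the lab analysis

# Independent errors add in variance (quadrature):
total_sampling_rsd = np.sqrt(sum(v ** 2 for v in sampling_components.values()))
ratio = total_sampling_rsd / analytical_rsd
```

Even with these modest illustrative numbers, the total sampling error dominates the analytical error, which is the qualitative conclusion of the comparison above.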

  12. Hierarchical Structures from Inorganic Nanocrystal Self-Assembly for Photoenergy Utilization

    Directory of Open Access Journals (Sweden)

    Yun-Pei Zhu

    2014-01-01

    Full Text Available Self-assembly has emerged as a powerful strategy for controlling the structure and physicochemical properties of ensembles of inorganic nanocrystals. Hierarchical structures formed by nanocrystal assembly show collective properties that differ from those of individual nanocrystals and bulk samples. Incorporating structural hierarchy into nanostructures is of great importance because it enhances mass transport, reduces resistance to diffusion, and provides high surface areas for adsorption and reaction; thus much effort has been devoted to exploring novel organizing schemes through which inorganic porous structures with architectural design can be created. In this paper, the recent research progress in this field is reviewed. The general strategies for the synthesis of hierarchical structures assembled from nano-building blocks are elaborated. The well-defined hierarchical structures provide new opportunities for optimizing, tuning, and/or enhancing the properties and performance of these materials and have found applications in photoenergy utilization, including photodegradation, photocatalytic H2 production, photocatalytic CO2 conversion, and sensitized solar cells; these are discussed illustratively.

  13. Effect of hierarchical deformable motion compensation on image enhancement for DSA acquired via C-ARM

    Science.gov (United States)

    Wei, Liyang; Shen, Dinggang; Kumar, Dinesh; Turlapati, Ram; Suri, Jasjit S.

    2008-02-01

    DSA images suffer from challenges like system X-ray noise and artifacts due to patient movement. In this paper, we present a two-step strategy to improve DSA image quality. First, a hierarchical deformable registration algorithm is used to register the mask frame and the bolus frame before subtraction. Second, the resulting DSA image is further enhanced by background diffusion and nonlinear normalization for better visualization. Two major changes are made in the hierarchical deformable registration algorithm for DSA images: 1) B-splines are used to represent the deformation field in order to produce a smooth deformation field; 2) two features are defined as the attribute vector for each point in the image, i.e., original image intensity and gradient. Also, to speed up the 2D image registration, the hierarchical motion compensation algorithm is implemented within a multi-resolution framework. The proposed method has been evaluated on a database of 73 subjects by quantitatively measuring the signal-to-noise ratio (SNR). DSA embedded with the proposed strategies demonstrates an improvement of 74.1% over conventional DSA in terms of SNR. Our system runs on Eigen's DSA workstation using C++ in a Windows environment.
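Why registering the mask before subtraction raises SNR can be sketched with a toy 1D example. A simple integer shift stands in for the paper's B-spline deformable registration, and all signals here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(256)
background = np.exp(-((x - 128) / 60.0) ** 2)      # anatomy (mask content)
vessel = 0.5 * np.exp(-((x - 90) / 3.0) ** 2)      # contrast-filled vessel

mask = background + 0.01 * rng.normal(size=256)
# Bolus frame: anatomy shifted by 6 px of simulated patient motion, plus vessel.
bolus = np.roll(background, 6) + vessel + 0.01 * rng.normal(size=256)

def snr(dsa):
    signal = dsa[80:100].max()          # vessel region
    noise = dsa[160:240].std()          # vessel-free region
    return signal / noise

naive = bolus - mask                    # subtraction without registration
registered = bolus - np.roll(mask, 6)   # motion-compensated subtraction
```

With the mask aligned to the bolus frame, the residual in vessel-free regions drops to pure noise instead of motion artifact, so the SNR of the subtraction image improves.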

  14. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve error correction similar to the HD-cipher.
The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of the HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  15. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    In patients coming to harm, the root cause analysis of these adverse events can often be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT-related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  16. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old ones. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of all lengths and distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old ones in quantum error-correction theory, concatenation and pasting, can be understood within the framework of nested quantum error correction codes.

  17. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
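One core mechanism of hierarchical modeling, partial pooling of group-level estimates toward the grand mean, can be sketched with a simple empirical-Bayes shrinkage computation. This is a stand-in for the full Bayesian machinery discussed in the article; the data are simulated and the within-group variance is assumed known.

```python
import numpy as np

rng = np.random.default_rng(2)
true_means = rng.normal(10.0, 2.0, size=8)              # group-level process
data = [m + rng.normal(0.0, 3.0, size=5) for m in true_means]  # noisy observations

obs_means = np.array([d.mean() for d in data])
grand = obs_means.mean()
sigma2 = 3.0 ** 2 / 5          # known sampling variance of each group mean
tau2 = max(obs_means.var(ddof=1) - sigma2, 1e-9)        # between-group variance
weight = tau2 / (tau2 + sigma2)                         # reliability of a group mean

# Partial pooling: each group mean is pulled toward the grand mean,
# more strongly when the group means are noisy relative to their spread.
shrunk = grand + weight * (obs_means - grand)
```

The shrunken estimates trade a little bias for a large variance reduction, which is one concrete way hierarchical models "account for multiple sources of uncertainty" rather than treating each group in isolation.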

  18. A Fuzzy Logic Based Supervisory Hierarchical Control Scheme for Real Time Pressure Control

    Institute of Scientific and Technical Information of China (English)

    N.Kanagaraj; P.Sivashanmugam; S.Paramasivam

    2009-01-01

    This paper describes a supervisory hierarchical fuzzy controller (SHFC) for regulating pressure in a real-time pilot pressure control system. The input scaling factor of a direct expert controller is tuned using the error and process input parameters in a closed-loop system in order to obtain better controller performance for set-point changes and load disturbances. This on-line tuning method reduces operator involvement and extends the controller performance to a wide operating range. The hierarchical control scheme consists of an intelligent upper-level supervisory fuzzy controller and a lower-level direct fuzzy controller. The upper-level controller provides a mechanism for pursuing the main goal of the system, while the lower-level controller delivers the solution to a particular situation. The control algorithm for the proposed scheme has been developed and tested using an ARM7 microcontroller-based embedded target board for a nonlinear pressure process with dead time. To demonstrate its effectiveness, the results of the proposed hierarchical controller, a fuzzy controller, and a conventional proportional-integral (PI) controller are analyzed. The results prove that the SHFC performs better in terms of stability and robustness than the conventional control methods.
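A minimal sketch of the two-level idea: a supervisory layer selects the input scaling factor from the error band, and a lower-level controller applies it. The bands, gains, and first-order plant below are invented for illustration and are not the paper's fuzzy rule base or pressure process.

```python
def supervisor(error):
    """Upper level: pick the input scaling factor from the error band
    (a crude stand-in for the supervisory fuzzy rules)."""
    e = abs(error)
    if e > 0.5:
        return 2.0     # large error: aggressive scaling
    if e > 0.1:
        return 1.0     # medium error: nominal scaling
    return 0.5         # near the set point: gentle scaling

def low_level(error, scale, kp=0.8):
    """Lower level: scaled proportional action (stand-in for the fuzzy map)."""
    return kp * scale * error

# Simple first-order plant update, dead-time omitted for brevity.
y, setpoint, history = 0.0, 1.0, []
for _ in range(60):
    e = setpoint - y
    u = low_level(e, supervisor(e))
    y += 0.2 * (u - 0.1 * y)
    history.append(y)
```

The supervisory layer widens the effective operating range: the loop is aggressive far from the set point and gentle near it, without retuning the low-level controller itself. (A proportional-only lower level leaves a steady-state offset, which is one reason practical schemes add integral or fuzzy-integral action.)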

  19. Real diffusion-weighted MRI enabling true signal averaging and increased diffusion contrast.

    Science.gov (United States)

    Eichner, Cornelius; Cauley, Stephen F; Cohen-Adad, Julien; Möller, Harald E; Turner, Robert; Setsompop, Kawin; Wald, Lawrence L

    2015-11-15

    This project aims to characterize the impact of underlying noise distributions on diffusion-weighted imaging. The noise floor is a well-known problem for traditional magnitude-based diffusion-weighted MRI (dMRI) data, leading to biased diffusion model fits and inaccurate signal averaging. Here, we introduce a total-variation-based algorithm to eliminate shot-to-shot phase variations of complex-valued diffusion data with the intention to extract real-valued dMRI datasets. The obtained real-valued diffusion data are no longer superimposed by a noise floor but instead by a zero-mean Gaussian noise distribution, yielding dMRI data without signal bias. We acquired high-resolution dMRI data with strong diffusion weighting and, thus, low signal-to-noise ratio. Both the extracted real-valued and traditional magnitude data were compared regarding signal averaging, diffusion model fitting and accuracy in resolving crossing fibers. Our results clearly indicate that real-valued diffusion data enables idealized conditions for signal averaging. Furthermore, the proposed method enables unbiased use of widely employed linear least squares estimators for model fitting and demonstrates an increased sensitivity to detect secondary fiber directions with reduced angular error. The use of phase-corrected, real-valued data for dMRI will therefore help to clear the way for more detailed and accurate studies of white matter microstructure and structural connectivity on a fine scale. Copyright © 2015 Elsevier Inc. All rights reserved.
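The noise-floor problem and the benefit of real-valued averaging can be demonstrated with a toy simulation: under zero-mean Gaussian noise in each channel, magnitude averaging converges to a value well above a weak true signal, while real-channel averaging is unbiased. The signal level and noise model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
true_signal = 0.2        # weak, strongly diffusion-attenuated signal
n = 20000                # number of repeated measurements to average
noise = rng.normal(0.0, 1.0, size=(n, 2))    # real and imaginary channels

complex_meas = true_signal + noise[:, 0] + 1j * noise[:, 1]
magnitude_avg = np.abs(complex_meas).mean()  # magnitude data: Rician noise floor
real_avg = complex_meas.real.mean()          # phase-corrected real data: unbiased
```

No amount of magnitude averaging removes the bias, which is why model fits to low-SNR magnitude data are skewed; averaging the phase-corrected real channel recovers the true signal, mirroring the idealized averaging conditions reported above.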

  20. Can post-error dynamics explain sequential reaction time patterns?

    Directory of Open Access Journals (Sweden)

    Stephanie Goldfarb

    2012-07-01

    Full Text Available We investigate human error dynamics in sequential two-alternative choice tasks. When subjects repeatedly discriminate between two stimuli, their error rates and mean reaction times (RTs systematically depend on prior sequences of stimuli. We analyze these sequential effects on RTs, separating error and correct responses, and identify a sequential RT tradeoff: a sequence of stimuli which yields a relatively fast RT on error trials will produce a relatively slow RT on correct trials and vice versa. We reanalyze previous data and acquire and analyze new data in a choice task with stimulus sequences generated by a first-order Markov process having unequal probabilities of repetitions and alternations. We then show that relationships among these stimulus sequences and the corresponding RTs for correct trials, error trials, and averaged over all trials are significantly influenced by the probability of alternations; these relationships have not been captured by previous models. Finally, we show that simple, sequential updates to the initial condition and thresholds of a pure drift diffusion model can account for the trends in RT for correct and error trials. Our results suggest that error-based parameter adjustments are critical to modeling sequential effects.
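A hedged sketch of the final point: a drift-diffusion model whose decision threshold is adjusted after errors. All parameter values and the specific update rule below are illustrative, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)

def ddm_trial(drift, threshold, dt=0.002, noise=1.0):
    """Simulate one drift-diffusion trial from an unbiased start;
    return (correct, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x > 0, t

# Illustrative sequential rule: raise the threshold after an error
# (post-error caution), relax it slowly after correct responses.
threshold, results = 1.0, []
for _ in range(100):
    correct, rt = ddm_trial(drift=1.5, threshold=threshold)
    results.append((correct, rt, threshold))
    threshold = threshold + 0.3 if not correct else max(1.0, threshold - 0.05)
```

Because the threshold is temporarily elevated after an error, trials following errors tend to be slower and more accurate, which is the kind of error-based parameter adjustment the abstract argues is needed to capture sequential RT effects.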

  1. A Student Diffusion Activity

    Science.gov (United States)

    Kutzner, Mickey; Pearson, Bryan

    2017-01-01

    Diffusion is a truly interdisciplinary topic bridging all areas of STEM education. When biomolecules are not being moved through the body by fluid flow through the circulatory system or by molecular motors, diffusion is the primary mode of transport over short distances. The direction of the diffusive flow of particles is from high concentration…

  2. Acoustic diffusers III

    Science.gov (United States)

    Bidondo, Alejandro

    2002-11-01

    This acoustic diffusion research presents a pragmatic view, based more on effects than on causes, and is very useful in the project advance control process, where the sound field's diffusion coefficient, the sound field diffusivity (SFD), is used for its evaluation. Further research suggestions are presented for obtaining an octave-band frequency resolution of the SFD for precise design or acoustical corrections.

  3. A Student Diffusion Activity

    Science.gov (United States)

    Kutzner, Mickey; Pearson, Bryan

    2017-02-01

    Diffusion is a truly interdisciplinary topic bridging all areas of STEM education. When biomolecules are not being moved through the body by fluid flow through the circulatory system or by molecular motors, diffusion is the primary mode of transport over short distances. The direction of the diffusive flow of particles is from high concentration toward low concentration.

  4. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  5. A Comparison of Hierarchical and Non-Hierarchical Bayesian Approaches for Fitting Allometric Larch (Larix spp.) Biomass Equations

    Directory of Open Access Journals (Sweden)

    Dongsheng Chen

    2016-01-01

    Full Text Available Accurate biomass estimations are important for assessing and monitoring forest carbon storage. Bayesian theory has been widely applied to tree biomass models. Recently, a hierarchical Bayesian approach has received increasing attention for improving biomass models. In this study, tree biomass data were obtained by sampling 310 trees from 209 permanent sample plots from larch plantations in six regions across China. Non-hierarchical and hierarchical Bayesian approaches were used to model allometric biomass equations. We found that the total, root, stem wood, stem bark, branch and foliage biomass model relationships were statistically significant (p-values < 0.001) for both the non-hierarchical and hierarchical Bayesian approaches, but the hierarchical Bayesian approach improved the goodness-of-fit statistics over the non-hierarchical Bayesian approach. The R2 values of the hierarchical approach were higher than those of the non-hierarchical approach by 0.008, 0.018, 0.020, 0.003, 0.088 and 0.116 for the total tree, root, stem wood, stem bark, branch and foliage models, respectively. The hierarchical Bayesian approach significantly improved the accuracy of the biomass models (except for stem bark) and can reflect regional differences by using random parameters to improve model accuracy at the regional scale.
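The gain from region-specific parameters over a pooled fit can be sketched with synthetic allometric data. Region-wise least squares on the log-log form ln(B) = a_r + b ln(D) stands in for the hierarchical Bayesian random-intercept model; all numbers below are invented, not the study's.

```python
import numpy as np

rng = np.random.default_rng(5)

def r2(y, yhat):
    """Coefficient of determination."""
    ss_res = ((y - yhat) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Synthetic allometric data ln(B) = a_r + b * ln(D) with a region-specific
# intercept a_r, mimicking regional differences among larch plantations.
n_regions, logB, logD, labels = 3, [], [], []
for r in range(n_regions):
    a_r = -2.0 + 0.4 * r                       # regional intercept offset
    d = rng.uniform(2.0, 3.7, size=40)         # ln(diameter)
    logD.append(d)
    labels.append(np.full(40, r))
    logB.append(a_r + 2.4 * d + rng.normal(0.0, 0.15, size=40))
logB, logD, labels = map(np.concatenate, (logB, logD, labels))

# Pooled fit (one intercept for all regions) vs region-wise fits.
b_pool, a_pool = np.polyfit(logD, logB, 1)
pred_pool = a_pool + b_pool * logD
pred_reg = np.empty_like(logB)
for r in range(n_regions):
    m = labels == r
    b_r, a_r_hat = np.polyfit(logD[m], logB[m], 1)
    pred_reg[m] = a_r_hat + b_r * logD[m]
```

The region-wise predictions absorb the intercept offsets the pooled model cannot, raising R2, which is the same qualitative effect the study attributes to hierarchical random parameters (a true hierarchical model additionally shrinks the regional estimates toward a common mean).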

  6. Multipath error in range rate measurement by PLL-transponder/GRARR/TDRS

    Science.gov (United States)

    Sohn, S. J.

    1970-01-01

    Range rate errors due to specular and diffuse multipath are calculated for a tracking and data relay satellite (TDRS) using an S-band Goddard range and range rate (GRARR) system modified with a phase-locked loop transponder. Carrier signal processing in the coherent turn-around transponder and the GRARR receiver is taken into account. The root-mean-square (rms) range rate error was computed for the GRARR Doppler extractor and N-cycle count range rate measurement. Curves of worst-case range rate error are presented as a function of grazing angle at the reflection point. At very low grazing angles specular scattering predominates over diffuse scattering, as expected, whereas for grazing angles greater than approximately 15 deg, diffuse multipath predominates. The range rate errors at different low orbit altitudes peaked between 5 and 10 deg grazing angles.

  7. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    Interoperability of the documentation of HIT-related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  8. Hierarchical Parallelization of Gene Differential Association Analysis

    Directory of Open Access Journals (Sweden)

    Dwarkadas Sandhya

    2011-09-01

    Full Text Available Abstract Background Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.
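The two-layer strategy described above — coarse-grain decomposition across MPI processes, fine-grain parallelism within each — can be sketched in a toy form. The example below is an illustrative stand-in, not the paper's code: blocks of gene pairs stand in for MPI ranks, threads provide the fine grain, and the statistic is a simple pairwise correlation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for gene differential association analysis: absolute
# correlation of each gene pair across samples. Sizes are illustrative.
rng = np.random.default_rng(1)
genes, samples = 60, 20
expr = rng.normal(size=(genes, samples))
pairs = [(i, j) for i in range(genes) for j in range(i + 1, genes)]

def pair_corr(pair):
    i, j = pair
    return abs(np.corrcoef(expr[i], expr[j])[0, 1])

def process_block(block):
    # Fine-grain level: threads share the block's working set, which
    # keeps per-"rank" data traffic local (the paper's cache argument).
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(pair_corr, block))

# Coarse-grain level: split the pair list into blocks, one per "rank"
# (a real implementation would distribute these via MPI).
n_blocks = 4
blocks = [pairs[k::n_blocks] for k in range(n_blocks)]
results = [c for block in blocks for c in process_block(block)]

print(len(results), round(max(results), 3))
```

In the real implementation the block count and threads-per-block would be tuned so each block's working set fits in cache, as the conclusions recommend.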

  9. Three Layer Hierarchical Model for Chord

    Directory of Open Access Journals (Sweden)

    Waqas A. Imtiaz

    2012-12-01

    Full Text Available The increasing popularity of decentralized Peer-to-Peer (P2P) architecture emphasizes the need for an overlay structure that can provide an efficient content-discovery mechanism, accommodate a high churn rate and adapt to failures in the presence of heterogeneity among the peers. Traditional P2P systems incorporate distributed client-server communication to find the peer that stores a desired data item efficiently, with minimum delay and reduced overhead. However, traditional models are not able to solve the problems of scalability and high churn rates. Hierarchical models were introduced to provide better fault isolation, effective bandwidth utilization, superior adaptation to the underlying physical network and a reduction of the lookup path length as additional advantages. They are more efficient and easier to manage than traditional P2P networks. This paper discusses a further step in the P2P hierarchy via a 3-layer hierarchical model with a distributed database architecture in each layer, connected through its root. The peers are divided into three categories according to their physical stability and strength: Ultra Super-peers, Super-peers and Ordinary Peers, assigned to the first, second and third levels of the hierarchy respectively. Peers in a group in the lower layer have their own local database, which is held by the associated super-peer in the middle layer, and the database is accessed among the peers through user queries. In our 3-layer hierarchical model for DHT algorithms, we use an advanced Chord algorithm with an optimized finger table that removes redundant entries in the finger table of the upper layer, which helps the system reduce lookup latency. Our research finally shows that the model provides faster search, since the network lookup latency is decreased by reducing the number of hops.
The peers in such a network can then contribute improved functionality and can perform well in
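The "optimized finger table" idea mentioned above can be sketched on a minimal Chord-style identifier ring. The ring size and node IDs below are made up for illustration, and the optimization shown (collapsing consecutive fingers that resolve to the same successor) is one plausible reading of "removing redundant entries", not the paper's exact algorithm.

```python
# Minimal Chord-style finger table on an identifier ring.
M = 6                          # identifier bits -> ring of 2^M slots
nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(k):
    """First node clockwise from key k on the ring."""
    for n in nodes:
        if n >= k % (2 ** M):
            return n
    return nodes[0]            # wrap around

def finger_table(n):
    # Classic Chord: finger[i] = successor(n + 2^i), i = 0..M-1.
    return [successor(n + 2 ** i) for i in range(M)]

def optimized_finger_table(n):
    """Drop consecutive duplicate entries while keeping coverage."""
    out = []
    for node in finger_table(n):
        if not out or out[-1] != node:
            out.append(node)
    return out

full = finger_table(8)
opt = optimized_finger_table(8)
print(full, opt)   # several low fingers of node 8 all point at node 14
```

On this ring, node 8's first three fingers all resolve to node 14, so the optimized table stores it once — fewer entries to maintain per lookup, which is the latency argument made in the abstract.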

  10. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  11. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  12. Hierarchical bismuth phosphate microspheres with high photocatalytic performance

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Lizhai; Wei, Tian; Lin, Nan; Yu, Haiyun [Anhui University of Technology, Ma' anshan (China). Key Laboratory of Materials Science and Processing of Anhui Province

    2016-05-15

    Hierarchical bismuth phosphate microspheres have been prepared by a simple hydrothermal process with polyvinyl pyrrolidone. Scanning electron microscopy observations show that the hierarchical bismuth phosphate microspheres consist of nanosheets with a thickness of about 30 nm. The diameter of the microspheres is about 1 - 3 μm. X-ray diffraction analysis shows that the microspheres are comprised of triclinic Bi{sub 23}P{sub 4}O{sub 44.5} phase. The formation of the hierarchical microspheres depends on polyvinyl pyrrolidone concentration, hydrothermal temperature and reaction time. Gentian violet acts as the pollutant model for investigating the photocatalytic activity of the hierarchical bismuth phosphate microspheres under ultraviolet-visible light irradiation. The effects of irradiation time, dosage of the hierarchical microspheres and initial gentian violet concentration on the photocatalytic efficiency are also discussed. The hierarchical bismuth phosphate microspheres show good photocatalytic performance for gentian violet removal in aqueous solution.

  13. Electronic Properties in a Hierarchical Multilayer Structure

    Institute of Scientific and Technical Information of China (English)

    ZHU Chen-Ping; XIONG Shi-Jie

    2001-01-01

    We investigate electronic properties of a hierarchical multilayer structure consisting of stackings of barriers and wells. The structure is formed in a sequence of generations, each of which is constructed with the same pattern but with the previous generation as the basic building blocks. We calculate the transmission spectrum, which shows multifractal behavior for systems with large generation index. From the analysis of the average resistivity and the multifractal structure of the wavefunctions, we show that there exist different types of states exhibiting extended, localized and intermediate characteristics. The degree of localization is sensitive to the variation of the structural parameters. A possible experimental realization is suggested.

  14. Mechanics of hierarchical 3-D nanofoams

    Science.gov (United States)

    Chen, Q.; Pugno, N. M.

    2012-01-01

    In this paper, we study the mechanics of new three-dimensional hierarchical open-cell foams and, in particular, their Young's modulus and plastic strength. We incorporate the effects of surface elasticity and surface residual stress in the linear elastic and plastic analyses. The results show that, as the cross-sectional dimension decreases, the influence of the surface effect on Young's modulus and plastic strength increases, and the surface effect makes the solid stiffer and stronger; similarly, as the level n increases, these quantities approach those of the classical theory as lower bounds.

  15. Hierarchical Control for Multiple DC Microgrids Clusters

    DEFF Research Database (Denmark)

    Shafiee, Qobad; Dragicevic, Tomislav; Vasquez, Juan Carlos;

    2014-01-01

    This paper presents a distributed hierarchical control framework to ensure reliable operation of dc Microgrid (MG) clusters. In this hierarchy, primary control is used to regulate the common bus voltage inside each MG locally. An adaptive droop method is proposed for this level which determines....... Another distributed policy is employed then to regulate the power flow among the MGs according to their local SOCs. The proposed distributed controllers on each MG communicate with only the neighbor MGs through a communication infrastructure. Finally, the small signal model is expanded for dc MG clusters...

  16. Effective Hierarchical Information Management in Mobile Environment

    Directory of Open Access Journals (Sweden)

    Hanmin Jung

    2012-01-01

    Full Text Available Problem statement: As the performance of mobile devices improves, many kinds of data are stored on them. For effective data management and information retrieval, some studies have applied the ontology concept to mobile devices. However, previous work applies the conventional ontology storage structure used in the PC environment directly to mobile platforms, so search performance over the data is low and ineffective. Conclusion/Recommendations: We therefore suggest a new ontology storage schema with an ontology path for effective hierarchical information management in the mobile environment.

  17. A hierarchical classification scheme of psoriasis images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    the normal skin in the second stage. These tools are the Expectation-Maximization Algorithm, the quadratic discrimination function and a classification window of optimal size. Extrapolation of classification parameters of a given image to other images of the set is evaluated by means of Cohen's Kappa......A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...

  18. Hierarchical silica particles by dynamic multicomponent assembly

    DEFF Research Database (Denmark)

    Wu, Z. W.; Hu, Q. Y.; Pang, J. B.

    2005-01-01

    Abstract: Mesoporous silica particles with hierarchically controllable pore structure have been prepared by aerosol-assisted assembly using cetyltrimethylammonium bromide (CTAB) and poly(propylene oxide) (PPO, H[OCH(CH3)CH2]nOH) as co-templates. Addition of the hydrophobic PPO significantly influe......-silicate assembling system was discussed. The mesostructure of these particles was characterized by transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray diffraction (XRD), and N-2 sorption. (c) 2005 Elsevier Inc. All rights reserved....

  19. Constructing storyboards based on hierarchical clustering analysis

    Science.gov (United States)

    Hasebe, Satoshi; Sami, Mustafa M.; Muramatsu, Shogo; Kikuchi, Hisakazu

    2005-07-01

    There are growing needs for quick preview of video contents for the purpose of improving the accessibility of video archives as well as reducing network traffic. In this paper, a storyboard that contains a user-specified number of keyframes is produced from a given video sequence. It is based on hierarchical cluster analysis of feature vectors that are derived from wavelet coefficients of video frames. Consistent use of extracted feature vectors is the key to avoiding repeated, computationally intensive parsing of the same video sequence. Experimental results suggest that a significant reduction in computational time is gained by this strategy.

  20. Technique for fast and efficient hierarchical clustering

    Science.gov (United States)

    Stork, Christopher

    2013-10-08

    A fast and efficient technique for hierarchical clustering of samples in a dataset includes compressing the dataset to reduce a number of variables within each of the samples of the dataset. A nearest neighbor matrix is generated to identify nearest neighbor pairs between the samples based on differences between the variables of the samples. The samples are arranged into a hierarchy that groups the samples based on the nearest neighbor matrix. The hierarchy is rendered to a display to graphically illustrate similarities or differences between the samples.
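The three steps described above — compress the dataset, build a nearest-neighbor matrix, then merge samples into a hierarchy — can be sketched on toy data. The details below (variance-based compression, single-linkage merging) are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 50))
X[:4] += 3.0                        # two loose groups of samples

# 1) Compression: keep only the 10 highest-variance variables.
keep = np.argsort(X.var(axis=0))[-10:]
Xc = X[:, keep]

# 2) Nearest-neighbor matrix from pairwise Euclidean distances.
d = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=-1)
np.fill_diagonal(d, np.inf)

# 3) Agglomerative merging: repeatedly fuse the closest pair of
# clusters (single-linkage update), recording the merge order.
active = {i: [i] for i in range(len(Xc))}
merges = []
while len(active) > 1:
    keys = list(active)
    _, a, b = min((d[p, q], p, q) for p in keys for q in keys if p < q)
    merges.append((active[a], active[b]))
    d[a, :] = d[:, a] = np.minimum(d[a, :], d[b, :])  # single linkage
    d[a, a] = np.inf
    active[a] = active[a] + active[b]
    del active[b]

print(merges)   # merge order defines the rendered hierarchy
```

The recorded merge order is exactly what a dendrogram rendering (the patent's display step) would draw.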

  1. Robust Pseudo-Hierarchical Support Vector Clustering

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur

    2007-01-01

    Support vector clustering (SVC) has proven an efficient algorithm for clustering of noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method...... for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial...

  2. Additive Manufacturing of Hierarchical Porous Structures

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division. Polymers and Coatings

    2016-08-30

    Additive manufacturing has become a tool of choice for the development of customizable components. Developments in this technology have led to a powerful array of printers that serve a variety of needs. However, resin development plays a crucial role in leading the technology forward. This paper addresses the development and application of printing hierarchical porous structures, beginning with the development of a porous scaffold, which can be functionalized with a variety of materials, and concluding with customized resins for metal, ceramic, and carbon structures.

  3. An introduction to hierarchical linear modeling

    Directory of Open Access Journals (Sweden)

    Heather Woltman

    2012-02-01

    Full Text Available This tutorial aims to introduce Hierarchical Linear Modeling (HLM. A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis. The first section of the tutorial defines HLM, clarifies its purpose, and states its advantages. The second section explains the mathematical theory, equations, and conditions underlying HLM. HLM hypothesis testing is performed in the third section. Finally, the fourth section provides a practical example of running HLM, with which readers can follow along. Throughout this tutorial, emphasis is placed on providing a straightforward overview of the basic principles of HLM.
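The defining feature of HLM that the tutorial above introduces — level-1 units nested in level-2 groups, with group parameters partially pooled toward the grand mean — can be shown in a small numpy sketch. The data are simulated and the shrinkage weight is the classic empirical-Bayes reliability formula; this is an illustration of the idea, not the tutorial's worked example.

```python
import numpy as np

# Two-level toy: students (level 1) nested in schools (level 2).
# Level 1: y_ij = beta_0j + e_ij ; Level 2: beta_0j = gamma_00 + u_0j.
rng = np.random.default_rng(3)
gamma_00, tau, sigma = 50.0, 5.0, 10.0
schools, n_j = 12, 15
u = rng.normal(0, tau, schools)             # school-level deviations
y = gamma_00 + u[:, None] + rng.normal(0, sigma, (schools, n_j))

grand = y.mean()
school_means = y.mean(axis=1)

# Partial pooling: shrink each school mean toward the grand mean with
# reliability weight w = tau^2 / (tau^2 + sigma^2 / n_j).
w = tau**2 / (tau**2 + sigma**2 / n_j)
shrunk = w * school_means + (1 - w) * grand

err_raw = np.abs(school_means - (gamma_00 + u)).mean()
err_shrunk = np.abs(shrunk - (gamma_00 + u)).mean()
print(f"w = {w:.2f}, raw error = {err_raw:.2f}, shrunk error = {err_shrunk:.2f}")
```

The weight w grows with group size n_j, which is why HLM handles small groups more gracefully than fitting each group separately.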

  4. Magnetic susceptibilities of cluster-hierarchical models

    Science.gov (United States)

    McKay, Susan R.; Berker, A. Nihat

    1984-02-01

    The exact magnetic susceptibilities of hierarchical models are calculated near and away from criticality, in both the ordered and disordered phases. The mechanism and phenomenology are discussed for models with susceptibilities that are physically sensible, e.g., nondivergent away from criticality. Such models are found based upon the Niemeijer-van Leeuwen cluster renormalization. A recursion-matrix method is presented for the renormalization-group evaluation of response functions. Diagonalization of this matrix at fixed points provides simple criteria for well-behaved densities and response functions.

  5. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  6. Adaptive computation for convection dominated diffusion problems

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhiming; JI Guanghua

    2004-01-01

    We derive a sharp L∞(L1) a posteriori error estimate for convection-dominated diffusion equations of the form ∂u/∂t + div(vu) - εΔu = g. The derived estimate is insensitive to the diffusion parameter ε → 0. The problem is discretized implicitly in time via the method of characteristics and in space via continuous piecewise linear finite elements. Numerical experiments are reported to show the competitive behavior of the proposed adaptive method.
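Why the limit ε → 0 is the hard case can be seen on a one-dimensional steady analogue. The sketch below uses plain upwind finite differences on -εu'' + vu' = 1 with homogeneous boundary conditions — deliberately not the abstract's characteristics/finite-element scheme — just to exhibit the boundary layer that motivates adaptivity.

```python
import numpy as np

def solve(eps, v=1.0, n=200):
    """Upwind FD solution of -eps*u'' + v*u' = 1 on (0,1), u(0)=u(1)=0."""
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))
    b = np.ones(n - 1)
    for i in range(n - 1):
        # diffusion: -eps*(u[i-1]-2u[i]+u[i+1])/h^2 ; convection: upwind
        A[i, i] = 2 * eps / h**2 + v / h
        if i > 0:
            A[i, i - 1] = -eps / h**2 - v / h
        if i < n - 2:
            A[i, i + 1] = -eps / h**2
    return np.linalg.solve(A, b)

for eps in (1e-1, 1e-3):
    u = solve(eps)
    # as eps shrinks, u approaches x with a sharp layer at x = 1,
    # so the solution maximum creeps toward 1
    print(f"eps={eps:g}: max u = {u.max():.3f}")
```

As ε shrinks the solution follows u ≈ x except in a layer of width O(ε) at the outflow boundary, which is where an adaptive method would concentrate its mesh.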

  7. Facile Carbonization of Microporous Organic Polymers into Hierarchically Porous Carbons Targeted for Effective CO2 Uptake at Low Pressures.

    Science.gov (United States)

    Gu, Shuai; He, Jianqiao; Zhu, Yunlong; Wang, Zhiqiang; Chen, Dongyang; Yu, Guipeng; Pan, Chunyue; Guan, Jianguo; Tao, Kai

    2016-07-20

    The advent of microporous organic polymers (MOPs) has delivered great potential in gas storage and carbon capture and storage (CCS). However, the presence of only micropores in these polymers often imposes diffusion limitations, which has resulted in the low utilization of MOPs in CCS. Herein, facile chemical activation of single microporous organic polymers resulted in a series of hierarchically porous carbons with hierarchically meso-microporous structures and high CO2 uptake capacities at low pressures. The MOP precursors (termed MOP-7-10) with a simple narrow-micropore structure obtained in this work possess moderate apparent BET surface areas ranging from 479 to 819 m(2) g(-1). By comparing different activating agents for the carbonization of these MOP materials, we found that the optimized carbon materials MOPs-C activated by KOH show unique hierarchically porous structures with a significant expansion of the dominant pore size from micropores to mesopores, whereas their microporosity is also significantly improved, as evidenced by a significant increase in the micropore volume (from 0.27 to 0.68 cm(3) g(-1)). This may be related to the collapse and structural rearrangement of the polymer frameworks resulting from activation by KOH at high temperature. The as-made hierarchically porous carbons MOPs-C show an obvious increase in BET surface area (from 819 to 1824 m(2) g(-1)), and their unique hierarchically porous structures significantly contribute to the enhancement of the CO2 capture capacities, which reach 214 mg g(-1) (at 273 K and 1 bar) and 52 mg g(-1) (at 273 K and 0.15 bar), superior to those of most known MOPs and porous carbons. The high physicochemical stabilities and appropriate isosteric adsorption heats, as well as high CO2/N2 ideal selectivities, endow these hierarchically porous carbon materials with great potential in gas sorption and separation.

  8. Fast lithium intercalation chemistry of the hierarchically porous Li2FeP2O7/C composite prepared by an iron-reduction method

    Science.gov (United States)

    Tan, L.; Zhang, S.; Deng, C.

    2015-02-01

    Lithium iron pyrophosphate has drawn great attention because of its interesting physical and electrochemical properties, whereas its high rate capability is far from satisfactory. We synthesize nano-Li2FeP2O7/C with hierarchical pores via a low-cost method which uses iron powder instead of Vitamin C as the reducing agent. The hierarchical pore structure is constructed through a "combustion" mechanism according to the thermogravimetric and morphological characterizations. The phase-pure nanoparticles of Li2FeP2O7 are embedded in the three-dimensional network of amorphous carbon. The hierarchical pores, together with the two-dimensional diffusion channel of lithium in Li2FeP2O7, are beneficial to the lithium diffusion capability, which is evaluated by the lithium diffusion coefficients calculated from the results of GITT measurements. The fast lithium intercalation chemistry facilitates the reversible de/intercalation of lithium, resulting in high cycling stability and rate capability. After 100 cycles at a current density of 1C, 93.8% of the initial capacity is retained. The discharge capacity is 62.1 mAh g-1 at a current density of 4C. Therefore, the hierarchically porous nano-Li2FeP2O7/C is a promising cathode material for advanced rechargeable lithium ion batteries.
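The abstract evaluates lithium transport via diffusion coefficients extracted from GITT. For reference, the commonly used Weppner-Huggins expression for GITT-derived diffusion coefficients (not stated in the abstract; quoted here as the standard formula rather than the paper's exact procedure) is:

```latex
D_{\mathrm{Li}} = \frac{4}{\pi \tau}
  \left( \frac{m_B V_M}{M_B S} \right)^{2}
  \left( \frac{\Delta E_s}{\Delta E_\tau} \right)^{2},
  \qquad \tau \ll \frac{L^2}{D_{\mathrm{Li}}}
```

where τ is the current-pulse duration, m_B, M_B and V_M are the mass, molar mass and molar volume of the active material, S is the electrode-electrolyte contact area, ΔE_s is the steady-state voltage change per pulse, ΔE_τ the transient voltage change during the pulse, and L the characteristic diffusion length.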

  9. Construction of hierarchical porous NiCo{sub 2}O{sub 4} films composed of nanowalls as cathode materials for high-performance supercapacitor

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Qingyun, E-mail: hizhengqingyun@126.com; Zhang, Xiangyang; Shen, Youming

    2015-04-15

    Graphical abstract: Hydrothermal-synthesized NiCo{sub 2}O{sub 4} mesowall films exhibit porous structure and high capacity as well as good cycling life for supercapacitor application. - Highlights: • Hierarchical porous NiCo{sub 2}O{sub 4} nanowall films are prepared by a hydrothermal method. • NiCo{sub 2}O{sub 4} nanowall films show excellent electrochemical performance. • Hierarchical porous film structure is favorable for fast ion/electron transfer. - Abstract: Hierarchical porous NiCo{sub 2}O{sub 4} films composed of nanowalls on nickel foam are synthesized via a facile hydrothermal method. Besides the mesoporous walls, the NiCo{sub 2}O{sub 4} nanowalls are interconnected with each other to form hierarchical porous structure. These unique porous structured films possess a high specific surface area. The supercapacitor performance of the hierarchical porous NiCo{sub 2}O{sub 4} film is fully characterized. A high capacity of 130 mA h g{sup −1} is achieved at 2 A g{sup −1} with 97% capacity maintained after 2,000 cycles. Importantly, 75.6% of capacity is retained when the current density changes from 3 A g{sup −1} to 36 A g{sup −1}. The superior electrochemical performance is mainly due to the unique hierarchical porous structure with large surface area as well as shorter diffusion length for ion and charge transport.

  10. Hierarchical Classification of Chinese Documents Based on N-grams

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We explore the techniques of utilizing N-gram information to categorize Chinese text documents hierarchically so that the classifier can shake off the burden of large dictionaries and complex segmentation processing, and subsequently be domain- and time-independent. A hierarchical Chinese text classifier is implemented. Experimental results show that hierarchically classifying Chinese text documents based on N-grams can achieve satisfactory performance and outperforms traditional Chinese text classifiers.
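The segmentation-free property claimed above comes from using character N-grams directly as features. The toy scorer below (nearest class by N-gram overlap, with invented two-class training snippets) is an illustrative assumption, not the paper's classifier, and shows only the flat N-gram step rather than the full category hierarchy.

```python
from collections import Counter

def char_ngrams(text, n=2):
    """Character bigrams: no dictionary or word segmentation needed."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Invented two-class training data for illustration.
train = {
    "sports": ["足球比赛结果", "篮球联赛冠军"],
    "tech":   ["计算机网络技术", "人工智能算法"],
}
profiles = {c: Counter(g for doc in docs for g in char_ngrams(doc))
            for c, docs in train.items()}

def classify(text):
    grams = char_ngrams(text)
    # score = total n-gram overlap with each class profile
    scores = {c: sum(p[g] for g in grams) for c, p in profiles.items()}
    return max(scores, key=scores.get)

print(classify("足球冠军"))
```

A hierarchical version would apply the same scoring at each node of a category tree, routing the document down one branch per level.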

  11. Hierarchical transport networks optimizing dynamic response of permeable energy-storage materials.

    Science.gov (United States)

    Nilson, Robert H; Griffiths, Stewart K

    2009-07-01

    Channel widths and spacing in latticelike hierarchical transport networks are optimized to achieve maximum extraction of gas or electrical charge from nanoporous energy-storage materials during charge and discharge cycles of specified duration. To address a range of physics, the effective transport diffusivity is taken to vary as a power, m, of channel width. Optimal channel widths and spacing in all levels of the hierarchy are found to increase in a power-law manner with normalized system size, facilitating the derivation of closed-form approximations for the optimal dimensions. Characteristic response times and ratios of channel width to spacing are both shown to vary by the factor 2/m between successive levels of any optimal hierarchy. This leads to fractal-like self-similar geometry, but only for m = 2. For this case of quadratic dependence of diffusivity on channel width, the introduction of transport channels permits increases in system size on the order of 10^4, 10^8, and 10^10, without any reduction in extraction efficiency, for hierarchies having 1, 2, and 8 levels, respectively. However, we also find that for a given system size there is an optimum number of hierarchical levels that maximizes extraction efficiency.

  12. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  13. Spatial frequency domain error budget

    Energy Technology Data Exchange (ETDEWEB)

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. 
If the machine
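The record above distinguishes low-spatial-frequency form errors from high-frequency finish errors. A minimal numpy sketch of that decomposition — FFT a surface-error profile, then compute per-band RMS — is given below; the profile, amplitudes and band edge are made-up numbers, and this is only the band-splitting step, not the full error-budget methodology.

```python
import numpy as np

n, L = 1024, 0.1                      # samples, part length [m]
x = np.linspace(0, L, n, endpoint=False)
# synthetic residual error: slow "form" wave + fast "finish" ripple
surface = (2e-7 * np.sin(2 * np.pi * 10 * x / L)     # 100 cycles/m
           + 2e-8 * np.sin(2 * np.pi * 300 * x / L)) # 3000 cycles/m

spec = np.fft.rfft(surface) / n
freq = np.fft.rfftfreq(n, d=L / n)    # spatial frequency [cycles/m]
power = np.abs(spec) ** 2
power[1:-1] *= 2                      # fold in negative frequencies

def band_rms(f_lo, f_hi):
    """RMS error contributed by one spatial-frequency band (Parseval)."""
    m = (freq >= f_lo) & (freq < f_hi)
    return np.sqrt(power[m].sum())

rms_form = band_rms(0, 1000)          # low band ~ form error
rms_finish = band_rms(1000, freq.max() + 1)
print(f"form RMS ~ {rms_form:.2e} m, finish RMS ~ {rms_finish:.2e} m")
```

Budgeting each band separately is what lets a designer reject a machine whose total RMS is fine but whose error is concentrated in a forbidden frequency band.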

  14. Reducing errors in emergency surgery.

    Science.gov (United States)

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  15. Diffusion of gallium in cadmium telluride

    Energy Technology Data Exchange (ETDEWEB)

    Blackmore, G.W. (Royal Signals and Radar Establishment, Malvern (United Kingdom)); Jones, E.D. (Coventry Polytechnic (United Kingdom)); Mullin, J.B. (Electronics Materials Consultant, West Malvern (United Kingdom)); Stewart, N.M. (BT Labs., Martlesham Heath, Ipswich (United Kingdom))

    1993-01-30

    The diffusion of Ga into bulk-grown, single crystal slices of CdTe was studied in the temperature range 350-811 °C, where the diffusion anneals were carried out in sealed silica capsules using three different types of diffusion sources. These were: excess Ga used alone, or with either excess Cd or excess Te added to the Ga. Each of the three sets of conditions resulted in a different type of concentration profile. At temperatures above 470 °C, a function composed of the sum of two complementary error functions gave the best fit to the profiles, whereas below this temperature a function composed of the sum of one or more exponentials of the form exp(-ax) gave the best fit. The behaviour of the diffusion of Ga in CdTe is complex, but it can be seen that two diffusion mechanisms are operating. The first is one in which D appears to decrease with Cd partial pressure, which implies that the diffusion mechanism may involve Cd vacancies; the second is independent of Cd partial pressure. The moderate values of D obtained confirm that CdTe buffer layers may be useful in reducing Ga contamination in (Hg[sub x]Cd[sub 1-x])Te epitaxial devices grown on GaAs substrates. (orig.).
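The two-erfc profile shape mentioned above follows directly from two simultaneous diffusion mechanisms with different D. The sketch below builds such a profile from invented, order-of-magnitude parameters (not the paper's measured values) to show how a slow, high-concentration component dominates near the surface while a fast, dilute component forms the deep tail.

```python
import numpy as np
from scipy.special import erfc

t = 3600.0                                   # anneal time [s]
x = np.linspace(0, 60e-4, 200)               # depth [cm]
c1, D1 = 1e19, 1e-12                         # slow mechanism [cm^-3, cm^2/s]
c2, D2 = 1e17, 1e-10                         # fast mechanism

# C(x) = c1*erfc(x / 2*sqrt(D1*t)) + c2*erfc(x / 2*sqrt(D2*t))
slow = c1 * erfc(x / (2 * np.sqrt(D1 * t)))
fast = c2 * erfc(x / (2 * np.sqrt(D2 * t)))
profile = slow + fast

# diffusion lengths 2*sqrt(D*t): ~1.2 um (slow) vs ~12 um (fast)
surface_ratio = slow[0] / fast[0]            # slow dominates surface
deep = x > 20e-4
tail_frac = fast[deep].sum() / profile[deep].sum()  # fast owns the tail
print(f"surface slow/fast = {surface_ratio:.0f}, "
      f"tail fast fraction = {tail_frac:.2f}")
```

Fitting measured SIMS profiles with this functional form (e.g. via least squares) is what yields the two D values per anneal temperature.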

  16. Hierarchical rule-based monitoring and fuzzy logic control for neuromuscular block.

    Science.gov (United States)

    Shieh, J S; Fan, S Z; Chang, L W; Liu, C C

    2000-01-01

    The important task for anaesthetists is to provide an adequate degree of neuromuscular block during surgical operations, so that it is not difficult to antagonize at the end of surgery. Therefore, this study examined the application of a simple technique (i.e., fuzzy logic) to an almost ideal muscle relaxant (i.e., rocuronium) in general anaesthesia in order to control the system more easily, efficiently, intelligently and safely during an operation. The characteristics of neuromuscular blockade induced by rocuronium were studied in 10 ASA I or II adult patients anaesthetized with inhalational (i.e., isoflurane) anaesthesia. A Datex Relaxograph was used to monitor neuromuscular block, and the ulnar nerve was stimulated supramaximally with repeated train-of-four via surface electrodes at the wrist. Initially a notebook personal computer was linked to the Datex Relaxograph to monitor electromyogram (EMG) signals, which were pruned by a three-level hierarchical structure of filters, in order to design a controller for administering muscle relaxants. Furthermore, a four-level hierarchical fuzzy logic controller using fuzzy logic and the rule-of-thumb concept was incorporated into the system. The Student's t-test was used to compare the variance between the groups, with p < 0.05 considered significant. The system achieved control of muscle relaxation with a mean T1% error of -0.19 (SD 0.66)%, accommodating a range in mean infusion rate (MIR) of 0.21-0.49 mg x kg(-1) x h(-1). When these results were compared with our previous ones using the same hierarchical structure applied to mivacurium, less variation in the T1% error (p < 0.05) was observed, and the controller activity of the two drugs showed no significant difference (p > 0.5). However, the consistent medium coefficient of variance (CV) of the MIR of both rocuronium (i.e., 36.13 (SD 9.35) %) and mivacurium (i.e., 34.03 (SD 10.76) %) indicated a good controller activity.
The results showed that a hierarchical rule-based monitoring and fuzzy logic control architecture can provide stable control
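The abstract does not give the controller's rule base, so as a minimal sketch of the kind of Mamdani-style inference such a fuzzy controller performs, the following maps a T1% setpoint error to an infusion-rate adjustment through three triangular memberships. All membership shapes, ranges, and consequent values here are illustrative assumptions, not the paper's actual rules.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: rises on [a, b], falls on [b, c]."""
    return float(np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b))))

def fuzzy_infusion_adjustment(t1_error):
    """Map the T1% setpoint error to an infusion-rate adjustment (mg/kg/h).

    Three illustrative rules (hypothetical, not the paper's rule base):
      error negative (too much block)   -> decrease rate
      error near zero                   -> hold rate
      error positive (too little block) -> increase rate
    Defuzzification by the weighted average of rule consequents.
    """
    weights = np.array([
        tri(t1_error, -20.0, -10.0, 0.0),   # "error negative"
        tri(t1_error, -10.0, 0.0, 10.0),    # "error near zero"
        tri(t1_error, 0.0, 10.0, 20.0),     # "error positive"
    ])
    consequents = np.array([-0.05, 0.0, +0.05])  # rate change per rule
    if weights.sum() == 0.0:
        return 0.0
    return float(weights @ consequents / weights.sum())
```

A negative T1% error (block deeper than the setpoint) yields a negative rate adjustment, a positive error a positive one, and a zero error leaves the rate unchanged.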

  17. Fractal image perception provides novel insights into hierarchical cognition.

    Science.gov (United States)

    Martins, M J; Fischmeister, F P; Puig-Waldmüller, E; Oh, J; Geissler, A; Robinson, S; Fitch, W T; Beisteiner, R

    2014-08-01

    Hierarchical structures play a central role in many aspects of human cognition, prominently including both language and music. In this study we addressed hierarchy in the visual domain, using a novel paradigm based on fractal images. Fractals are self-similar patterns generated by repeating the same simple rule at multiple hierarchical levels. Our hypothesis was that the brain uses different resources for processing hierarchies depending on whether it applies a "fractal" or a "non-fractal" cognitive strategy. We analyzed the neural circuits activated by these complex hierarchical patterns in an event-related fMRI study of 40 healthy subjects. Brain activation was compared across three different tasks: a similarity task, and two hierarchical tasks in which subjects were asked to recognize the repetition of a rule operating transformations either within an existing hierarchical level, or generating new hierarchical levels. Similar hierarchical images were generated by both rules and target images were identical. We found that when processing visual hierarchies, engagement in both hierarchical tasks activated the visual dorsal stream (occipito-parietal cortex, intraparietal sulcus and dorsolateral prefrontal cortex). In addition, the level-generating task specifically activated circuits related to the integration of spatial and categorical information, and with the integration of items in contexts (posterior cingulate cortex, retrosplenial cortex, and medial, ventral and anterior regions of temporal cortex). These findings provide interesting new clues about the cognitive mechanisms involved in the generation of new hierarchical levels as required for fractals.

  18. Geometrical phase transitions on hierarchical lattices and universality

    Science.gov (United States)

    Hauser, P. R.; Saxena, V. K.

    1986-12-01

    In order to examine the validity of the principle of universality for phase transitions on hierarchical lattices, we have studied percolation on a variety of hierarchical lattices, within exact position-space renormalization-group schemes. It is observed that the percolation critical exponent νp strongly depends on the topology of the lattices, even for lattices with the same intrinsic dimensions and connectivities. These results support some recent similar results on thermal phase transitions on hierarchical lattices and point out the possible violation of universality in phase transitions on hierarchical lattices.

  19. Modified nonlinear complex diffusion filter (MNCDF).

    Science.gov (United States)

    Saini, Kalpana; Dewal, M L; Rohit, Manojkumar

    2012-06-01

    Speckle noise removal is the most important step in the processing of echocardiographic images. A speckle-free image produces useful information to diagnose heart-related diseases. Images which contain low noise and sharp edges are more easily analyzed by the clinicians. This noise removal stage is also a preprocessing stage in segmentation techniques. A new formulation has been proposed for a well-known nonlinear complex diffusion filter (NCDF). Its diffusion coefficient and the time step size are modified to give fast processing and better results. An investigation has been performed among nine patients suffering from mitral regurgitation. Images have been taken with 2D echo in apical and parasternal views. The peak signal-to-noise ratio (PSNR), universal quality index (Qi), mean absolute error (MAE), mean square error (MSE), and root mean square error (RMSE) have been calculated, and the results show that the proposed method is much better than the previous filters for echocardiographic images. The proposed method, modified nonlinear complex diffusion filter (MNCDF), smooths the homogeneous area and enhances the fine details.
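The abstract modifies the nonlinear complex diffusion filter (NCDF) of Gilboa et al.; its specific modified coefficient and step size are not given, so the sketch below implements only the baseline complex diffusion idea with a simplified explicit scheme. The imaginary part of the evolving image acts as a smoothed edge detector that shrinks the diffusion coefficient near edges.

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with replicated (Neumann) borders."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def complex_diffusion(img, steps=50, dt=0.1, k=2.0, theta=np.pi / 30):
    """Simplified nonlinear complex diffusion (baseline NCDF idea, not the
    paper's MNCDF coefficients). The coefficient c = e^{i*theta} /
    (1 + (Im(u)/(k*theta))^2) shrinks where the edge response |Im(u)| is
    large, so edges diffuse less than flat regions."""
    u = img.astype(np.complex128)
    c0 = np.exp(1j * theta)
    for _ in range(steps):
        c = c0 / (1.0 + (u.imag / (k * theta)) ** 2)
        u = u + dt * c * laplacian(u)   # explicit update; dt kept small for stability
    return u.real
```

On a noisy piecewise-constant image, the filter reduces noise variance in flat regions while the small angle `theta` keeps the scheme close to real-valued diffusion.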

  20. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors. A clear knowledge of the causes of these errors will help students learn English better.

  1. Error Analysis And Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    王惠丽

    2016-01-01

    Based on the theories of error and error analysis, this article explores the effect of error and error analysis on SLA, and offers some advice to language teachers and language learners.

  2. Wavelet-Based Diffusion Approach for DTI Image Restoration

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xiang-fen; CHEN Wu-fan; TIAN Wei-feng; YE Hong

    2008-01-01

    The Rician noise introduced into diffusion tensor images (DTIs) can seriously affect tensor calculation and fiber tracking. To decrease the effects of the Rician noise, we propose a wavelet-based diffusion method to denoise multichannel diffusion-weighted (DW) images. The presented smoothing strategy, which applies anisotropic nonlinear diffusion in the wavelet domain, successfully removes noise while preserving both texture and edges. To quantitatively evaluate how well the presented method accounts for the Rician noise introduced into the DW images, the peak signal-to-noise ratio (PSNR) and signal-to-mean-squared-error ratio (SMSE) metrics are adopted. Based on synthetic and real data, we calculated the apparent diffusion coefficient (ADC) and tracked the fibers. We compared the presented model with wavelet shrinkage and the regularized nonlinear diffusion smoothing method. All the experimental results demonstrate, quantitatively and visually, the better performance of the presented filter.
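PSNR, one of the two quality metrics the abstract uses, has a standard definition that can be stated in a few lines (the SMSE variant is not shown, since its exact normalization in the paper is not given here):

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio in dB.

    `peak` defaults to the reference image's maximum value; for 8-bit
    images one would typically pass peak=255 explicitly.
    """
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0.0:
        return float("inf")   # identical images
    if peak is None:
        peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)
```

A denoised image that is closer to the noise-free reference scores a higher PSNR, which is how such filters are ranked quantitatively.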

  3. Hierarchical prisoner’s dilemma in hierarchical game for resource competition

    Science.gov (United States)

    Fujimoto, Yuma; Sagawa, Takahiro; Kaneko, Kunihiko

    2017-07-01

    Dilemmas in cooperation are one of the major concerns in game theory. In a public goods game, each individual cooperates by paying a cost or defecting without paying it, and receives a reward from the group out of the collected cost. Thus, defecting is beneficial for each individual, while cooperation is beneficial for the group. Now, groups (say, countries) consisting of individuals also play games. To study such a multi-level game, we introduce a hierarchical game in which multiple groups compete for limited resources by utilizing the collected cost in each group, where the power to appropriate resources increases with the population of the group. Analyzing this hierarchical game, we found a hierarchical prisoner’s dilemma, in which groups choose the defecting policy (say, armament) as a Nash strategy to optimize each group’s benefit, while cooperation optimizes the total benefit. On the other hand, for each individual, refusing to pay the cost (say, tax) is a Nash strategy, which turns out to be a cooperation policy for the group, thus leading to a hierarchical dilemma. Here the group reward increases with the group size. However, we find that there exists an optimal group size that maximizes the individual payoff. Furthermore, when the population asymmetry between two groups is large, the smaller group will choose a cooperation policy (say, disarmament) to avoid excessive response from the larger group, and the prisoner’s dilemma between the groups is resolved. Accordingly, the relevance of this hierarchical game on policy selection in society and the optimal size of human or animal groups are discussed.
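The paper's exact payoff functions are not reproduced in the abstract; the toy sketch below uses illustrative functional forms for a two-group version of the setting: each cooperator pays a cost, a group's "power" is its collected cost, the resource is split in proportion to power, and each group's share is divided equally among its members. Even this stripped-down model shows the within-group dilemma: a defector always earns exactly the cost more than a cooperator in the same group.

```python
def hierarchical_payoffs(n1, n2, f1, f2, cost=1.0, resource=100.0):
    """Toy two-group resource-competition payoffs (illustrative functional
    forms, not the paper's exact model).

    n1, n2 : group sizes
    f1, f2 : fraction of cooperators in each group
    Returns ((cooperator, defector) payoff) for each group.
    """
    p1, p2 = f1 * n1 * cost, f2 * n2 * cost   # group power = collected cost
    total = p1 + p2
    share1 = resource * (p1 / total) if total > 0 else resource / 2
    share2 = resource * (p2 / total) if total > 0 else resource / 2
    coop1, defect1 = share1 / n1 - cost, share1 / n1
    coop2, defect2 = share2 / n2 - cost, share2 / n2
    return (coop1, defect1), (coop2, defect2)
```

Defecting dominates individually, yet a group whose members cooperate more captures a larger per-capita share of the resource, which is the tension the hierarchical game formalizes.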

  4. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  5. Discretization error of Stochastic Integrals

    CERN Document Server

    Fukasawa, Masaaki

    2010-01-01

    The asymptotic error distribution for the approximation of a stochastic integral with respect to a continuous semimartingale by a Riemann sum over a general stochastic partition is studied. Effective discretization schemes, whose asymptotic conditional mean-squared error attains a lower bound, are constructed. Two applications are given: efficient delta-hedging strategies with transaction costs, and effective discretization schemes for the Euler-Maruyama approximation.
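The setting can be illustrated with the simplest example, assuming a uniform (non-stochastic) partition rather than the general schemes of the paper: approximate the Itô integral ∫₀¹ W dW by a left-point Riemann sum. The exact value is (W₁² − 1)/2, and the mean-squared discretization error decays like 1/(2n) with the number of grid points.

```python
import numpy as np

def riemann_ito_error(n_steps, n_paths=2000, seed=0):
    """Monte Carlo estimate of the mean-squared error of the left-point
    Riemann sum for the Ito integral  int_0^1 W dW  on a uniform grid.
    (An illustrative demo of the setting studied in the paper, not its
    optimized stochastic-partition schemes.)"""
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)  # increments
    W = np.cumsum(dW, axis=1)                                        # Brownian paths
    W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])          # left endpoints
    riemann = np.sum(W_left * dW, axis=1)                            # Riemann sum
    exact = (W[:, -1] ** 2 - 1.0) / 2.0                              # Ito's formula
    return float(np.mean((riemann - exact) ** 2))
```

A short calculation (the sum telescopes to (W₁² − ΣΔWᵢ²)/2) shows the error is (ΣΔWᵢ² − 1)/2, whose second moment is exactly 1/(2n); the simulation reproduces this rate.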

  6. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  7. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Background: This study was conducted to explore the prevalence of medical error underreporting and its associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamadan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 year age group (67.6%), among less-experienced personnel (58.7%), at the MSc educational level (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  8. Emergence of hierarchical structure mirroring linguistic composition in a recurrent neural network.

    Science.gov (United States)

    Hinoshita, Wataru; Arie, Hiroaki; Tani, Jun; Okuno, Hiroshi G; Ogata, Tetsuya

    2011-05-01

    We show that a Multiple Timescale Recurrent Neural Network (MTRNN) can acquire the capabilities to recognize, generate, and correct sentences by self-organizing in a way that mirrors the hierarchical structure of sentences: characters grouped into words, and words into sentences. The model can control which sentence to generate depending on its initial states (generation phase) and the initial states can be calculated from the target sentence (recognition phase). In an experiment, we trained our model over a set of unannotated sentences from an artificial language, represented as sequences of characters. Once trained, the model could recognize and generate grammatical sentences, even if they were not learned. Moreover, we found that our model could correct a few substitution errors in a sentence, and the correction performance was improved by adding the errors to the training sentences in each training iteration with a certain probability. An analysis of the neural activations in our model revealed that the MTRNN had self-organized, reflecting the hierarchical linguistic structure by taking advantage of the differences in timescale among its neurons: in particular, neurons that change the fastest represented "characters", those that change more slowly, "words", and those that change the slowest, "sentences".
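The core mechanism of an MTRNN is a continuous-time RNN in which each unit has its own time constant: large time constants change slowly ("sentence" level), small ones quickly ("character" level). A minimal sketch of one such update, with illustrative sizes and wiring rather than the paper's trained network:

```python
import numpy as np

def mtrnn_step(u, x, W_rec, W_in, tau):
    """One continuous-time RNN update with per-unit time constants `tau`
    (the leaky-integrator rule underlying an MTRNN). Units with large tau
    change slowly; units with small tau change quickly."""
    target = W_rec @ np.tanh(u) + W_in @ x
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * target

# Demo (hypothetical wiring): two units driven by the same step input,
# one fast (tau=2) and one slow (tau=70), with no recurrent coupling.
u = np.zeros(2)
W_rec = np.zeros((2, 2))
W_in = np.ones((2, 1))
tau = np.array([2.0, 70.0])
x = np.array([1.0])
for _ in range(10):
    u = mtrnn_step(u, x, W_rec, W_in, tau)
# after 10 steps the fast unit has nearly reached the input level,
# while the slow unit has barely moved
```

The separation of timescales is what lets the trained network self-organize characters, words, and sentences into different unit populations.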

  9. Hierarchical Star Formation in Nearby LEGUS Galaxies

    CERN Document Server

    Elmegreen, Debra Meloy; Adamo, Angela; Aloisi, Alessandra; Andrews, Jennifer; Annibali, Francesca; Bright, Stacey N; Calzetti, Daniela; Cignoni, Michele; Evans, Aaron S; Gallagher, John S; Gouliermis, Dimitrios A; Grebel, Eva K; Hunter, Deidre A; Johnson, Kelsey; Kim, Hwi; Lee, Janice; Sabbi, Elena; Smith, Linda; Thilker, David; Tosi, Monica; Ubeda, Leonardo

    2014-01-01

    Hierarchical structure in ultraviolet images of 12 late-type LEGUS galaxies is studied by determining the numbers and fluxes of nested regions as a function of size from ~1 to ~200 pc, and the number as a function of flux. Two starburst dwarfs, NGC 1705 and NGC 5253, have steeper number-size and flux-size distributions than the others, indicating high fractions of the projected areas filled with star formation. Nine subregions in 7 galaxies have similarly steep number-size slopes, even when the whole galaxies have shallower slopes. The results suggest that hierarchically structured star-forming regions several hundred parsecs or larger represent common unit structures. Small galaxies dominated by only a few of these units tend to be starbursts. The self-similarity of young stellar structures down to parsec scales suggests that star clusters form in the densest parts of a turbulent medium that also forms loose stellar groupings on larger scales. The presence of super star clusters in two of our starburst dwarf...

  10. PERFORMANCE OF SELECTED AGGLOMERATIVE HIERARCHICAL CLUSTERING METHODS

    Directory of Open Access Journals (Sweden)

    Nusa Erman

    2015-01-01

    A broad variety of agglomerative hierarchical clustering methods brings along the problem of how to choose the most appropriate method for the given data. It is well known that some methods outperform others if the analysed data have a specific structure. In the presented study we observed the behaviour of the centroid method, the median (Gower median) method, and the average method (unweighted pair-group method with arithmetic mean - UPGMA; average linkage between groups). We compared them with the most widely used methods of hierarchical clustering: the minimum (single linkage) clustering, the maximum (complete linkage) clustering, the Ward method, and the McQuitty (group method average; weighted pair-group method using arithmetic averages - WPGMA) method. We applied the comparison of these methods to spherical, ellipsoid, umbrella-like, "core-and-sphere", ring-like and intertwined three-dimensional data structures. To generate the data and execute the analysis, we used the R statistical software. Results show that all seven methods are successful in finding compact, ball-shaped or ellipsoid structures when they are sufficiently separated. Conversely, all methods except the minimum perform poorly on non-homogeneous, irregular and elongated structures. Especially challenging is a circular double-helix structure; it is correctly revealed only by the minimum method. We can also confirm formerly published results of other simulation studies, which usually favour the average method (besides the Ward method) in cases when the data are assumed to be fairly compact and well separated.
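The same seven linkage methods compared in the study are available in SciPy (the study itself used R), and the "compact, well-separated clusters" case where all of them agree can be reproduced in a few lines:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# two compact, well-separated spherical clusters in 3D
blob_a = rng.normal(loc=0.0, scale=0.3, size=(30, 3))
blob_b = rng.normal(loc=5.0, scale=0.3, size=(30, 3))
X = np.vstack([blob_a, blob_b])

labels = {}
for method in ("single", "complete", "average", "centroid", "median", "ward"):
    Z = linkage(X, method=method)               # agglomerative merge tree
    labels[method] = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 clusters
```

With this geometry every method recovers the true partition; the study's differences only emerge on elongated, ring-like, or intertwined structures, where single linkage (the "minimum" method) is the outlier that still succeeds.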

  11. Quark flavor mixings from hierarchical mass matrices

    Energy Technology Data Exchange (ETDEWEB)

    Verma, Rohit [Chinese Academy of Sciences, Institute of High Energy Physics, Beijing (China); Rayat Institute of Engineering and Information Technology, Ropar (India); Zhou, Shun [Chinese Academy of Sciences, Institute of High Energy Physics, Beijing (China); Peking University, Center for High Energy Physics, Beijing (China)

    2016-05-15

    In this paper, we extend the Fritzsch ansatz of quark mass matrices while retaining their hierarchical structures and show that the main features of the Cabibbo-Kobayashi-Maskawa (CKM) matrix V, including |V_us| ≅ |V_cd|, |V_cb| ≅ |V_ts| and |V_ub|/|V_cb| < |V_td|/|V_ts|, can be well understood. This agreement is observed especially when the mass matrices have non-vanishing (1,3) and (3,1) off-diagonal elements. The phenomenological consequences for the allowed texture content and gross structural features of 'hierarchical' quark mass matrices are addressed from a model-independent perspective under the assumption of factorizable phases. The approximate and analytical expressions of the CKM matrix elements are derived, and a detailed analysis reveals that such structures are in good agreement with the observed quark flavor mixing angles and the CP-violating phase at the 1σ level, calling for a further investigation of the realization of these structures from a top-down perspective. (orig.)

  12. Bimodal Color Distribution in Hierarchical Galaxy Formation

    CERN Document Server

    Menci, N; Giallongo, E; Salimbeni, S

    2005-01-01

    We show how the observed bimodality in the color distribution of galaxies can be explained in the framework of the hierarchical clustering picture in terms of the interplay between the properties of the merging histories and the feedback/star-formation processes in the progenitors of local galaxies. Using a semi-analytic model of hierarchical galaxy formation, we compute the color distributions of galaxies with different luminosities and compare them with the observations. Our fiducial model matches the fundamental properties of the observed distributions, namely: 1) the distribution of objects brighter than M_r = -18 is clearly bimodal, with a fraction of red objects increasing with luminosity; 2) for objects brighter than M_r = -21 the color distribution is dominated by red objects with color u-r = 2.2-2.4; 3) the spread on the distribution of the red population is smaller than that of the blue population; 4) the fraction of red galaxies is larger in denser environments, even for low-luminosity objects; 5) ...

  13. A Hierarchical Bayesian Model for Crowd Emotions

    Science.gov (United States)

    Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias

    2016-01-01

    Estimation of emotions is an essential aspect of developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity with which human emotions are manifested and the capability of a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and to explore the relation between behavior and emotions in order to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. The model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366

  14. Hierarchical majorana neutrinos from democratic mass matrices

    Science.gov (United States)

    Yang, Masaki J. S.

    2016-09-01

    In this paper, we obtain light neutrino masses and mixings consistent with the experiments in the democratic texture approach. The essential ansatz is that the ν_Ri are assumed to transform as "right-handed fields" 2_R + 1_R under the S_3L × S_3R symmetry. The symmetry-breaking terms are assumed to be diagonal and hierarchical. This setup only allows the normal hierarchy of the neutrino mass, and excludes both inverted-hierarchical and degenerate neutrinos. Although the neutrino sector has nine free parameters, several predictions are obtained at the leading order. When we neglect the smallest parameters ζ_ν and ζ_R, all components of the mixing matrix U_PMNS are expressed by the masses of the light neutrinos and charged leptons. From the consistency between the predicted and observed U_PMNS, we obtain the lightest neutrino mass m_1 = (1.1-1.4) meV, and the effective mass for the double beta decay ≃ 4.5 meV.

  15. Efficient scalable algorithms for hierarchically semiseparable matrices

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shen; Xia, Jianlin; Situ, Yingchong; Hoop, Maarten V. de

    2011-09-14

    Hierarchically semiseparable (HSS) matrix algorithms are emerging techniques for constructing superfast direct solvers for both dense and sparse linear systems. Here, we develop a set of novel parallel algorithms for the key HSS operations that are used for solving large linear systems. These include the parallel rank-revealing QR factorization, HSS construction with hierarchical compression, the ULV HSS factorization, and the HSS solution. The HSS-tree-based parallelism is fully exploited at the coarse level. The BLACS and ScaLAPACK libraries are used to facilitate the parallel dense kernel operations at the fine-grained level. We have applied our new parallel HSS-embedded multifrontal solver to the anisotropic Helmholtz equations for seismic imaging, and were able to solve a linear system with 6.4 billion unknowns using 4096 processors in about 20 minutes. The classical multifrontal solver simply failed due to its high demand for memory. To our knowledge, this is the first successful demonstration of employing HSS algorithms in solving truly large-scale real-world problems. Our parallel strategies can be easily adapted to the parallelization of other rank-structured methods.
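The numerical fact HSS solvers exploit is that off-diagonal blocks of many matrices (e.g. discretized kernels on well-separated point sets) have low numerical rank. Production HSS codes use rank-revealing QR and share bases across the tree; the single-block sketch below uses a truncated SVD instead, just to show the compression step:

```python
import numpy as np

def compress_block(B, tol=1e-8):
    """Truncated-SVD compression of an off-diagonal block, the basic
    kernel behind HSS construction (real HSS codes use rank-revealing QR
    and hierarchically shared bases; this is a one-block sketch)."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))      # numerical rank at relative tolerance
    return U[:, :r], s[:r], Vt[:r, :]

# A smooth kernel evaluated between two well-separated intervals is
# numerically low rank even though the block is dense.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 200)
B = 1.0 / np.abs(x[:, None] - y[None, :])
U, s, Vt = compress_block(B, tol=1e-10)
approx = (U * s) @ Vt                    # low-rank reconstruction
rank = len(s)
err = np.linalg.norm(B - approx) / np.linalg.norm(B)
```

A 200 x 200 dense block collapses to a rank far below 200 at near machine precision, which is what makes the superfast direct solvers possible.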

  16. A hierarchical model of temporal perception.

    Science.gov (United States)

    Pöppel, E

    1997-05-01

    Temporal perception comprises subjective phenomena such as simultaneity, successiveness, temporal order, the subjective present, temporal continuity and subjective duration. These elementary temporal experiences are hierarchically related to each other. Functional system states with a duration of 30 ms are implemented by neuronal oscillations, and they provide a mechanism to define successiveness. These system states are also responsible for the identification of basic events. For a sequential representation of several events, time tags are allocated, resulting in an ordinal representation of such events. A mechanism of temporal integration binds successive events into perceptual units of 3 s duration. Such temporal integration, which is automatic and presemantic, is also operative in movement control and other cognitive activities. Because of the omnipresence of this integration mechanism it is used for a pragmatic definition of the subjective present. Temporal continuity is the result of a semantic connection between successive integration intervals. Subjective duration is known to depend on mental load and attentional demand, high load resulting in long time estimates. In the hierarchical model proposed, system states of 30 ms and integration intervals of 3 s, together with a memory store, provide an explanatory neuro-cognitive machinery for differential subjective duration.

  17. Hierarchical video summarization for medical data

    Science.gov (United States)

    Zhu, Xingquan; Fan, Jianping; Elmagarmid, Ahmed K.; Aref, Walid G.

    2001-12-01

    To provide users with an overview of medical video content at various levels of abstraction which can be used for more efficient database browsing and access, a hierarchical video summarization strategy has been developed and is presented in this paper. To generate an overview, the key frames of a video are preprocessed to extract special frames (black frames, slides, clip art, sketch drawings) and special regions (faces, skin or blood-red areas). A shot grouping method is then applied to merge the spatially or temporally related shots into groups. The visual features and knowledge from the video shots are integrated to assign the groups into predefined semantic categories. Based on the video groups and their semantic categories, video summaries for different levels are constructed by group merging, hierarchical group clustering and semantic category selection. Based on this strategy, a user can select the layer of the summary to access. The higher the layer, the more concise the video summary; the lower the layer, the greater the detail contained in the summary.

  18. Hierarchical Cluster Assembly in Globally Collapsing Clouds

    CERN Document Server

    Vazquez-Semadeni, Enrique; Colin, Pedro

    2016-01-01

    We discuss the mechanism of cluster formation in a numerical simulation of a molecular cloud (MC) undergoing global hierarchical collapse (GHC). The global nature of the collapse implies that the SFR increases over time. The hierarchical nature of the collapse consists of small-scale collapses within larger-scale ones. The large-scale collapses culminate a few Myr later than the small-scale ones and consist of filamentary flows that accrete onto massive central clumps. The small-scale collapses form clumps that are embedded in the filaments and falling onto the large-scale collapse centers. The stars formed in the early, small-scale collapses share the infall motion of their parent clumps. Thus, the filaments feed both gaseous and stellar material to the massive central clump. This leads to the presence of a few older stars in a region where new protostars are forming, and also to a self-similar structure, in which each unit is composed of smaller-scale sub-units that approach each other and may merge. Becaus...

  19. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
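The abstract does not spell out how the error sources were "combined in a rational way"; the standard convention for independent 1-sigma contributions in an error budget is a root-sum-square (RSS), sketched below. Whether the STS-1 budget used exactly this combination, and the per-axis contributions shown, are assumptions for illustration.

```python
import math

def rss(errors):
    """Root-sum-square combination of independent 1-sigma error sources,
    the conventional way to roll an error budget into a single estimate."""
    return math.sqrt(sum(e * e for e in errors))

# illustrative (made-up) per-axis star-tracker contributions, arc seconds
star_tracker_budget = [40.0, 35.0, 30.0, 40.0]
total = rss(star_tracker_budget)   # combined 1-sigma estimate
```

RSS is appropriate only when the sources are statistically independent; correlated terms would have to be summed before squaring.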

  20. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the method-of-moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
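The central effect motivating these correction methods is easy to demonstrate: when the covariate is measured with error, the naive least-squares slope is attenuated toward zero by the factor Var(x)/(Var(x) + Var(error)). A small simulation (with assumed variances, not data from the paper):

```python
import numpy as np

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    xc = x - x.mean()
    return float(np.dot(xc, y) / np.dot(xc, xc))

rng = np.random.default_rng(1)
n, true_slope = 50_000, 2.0
xi = rng.standard_normal(n)                    # true covariate, variance 1
y = true_slope * xi + 0.5 * rng.standard_normal(n)
x_obs = xi + 1.0 * rng.standard_normal(n)      # observed with unit error variance

naive = ols_slope(x_obs, y)
# classical attenuation: E[naive] = true_slope * 1 / (1 + 1) = 1.0,
# half the true slope of 2.0
```

Method-of-moments corrections divide the naive slope by the estimated attenuation factor; structural (likelihood/Bayesian) approaches instead model the unobserved true covariate explicitly.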