WorldWideScience

Sample records for hierarchic regression analyses

  1. Hierarchical linear regression models for conditional quantiles

    Institute of Scientific and Technical Information of China (English)

    TIAN Maozai; CHEN Gemai

    2006-01-01

    The quantile regression has several useful features and is therefore gradually developing into a comprehensive approach to the statistical analysis of linear and nonlinear response models, but it cannot deal effectively with data that have a hierarchical structure. In practice, the existence of such data hierarchies is neither accidental nor ignorable; it is a common phenomenon. Ignoring this hierarchical data structure risks overlooking the importance of group effects, and may also render invalid many of the traditional statistical analysis techniques used for studying data relationships. On the other hand, hierarchical models take a hierarchical data structure into account and have many applications in statistics, ranging from overdispersion to constructing min-max estimators. However, hierarchical models are essentially mean regressions and therefore cannot be used to characterize the entire conditional distribution of a dependent variable given high-dimensional covariates. Furthermore, the estimated coefficient vector (marginal effects) is sensitive to outlier observations on the dependent variable. In this article, a new approach is developed that is based on the Gauss-Seidel iteration and takes full advantage of both quantile regression and hierarchical models. On the theoretical front, we also consider the asymptotic properties of the new method, obtaining simple conditions for n^(1/2)-convergence and asymptotic normality. We also illustrate the use of the technique with real educational data that are hierarchical, and show how the results can be interpreted.
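
    As a rough illustration of the kind of alternation the abstract describes, the sketch below fits a two-level quantile regression by cycling between the fixed coefficients and the group effects. It is a simplified stand-in for the authors' Gauss-Seidel estimator; the model, data, and update rules are assumptions for demonstration only.

```python
# Illustrative Gauss-Seidel-style alternation for a two-level quantile
# regression y_ij = x_ij' beta + u_j + e_ij at quantile tau
# (a simplified sketch, not the authors' exact estimator).
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n_groups, n_per, tau = 20, 30, 0.75
groups = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u_true = rng.normal(scale=0.8, size=n_groups)            # group effects
y = 1.0 + 2.0 * x + u_true[groups] + rng.standard_t(df=3, size=x.size)

X = np.column_stack([np.ones_like(x), x])
u = np.zeros(n_groups)
for _ in range(20):                                       # Gauss-Seidel sweeps
    beta = QuantReg(y - u[groups], X).fit(q=tau).params   # update fixed part
    resid = y - X @ beta
    for j in range(n_groups):                             # update group effects
        u[j] = np.quantile(resid[groups == j], tau)
    u -= u.mean()                                         # identifiability
print("beta:", beta, "sd(u):", round(float(u.std()), 3))
```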

  2. The Infinite Hierarchical Factor Regression Model

    CERN Document Server

    Rai, Piyush

    2009-01-01

    We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.

  3. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

    Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
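
    To make the blockwise logic concrete, the sketch below enters a control block first and the block of interest second, then tests the R-squared change. The data and variable names (attitude, norms, effectuation) are simulated stand-ins, not the study's measures.

```python
# Hierarchical (blockwise) multiple regression: enter one block of predictors,
# add a second block, and test the R-squared change (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "attitude": rng.normal(size=n),        # hypothetical step-1 predictors
    "norms": rng.normal(size=n),
    "effectuation": rng.normal(size=n),    # hypothetical step-2 block
})
df["intention"] = (0.5 * df["attitude"] + 0.4 * df["norms"]
                   + 0.2 * df["effectuation"] + rng.normal(size=n))

step1 = smf.ols("intention ~ attitude + norms", data=df).fit()
step2 = smf.ols("intention ~ attitude + norms + effectuation", data=df).fit()
f_change, p_change, df_diff = step2.compare_f_test(step1)
print(f"R2 step1={step1.rsquared:.3f}  R2 step2={step2.rsquared:.3f}")
print(f"Delta R2={step2.rsquared - step1.rsquared:.3f}  "
      f"F({int(df_diff)}, {int(step2.df_resid)})={f_change:.2f}  p={p_change:.4f}")
```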

  4. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster the data into churner and non-churner groups and also to filter out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to churner and non-churner groups. Finally, the correctly classified data are used to fit the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
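
    A compressed sketch of the three-stage idea follows: a clustering step that also filters outliers, a neural classifier, and a Cox model fitted to the correctly classified customers. KMeans stands in for SOM/fuzzy c-means, all data are synthetic, and the scikit-learn and lifelines calls are one possible toolchain rather than the paper's implementation.

```python
# Three-stage sketch: cluster/filter -> neural classifier -> Cox model
# (KMeans stands in for SOM; all data are synthetic).
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 1000
usage = rng.gamma(2.0, 10.0, n)                  # hypothetical covariates
tenure = rng.uniform(1, 60, n)
churn_prob = 1 / (1 + np.exp(0.05 * usage + 0.03 * tenure - 3))
churned = rng.binomial(1, churn_prob)
time_obs = np.where(churned == 1, rng.uniform(1, 24, n), 24.0)
X = np.column_stack([usage, tenure])

# Stage 1: cluster into two groups and drop points far from their centroid.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
keep = dist < np.quantile(dist, 0.95)

# Stage 2: a neural network assigns customers to churner/non-churner groups.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X[keep], churned[keep])
correct = clf.predict(X[keep]) == churned[keep]

# Stage 3: Cox proportional hazards model on the correctly classified data.
surv = pd.DataFrame({"usage": usage[keep][correct],
                     "tenure": tenure[keep][correct],
                     "duration": time_obs[keep][correct],
                     "event": churned[keep][correct]})
cph = CoxPHFitter().fit(surv, duration_col="duration", event_col="event")
cph.print_summary()
```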

  5. Coordinate Descent Based Hierarchical Interactive Lasso Penalized Logistic Regression and Its Application to Classification Problems

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2014-01-01

    Full Text Available We present the hierarchical interactive lasso penalized logistic regression using the coordinate descent algorithm, based on hierarchy theory and variable interactions. We define the interaction model based on geometric algebra and hierarchical constraint conditions and then use the coordinate descent algorithm to solve for the coefficients of the hierarchical interactive lasso model. We provide the results of some experiments based on UCI datasets, the Madelon dataset from NIPS 2003, and daily activities of the elderly. The experimental results show that the variable interactions and hierarchy contribute significantly to the classification. The hierarchical interactive lasso has the advantages of both the lasso and the interactive lasso.
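
    For orientation, the sketch below fits a plain cross-validated lasso over main effects plus pairwise interactions on synthetic data. A standard lasso does not enforce the hierarchical constraint (interactions entering only together with their main effects) that the paper's coordinate descent algorithm imposes; this is only the unconstrained starting point.

```python
# Lasso over main effects plus pairwise interactions (simplified: a plain
# lasso does NOT enforce the paper's hierarchical constraint).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(12)
n, p = 400, 6
X = rng.normal(size=(n, p))
# Response with two main effects and one interaction that respects hierarchy.
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 2.0 * X[:, 0] * X[:, 1] + rng.normal(size=n)

poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
model = make_pipeline(poly, StandardScaler(), LassoCV(cv=5))
model.fit(X, y)

names = poly.get_feature_names_out([f"x{i}" for i in range(p)])
coefs = model.named_steps["lassocv"].coef_      # coefficients on scaled features
for name, c in zip(names, coefs):
    if abs(c) > 0.05:                           # show the selected terms only
        print(name, round(float(c), 2))
```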

  6. Hierarchical Matching and Regression with Application to Photometric Redshift Estimation

    Science.gov (United States)

    Murtagh, Fionn

    2017-06-01

    This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data is to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that takes account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory. The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. If used for regression, our approach is a method of cluster-wise regression, generalizing nearest neighbour regression. Both to exemplify this analytics approach, and to demonstrate computational benefits, we address the well-known photometric redshift or `photo-z' problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts.
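
    As a baseline for the matching idea, the sketch below trains a k-nearest-neighbour regressor that maps photometric colours to spectroscopic redshifts; the paper's cluster-wise, hierarchy-based approach generalizes this kind of nearest-neighbour regression. The colours and redshifts here are synthetic, not SDSS data.

```python
# Nearest-neighbour photometric-redshift regression on synthetic "colours"
# (the baseline that cluster-wise/hierarchical matching generalizes).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
colours = rng.normal(size=(n, 4))                  # e.g. u-g, g-r, r-i, i-z
z_spec = (0.4 + 0.15 * colours[:, 1] - 0.05 * colours[:, 2]
          + 0.02 * rng.normal(size=n))             # synthetic spectroscopic z

X_train, X_test, z_train, z_test = train_test_split(
    colours, z_spec, test_size=0.2, random_state=0)

knn = KNeighborsRegressor(n_neighbors=15, weights="distance").fit(X_train, z_train)
z_photo = knn.predict(X_test)
print("RMS photo-z error:", np.sqrt(np.mean((z_photo - z_test) ** 2)))
```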

  7. Analyzing Multilevel Data: Comparing Findings from Hierarchical Linear Modeling and Ordinary Least Squares Regression

    Science.gov (United States)

    Rocconi, Louis M.

    2013-01-01

    This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…
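
    The contrast the study draws can be reproduced in miniature by fitting an OLS model and a random-intercept model to the same simulated nested data (students within institutions); the variable names are hypothetical.

```python
# OLS vs. a random-intercept (multilevel) model on simulated nested data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_schools, n_students = 30, 40
school = np.repeat(np.arange(n_schools), n_students)
u = rng.normal(scale=0.5, size=n_schools)            # school-level effects
engagement = rng.normal(size=school.size)
crit_think = 2 + 0.3 * engagement + u[school] + rng.normal(size=school.size)
df = pd.DataFrame({"crit_think": crit_think, "engagement": engagement,
                   "school": school})

ols = smf.ols("crit_think ~ engagement", data=df).fit()
hlm = smf.mixedlm("crit_think ~ engagement", data=df, groups=df["school"]).fit()
print("OLS slope:", round(float(ols.params["engagement"]), 3),
      "SE:", round(float(ols.bse["engagement"]), 3))
print("HLM slope:", round(float(hlm.params["engagement"]), 3),
      "SE:", round(float(hlm.bse["engagement"]), 3))
```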

  8. Hierarchical Multiple Regression in Counseling Research: Common Problems and Possible Remedies.

    Science.gov (United States)

    Petrocelli, John V.

    2003-01-01

    A brief content analysis was conducted on the use of hierarchical regression in counseling research published in the "Journal of Counseling Psychology" and the "Journal of Counseling & Development" during the years 1997-2001. Common problems are cited and possible remedies are described. (Contains 43 references and 3 tables.) (Author)

  9. Analysis of genomic signatures in prokaryotes using multinomial regression and hierarchical clustering

    DEFF Research Database (Denmark)

    Ussery, David; Bohlin, Jon; Skjerve, Eystein

    2009-01-01

    Recently there has been an explosion in the availability of bacterial genomic sequences, now making possible an analysis of genomic signatures across more than 800 different bacterial chromosomes, from a wide variety of environments. Using genomic signatures, we pair-wise compared 867...... different genomic DNA sequences, taken from chromosomes and plasmids more than 100,000 base-pairs in length. Hierarchical clustering was performed on the outcome of the comparisons before a multinomial regression model was fitted. The regression model included the cluster groups as the response variable...... AT content. Small improvements to the regression model, although significant, were also obtained by factors such as sequence size, habitat, growth temperature, selective pressure measured as oligonucleotide usage variance, and oxygen requirement. The statistics obtained using hierarchical clustering

  10. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
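
    For the multiple regression case, the power computation reduces to a noncentral F calculation. The sketch below is a hand-rolled version of that standard formulation (fixed-predictor model, R-squared tested against zero), not G*Power's own code.

```python
# Power of the F test that R^2 = 0 in fixed-effects multiple regression,
# using the standard noncentral-F formulation (not G*Power's own code).
from scipy.stats import f, ncf

def regression_power(r2, n_predictors, n_obs, alpha=0.05):
    f2 = r2 / (1.0 - r2)                 # Cohen's f^2 effect size
    lam = f2 * n_obs                     # noncentrality parameter
    df1, df2 = n_predictors, n_obs - n_predictors - 1
    crit = f.ppf(1.0 - alpha, df1, df2)  # critical value under H0
    return 1.0 - ncf.cdf(crit, df1, df2, lam)

# Example: medium effect (f^2 ~ 0.15, i.e. R^2 ~ 0.13), 5 predictors, N = 100.
print(round(regression_power(r2=0.13, n_predictors=5, n_obs=100), 3))
```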

  11. Neighborhood social capital and crime victimization: comparison of spatial regression analysis and hierarchical regression analysis.

    Science.gov (United States)

    Takagi, Daisuke; Ikeda, Ken'ichi; Kawachi, Ichiro

    2012-11-01

    Crime is an important determinant of public health outcomes, including quality of life, mental well-being, and health behavior. A body of research has documented the association between community social capital and crime victimization. The association between social capital and crime victimization has been examined at multiple levels of spatial aggregation, ranging from entire countries, to states, metropolitan areas, counties, and neighborhoods. In multilevel analysis, the spatial boundaries at level 2 are most often drawn from administrative boundaries (e.g., Census tracts in the U.S.). One problem with adopting administrative definitions of neighborhoods is that it ignores spatial spillover. We conducted a study of social capital and crime victimization in one ward of Tokyo city, using a spatial Durbin model with an inverse-distance weighting matrix that assigned each respondent a unique level of "exposure" to social capital based on all other residents' perceptions. The study is based on a postal questionnaire sent to 20-69 years old residents of Arakawa Ward, Tokyo. The response rate was 43.7%. We examined the contextual influence of generalized trust, perceptions of reciprocity, two types of social network variables, as well as two principal components of social capital (constructed from the above four variables). Our outcome measure was self-reported crime victimization in the last five years. In the spatial Durbin model, we found that neighborhood generalized trust, reciprocity, supportive networks and two principal components of social capital were each inversely associated with crime victimization. By contrast, a multilevel regression performed with the same data (using administrative neighborhood boundaries) found generally null associations between neighborhood social capital and crime. Spatial regression methods may be more appropriate for investigating the contextual influence of social capital in homogeneous cultural settings such as Japan.

  12. Evidence for a non-universal Kennicutt-Schmidt relationship using hierarchical Bayesian linear regression

    CERN Document Server

    Shetty, Rahul; Bigiel, Frank

    2012-01-01

    We develop a Bayesian linear regression method which rigorously treats measurement uncertainties, and accounts for hierarchical data structure for investigating the relationship between the star formation rate and gas surface density. The method simultaneously estimates the intercept, slope, and scatter about the regression line of each individual subject (e.g. a galaxy) and the population (e.g. an ensemble of galaxies). Using synthetic datasets, we demonstrate that the Bayesian method accurately recovers the parameters of both the individuals and the population, especially when compared to commonly employed least squares methods, such as the bisector. We apply the Bayesian method to estimate the Kennicutt-Schmidt (KS) parameters of a sample of spiral galaxies compiled by Bigiel et al. (2008). We find significant variation in the KS parameters, indicating that no single KS relationship holds for all galaxies. This suggests that the relationship between molecular gas and star formation differs between galaxies...
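
    A generic partial-pooling version of such a model, with per-galaxy intercepts and slopes drawn from population-level distributions, can be written in a probabilistic programming language. The sketch below uses PyMC (v4+ API assumed) on synthetic data; it is not the authors' sampler, and the data are not the Bigiel et al. sample.

```python
# Generic partial-pooling regression: per-galaxy slopes and intercepts drawn
# from population-level distributions (synthetic data; PyMC >= 4 API assumed).
import numpy as np
import pymc as pm

rng = np.random.default_rng(5)
n_gal, n_pts = 8, 40
gal = np.repeat(np.arange(n_gal), n_pts)
log_gas = rng.normal(0.8, 0.3, size=gal.size)                 # log gas surface density
slope_true = rng.normal(1.0, 0.2, size=n_gal)
icpt_true = rng.normal(-2.0, 0.3, size=n_gal)
log_sfr = (icpt_true[gal] + slope_true[gal] * log_gas
           + rng.normal(0, 0.15, size=gal.size))              # log SFR surface density

with pm.Model():
    mu_a = pm.Normal("mu_a", 0, 5)                # population intercept
    mu_b = pm.Normal("mu_b", 0, 5)                # population slope
    sd_a = pm.HalfNormal("sd_a", 1)
    sd_b = pm.HalfNormal("sd_b", 1)
    a = pm.Normal("a", mu_a, sd_a, shape=n_gal)   # per-galaxy intercepts
    b = pm.Normal("b", mu_b, sd_b, shape=n_gal)   # per-galaxy slopes
    sigma = pm.HalfNormal("sigma", 1)             # intrinsic scatter
    pm.Normal("obs", a[gal] + b[gal] * log_gas, sigma, observed=log_sfr)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print("population slope:", float(idata.posterior["mu_b"].mean()))
```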

  13. Principal Covariates Clusterwise Regression (PCCR): Accounting for Multicollinearity and Population Heterogeneity in Hierarchically Organized Data.

    Science.gov (United States)

    Wilderjans, Tom Frans; Vande Gaer, Eva; Kiers, Henk A L; Van Mechelen, Iven; Ceulemans, Eva

    2017-03-01

    In the behavioral sciences, many research questions pertain to a regression problem in that one wants to predict a criterion on the basis of a number of predictors. Although in many cases ordinary least squares regression will suffice, sometimes the prediction problem is more challenging, for three reasons. First, multiple highly collinear predictors can be available, making it difficult to grasp their mutual relations as well as their relations to the criterion. In that case, it may be very useful to reduce the predictors to a few summary variables, on which one regresses the criterion and which at the same time yield insight into the predictor structure. Second, the population under study may consist of a few unknown subgroups that are characterized by different regression models. Third, the obtained data are often hierarchically structured, with, for instance, observations being nested within persons or participants within groups or countries. Although some methods have been developed that partially meet these challenges (i.e., principal covariates regression (PCovR), clusterwise regression (CR), and structural equation models), none of these methods adequately deals with all of them simultaneously. To fill this gap, we propose the principal covariates clusterwise regression (PCCR) method, which combines the key ideas behind PCovR (de Jong & Kiers in Chemom Intell Lab Syst 14(1-3):155-164, 1992) and CR (Späth in Computing 22(4):367-373, 1979). The PCCR method is validated by means of a simulation study and by applying it to cross-cultural data regarding satisfaction with life.

  14. The use of GLS regression in regional hydrologic analyses

    Science.gov (United States)

    Griffis, V. W.; Stedinger, J. R.

    2007-09-01

    Summary: To estimate flood quantiles and other statistics at ungauged sites, many organizations employ an iterative generalized least squares (GLS) regression procedure to estimate the parameters of a model of the statistic of interest as a function of basin characteristics. The GLS regression procedure accounts for differences in available record lengths and spatial correlation in concurrent events by using an estimator of the sampling covariance matrix of available flood quantiles. Previous studies by the US Geological Survey using the LP3 distribution have neglected the impact of uncertainty in the weighted skew on quantile precision. The needed relationship is developed here and its use is illustrated in a regional flood study with 162 sites from South Carolina. The performance of a pooled regression model is compared to separate models for each hydrologic region: statistical tests recommend an interesting hybrid of the two which is both surprising and hydrologically reasonable. The statistical analysis is augmented with new diagnostic metrics including a condition number to check for multicollinearity, a new pseudo-R² appropriate for use with GLS regression, and two error variance ratios. GLS regression for the standard deviation demonstrates that again a hybrid model is attractive, and that GLS rather than an OLS or WLS analysis is appropriate for the development of regional standard deviation models.
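
    The core GLS step, regressing a site statistic on basin characteristics with a user-supplied sampling covariance matrix, looks roughly like the sketch below. The covariance structure and data are invented for illustration; the USGS procedure estimates this matrix iteratively from record lengths and cross-correlations.

```python
# Core GLS step with a user-supplied error covariance matrix (illustrative;
# not the USGS iterative regional-regression procedure).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_sites = 50
drain_area = rng.lognormal(mean=5.0, sigma=1.0, size=n_sites)   # basin characteristic
X = sm.add_constant(np.log(drain_area))

# Assumed sampling covariance: shorter records give noisier quantile estimates,
# with AR(1)-like cross-correlation between nearby sites (toy structure only).
record_len = rng.integers(20, 80, n_sites)
sd = np.sqrt(0.25 / record_len + 0.05)
idx = np.arange(n_sites)
R = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 10.0)
Sigma = np.outer(sd, sd) * R

y = X @ np.array([1.0, 0.8]) + rng.multivariate_normal(np.zeros(n_sites), Sigma)
gls = sm.GLS(y, X, sigma=Sigma).fit()
print("coefficients:", gls.params, "std errors:", gls.bse)
```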

  15. Analysing inequalities in Germany a structured additive distributional regression approach

    CERN Document Server

    Silbersdorff, Alexander

    2017-01-01

    This book seeks new perspectives on the growing inequalities that our societies face, putting forward Structured Additive Distributional Regression as a means of statistical analysis that circumvents the common problem of analytical reduction to simple point estimators. This new approach allows the observed discrepancy between the individuals' realities and the abstract representation of those realities to be explicitly taken into consideration, rather than being summarized by the arithmetic mean alone. In turn, the method is applied to the question of economic inequality in Germany.

  16. Hierarchical Vector Auto-Regressive Models and Their Applications to Multi-subject Effective Connectivity

    Directory of Open Access Journals (Sweden)

    Cristina eGorrostieta

    2013-11-01

    Full Text Available Vector auto-regressive (VAR) models typically form the basis for constructing directed graphical models for investigating connectivity in a brain network, with brain regions of interest (ROIs) as nodes. There are limitations in the standard VAR models. The number of parameters in the VAR model increases quadratically with the number of ROIs and linearly with the order of the model; thus, due to the large number of parameters, the model can pose serious estimation problems. Moreover, when applied to imaging data, the standard VAR model does not account for variability in the connectivity structure across subjects. In this paper, we develop a novel generalization of the VAR model that overcomes these limitations. To deal with the high dimensionality of the parameter space, we propose a Bayesian hierarchical framework for the VAR model that accounts for both temporal correlation within a subject and between-subject variation. Our approach uses prior distributions that give rise to estimates corresponding to a penalized least squares criterion with the elastic net penalty. We apply the proposed model to investigate differences in effective connectivity during a hand grasp experiment between healthy controls and patients with residual motor deficit following a stroke.

  17. Bayesian hierarchical regression analysis of variations in sea surface temperature change over the past million years

    Science.gov (United States)

    Snyder, Carolyn W.

    2016-09-01

    Statistical challenges often preclude comparisons among different sea surface temperature (SST) reconstructions over the past million years. Inadequate consideration of uncertainty can result in misinterpretation, overconfidence, and biased conclusions. Here I apply Bayesian hierarchical regressions to analyze local SST responsiveness to climate changes for 54 SST reconstructions from across the globe over the past million years. I develop methods to account for multiple sources of uncertainty, including the quantification of uncertainty introduced from absolute dating into interrecord comparisons. The estimates of local SST responsiveness explain 64% (62% to 77%, 95% interval) of the total variation within each SST reconstruction with a single number. There is remarkable agreement between SST proxy methods, with the exception of Mg/Ca proxy methods estimating muted responses at high latitudes. The Indian Ocean exhibits a muted response in comparison to other oceans. I find a stable estimate of the proposed "universal curve" of change in local SST responsiveness to climate changes as a function of sin2(latitude) over the past 400,000 years: SST change at 45°N/S is larger than the average tropical response by a factor of 1.9 (1.5 to 2.6, 95% interval) and explains 50% (35% to 58%, 95% interval) of the total variation between each SST reconstruction. These uncertainty and statistical methods are well suited for application across paleoclimate and environmental data series intercomparisons.

  18. Type Ia Supernova Colors and Ejecta Velocities: Hierarchical Bayesian Regression with Non-Gaussian Distributions

    CERN Document Server

    Mandel, Kaisey S; Kirshner, Robert P

    2014-01-01

    We investigate the correlations between the peak intrinsic colors of Type Ia supernovae (SN Ia) and their expansion velocities at maximum light, measured from the Si II 6355 A spectral feature. We construct a new hierarchical Bayesian regression model and Gibbs sampler to estimate the dependence of the intrinsic colors of a SN Ia on its ejecta velocity, while accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust. The method is applied to the apparent color data from BVRI light curves and Si II velocity data for 79 nearby SN Ia. Comparison of the apparent color distributions of high velocity (HV) and normal velocity (NV) supernovae reveals significant discrepancies in B-V and B-R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B-band, rather than dust reddening. The mean intrinsic B-V and B-R color differences between HV and NV groups are 0.06 +/- 0.02 and 0.09 +/- 0.02 mag, respectively. Under a linear m...

  19. TYPE Ia SUPERNOVA COLORS AND EJECTA VELOCITIES: HIERARCHICAL BAYESIAN REGRESSION WITH NON-GAUSSIAN DISTRIBUTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandel, Kaisey S.; Kirshner, Robert P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Foley, Ryan J., E-mail: kmandel@cfa.harvard.edu [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States)

    2014-12-20

    We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B – V and B – R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B – V and B – R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of –0.021 ± 0.006 and –0.030 ± 0.009 mag (10³ km s⁻¹)⁻¹ for intrinsic B – V and B – R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A_V extinction estimates as large as –0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances.

  20. Hierarchical structure of the Sicilian goats revealed by Bayesian analyses of microsatellite information.

    Science.gov (United States)

    Siwek, M; Finocchiaro, R; Curik, I; Portolano, B

    2011-02-01

    Genetic structure and relationships amongst the main goat populations in Sicily (Girgentana, Derivata di Siria, Maltese and Messinese) were analysed using information from 19 microsatellite markers genotyped on 173 individuals. A Bayesian approach implemented in the program STRUCTURE revealed a hierarchical structure with two clusters at the first level (Girgentana vs. Messinese, Derivata di Siria and Maltese), explaining 4.8% of variation (AMOVA Φ(ST) estimate). Seven clusters nested within these first two clusters (further differentiation of Girgentana, Derivata di Siria and Maltese), explaining 8.5% of variation (AMOVA Φ(SC) estimate). The analyses and methods applied in this study indicate their power to detect subtle population structure.

  1. Regressão múltipla stepwise e hierárquica em Psicologia Organizacional: aplicações, problemas e soluções Stepwise and hierarchical multiple regression in organizational psychology: Applications, problems and solutions

    Directory of Open Access Journals (Sweden)

    Gardênia Abbad

    2002-01-01

    Full Text Available This article discusses applications of stepwise and hierarchical multiple regression analyses to research in organizational psychology. Strategies for identifying Type I and II errors, and solutions to potential problems that may arise from such errors, are proposed. In addition, phenomena such as suppression, complementarity, and redundancy are reviewed. The article presents examples of research in which these patterns of association between variables occurred and the manner in which they were interpreted by the researchers. Applications of multiple regression analyses to studies of interactions between variables are discussed, along with tests used to assess the linearity of the relationships among variables. Finally, some suggestions are provided for dealing with the limitations of multiple regression analyses (stepwise and hierarchical).

  2. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Full Text Available Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.

  3. Investigating the effects of climate variations on bacillary dysentery incidence in northeast China using ridge regression and hierarchical cluster analysis

    Directory of Open Access Journals (Sweden)

    Guo Junqiao

    2008-09-01

    Full Text Available Background: The effects of climate variations on bacillary dysentery incidence have gained increasing attention. However, the multi-collinearity among meteorological factors affects the accuracy of their correlation with bacillary dysentery incidence. Methods: As a remedy, a modified method combining ridge regression and hierarchical cluster analysis was proposed for investigating the effects of climate variations on bacillary dysentery incidence in northeast China. Results: All weather indicators (temperatures, precipitation, evaporation and relative humidity) showed a positive correlation with the monthly incidence of bacillary dysentery, while air pressure had a negative correlation with the incidence. Ridge regression and hierarchical cluster analysis showed that during 1987–1996, relative humidity, temperatures and air pressure affected the transmission of bacillary dysentery. During this period, the meteorological factors were divided into three categories: relative humidity and precipitation belonged to one class, temperature indexes and evaporation belonged to another class, and air pressure formed the third class. Conclusion: Meteorological factors have affected the transmission of bacillary dysentery in northeast China. Bacillary dysentery prevention and control would benefit from giving more consideration to local climate variations.
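
    A compact sketch of the two tools the abstract combines is given below: hierarchical clustering of collinear meteorological variables on a 1 - |correlation| distance, followed by a ridge regression of monthly incidence on the standardized factors. The monthly data are simulated, not the northeast China series.

```python
# Ridge regression plus hierarchical clustering of correlated weather factors
# (simulated monthly data; not the study's dataset).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
n_months = 120
temp = rng.normal(size=n_months)
humidity = 0.8 * temp + 0.6 * rng.normal(size=n_months)     # collinear block
precip = 0.7 * humidity + 0.7 * rng.normal(size=n_months)
pressure = -0.6 * temp + 0.8 * rng.normal(size=n_months)
X = np.column_stack([temp, humidity, precip, pressure])
names = ["temp", "humidity", "precip", "pressure"]
incidence = (5 + 1.2 * temp + 0.8 * humidity - 0.5 * pressure
             + rng.normal(scale=0.5, size=n_months))

# Hierarchical clustering on 1 - |r| so collinear factors group together.
dist = 1 - np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
for name, c in zip(names, fcluster(Z, t=3, criterion="maxclust")):
    print(f"{name}: cluster {int(c)}")

# Ridge regression tolerates the remaining multi-collinearity.
ridge = Ridge(alpha=1.0).fit((X - X.mean(0)) / X.std(0), incidence)
for name, coef in zip(names, ridge.coef_):
    print(f"{name}: ridge coefficient {coef:.2f}")
```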

  4. Hierarchical design of a polymeric nanovehicle for efficient tumor regression and imaging

    Science.gov (United States)

    An, Jinxia; Guo, Qianqian; Zhang, Peng; Sinclair, Andrew; Zhao, Yu; Zhang, Xinge; Wu, Kan; Sun, Fang; Hung, Hsiang-Chieh; Li, Chaoxing; Jiang, Shaoyi

    2016-04-01

    Effective delivery of therapeutics to disease sites significantly contributes to drug efficacy, toxicity and clearance. Here we designed a hierarchical polymeric nanoparticle structure for anti-cancer chemotherapy delivery by utilizing state-of-the-art polymer chemistry and co-assembly techniques. This novel structural design combines the most desired merits for drug delivery in a single particle, including a long in vivo circulation time, inhibited non-specific cell uptake, enhanced tumor cell internalization, pH-controlled drug release and simultaneous imaging. This co-assembled nanoparticle showed exceptional stability in complex biological media. Benefiting from the synergistic effects of zwitterionic and multivalent galactose polymers, drug-loaded nanoparticles were selectively internalized by cancer cells rather than normal tissue cells. In addition, the pH-responsive core retained their cargo within their polymeric coating through hydrophobic interaction and released it under slightly acidic conditions. In vivo pharmacokinetic studies in mice showed minimal uptake of nanoparticles by the mononuclear phagocyte system and excellent blood circulation half-lives of 14.4 h. As a result, tumor growth was completely inhibited and no damage was observed for normal organ tissues. This newly developed drug nanovehicle has great potential in cancer therapy, and the hierarchical design principle should provide valuable information for the development of the next generation of drug delivery systems.

  5. Stuttering, induced fluency, and natural fluency: a hierarchical series of activation likelihood estimation meta-analyses.

    Science.gov (United States)

    Budde, Kristin S; Barron, Daniel S; Fox, Peter T

    2014-12-01

    Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as "neural signatures of stuttering" (Brown, Ingham, Ingham, Laird, & Fox, 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: (1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and (2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state).

  6. Permutation Tests of Hierarchical Cluster Analyses of Carrion Communities and Their Potential Use in Forensic Entomology.

    Science.gov (United States)

    van der Ham, Joris L

    2016-05-19

    Forensic entomologists can use carrion communities' ecological succession data to estimate the postmortem interval (PMI). Permutation tests of hierarchical cluster analyses of these data provide a conceptual method to estimate part of the PMI, the post-colonization interval (post-CI). This multivariate approach produces a baseline of statistically distinct clusters that reflect changes in the carrion community composition during the decomposition process. Carrion community samples of unknown post-CIs are compared with these baseline clusters to estimate the post-CI. In this short communication, I use data from previously published studies to demonstrate the conceptual feasibility of this multivariate approach. Analyses of these data produce series of significantly distinct clusters, which represent carrion communities during 1- to 20-day periods of the decomposition process. For 33 carrion community samples, collected over an 11-day period, this approach correctly estimated the post-CI within an average range of 3.1 days.

  7. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and possible interpretations that can be drawn at this level.

  8. Analysing count data of Butterflies communities in Jasin, Melaka: A Poisson regression analysis

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah

    2017-09-01

    Count outcomes are normally highly skewed to the right, as they are often characterized by a large number of zero values. The butterfly community data were collected in Jasin, Melaka and consist of 131 subject visits. In this paper, Poisson regression analysis is applied to these count data, as it is better suited to the counting process than alternative approaches. This research paper analyses count data, including many zero observations, from ecological surveys of butterfly communities in Jasin, Melaka using Poisson regression analysis. Software for Poisson regression is readily available and is becoming more widely used in many fields of research; here the data were analysed using SAS software. The purpose of the analysis comprised the framework of identifying the concerns and, by using Poisson regression analysis, determining the fitness of the data for assessing the reliability of using count data. The findings indicate that the highest and lowest numbers of subjects come from the third family (Nymphalidae) and the fifth family (Hesperidae), respectively, and the Poisson distribution seems to fit the zero values.
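
    A minimal Poisson regression of simulated counts on a single covariate, in the spirit described above, can be written with statsmodels; the data and the habitat_score variable are invented, and this is not the Jasin butterfly dataset or the SAS analysis.

```python
# Minimal Poisson regression on simulated count data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 131
habitat_score = rng.normal(size=n)                      # hypothetical covariate
X = sm.add_constant(habitat_score)
counts = rng.poisson(np.exp(0.5 + 0.7 * habitat_score)) # many zeros/low counts

model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.params)
# Rough overdispersion check: Pearson chi-square / residual df near 1 is good.
print("dispersion:", model.pearson_chi2 / model.df_resid)
```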

  9. Bayesian hierarchical piecewise regression models: a tool to detect trajectory divergence between groups in long-term observational studies.

    Science.gov (United States)

    Buscot, Marie-Jeanne; Wotherspoon, Simon S; Magnussen, Costan G; Juonala, Markus; Sabin, Matthew A; Burgner, David P; Lehtimäki, Terho; Viikari, Jorma S A; Hutri-Kähönen, Nina; Raitakari, Olli T; Thomson, Russell J

    2017-06-06

    Bayesian hierarchical piecewise regression (BHPR) modeling has not been previously formulated to detect and characterise the mechanism of trajectory divergence between groups of participants that have longitudinal responses with distinct developmental phases. These models are useful when participants in a prospective cohort study are grouped according to a distal dichotomous health outcome. Indeed, a refined understanding of how deleterious risk factor profiles develop across the life-course may help inform early-life interventions. Previous techniques to determine between-group differences in risk factors at each age may result in biased estimate of the age at divergence. We demonstrate the use of Bayesian hierarchical piecewise regression (BHPR) to generate a point estimate and credible interval for the age at which trajectories diverge between groups for continuous outcome measures that exhibit non-linear within-person response profiles over time. We illustrate our approach by modeling the divergence in childhood-to-adulthood body mass index (BMI) trajectories between two groups of adults with/without type 2 diabetes mellitus (T2DM) in the Cardiovascular Risk in Young Finns Study (YFS). Using the proposed BHPR approach, we estimated the BMI profiles of participants with T2DM diverged from healthy participants at age 16 years for males (95% credible interval (CI):13.5-18 years) and 21 years for females (95% CI: 19.5-23 years). These data suggest that a critical window for weight management intervention in preventing T2DM might exist before the age when BMI growth rate is naturally expected to decrease. Simulation showed that when using pairwise comparison of least-square means from categorical mixed models, smaller sample sizes tended to conclude a later age of divergence. In contrast, the point estimate of the divergence time is not biased by sample size when using the proposed BHPR method. BHPR is a powerful analytic tool to model long-term non

  10. Multivariate Chemometrics with Regression and Classification Analyses in Heroin Profiling Based on the Chromatographic Data.

    Science.gov (United States)

    B Gadžurić, Slobodan; O Podunavac Kuzmanović, Sanja; B Vraneš, Milan; Petrin, Marija; Bugarski, Tatjana; Kovačević, Strahinja Z

    2016-01-01

    The purpose of this work is to promote and facilitate forensic profiling and chemical analysis of illicit drug samples in order to determine their origin, methods of production and transfer through the country. The article is based on the gas chromatography analysis of heroin samples seized from three different locations in Serbia. Chemometric approach with appropriate statistical tools (multiple-linear regression (MLR), hierarchical cluster analysis (HCA) and Wald-Wolfowitz run (WWR) test) were applied on chromatographic data of heroin samples in order to correlate and examine the geographic origin of seized heroin samples. The best MLR models were further validated by leave-one-out technique as well as by the calculation of basic statistical parameters for the established models. To confirm the predictive power of the models, external set of heroin samples was used. High agreement between experimental and predicted values of acetyl thebaol and diacetyl morphine peak ratio, obtained in the validation procedure, indicated the good quality of derived MLR models. WWR test showed which examined heroin samples come from the same population, and HCA was applied in order to overview the similarities among the studied heroine samples.

  11. Predictive Ability of Pender's Health Promotion Model for Physical Activity and Exercise in People with Spinal Cord Injuries: A Hierarchical Regression Analysis

    Science.gov (United States)

    Keegan, John P.; Chan, Fong; Ditchman, Nicole; Chiu, Chung-Yi

    2012-01-01

    The main objective of this study was to validate Pender's Health Promotion Model (HPM) as a motivational model for exercise/physical activity self-management for people with spinal cord injuries (SCIs). Quantitative descriptive research design using hierarchical regression analysis (HRA) was used. A total of 126 individuals with SCI were recruited…

  12. Problems of correlations between explanatory variables in multiple regression analyses in the dental literature.

    Science.gov (United States)

    Tu, Y-K; Kellett, M; Clerehugh, V; Gilthorpe, M S

    2005-10-01

    Multivariable analysis is a widely used statistical methodology for investigating associations amongst clinical variables. However, the problems of collinearity and multicollinearity, which can give rise to spurious results, have in the past frequently been disregarded in dental research. This article illustrates and explains the problems which may be encountered, in the hope of increasing awareness and understanding of these issues, thereby improving the quality of the statistical analyses undertaken in dental research. Three examples from different clinical dental specialties are used to demonstrate how to diagnose the problem of collinearity/multicollinearity in multiple regression analyses and to illustrate how collinearity/multicollinearity can seriously distort the model development process. Lack of awareness of these problems can give rise to misleading results and erroneous interpretations. Multivariable analysis is a useful tool for dental research, though only if its users thoroughly understand the assumptions and limitations of these methods. It would benefit evidence-based dentistry enormously if researchers were more aware of both the complexities involved in multiple regression when using these methods and of the need for expert statistical consultation in developing study design and selecting appropriate statistical methodologies.
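
    One standard way to diagnose the collinearity problems described here is to compute variance inflation factors before interpreting a multiple regression. The sketch below uses simulated periodontal-style variables; the names and the VIF cutoff are illustrative, not taken from the article.

```python
# Variance inflation factors: a common collinearity diagnostic (VIF > ~10 is
# often taken as a warning sign). Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(9)
n = 200
probing_depth = rng.normal(5, 1, n)
attachment_loss = 0.9 * probing_depth + 0.3 * rng.normal(size=n)  # collinear
plaque_index = rng.normal(2, 0.5, n)
X = pd.DataFrame({"probing_depth": probing_depth,
                  "attachment_loss": attachment_loss,
                  "plaque_index": plaque_index})
X_const = sm.add_constant(X)

for i, col in enumerate(X_const.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X_const.values, i), 1))
```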

  13. Performance Evaluation of Button Bits in Coal Measure Rocks by Using Multiple Regression Analyses

    Science.gov (United States)

    Su, Okan

    2016-02-01

    Electro-hydraulic and jumbo drills are commonly used for underground coal mines and tunnel drives for the purpose of blasthole drilling and rock bolt installations. Not only machine parameters but also environmental conditions have significant effects on drilling. This study characterizes the performance of button bits during blasthole drilling in coal measure rocks by using multiple regression analyses. The penetration rate of jumbo and electro-hydraulic drills was measured in the field by employing bits in different diameters and the specific energy of the drilling was calculated at various locations, including highway tunnels and underground roadways of coal mines. Large block samples were collected from each location at which in situ drilling measurements were performed. Then, the effects of rock properties and machine parameters on the drilling performance were examined. Multiple regression models were developed for the prediction of the specific energy of the drilling and the penetration rate. The results revealed that hole area, impact (blow) energy, blows per minute of the piston within the drill, and some rock properties, such as the uniaxial compressive strength (UCS) and the drilling rate index (DRI), influence the drill performance.

  14. Non-Hierarchical Clustering as a method to analyse an open-ended ...

    African Journals Online (AJOL)

    Apple

    tests, provide instructors with tools to probe students' conceptual knowledge of various fields of science and ... quantitative non-hierarchical clustering analysis method known as k-means (Everitt, Landau, Leese & Stahl, ...... undergraduate engineering students in creating ... mathematics-formal reasoning and the contextual.

  15. The number of subjects per variable required in linear regression analyses

    NARCIS (Netherlands)

    P.C. Austin (Peter); E.W. Steyerberg (Ewout)

    2015-01-01

    textabstractObjectives To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression c

  16. Bayesian hierarchical model used to analyze regression between fish body size and scale size: application to rare fish species Zingel asper

    Directory of Open Access Journals (Sweden)

    Fontez B.

    2014-04-01

    Full Text Available Back-calculation makes it possible to increase the available data on fish growth. The accuracy of back-calculation models is of paramount importance for growth analysis. Frequentist and Bayesian hierarchical approaches were used for the regression between fish body size and scale size for the rare fish species Zingel asper. The Bayesian approach permits more reliable estimation of back-calculated size, taking into account biological information and cohort variability. This method greatly improves estimation of back-calculated length when sampling is uneven and/or sparse.

  17. Personality change over 40 years of adulthood: hierarchical linear modeling analyses of two longitudinal samples.

    Science.gov (United States)

    Helson, Ravenna; Jones, Constance; Kwan, Virginia S Y

    2002-09-01

    Normative personality change over 40 years was shown in 2 longitudinal cohorts with hierarchical linear modeling of California Psychological Inventory data obtained at multiple times between ages 21-75. Although themes of change and the paucity of differences attributable to gender and cohort largely supported findings of multiethnic cross-sectional samples, the authors also found much quadratic change and much individual variability. The form of quadratic change supported predictions about the influence of period of life and social climate as factors in change over the adult years: Scores on Dominance and Independence peaked in the middle age of both cohorts, and scores on Responsibility were lowest during peak years of the culture of individualism. The idea that personality change is most pronounced before age 30 and then reaches a plateau received no support.

  18. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R(2) of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV were necessary to minimize bias in estimating the model R(2), although adjusted R(2) estimates behaved well. The bias in estimating the model R(2) statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
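
    A toy Monte Carlo in the same spirit (vary the number of subjects per variable and track the relative bias of a coefficient) is sketched below; the settings are illustrative and far smaller than the study's full design. Consistent with the abstract, OLS coefficients come out essentially unbiased even at very low SPV.

```python
# Toy Monte Carlo: relative bias of an OLS coefficient as the number of
# subjects per variable (SPV) changes (illustrative, not the full study design).
import numpy as np

rng = np.random.default_rng(10)
k, beta_true, n_sims = 10, 0.5, 2000          # 10 predictors, true slope 0.5

for spv in (2, 5, 10, 20):
    n = spv * k
    est = np.empty(n_sims)
    for s in range(n_sims):
        X = rng.normal(size=(n, k))
        y = X @ np.full(k, beta_true) + rng.normal(size=n)
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        est[s] = coef[0]                      # track the first coefficient
    rel_bias = (est.mean() - beta_true) / beta_true * 100
    print(f"SPV={spv:2d}  mean estimate={est.mean():.3f}  relative bias={rel_bias:+.1f}%")
```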

  19. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    Directory of Open Access Journals (Sweden)

    Omholt Stig W

    2011-06-01

    Full Text Available Background: Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results: Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback
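
    The local-regression idea can be sketched as: partition the input space, then fit a separate PLS regression within each partition. Below, KMeans stands in for the fuzzy C-means step and the response surface is synthetic; this is an illustration of cluster-wise PLSR under those assumptions, not the published HC-PLSR code.

```python
# Local (cluster-wise) PLS regression: cluster the inputs, then fit PLSR per
# cluster (KMeans stands in for fuzzy C-means; data are synthetic).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(11)
n, p = 600, 6
X = rng.normal(size=(n, p))                       # model parameters (inputs)
X[:, 0] += np.where(np.arange(n) < n // 2, -2.5, 2.5)   # two input regimes
y = np.where(X[:, 0] > 0,                         # non-monotone response surface
             2.0 * X[:, 0] + X[:, 1],
             -1.5 * X[:, 0] + 0.5 * X[:, 2]) + 0.1 * rng.normal(size=n)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Global PLSR fit for comparison.
global_pred = PLSRegression(n_components=3).fit(X, y).predict(X).ravel()
global_mse = np.mean((global_pred - y) ** 2)

# Cluster-wise PLSR: one local model per cluster.
local_sse = 0.0
for c in np.unique(labels):
    m = labels == c
    pred = PLSRegression(n_components=3).fit(X[m], y[m]).predict(X[m]).ravel()
    local_sse += np.sum((pred - y[m]) ** 2)
print(f"global PLSR MSE={global_mse:.3f}  cluster-wise PLSR MSE={local_sse / n:.3f}")
```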

  20. Analysing the forward premium anomaly using a Logistic Smooth Transition Regression model.

    OpenAIRE

    Sofiane Amri

    2008-01-01

    Several researchers have suggested that exchange rates may be characterized by nonlinear behaviour. This paper examines these nonlinearities and asymmetries and estimates a Logistic Smooth Transition Regression (LSTR) version of the Fama regression, with the risk-adjusted forward premium as the transition variable. Results confirm the existence of nonlinear dynamics in the relationship between the spot exchange rate differential and the forward premium for all the currencies of the sample and for all maturities (three and...

  1. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  2. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of t...

  3. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.

  4. Check-all-that-apply data analysed by Partial Least Squares regression

    DEFF Research Database (Denmark)

    Rinnan, Åsmund; Giacalone, Davide; Frøst, Michael Bom

    2015-01-01

    are analysed by multivariate techniques. CATA data can be analysed both by setting the CATA as the X and the Y. The former is the PLS-Discriminant Analysis (PLS-DA) version, while the latter is the ANOVA-PLS (A-PLS) version. We investigated the difference between these two approaches, concluding...

  5. The Chinese Family Assessment Instrument (C-FAI): Hierarchical Confirmatory Factor Analyses and Factorial Invariance

    Science.gov (United States)

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2010-01-01

    Objective: This paper examines the dimensionality and factorial invariance of the Chinese Family Assessment Instrument (C-FAI) using multigroup confirmatory factor analyses (MCFAs). Method: A total of 3,649 students responded to the C-FAI in a community survey. Results: Results showed that there are five dimensions of the C-FAI (communication,…

  6. A new hierarchical Bayesian approach to analyse environmental and climatic influences on debris flow occurrence

    Science.gov (United States)

    Jomelli, Vincent; Pavlova, Irina; Eckert, Nicolas; Grancher, Delphine; Brunstein, Daniel

    2015-12-01

    How can debris flow occurrences be modelled at regional scale and take both environmental and climatic conditions into account? And, of the two, which has the most influence on debris flow activity? In this paper, we try to answer these questions with an innovative Bayesian hierarchical probabilistic model that simultaneously accounts for how debris flows respond to environmental and climatic variables. In it, full decomposition of space and time effects in occurrence probabilities is assumed, revealing an environmental and a climatic trend shared by all years/catchments, respectively, clearly distinguished from residual "random" effects. The resulting regional and annual occurrence probabilities evaluated as functions of the covariates make it possible to weight the respective contribution of the different terms and, more generally, to check the model performances at different spatio-temporal scales. After suitable validation, the model can be used to make predictions at undocumented sites and could be used in further studies for predictions under future climate conditions. Also, the Bayesian paradigm easily copes with missing data, thus making it possible to account for events that may have been missed during surveys. As a case study, we extract 124 debris flow events triggered between 1970 and 2005 in 27 catchments located in the French Alps from the French national natural hazard survey and model their variability of occurrence considering environmental and climatic predictors at the same time. We document the environmental characteristics of each debris flow catchment (morphometry, lithology, land cover, and the presence of permafrost). We also compute 15 climate variables including mean temperature and precipitation between May and October and the number of rainy days with daily cumulative rainfall greater than 10/15/20/25/30/40 mm day−1. Application of our model shows that the combination of environmental and climatic predictors explained 77% of the overall

  7. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise … when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application…

  8. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    DEFF Research Database (Denmark)

    Scott, Neil W.; Fayers, Peter M.; Aaronson, Neil K.

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise … when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application…

  9. Hierarchical linear modeling analyses of the NEO-PI-R scales in the Baltimore Longitudinal Study of Aging.

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R; Brant, Larry J; Costa, Paul T

    2005-09-01

    The authors examined age trends in the 5 factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N=1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, hierarchical linear modeling analyses showed gradual personality changes in adulthood: a decline in Neuroticism up to age 80, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase in Conscientiousness up to age 70. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender x Age interactions. Significant nonnormative changes were found for all 5 factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. Copyright (c) 2005 APA, all rights reserved.

  10. Hierarchical Linear Modeling Analyses of NEO-PI-R Scales In the Baltimore Longitudinal Study of Aging

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R.; Brant, Larry J.; Costa, Paul T.

    2009-01-01

    We examined age trends in the five factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N = 1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, Hierarchical Linear Modeling analyses showed gradual personality changes in adulthood: a decline up to age 80 in Neuroticism, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase up to age 70 in Conscientiousness. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender × Age interactions. Significant non-normative changes were found for all five factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. PMID:16248708

  11. Analyses of Developmental Rate Isomorphy in Ectotherms: Introducing the Dirichlet Regression.

    Directory of Open Access Journals (Sweden)

    David S Boukal

    Full Text Available Temperature drives development in insects and other ectotherms because their metabolic rate and growth depend directly on thermal conditions. However, relative durations of successive ontogenetic stages often remain nearly constant across a substantial range of temperatures. This pattern, termed 'developmental rate isomorphy' (DRI) in insects, appears to be widespread and reported departures from DRI are generally very small. We show that these conclusions may be due to the caveats hidden in the statistical methods currently used to study DRI. Because the DRI concept is inherently based on proportional data, we propose that Dirichlet regression applied to individual-level data is an appropriate statistical method to critically assess DRI. As a case study we analyze data on five aquatic and four terrestrial insect species. We find that results obtained by Dirichlet regression are consistent with DRI violation in at least eight of the studied species, although standard analysis detects significant departure from DRI in only four of them. Moreover, the departures from DRI detected by Dirichlet regression are consistently much larger than previously reported. The proposed framework can also be used to infer whether observed departures from DRI reflect life history adaptations to size- or stage-dependent effects of varying temperature. Our results indicate that the concept of DRI in insects and other ectotherms should be critically re-evaluated and put in a wider context, including the concept of 'equiproportional development' developed for copepods.
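
    Dirichlet regression links covariates (for example, temperature) to a vector of stage-duration proportions. Dedicated packages exist (e.g. DirichletReg in R); the sketch below is only a minimal, self-contained maximum-likelihood version in Python on synthetic data, with the concentration parameters modelled as exp(linear predictor).

```python
# Sketch: maximum-likelihood Dirichlet regression of stage-duration proportions on temperature.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, k = 200, 3                                   # individuals, ontogenetic stages
temp = rng.uniform(10, 30, n)
X = np.column_stack([np.ones(n), (temp - temp.mean()) / temp.std()])   # intercept + scaled temperature
true_beta = np.array([[1.0, 0.2], [1.2, -0.1], [0.8, 0.3]])            # one coefficient row per stage
Y = np.vstack([rng.dirichlet(np.exp(X[i] @ true_beta.T)) for i in range(n)])

def neg_loglik(beta_flat):
    beta = beta_flat.reshape(k, X.shape[1])
    alpha = np.exp(X @ beta.T)                  # (n, k) concentration parameters
    ll = (gammaln(alpha.sum(axis=1)) - gammaln(alpha).sum(axis=1)
          + ((alpha - 1.0) * np.log(Y)).sum(axis=1))
    return -ll.sum()

res = minimize(neg_loglik, np.zeros(k * X.shape[1]), method="BFGS")
print("Estimated coefficients:\n", res.x.reshape(k, X.shape[1]))
# Under strict developmental rate isomorphy the temperature coefficients would not differ between stages.
```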

  12. Drop-Weight Impact Test on U-Shape Concrete Specimens with Statistical and Regression Analyses

    Directory of Open Access Journals (Sweden)

    Xue-Chao Zhu

    2015-09-01

    Full Text Available According to the principle and method of the drop-weight impact test, the impact resistance of concrete was measured using self-designed U-shape specimens and a newly designed drop-weight impact test apparatus. A series of drop-weight impact tests were carried out with four different masses of drop hammers (0.875, 0.8, 0.675 and 0.5 kg). The test results show that the impact resistance results fail to follow a normal distribution. As expected, U-shaped specimens can predetermine the location of the cracks very well. It is also easy to record the crack propagation during the test. The maximum coefficient of variation in this study is 31.2%, which is lower than the values obtained from the American Concrete Institute (ACI) impact tests in the literature. Regression analysis shows a good linear relationship between the first-crack and ultimate-failure impact resistance. It can be suggested that a minimum number of specimens is required to reliably measure the properties of the material based on the observed levels of variation.

  13. Spatializing Area-Based Measures of Neighborhood Characteristics for Multilevel Regression Analyses: An Areal Median Filtering Approach.

    Science.gov (United States)

    Oka, Masayoshi; Wong, David W S

    2016-06-01

    Area-based measures of neighborhood characteristics simply derived from enumeration units (e.g., census tracts or block groups) ignore the potential of spatial spillover effects, and thus incorporating such measures into multilevel regression models may underestimate the neighborhood effects on health. To overcome this limitation, we describe the concept and method of areal median filtering to spatialize area-based measures of neighborhood characteristics for multilevel regression analyses. The areal median filtering approach provides a means to specify or formulate "neighborhoods" as meaningful geographic entities by removing enumeration unit boundaries as the absolute barriers and by pooling information from the neighboring enumeration units. This spatializing process takes into account the potential of spatial spillover effects and also converts aspatial measures of neighborhood characteristics into spatial measures. From a conceptual and methodological standpoint, incorporating the derived spatial measures into multilevel regression analyses allows us to more accurately examine the relationships between neighborhood characteristics and health. To promote and set the stage for informative research in the future, we provide a few important conceptual and methodological remarks, and discuss possible applications, inherent limitations, and practical solutions for using the areal median filtering approach in the study of neighborhood effects on health.
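
    The core of the areal median filtering idea can be sketched in a few lines: each enumeration unit's value is replaced by the median of its own value and those of its first-order neighbours, pooled through a contiguity list. The units, values and neighbour structure below are hypothetical.

```python
# Sketch: areal median filtering of an area-based neighborhood measure (synthetic units/neighbours).
import numpy as np

poverty_rate = {"A": 12.0, "B": 30.0, "C": 8.0, "D": 25.0, "E": 15.0}   # tract-level measure (%)
neighbours = {                                                           # hypothetical first-order contiguity
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D"],
}

def areal_median_filter(values, contiguity):
    """Replace each unit's value with the median over the unit and its neighbours."""
    return {
        unit: float(np.median([values[unit]] + [values[nb] for nb in contiguity[unit]]))
        for unit in values
    }

spatialized = areal_median_filter(poverty_rate, neighbours)
print(spatialized)   # these smoothed values would then enter a multilevel regression as level-2 covariates
```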

  14. [Study of the clinical phenotype of symptomatic chronic airways disease by hierarchical cluster analysis and two-step cluster analyses].

    Science.gov (United States)

    Ning, P; Guo, Y F; Sun, T Y; Zhang, H S; Chai, D; Li, X M

    2016-09-01

    To study the distinct clinical phenotypes of chronic airway diseases by hierarchical cluster analysis and two-step cluster analysis. A population sample of adult patients in Donghuamen community, Dongcheng district, and Qinghe community, Haidian district, Beijing, from April 2012 to January 2015, who had wheeze within the last 12 months, underwent detailed investigation, including a clinical questionnaire, pulmonary function tests, total serum IgE levels, blood eosinophil level and a peak flow diary. Nine variables were chosen as evaluating parameters, including pre-salbutamol forced expired volume in one second (FEV1)/forced vital capacity (FVC) ratio, pre-salbutamol FEV1, percentage of post-salbutamol change in FEV1, residual capacity, diffusing capacity of the lung for carbon monoxide/alveolar volume adjusted for haemoglobin level, peak expiratory flow (PEF) variability, serum IgE level, cumulative tobacco cigarette consumption (pack-years) and respiratory symptoms (cough and expectoration). Distinct clinical phenotypes were then identified by hierarchical cluster analysis and two-step cluster analysis. (1) Four clusters were identified by hierarchical cluster analysis. Cluster 1 was chronic bronchitis in smokers with normal pulmonary function. Cluster 2 was chronic bronchitis or mild chronic obstructive pulmonary disease (COPD) patients with mild airflow limitation. Cluster 3 included COPD patients with heavy smoking, poor quality of life and severe airflow limitation. Cluster 4 comprised atopic patients with mild airflow limitation, elevated serum IgE and clinical features of asthma. Significant differences were revealed regarding pre-salbutamol FEV1/FVC%, pre-salbutamol FEV1% pred, post-salbutamol change in FEV1%, maximal mid-expiratory flow curve (MMEF)% pred, carbon monoxide diffusing capacity per liter of alveolar volume (DLCO/VA)% pred, residual volume (RV)% pred, total serum IgE level, smoking history (pack-years), St. George's respiratory questionnaire

  15. The importance of trait emotional intelligence and feelings in the prediction of perceived and biological stress in adolescents: hierarchical regressions and fsQCA models.

    Science.gov (United States)

    Villanueva, Lidón; Montoya-Castilla, Inmaculada; Prado-Gascó, Vicente

    2017-07-01

    The purpose of this study is to analyze the combined effects of trait emotional intelligence (EI) and feelings on healthy adolescents' stress. Identifying the extent to which adolescent stress varies with trait emotional differences and the feelings of adolescents is of considerable interest in the development of intervention programs for fostering youth well-being. To attain this goal, self-reported questionnaires (perceived stress, trait EI, and positive/negative feelings) and biological measures of stress (hair cortisol concentrations, HCC) were collected from 170 adolescents (12-14 years old). Two complementary analyses were conducted: hierarchical regression models and a fuzzy-set qualitative comparative analysis (fsQCA). The results support trait EI as a protective factor against stress in healthy adolescents and suggest that feelings reinforce this relation. However, the debate continues regarding the possibility of optimal levels of trait EI for effective and adaptive emotional management, particularly in the emotional attention and clarity dimensions and for female adolescents.

  16. Longitudinal Analyses of a Hierarchical Model of Peer Social Competence for Preschool Children: Structural Fidelity and External Correlates

    Science.gov (United States)

    Shin, Nana; Vaughn, Brian E.; Kim, Mina; Krzysik, Lisa; Bost, Kelly K.; McBride, Brent; Santos, Antonio J.; Peceguina, Ines; Coppola, Gabrielle

    2011-01-01

    Achieving consensus on the definition and measurement of social competence (SC) for preschool children has proven difficult in the developmental sciences. We tested a hierarchical model in which SC is assumed to be a second-order latent variable by using longitudinal data (N = 345). We also tested the degree to which peer SC at Time 1 predicted…

  17. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses.

    Science.gov (United States)

    Cashman, Kevin D; Ritz, Christian; Kiely, Mairead; Odin Collaborators

    2017-05-08

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake-serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake.

  18. Evidence from regression-discontinuity analyses for beneficial effects of a criterion-based increase in alcohol treatment.

    Science.gov (United States)

    Flam-Zalcman, Rosely; Mann, Robert E; Stoduto, Gina; Nochajski, Thomas H; Rush, Brian R; Koski-Jännes, Anja; Wickens, Christine M; Thomas, Rita K; Rehm, Jürgen

    2013-03-01

    Brief interventions effectively reduce alcohol problems; however, it is controversial whether longer interventions result in greater improvement. This study aims to determine whether an increase in treatment for people with more severe problems resulted in better outcome. We employed regression-discontinuity analyses to determine if drinking driver clients (n = 22,277) in Ontario benefited when they were assigned to a longer treatment program (8-hour versus 16-hour) based on assessed addiction severity criteria. Assignment to the longer 16-hour program was based on two addiction severity measures derived from the Research Institute on Addictions Self-inventory (RIASI) (meeting criteria for assignment based on either the total RIASI score or the score on the recidivism subscale). The main outcome measure was self-reported number of days of alcohol use during the 90 days preceding the six-month follow-up interview. We found significant reductions of one or two self-reported drinking days at the point of assignment, depending on the severity criterion used. These data suggest that more intensive treatment for alcohol problems may improve results for individuals with more severe problems.
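
    A regression-discontinuity analysis of this kind can be sketched as a local linear model with a treatment indicator at the assignment cutoff. The snippet below uses statsmodels on synthetic data; the cutoff, variable names and effect size are illustrative, not those of the study.

```python
# Sketch: sharp regression-discontinuity estimate of an assignment-at-cutoff treatment effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, cutoff = 2000, 10.0
severity = rng.uniform(0, 20, n)                       # assessed severity score (running variable)
longer_program = (severity >= cutoff).astype(int)      # assigned to the longer program at/above cutoff
drinking_days = (20 + 0.4 * severity - 1.5 * longer_program
                 + rng.normal(0, 3, n))                # outcome: drinking days in the past 90 days

df = pd.DataFrame({"days": drinking_days,
                   "treat": longer_program,
                   "centered": severity - cutoff})

# Separate slopes on each side of the cutoff; the 'treat' coefficient is the discontinuity.
rd_fit = smf.ols("days ~ treat + centered + treat:centered", data=df).fit()
print(rd_fit.params["treat"], rd_fit.conf_int().loc["treat"].values)
```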

  19. Price promotions on healthier compared with less healthy foods: a hierarchical regression analysis of the impact on sales and social patterning of responses to promotions in Great Britain.

    Science.gov (United States)

    Nakamura, Ryota; Suhrcke, Marc; Jebb, Susan A; Pechey, Rachel; Almiron-Roig, Eva; Marteau, Theresa M

    2015-04-01

    There is a growing concern, but limited evidence, that price promotions contribute to a poor diet and the social patterning of diet-related disease. We examined the following questions: 1) Are less-healthy foods more likely to be promoted than healthier foods? 2) Are consumers more responsive to promotions on less-healthy products? 3) Are there socioeconomic differences in food purchases in response to price promotions? With the use of hierarchical regression, we analyzed data on purchases of 11,323 products within 135 food and beverage categories from 26,986 households in Great Britain during 2010. Major supermarkets operated the same price promotions in all branches. The number of stores that offered price promotions on each product for each week was used to measure the frequency of price promotions. We assessed the healthiness of each product by using a nutrient profiling (NP) model. A total of 6788 products (60%) were in healthier categories and 4535 products (40%) were in less-healthy categories. There was no significant gap in the frequency of promotion by the healthiness of products neither within nor between categories. However, after we controlled for the reference price, price discount rate, and brand-specific effects, the sales uplift arising from price promotions was larger in less-healthy than in healthier categories; a 1-SD point increase in the category mean NP score, implying the category becomes less healthy, was associated with an additional 7.7-percentage point increase in sales (from 27.3% to 35.0%; P sales uplift from promotions was larger for higher-socioeconomic status (SES) groups than for lower ones (34.6% for the high-SES group, 28.1% for the middle-SES group, and 23.1% for the low-SES group). Finally, there was no significant SES gap in the absolute volume of purchases of less-healthy foods made on promotion. Attempts to limit promotions on less-healthy foods could improve the population diet but would be unlikely to reduce health

  20. Investigation of the degree of organisational influence on patient experience scores in acute medical admission units in all acute hospitals in England using multilevel hierarchical regression modelling

    Science.gov (United States)

    Sullivan, Paul

    2017-01-01

    Objectives Previous studies found that hospital and specialty have limited influence on patient experience scores, and patient level factors are more important. This could be due to heterogeneity of experience delivery across subunits within organisations. We aimed to determine whether organisation level factors have greater impact if scores for the same subspecialty microsystem are analysed in each hospital. Setting Acute medical admission units in all NHS Acute Trusts in England. Participants We analysed patient experience data from the English Adult Inpatient Survey which is administered to 850 patients annually in each acute NHS Trusts in England. We selected all 8753 patients who returned the survey and who were emergency medical admissions and stayed in their admission unit for 1–2 nights, so as to isolate the experience delivered during the acute admission process. Primary and secondary outcome measures We used multilevel logistic regression to determine the apportioned influence of host organisation and of organisation level factors (size and teaching status), and patient level factors (demographics, presence of long-term conditions and disabilities). We selected ‘being treated with respect and dignity’ and ‘pain control’ as primary outcome parameters. Other Picker Domain question scores were analysed as secondary parameters. Results The proportion of overall variance attributable at organisational level was small; 0.5% (NS) for respect and dignity, 0.4% (NS) for pain control. Long-standing conditions and consequent disabilities were associated with low scores. Other item scores also showed that most influence was from patient level factors. Conclusions When a single microsystem, the acute medical admission process, is isolated, variance in experience scores is mainly explainable by patient level factors with limited organisational level influence. This has implications for the use of generic patient experience surveys for comparison between
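
    For random-intercept logistic models like the one described, the share of variance attributable to the organisation level is commonly summarised with the latent-variable variance partition coefficient, VPC = sigma_u^2 / (sigma_u^2 + pi^2/3). A minimal illustration with a hypothetical hospital-level variance (not the study's estimate):

```python
# Sketch: variance partition coefficient (VPC) for a two-level random-intercept logistic model.
import math

sigma2_hospital = 0.017              # hypothetical between-hospital variance on the logit scale
sigma2_residual = math.pi ** 2 / 3   # level-1 variance implied by the standard logistic distribution

vpc = sigma2_hospital / (sigma2_hospital + sigma2_residual)
print(f"Share of variance at organisation level: {vpc:.1%}")   # about 0.5% for this hypothetical value
```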

  1. On the Variability and Correlation of Surface Ozone and Carbon Monoxide Observed in Hong Kong Using Trajectory and Regression Analyses

    Institute of Scientific and Technical Information of China (English)

    WANG Tijian(王体健); K. S. LAM; C. W. TSANG; S. C. KOT

    2004-01-01

    This paper investigates the variability and correlation of surface ozone (O3) and carbon monoxide (CO) observed at Cape D'Aguilar in Hong Kong from 1 January 1994 to 31 December 1995. Statistical analysis shows that the average O3 and CO mixing ratios during the two years are 32±17 ppbv and 305±191 ppbv, respectively. The O3/CO ratio ranges from 0.05 to 0.6 ppbv/ppbv with its frequency peaking at 0.15. The raw dataset is divided into six groups using backward trajectory and cluster analyses. For data assigned to the same trajectory type, three groups are further sorted out based on CO and NOx mixing ratios. The correlation coefficients and slopes of O3/CO for the 18 groups are calculated using linear regression analysis. Finally, five kinds of air masses with different chemical features are identified: continental background (CB), marine background (MB), regional polluted continental (RPC), perturbed marine (PM), and local polluted (LP) air masses. Further studies indicate that O3 and CO in the continental and marine background air masses (CB and MB) are positively correlated for the reason that they are well mixed over the long-range transport before arriving at the site. The negative correlation between O3 and CO in air mass LP is believed to be associated with heavy anthropogenic influence, which results from the enhancement by local sources as indicated by high CO and NOx and depletion of O3 when mixed with fresh emissions. The positive correlation in the perturbed marine air mass PM favors the low photochemical production of O3. The negative correlation found in the regional polluted continental air mass RPC is different from the observations at Oki Island in Japan due to the more complex O3 chemistry at Cape D'Aguilar.

  2. Modeling Heterogeneity in Relationships between Initial Status and Rates of Change: Treating Latent Variable Regression Coefficients as Random Coefficients in a Three-Level Hierarchical Model

    Science.gov (United States)

    Choi, Kilchan; Seltzer, Michael

    2010-01-01

    In studies of change in education and numerous other fields, interest often centers on how differences in the status of individuals at the start of a period of substantive interest relate to differences in subsequent change. In this article, the authors present a fully Bayesian approach to estimating three-level Hierarchical Models in which latent…

  3. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  4. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    Full Text Available This paper aims to present the specifics of the application of the multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is modeled as a function of the foreign trade balance on the one hand and of credit cards, i.e. indebtedness of the population on this basis, on the other hand, in the USA from 1999 to 2008. We used the extended application model, which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended for the study of the compatibility of the data and the model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model is carried out with the use of the Statistics PASW 17 program.
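
    The workflow described (fit a multiple linear regression, then check the residuals for compatibility of data and model) can be sketched with statsmodels; the series below are synthetic stand-ins for GDP, the foreign trade balance and household credit-card debt, not the study's data.

```python
# Sketch: multiple linear regression of GDP on trade balance and credit-card debt, with residual checks.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(4)
n = 40  # e.g. quarterly observations, 1999-2008 (synthetic)
trade_balance = rng.normal(-50, 10, n)
card_debt = rng.normal(800, 60, n)
gdp = 9000 + 5.0 * trade_balance + 2.0 * card_debt + rng.normal(0, 40, n)

X = sm.add_constant(pd.DataFrame({"trade_balance": trade_balance, "card_debt": card_debt}))
fit = sm.OLS(gdp, X).fit()
print(fit.summary())

residuals = fit.resid                               # residual analysis: normality and homoscedasticity
print(jarque_bera(residuals)[:2])                   # Jarque-Bera statistic and p-value
print(het_breuschpagan(residuals, X))               # Breusch-Pagan test against the regressors
```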

  5. Single-cell expression analyses during cellular reprogramming reveal an early stochastic and a late hierarchic phase

    NARCIS (Netherlands)

    Buganim, Y.; Faddah, D.A.; Cheng, A.W.; Itskovich, E.; Markoulaki, S.; Ganz, K.; Klemm, S.L.; van Oudenaarden, A.; Jaenisch, R.

    2012-01-01

    During cellular reprogramming, only a small fraction of cells become induced pluripotent stem cells (iPSCs). Previous analyses of gene expression during reprogramming were based on populations of cells, impeding single-cell level identification of reprogramming events. We utilized two gene

  6. Principal Component and Multiple Regression Analyses for the Estimation of Suspended Sediment Yield in Ungauged Basins of Northern Thailand

    Directory of Open Access Journals (Sweden)

    Piyawat Wuttichaikitcharoen

    2014-08-01

    Full Text Available Predicting sediment yield is necessary for good land and water management in any river basin. However, sometimes, the sediment data is either not available or is sparse, which renders estimating sediment yield a daunting task. The present study investigates the factors influencing suspended sediment yield using the principal component analysis (PCA. Additionally, the regression relationships for estimating suspended sediment yield, based on the selected key factors from the PCA, are developed. The PCA shows six components of key factors that can explain at least up to 86.7% of the variation of all variables. The regression models show that basin size, channel network characteristics, land use, basin steepness and rainfall distribution are the key factors affecting sediment yield. The validation of regression relationships for estimating suspended sediment yield shows the error of estimation ranging from −55% to +315% and −59% to +259% for suspended sediment yield and for area-specific suspended sediment yield, respectively. The proposed relationships may be considered useful for predicting suspended sediment yield in ungauged basins of Northern Thailand that have geologic, climatic and hydrologic conditions similar to the study area.
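
    The two-step approach (PCA to identify key factors, then regression on the retained components) can be sketched with scikit-learn; the catchment attributes and sediment-yield response below are synthetic placeholders, not the study's data.

```python
# Sketch: principal component analysis followed by regression for suspended sediment yield.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_basins, n_attrs = 30, 12
attrs = rng.normal(size=(n_basins, n_attrs))          # basin size, slope, land use, rainfall indices, ...
sed_yield = 2.0 * attrs[:, 0] - 1.0 * attrs[:, 3] + rng.normal(0, 0.5, n_basins)

Z = StandardScaler().fit_transform(attrs)
pca = PCA(n_components=6).fit(Z)                      # keep the components explaining most variance
print("cumulative explained variance:", pca.explained_variance_ratio_.cumsum()[-1])

scores = pca.transform(Z)
reg = LinearRegression().fit(scores, sed_yield)
print("R^2 on the retained components:", reg.score(scores, sed_yield))
```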

  7. Modeling type 1 and type 2 diabetes mellitus incidence in youth: an application of Bayesian hierarchical regression for sparse small area data.

    Science.gov (United States)

    Song, Hae-Ryoung; Lawson, Andrew; D'Agostino, Ralph B; Liese, Angela D

    2011-03-01

    Sparse count data violate assumptions of traditional Poisson models due to the excessive number of zeros, and modeling sparse data becomes challenging. However, since aggregation to reduce sparseness may result in biased estimates of risk, solutions need to be found at the level of disaggregated data. We investigated different statistical approaches within a Bayesian hierarchical framework for modeling sparse data without aggregation of data. We compared our proposed models with the traditional Poisson model and the zero-inflated model based on simulated data. We applied the statistical models to type 1 and type 2 diabetes in youth aged 10-19 years, both rare diseases, and compared the models using the inference results and various model diagnostic tools. We showed that one of the models we proposed, a sparse Poisson convolution model, performed better than the other models in the simulation and application based on the deviance information criterion (DIC) and the mean squared prediction error.
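
    One way to write down a model from this general family (not the authors' exact sparse Poisson convolution model) is a zero-inflated Poisson with area-level random effects in PyMC; the counts, expected cases and area index below are synthetic.

```python
# Sketch: Bayesian hierarchical zero-inflated Poisson model for sparse small-area counts (PyMC).
import numpy as np
import pymc as pm

rng = np.random.default_rng(6)
n_areas = 50
expected = rng.uniform(0.2, 3.0, n_areas)                          # expected cases from population at risk
area_idx = np.arange(n_areas)                                      # maps each observation to its area
y = rng.poisson(0.8 * expected) * rng.binomial(1, 0.7, n_areas)    # sparse observed counts

with pm.Model():
    a = pm.Normal("a", 0.0, 2.0)                                   # baseline log relative risk
    sigma_u = pm.HalfNormal("sigma_u", 1.0)
    u = pm.Normal("u", 0.0, sigma_u, shape=n_areas)                # area-level random effects
    mu = pm.math.exp(a + u[area_idx] + np.log(expected))           # Poisson mean with offset
    psi = pm.Beta("psi", 2.0, 2.0)                                 # probability of the Poisson (non-zero) state
    pm.ZeroInflatedPoisson("y", psi=psi, mu=mu, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9)

print(idata.posterior["sigma_u"].mean().item())
```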

  8. Genetic and economic analyses of sow replacement rates in the commercial tier of a hierarchical swine breeding structure.

    Science.gov (United States)

    Faust, M A; Robison, O W; Tess, M W

    1993-06-01

    Commercial-level sow replacement rates were investigated for a 10-yr planning horizon using a stochastic life-cycle swine production model. A three-tiered breeding structure was modeled for the production of market hogs in a three-breed static crossing scheme. Growth and reproductive traits of individual pigs were simulated using genetic, environmental, and economic parameters. Culling was after a maximum of 1, 5, or 10 parities in commercial levels within 1- and 5-parity nucleus and 1-, 5-, and 10-parity multiplier combinations. Yearly changes and average phenotypic levels were computed for pig and sow performance and economic measures. For growth traits, greater commercial level response was for systems with higher sow replacement rates, 110 to 115% of lowest response. Phenotypic changes in net returns ranged from $.85 to 1.01 x pig-1 x yr-1. Average growth performances were highest for systems with greatest genetic trend. Highest kilograms.sow-1 x year-1 finished was for 10-parity commercial alternatives. System differences in total costs and returns per pig resulted primarily from differences in replacement costs. Removal of the gilt system from analyses often reduced ranges among systems for economic measures by more than 70%. Systems with the lowest commercial replacement rates were most profitable. Within these systems, those with higher genetic change had highest net returns. For high replacement rates, no more than 175% of market value could be paid for gilts, but with lower sow replacement rates commercial units could justify as much as 450%.

  9. Household Food Waste: Multivariate Regression and Principal Components Analyses of Awareness and Attitudes among U.S. Consumers.

    Science.gov (United States)

    Qi, Danyi; Roe, Brian E

    2016-01-01

    We estimate models of consumer food waste awareness and attitudes using responses from a national survey of U.S. residents. Our models are interpreted through the lens of several theories that describe how pro-social behaviors relate to awareness, attitudes and opinions. Our analysis of patterns among respondents' food waste attitudes yields a model with three principal components: one that represents perceived practical benefits households may lose if food waste were reduced, one that represents the guilt associated with food waste, and one that represents whether households feel they could be doing more to reduce food waste. We find our respondents express significant agreement that some perceived practical benefits are ascribed to throwing away uneaten food, e.g., nearly 70% of respondents agree that throwing away food after the package date has passed reduces the odds of foodborne illness, while nearly 60% agree that some food waste is necessary to ensure meals taste fresh. We identify that these attitudinal responses significantly load onto a single principal component that may represent a key attitudinal construct useful for policy guidance. Further, multivariate regression analysis reveals a significant positive association between the strength of this component and household income, suggesting that higher income households most strongly agree with statements that link throwing away uneaten food to perceived private benefits.

  10. Household Food Waste: Multivariate Regression and Principal Components Analyses of Awareness and Attitudes among U.S. Consumers

    Science.gov (United States)

    2016-01-01

    We estimate models of consumer food waste awareness and attitudes using responses from a national survey of U.S. residents. Our models are interpreted through the lens of several theories that describe how pro-social behaviors relate to awareness, attitudes and opinions. Our analysis of patterns among respondents’ food waste attitudes yields a model with three principal components: one that represents perceived practical benefits households may lose if food waste were reduced, one that represents the guilt associated with food waste, and one that represents whether households feel they could be doing more to reduce food waste. We find our respondents express significant agreement that some perceived practical benefits are ascribed to throwing away uneaten food, e.g., nearly 70% of respondents agree that throwing away food after the package date has passed reduces the odds of foodborne illness, while nearly 60% agree that some food waste is necessary to ensure meals taste fresh. We identify that these attitudinal responses significantly load onto a single principal component that may represent a key attitudinal construct useful for policy guidance. Further, multivariate regression analysis reveals a significant positive association between the strength of this component and household income, suggesting that higher income households most strongly agree with statements that link throwing away uneaten food to perceived private benefits. PMID:27441687

  11. Household Food Waste: Multivariate Regression and Principal Components Analyses of Awareness and Attitudes among U.S. Consumers.

    Directory of Open Access Journals (Sweden)

    Danyi Qi

    Full Text Available We estimate models of consumer food waste awareness and attitudes using responses from a national survey of U.S. residents. Our models are interpreted through the lens of several theories that describe how pro-social behaviors relate to awareness, attitudes and opinions. Our analysis of patterns among respondents' food waste attitudes yields a model with three principal components: one that represents perceived practical benefits households may lose if food waste were reduced, one that represents the guilt associated with food waste, and one that represents whether households feel they could be doing more to reduce food waste. We find our respondents express significant agreement that some perceived practical benefits are ascribed to throwing away uneaten food, e.g., nearly 70% of respondents agree that throwing away food after the package date has passed reduces the odds of foodborne illness, while nearly 60% agree that some food waste is necessary to ensure meals taste fresh. We identify that these attitudinal responses significantly load onto a single principal component that may represent a key attitudinal construct useful for policy guidance. Further, multivariate regression analysis reveals a significant positive association between the strength of this component and household income, suggesting that higher income households most strongly agree with statements that link throwing away uneaten food to perceived private benefits.

  12. Systematic Selection of Key Logistic Regression Variables for Risk Prediction Analyses: A Five-Factor Maximum Model.

    Science.gov (United States)

    Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J

    2017-08-16

    The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return to sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. The data sources included Pubmed and PMC Pubmed Central articles on the topic. Therefore, we present a systematic approach to injury risk analyses and how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model, a maximum of 5 variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how this model may guide development of the second ACL injury risk analysis, and how the five-factor maximum model may be applied across the injury spectrum for development of the injury risk analysis.

  13. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Science.gov (United States)

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2017-07-26

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R(2)), using R(2) as the primary metric of assay agreement. However, the use of R(2) alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
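
    The complementary statistics discussed can be computed directly: Bland-Altman bias and limits of agreement, a Deming fit (errors in both methods), and simple linear regression. The sketch below uses synthetic variant-allele-fraction-like data from two hypothetical assays, with the error-variance ratio for the Deming fit assumed equal to 1.

```python
# Sketch: Bland-Altman, Deming regression and simple linear regression for method comparison.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
truth = rng.uniform(0.05, 0.95, 80)                     # e.g. variant allele fractions
assay_ref = truth + rng.normal(0, 0.02, truth.size)     # previously validated method
assay_new = 0.02 + 0.97 * truth + rng.normal(0, 0.02, truth.size)   # assay under validation

# Bland-Altman: bias and 95% limits of agreement
diff = assay_new - assay_ref
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.3f}, LoA = [{bias - 1.96*sd:.3f}, {bias + 1.96*sd:.3f}]")

# Deming regression (error-variance ratio lambda assumed equal to 1)
x, y = assay_ref, assay_new
sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
intercept = y.mean() - slope * x.mean()
print(f"Deming: slope = {slope:.3f}, intercept = {intercept:.3f}")

# Simple linear regression and R^2 (useful, but not sufficient on its own)
ols = linregress(x, y)
print(f"OLS: slope = {ols.slope:.3f}, intercept = {ols.intercept:.3f}, R^2 = {ols.rvalue**2:.4f}")
```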

  14. Advancing the Parameter-elevation Regressions on Independent Slopes Model (PRISM) to Accommodate Atmospheric River Influences Using a Hierarchical Estimation Structure

    Science.gov (United States)

    Hsu, C.; Cifelli, R.; Zamora, R. J.; Schneider, T.

    2014-12-01

    The PRISM monthly climatology has been widely used by various agencies for diverse purposes. In the River Forecast Centers (RFCs), the PRISM monthly climatology is used to support tasks such as QPE, or quality control of point precipitation observation, and fine tune QPFs. Validation studies by forecasters and researchers have shown that interpolation involving PRISM climatology can effectually reduce the estimation bias for the locations where moderate or little orographic phenomena occur. However, many studies have pointed out limitations in PRISM monthly climatology. These limitations are especially apparent in storm events with fast-moving wet air masses or with storm tracks that are different from climatology. In order to upgrade PRISM climatology so it possesses the capability to characterize the climatology of storm events, it is critical to integrate large-scale atmospheric conditions with the original PRISM predictor variables and to simulate them at a temporal resolution higher than monthly. To this end, a simple, flexible, and powerful framework for precipitation estimation modeling that can be applied to very large data sets is thus developed. In this project, a decision tree based estimation structure was developed to perform the aforementioned variable integration work. Three Atmospheric River events (ARs) were selected to explore the hierarchical relationships among these variables and how these relationships shape the event-based precipitation distribution pattern across California. Several atmospheric variables, including vertically Integrated Vapor Transport (IVT), temperature, zonal wind (u), meridional wind (v), and omega (ω), were added to enhance the sophistication of the tree-based structure in estimating precipitation. To develop a direction-based climatology, the directions the ARs moving over the Pacific Ocean were also calculated and parameterized within the tree estimation structure. The results show that the involvement of the

  15. Classification and regression tree (CART) analyses of genomic signatures reveal sets of tetramers that discriminate temperature optima of archaea and bacteria

    Directory of Open Access Journals (Sweden)

    Betsey Dexter Dyer

    2008-01-01

    Full Text Available Classification and regression tree (CART) analysis was applied to genome-wide tetranucleotide frequencies (genomic signatures) of 195 archaea and bacteria. Although genomic signatures have typically been used to classify evolutionary divergence, in this study, convergent evolution was the focus. Temperature optima for most of the organisms examined could be distinguished by CART analyses of tetranucleotide frequencies. This suggests that pervasive (nonlinear) qualities of genomes may reflect certain environmental conditions (such as temperature) in which those genomes evolved. The predominant use of GAGA and AGGA as the discriminating tetramers in CART models suggests that purine-loading and codon biases of thermophiles may explain some of the results.
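
    A minimal version of the CART step, on fabricated data rather than the original genomes, with scikit-learn: tetramer frequencies as features, a thermophile/mesophile label as the target, and a shallow tree whose top splits play the role of discriminating tetramers such as GAGA or AGGA.

```python
# Sketch: classification tree on genome-wide tetranucleotide frequencies vs. temperature-optimum class.
import numpy as np
from itertools import product
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(8)
tetramers = ["".join(p) for p in product("ACGT", repeat=4)]       # 256 possible tetranucleotides
n_genomes = 195
freqs = rng.dirichlet(np.ones(len(tetramers)), size=n_genomes)    # synthetic genomic signatures
is_thermophile = rng.binomial(1, 0.3, n_genomes)
purine_bias = 0.004 * is_thermophile                              # fake signal injected on GAGA / AGGA
freqs[:, tetramers.index("GAGA")] += purine_bias
freqs[:, tetramers.index("AGGA")] += purine_bias

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(freqs, is_thermophile)
print(export_text(tree, feature_names=tetramers))                 # inspect which tetramers discriminate
```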

  16. The more total cognitive load is reduced by cues, the better retention and transfer of multimedia learning: A meta-analysis and two meta-regression analyses.

    Science.gov (United States)

    Xie, Heping; Wang, Fuxing; Hao, Yanbin; Chen, Jiaxue; An, Jing; Wang, Yuxin; Liu, Huashan

    2017-01-01

    Cueing facilitates retention and transfer of multimedia learning. From the perspective of cognitive load theory (CLT), cueing has a positive effect on learning outcomes because of the reduction in total cognitive load and avoidance of cognitive overload. However, this has not been systematically evaluated. Moreover, what remains ambiguous is the direct relationship between the cue-related cognitive load and learning outcomes. A meta-analysis and two subsequent meta-regression analyses were conducted to explore these issues. Subjective total cognitive load (SCL) and scores on a retention test and transfer test were selected as dependent variables. Through a systematic literature search, 32 eligible articles encompassing 3,597 participants were included in the SCL-related meta-analysis. Among them, 25 articles containing 2,910 participants were included in the retention-related meta-analysis and the following retention-related meta-regression, while there were 29 articles containing 3,204 participants included in the transfer-related meta-analysis and the transfer-related meta-regression. The meta-analysis revealed a statistically significant cueing effect on subjective ratings of cognitive load (d = -0.11, 95% CI = [-0.19, -0.02], p multimedia materials can indeed reduce SCL and promote learning outcomes, and the more SCL is reduced by cues, the better retention and transfer of multimedia learning.
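
    A random-effects meta-analysis and a simple meta-regression of effect sizes on a study-level moderator (here, the reduction in subjective cognitive load) can be sketched with the DerSimonian-Laird estimator; all effect sizes, variances and moderator values below are made up for illustration.

```python
# Sketch: DerSimonian-Laird random-effects meta-analysis plus a weighted meta-regression.
import numpy as np
import statsmodels.api as sm

d = np.array([0.35, 0.10, 0.55, 0.22, 0.48, 0.15])      # per-study standardized mean differences (made up)
v = np.array([0.02, 0.03, 0.04, 0.02, 0.05, 0.03])      # their sampling variances
scl_reduction = np.array([0.30, 0.05, 0.45, 0.15, 0.40, 0.10])   # moderator: cue-related SCL reduction

# Random-effects pooling (DerSimonian-Laird)
w = 1.0 / v
q = np.sum(w * (d - np.sum(w * d) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(d) - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * d) / w_re.sum()
print(f"tau^2 = {tau2:.3f}, pooled d = {pooled:.3f} (SE {np.sqrt(1 / w_re.sum()):.3f})")

# Meta-regression: effect size as a function of the moderator, weighted by inverse total variance
X = sm.add_constant(scl_reduction)
meta_reg = sm.WLS(d, X, weights=w_re).fit()
print(meta_reg.params)   # a positive slope would mean larger SCL reductions go with larger learning gains
```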

  17. Analyses of polycyclic aromatic hydrocarbon (PAH) and chiral-PAH analogues-methyl-β-cyclodextrin guest-host inclusion complexes by fluorescence spectrophotometry and multivariate regression analysis

    Science.gov (United States)

    Greene, LaVana; Elzey, Brianda; Franklin, Mariah; Fakayode, Sayo O.

    2017-03-01

    The negative health impact of polycyclic aromatic hydrocarbons (PAHs) and differences in pharmacological activity of enantiomers of chiral molecules in humans highlight the need for analysis of PAHs and their chiral analogue molecules in humans. Herein, the first use of cyclodextrin guest-host inclusion complexation, fluorescence spectrophotometry, and a chemometric approach for PAH (anthracene) and chiral-PAH analogue derivative (1-(9-anthryl)-2,2,2-trifluoroethanol (TFE)) analyses is reported. The binding constants (Kb), stoichiometry (n), and thermodynamic properties (Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS)) of anthracene and enantiomers of TFE-methyl-β-cyclodextrin (Me-β-CD) guest-host complexes were also determined. Chemometric partial-least-squares (PLS) regression analysis of emission spectra data of Me-β-CD guest-host inclusion complexes was used for the determination of anthracene and TFE enantiomer concentrations in Me-β-CD guest-host inclusion complex samples. The values of the calculated Kb and negative ΔG suggest the thermodynamic favorability of anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexation reactions. However, anthracene-Me-β-CD and enantiomer TFE-Me-β-CD inclusion complexations showed notable differences in binding affinity behaviors and thermodynamic properties. The PLS regression analysis resulted in squared correlation coefficients of 0.997530 or better and a low LOD of 3.81 × 10−7 M for anthracene and 3.48 × 10−8 M for TFE enantiomers at physiological conditions. Most importantly, PLS regression accurately determined the anthracene and TFE enantiomer concentrations with an average low error of 2.31% for anthracene, 4.44% for R-TFE and 3.60% for S-TFE. The results of the study are highly significant because of its high sensitivity and accuracy for analysis of PAH and chiral PAH analogue derivatives without the need of an expensive chiral column, enantiomeric resolution, or use of a

  18. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    Science.gov (United States)

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposures to bisphenol-A, a weak estrogenic chemical, largely used for the production of plastic containers, can affect the rodent behaviour. Thus, we examined the relationships between bisphenol-A and the anxiety-like behaviour, spatial skills, and aggressiveness, in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A, while pregnant and/or lactating, by median and linear splines analyses. Subsequently, the meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A with the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low-doses. Conversely, a J-dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time that they employed to manifest an attitude was analysed, the meta-regression indicated that a borderline significant increment of anxiogenic-like effects was present at low-doses regardless of sexes (β)=-0.8%, 95% C.I. -1.7/0.1, P=0.076, at ≤120 μg bisphenol-A. Whereas, only bisphenol-A-males exhibited a significant inhibition of spatial skills (β)=0.7%, 95% C.I. 0.2/1.2, P=0.004, at ≤100 μg/day. A significant increment of aggressiveness was observed in both the sexes (β)=67.9,C.I. 3.4, 172.5, P=0.038, at >4.0 μg. Then, bisphenol-A treatments significantly abrogated spatial learning and ability in males (P<0.001 vs. females). Overall, our study showed that developmental exposures to low-doses of bisphenol-A, e.g. ≤120 μg/day, were associated to behavioural aberrations in offspring.

  19. Elaborate ligand-based modeling coupled with multiple linear regression and k nearest neighbor QSAR analyses unveiled new nanomolar mTOR inhibitors.

    Science.gov (United States)

    Khanfar, Mohammad A; Taha, Mutasem O

    2013-10-28

    The mammalian target of rapamycin (mTOR) has an important role in cell growth, proliferation, and survival. mTOR is frequently hyperactivated in cancer, and therefore, it is a clinically validated target for cancer therapy. In this study, we combined exhaustive pharmacophore modeling and quantitative structure-activity relationship (QSAR) analysis to explore the structural requirements for potent mTOR inhibitors employing 210 known mTOR ligands. Genetic function algorithm (GFA) coupled with k nearest neighbor (kNN) and multiple linear regression (MLR) analyses were employed to build self-consistent and predictive QSAR models based on optimal combinations of pharmacophores and physicochemical descriptors. Successful pharmacophores were complemented with exclusion spheres to optimize their receiver operating characteristic curve (ROC) profiles. Optimal QSAR models and their associated pharmacophore hypotheses were validated by identification and experimental evaluation of several new promising mTOR inhibitory leads retrieved from the National Cancer Institute (NCI) structural database. The most potent hit illustrated an IC50 value of 48 nM.

  20. Price promotions on healthier compared with less healthy foods: a hierarchical regression analysis of the impact on sales and social patterning of responses to promotions in Great Britain

    Science.gov (United States)

    Nakamura, Ryota; Suhrcke, Marc; Jebb, Susan A; Pechey, Rachel; Almiron-Roig, Eva; Marteau, Theresa M

    2015-01-01

    Background: There is a growing concern, but limited evidence, that price promotions contribute to a poor diet and the social patterning of diet-related disease. Objective: We examined the following questions: 1) Are less-healthy foods more likely to be promoted than healthier foods? 2) Are consumers more responsive to promotions on less-healthy products? 3) Are there socioeconomic differences in food purchases in response to price promotions? Design: With the use of hierarchical regression, we analyzed data on purchases of 11,323 products within 135 food and beverage categories from 26,986 households in Great Britain during 2010. Major supermarkets operated the same price promotions in all branches. The number of stores that offered price promotions on each product for each week was used to measure the frequency of price promotions. We assessed the healthiness of each product by using a nutrient profiling (NP) model. Results: A total of 6788 products (60%) were in healthier categories and 4535 products (40%) were in less-healthy categories. There was no significant gap in the frequency of promotion by the healthiness of products neither within nor between categories. However, after we controlled for the reference price, price discount rate, and brand-specific effects, the sales uplift arising from price promotions was larger in less-healthy than in healthier categories; a 1-SD point increase in the category mean NP score, implying the category becomes less healthy, was associated with an additional 7.7–percentage point increase in sales (from 27.3% to 35.0%; P sales uplift from promotions was larger for higher–socioeconomic status (SES) groups than for lower ones (34.6% for the high-SES group, 28.1% for the middle-SES group, and 23.1% for the low-SES group). Finally, there was no significant SES gap in the absolute volume of purchases of less-healthy foods made on promotion. Conclusion: Attempts to limit promotions on less-healthy foods could improve the

  1. High Adherence to Iron/Folic Acid Supplementation during Pregnancy Time among Antenatal and Postnatal Care Attendant Mothers in Governmental Health Centers in Akaki Kality Sub City, Addis Ababa, Ethiopia: Hierarchical Negative Binomial Poisson Regression

    Science.gov (United States)

    Gebreamlak, Bisratemariam; Dadi, Abel Fekadu; Atnafu, Azeb

    2017-01-01

    Background Iron deficiency during pregnancy is a risk factor for anemia, preterm delivery, and low birth weight. Iron/Folic Acid supplementation with optimal adherence can effectively prevent anemia in pregnancy. However, studies that address this area of adherence are very limited. Therefore, the current study was conducted to assess the adherence and to identify factors associated with the number of Iron/Folic Acid uptakes during pregnancy among mothers attending antenatal and postnatal care follow-up in Akaki Kality sub city. Methods An institution-based cross-sectional study was conducted on a sample of 557 pregnant women attending antenatal and postnatal care services. Systematic random sampling was used to select study subjects. The mothers were interviewed and the collected data were cleaned and entered into Epi Info 3.5.1 and analyzed with R version 3.2.0. A Hierarchical Negative Binomial Poisson Regression Model was fitted to identify the factors associated with the number of Iron/Folic Acid uptakes. Adjusted incidence rate ratios (IRR) with 95% confidence intervals (CI) were computed to assess the strength and significance of the association. Result More than 90% of the mothers were supplemented with at least one Iron/Folic Acid pill per week during their pregnancy. Sixty percent of the mothers adhered (took four or more tablets per week) (95% CI, 56%-64.1%). Higher IRR of Iron/Folic Acid supplementation was observed among women who received health education, who were privately employed, who achieved secondary education, and who believed that Iron/Folic Acid supplements increase blood, whereas mothers who reported a side effect, who were from families with relatively better monthly income, and who took the supplement when sick were more likely to adhere. Conclusion Adherence to Iron/Folic Acid supplementation during pregnancy among mothers attending antenatal and postnatal care was found to be high. Activities that would address the
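
    A count outcome such as the number of Iron/Folic Acid tablets taken per week is naturally modelled with negative binomial regression reported as incidence rate ratios. The snippet below is only a flat (non-hierarchical) sketch with statsmodels on synthetic data; the predictor names are illustrative.

```python
# Sketch: negative binomial regression of weekly Iron/Folic Acid tablet counts, reported as IRRs.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 557
df = pd.DataFrame({
    "health_education": rng.binomial(1, 0.5, n),
    "secondary_edu": rng.binomial(1, 0.4, n),
    "side_effects": rng.binomial(1, 0.2, n),
})
rate = np.exp(1.0 + 0.3 * df.health_education + 0.2 * df.secondary_edu - 0.25 * df.side_effects)
df["tablets_per_week"] = rng.poisson((rate * rng.gamma(2.0, 0.5, n)).to_numpy())   # overdispersed counts

nb = smf.glm("tablets_per_week ~ health_education + secondary_edu + side_effects",
             data=df, family=sm.families.NegativeBinomial()).fit()
print(np.exp(nb.params))       # incidence rate ratios
print(np.exp(nb.conf_int()))   # 95% confidence intervals for the IRRs
```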

  2. Hierarchical photocatalysts.

    Science.gov (United States)

    Li, Xin; Yu, Jiaguo; Jaroniec, Mietek

    2016-05-01

    As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. Especially, the different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating method, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities in designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells.

  3. Longitudinal hierarchical linear modeling analyses of California Psychological Inventory data from age 33 to 75: an examination of stability and change in adult personality.

    Science.gov (United States)

    Jones, Constance J; Livson, Norman; Peskin, Harvey

    2003-06-01

    Twenty aspects of personality assessed via the California Psychological Inventory (CPI; Gough & Bradley, 1996) from age 33 to 75 were examined in a sample of 279 individuals. Oakland Growth Study and Berkeley Guidance Study members completed the CPI a maximum of 4 times. We used longitudinal hierarchical linear modeling (HLM) to ask the following: Which personality characteristics change and which do not? Five CPI scales showed uniform lack of change, 2 showed heterogeneous change giving an averaged lack of change, 4 showed linear increases with age, 2 showed linear decreases with age, 4 showed gender or sample differences in linear change, 1 showed a quadratic peak, and 2 showed a quadratic nadir. The utility of HLM becomes apparent in portraying the complexity of personality change and stability.
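    A hedged sketch of the kind of longitudinal hierarchical linear (mixed-effects) growth model used above: a CPI-like scale score regressed on age, with person-specific random intercepts and slopes. The data and variable names are simulated placeholders, not the study's measurements.

```python
# Mixed-effects growth model: fixed linear effect of age plus random
# intercepts and slopes per participant (a basic HLM for repeated measures).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, waves = 100, np.array([33, 42, 61, 75])     # assessment ages
pid = np.repeat(np.arange(n_people), waves.size)
age_c = np.tile(waves, n_people) - waves[0]           # age centered at 33
u0 = rng.normal(0, 3.0, n_people)[pid]                # person-level intercepts
u1 = rng.normal(0, 0.05, n_people)[pid]               # person-level slopes
score = 50 + 0.1 * age_c + u0 + u1 * age_c + rng.normal(0, 2.0, pid.size)
df = pd.DataFrame({"pid": pid, "age_c": age_c, "score": score})

m = smf.mixedlm("score ~ age_c", df, groups=df["pid"], re_formula="~age_c").fit()
print(m.summary())   # the age_c coefficient is the average linear change per year
```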

  4. Principal factor and hierarchical cluster analyses for the performance assessment of an urban wastewater treatment plant in the Southeast of Spain.

    Science.gov (United States)

    Bayo, Javier; López-Castellanos, Joaquín

    2016-07-01

    Process performance and operation of wastewater treatment plants (WWTP) are monitored to ensure compliance with legislative requirements imposed by the European Union. Because a large number of variables are measured daily, a coherent and structured approach is required to understand the inherent behavior and performance efficiency of such a system. Both principal factor analysis (PFA) and hierarchical cluster analysis (HCA) are multivariate techniques that have been widely applied to extract and structure information for different purposes. In this paper, both statistical tools are applied to an urban WWTP situated in the Southeast of Spain, a zone with special characteristics related to the geochemical background composition of the water and an important use of fertilizers. Four main factors were extracted, associated with nutrients, the ionic component, the organic load to the WWTP, and the efficiency of the whole process. HCA allowed distinguishing between influent and effluent parameters, although a deeper examination resulted in a dendrogram with groupings similar to those previously reported for PFA.
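    An illustrative sketch of the two multivariate tools named above; principal factor extraction is approximated here with PCA on standardized variables, and the hierarchical clustering is applied to the variables rather than the daily samples. The data dimensions and column structure are hypothetical.

```python
# PCA (as a stand-in for principal factor analysis) plus hierarchical
# clustering of standardized WWTP-style variables.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 8))            # 120 daily samples, 8 measured variables
Xz = StandardScaler().fit_transform(X)

pca = PCA(n_components=4).fit(Xz)        # four "factors", as in the study
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))

Z = linkage(Xz.T, method="ward")         # cluster the variables, not the samples
print("variable clusters:", fcluster(Z, t=2, criterion="maxclust"))
```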

  5. Personal, social, and game-related correlates of active and non-active gaming among Dutch gaming adolescents: survey-based multivariable, multilevel logistic regression analyses.

    Science.gov (United States)

    Simons, Monique; de Vet, Emely; Chinapaw, Mai Jm; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes

    2014-04-04

    Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games, active games, seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8; P<.001), a less positive attitude toward non-active games (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008), having brothers/sisters (OR 6.7, CI 2.6-17.1; P<.001) who spend time on active gaming, and a slightly lower score on game engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035), a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3; P<.001), … gaming (OR 3.3, CI 1.46-7.53; P=.004), and a more positive image of a non-active gamer (OR 2, CI 1.07-3.75; P=.03). Various factors were significantly associated with active gaming ≥1 h/wk and non-active gaming >7 h/wk. Active gaming is most strongly (negatively) associated with attitude with respect to non-active games, followed by observed active game behavior of brothers and sisters and attitude with respect to active gaming (positive associations). On the other hand, non

  6. Collaborative Hierarchical Sparse Modeling

    CERN Document Server

    Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina C

    2010-01-01

    Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is done by solving an l_1-regularized linear regression problem, usually called Lasso. In this work we first combine the sparsity-inducing property of the Lasso model, at the individual feature level, with the block-sparsity property of the group Lasso model, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the hierarchical Lasso, which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower one. Signals then share the same active groups, or classes, but not necessarily the same active set. This is very well suited for applications such as source separation. An efficient optimization procedure, which guarantees convergence to the global opt...
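    The encoding step that the abstract builds on, the l1-regularized (Lasso) regression of a signal against a dictionary, can be sketched in a few lines; the group/hierarchical extension itself is not implemented here, and the dictionary and signal are random placeholders.

```python
# Sparse coding of one signal y over a random dictionary D via Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
D = rng.normal(size=(64, 256))                 # 64-dim signal space, 256 atoms
x_true = np.zeros(256)
x_true[[3, 50, 200]] = [1.5, -2.0, 1.0]        # a 3-sparse code
y = D @ x_true + 0.01 * rng.normal(size=64)

code = Lasso(alpha=0.05, max_iter=10000).fit(D, y).coef_
print("non-zero atoms found:", np.flatnonzero(code))
```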

  7. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: insights into spatial variability using high-resolution satellite data.

    Science.gov (United States)

    Alexeeff, Stacey E; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A

    2015-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with >0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the SEs. Land use regression models performed better in chronic effect simulations. These results can help researchers when interpreting health effect estimates in these types of studies.
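    The simulation logic described above can be caricatured in a few lines: generate a "true" exposure, replace it with an error-prone prediction, and observe the bias in the estimated health effect. Everything below (error structure, effect size, sample size) is an arbitrary illustrative choice, not the authors' setup.

```python
# Toy exposure-measurement-error simulation for a linear health model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, beta_true = 5000, 0.10
true_pm25 = rng.gamma(shape=4, scale=2.5, size=n)          # "true" exposure
predicted = 0.7 * true_pm25 + rng.normal(0, 2, n) + 3      # smoothed + noisy surrogate
y = 50 + beta_true * true_pm25 + rng.normal(0, 5, n)       # health outcome

fit = sm.OLS(y, sm.add_constant(predicted)).fit()
print(f"true beta = {beta_true}, estimated beta = {fit.params[1]:.3f}")
```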

  8. Logistic versus Hazards Regression Analyses in Evaluation Research: An Exposition and Application to the North Carolina Court Counselors' Intensive Protective Supervision Project.

    Science.gov (United States)

    Land, Kenneth C.; And Others

    1994-01-01

    Advantages of using logistic and hazards regression techniques in assessing the overall impact of a treatment program and the differential impact on client subgroups are examined and compared using data from a juvenile court program for status offenders. Implications are drawn for management and effectiveness of intensive supervision programs.…
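    The contrast discussed above can be made concrete with a small sketch: a logistic model for whether an event occurred by the end of follow-up versus a proportional hazards model for the time to the event. The simulated cohort and the "treatment" variable are placeholders, not the project's records.

```python
# Logistic regression (event by end of follow-up) vs. Cox-type hazards model
# (time to event with censoring) on the same simulated cohort.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
treated = rng.integers(0, 2, n).astype(float)
time_to_event = rng.exponential(scale=np.where(treated == 1, 18, 12))   # months
censor_time = 24.0
event = (time_to_event <= censor_time).astype(int)
time_obs = np.minimum(time_to_event, censor_time)

logit_fit = sm.Logit(event, sm.add_constant(treated)).fit(disp=0)
cox_fit = sm.PHReg(time_obs, treated[:, None], status=event).fit()
print("logistic odds ratio:", round(float(np.exp(logit_fit.params[1])), 2))
print("hazard ratio:", round(float(np.exp(cox_fit.params[0])), 2))
```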

  9. Empirical Study on the Driving Factors of China's Internet Insurance Based on Hierarchical Regression Analysis

    Institute of Scientific and Technical Information of China (English)

    汤英汉

    2015-01-01

    By analyzing the features and current state of China's internet insurance development, this paper finds that the main reason for the insurance industry's current weak growth is the increasingly prominent conflict between society's growing insurance needs, driven by a rapidly changing market environment, and relatively backward insurance management approaches. Internet insurance compensates for the shortcomings of traditional insurance and has become a new growth driver for the industry. Using hierarchical regression analysis on online insurance premiums and related data for 2003-2013, the paper finds that the driving factors of internet insurance are mainly tax, population, and internet penetration, while factors internal to the insurance industry show no significant effect. The study also indicates that internet insurance is neither a replacement for nor a threat to the traditional insurance business, but rather a response to newly emerging insurance needs that can satisfy demand for insurance at multiple levels. Finally, the author argues that, as a new form of insurance business, internet insurance encourages changes in the thinking and ideas of the insurance industry as a whole; internet technology has pushed the industry forward, especially in areas such as distribution channels and product and service innovation, thereby injecting fresh blood into China's insurance industry.
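    Hierarchical regression in this sense means entering predictor blocks sequentially and testing the increment in R². A hedged sketch of that procedure is below; the variable names loosely mirror the abstract, and the data are simulated rather than the paper's 2003-2013 series.

```python
# Blockwise (hierarchical) multiple regression with an F-test of the R^2 change.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "tax": rng.normal(size=n),
    "population": rng.normal(size=n),
    "internet_penetration": rng.normal(size=n),
})
df["online_premium"] = (0.4 * df["tax"] + 0.3 * df["population"]
                        + 0.6 * df["internet_penetration"] + rng.normal(size=n))

m1 = smf.ols("online_premium ~ tax + population", df).fit()
m2 = smf.ols("online_premium ~ tax + population + internet_penetration", df).fit()
print("R2 block 1:", round(m1.rsquared, 3), "| R2 block 2:", round(m2.rsquared, 3))
print("F-test of R2 change (F, p, df):", m2.compare_f_test(m1))
```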

  10. Logistic regression: a brief primer.

    Science.gov (United States)

    Stoltzfus, Jill C

    2011-10-01

    Regression techniques are versatile in their application to medical research because they can measure associations, predict outcomes, and control for confounding variable effects. As one such technique, logistic regression is an efficient and powerful way to analyze the effect of a group of independent variables on a binary outcome by quantifying each independent variable's unique contribution. Using components of linear regression reflected in the logit scale, logistic regression iteratively identifies the strongest linear combination of variables with the greatest probability of detecting the observed outcome. Important considerations when conducting logistic regression include selecting independent variables, ensuring that relevant assumptions are met, and choosing an appropriate model building strategy. For independent variable selection, one should be guided by such factors as accepted theory, previous empirical investigations, clinical considerations, and univariate statistical analyses, with acknowledgement of potential confounding variables that should be accounted for. Basic assumptions that must be met for logistic regression include independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. Additionally, there should be an adequate number of events per independent variable to avoid an overfit model, with commonly recommended minimum "rules of thumb" ranging from 10 to 20 events per covariate. Regarding model building strategies, the three general types are direct/standard, sequential/hierarchical, and stepwise/statistical, with each having a different emphasis and purpose. Before reaching definitive conclusions from the results of any of these methods, one should formally quantify the model's internal validity (i.e., replicability within the same data set) and external validity (i.e., generalizability beyond the current sample). The resulting logistic regression model
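    Two of the practical checks discussed in the primer, events per variable and multicollinearity, can be illustrated briefly; the data, predictors, and cut-offs below are invented and only meant to show where such checks sit relative to the model fit.

```python
# Events-per-variable and VIF checks around a multiple logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(8)
n = 600
X = pd.DataFrame(rng.normal(size=(n, 4)), columns=["age", "bmi", "dose", "stage"])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X["age"] - 0.4 * X["dose"]))))

print(f"events per variable: {y.sum() / X.shape[1]:.1f} (rule of thumb: 10-20+)")
Xc = sm.add_constant(X)
vifs = [round(variance_inflation_factor(Xc.values, i), 2) for i in range(1, Xc.shape[1])]
print("VIFs:", dict(zip(X.columns, vifs)))

fit = sm.Logit(y, Xc).fit(disp=0)
print(np.exp(fit.params))    # odds ratios
```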

  11. Quantile regression

    CERN Document Server

    Hao, Lingxin

    2007-01-01

    Quantile Regression, the first book of Hao and Naiman's two-book series, establishes the seldom recognized link between inequality studies and quantile regression models. Though separate methodological literature exists for each subject, the authors seek to explore the natural connections between this increasingly sought-after tool and research topics in the social sciences. Quantile regression as a method does not rely on assumptions as restrictive as those for the classical linear regression; though more traditional models such as least squares linear regression are more widely utilized, Hao
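    A minimal illustration of the method the book covers: quantile regression fits for several conditional quantiles of a heteroscedastic response, which a single least-squares mean fit would not distinguish. The data are simulated.

```python
# Quantile regression at the 10th, 50th and 90th conditional quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
x = rng.uniform(0, 10, 500)
y = 1 + 0.5 * x + rng.normal(0, 0.2 + 0.3 * x)    # spread grows with x
df = pd.DataFrame({"x": x, "y": y})

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("y ~ x", df).fit(q=q)
    print(f"tau = {q}: slope = {fit.params['x']:.2f}")
```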

  12. APPROACH OF FIVE-YEAR-AVERAGE HAZARD RATES FOR THE BREAST CANCER PATIENTS AND ANALYSES OF PROGNOSTIC FACTORS-AN APPLICATION OF COX REGRESSION MODEL

    Institute of Scientific and Technical Information of China (English)

    Gai Xueliang; Fan Zhimin; Liu Guojin; Jacques Brisson

    1998-01-01

    Objective: To compare five-year survival after surgery between the 116 breast cancer patients treated at the First Teaching Hospital (FTH) and the 866 breast cancer patients treated at Hopital du Saint-Sacrement (HSS). Methods: A Cox regression model was used, after eliminating confounders, to compare the five-year average hazard rates between the two hospitals and among the levels of prognostic factors. Results: A significant difference between the two hospitals was found for older patients (50 years old or more). Conclusion: Tumor size at pathology and involvement of lymph nodes were important prognostic factors.

  13. Prevalence of Cannabis Lifetime Use in Iranian High School and College Students: A Systematic Review, Meta-Analyses, and Meta-Regression.

    Science.gov (United States)

    Nazarzadeh, Milad; Bidel, Zeinab; Mosavi Jarahi, Alireza; Esmaeelpour, Keihan; Menati, Walieh; Shakeri, Ali Asghar; Menati, Rostam; Kikhavani, Sattar; Saki, Kourosh

    2015-09-01

    Cannabis is the most widely used substance in the world. This study aimed to estimate the prevalence of cannabis lifetime use (CLU) in high school and college students of Iran and also to determine factors related to changes in prevalence. A systematic review of the literature on cannabis use in Iran was conducted according to the MOOSE guideline. Domestic scientific databases, PubMed/Medline, ISI Web of Knowledge, and Google Scholar, relevant reference lists, and relevant journals were searched up to April, 2014. Prevalences were calculated using the variance stabilizing double arcsine transformation and confidence intervals (CIs) were estimated using the Wilson method. Heterogeneity was assessed by Cochran's Q statistic and the I² index, and causes of heterogeneity were evaluated using a meta-regression model. In the electronic database search, 4,000 citations were retrieved, producing a total of 33 studies. CLU was reported with a random effects pooled prevalence of 4.0% (95% CI = 3.0% to 5.0%). In subgroups of high school and college students, prevalences were 5.0% (95% CI = 3.0% to 7.0%) and 2.0% (95% CI = 2.0% to 3.0%), respectively. The meta-regression model indicated that prevalence is higher in college students (β = 0.089, p < .001), male gender (β = 0.017, p < .001), and is lower in studies with sampling versus census studies (β = -0.096, p < .001). This study reported that the prevalence of CLU in Iranian students is lower than in industrialized countries. In addition, gender, level of education, and methods of sampling are highly associated with changes in the prevalence of CLU across provinces.

  14. Genetic parameters for body weight, hip height, and the ratio of weight to hip height from random regression analyses of Brahman feedlot cattle.

    Science.gov (United States)

    Riley, D G; Coleman, S W; Chase, C C; Olson, T A; Hammond, A C

    2007-01-01

    The objective of this research was to assess the genetic control of BW, hip height, and the ratio of BW to hip height (n = 5,055) in Brahman cattle through 170 d on feed using covariance function-random regression models. A progeny test of Brahman sires (n = 27) generated records of Brahman steers and heifers (n = 724) over 7 yr. Each year after weaning, calves were assigned to feedlot pens, where they were fed a high-concentrate grain diet. Body weights and hip heights were recorded every 28 d until cattle reached a targeted fatness level. All calves had records through 170 d on feed; subsequent records were excluded. Models included contemporary group (sex-pen-year combinations, n = 63) and age at the beginning of the feeding period as a covariate. The residual error structure was modeled as a random effect, with 2 levels corresponding to two 85-d periods on feed. Information criterion values indicated that linear, random regression coefficients on Legendre polynomials of days on feed were most appropriate to model additive genetic effects for all 3 traits. Cubic (hip height and BW:hip height ratio) or quartic (BW) polynomials best modeled permanent environmental effects. Estimates of heritability across the 170-d feeding period ranged from 0.31 to 0.53 for BW, from 0.37 to 0.53 for hip height, and from 0.23 to 0.6 for BW:hip height ratio. Estimates of the permanent environmental proportion of phenotypic variance ranged from 0.44 to 0.58 for BW, 0.07 to 0.26 for hip height, and 0.30 to 0.48 for BW:hip height ratio. Within-trait estimates of genetic correlation on pairs of days on feed (at 28-d intervals) indicated lower associations of BW:hip height ratio EBV early and late in the feeding period but large positive associations for BW or hip height EBV throughout. Estimates of genetic correlations among the 3 traits indicated almost no association of BW:hip height ratio and hip height EBV. The ratio of BW to hip height in cattle has previously been used as an
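    The random regression coefficients above are defined on Legendre polynomials of days on feed. The sketch below only constructs such a basis, rescaling the recording days to [-1, 1] and evaluating linear and cubic Legendre polynomials; the mixed-model estimation itself is not shown, and the 28-d recording grid is taken from the abstract.

```python
# Legendre polynomial bases over days on feed, as used in random regression.
import numpy as np
from numpy.polynomial import legendre

days = np.arange(0, 171, 28)                                  # 28-d recording intervals
t = 2 * (days - days.min()) / (days.max() - days.min()) - 1   # rescale to [-1, 1]
linear_basis = legendre.legvander(t, 1)    # columns: P0, P1 (additive genetic part)
cubic_basis = legendre.legvander(t, 3)     # columns: P0..P3 (permanent environment)
print(linear_basis.round(2))
```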

  15. Univariate and multiple linear regression analyses for 23 single nucleotide polymorphisms in 14 genes predisposing to chronic glomerular diseases and IgA nephropathy in Han Chinese

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2014-01-01

    Full Text Available Immunoglobulin A nephropathy (IgAN) is a complex trait regulated by the interaction among multiple physiologic regulatory systems and probably involving numerous genes, which leads to inconsistent findings in genetic studies. One possibility for the failure to replicate some single-locus results is that the underlying genetics of IgAN is based on multiple genes with minor effects. To learn the association between 23 single nucleotide polymorphisms (SNPs) in 14 genes predisposing to chronic glomerular diseases and IgAN in Han males, the 23 SNP genotypes of 21 Han males were detected and analyzed with a BaiO gene chip, and their associations were analyzed with univariate analysis and multiple linear regression analysis. Analysis showed that CTLA4 rs231726 and CR2 rs1048971 revealed a significant association with IgAN. These findings support the multi-gene nature of the etiology of IgAN and propose a potential gene-gene interactive model for future studies.

  16. Meta-regression analyses, meta-analyses, and trial sequential analyses of the effects of supplementation with beta-carotene, vitamin A, and vitamin E singly or in different combinations on all-cause mortality: do we have evidence for lack of harm?

    Directory of Open Access Journals (Sweden)

    Goran Bjelakovic

    Full Text Available BACKGROUND AND AIMS: Evidence shows that antioxidant supplements may increase mortality. Our aims were to assess whether different doses of beta-carotene, vitamin A, and vitamin E affect mortality in primary and secondary prevention randomized clinical trials with low risk of bias. METHODS: The present study is based on our 2012 Cochrane systematic review analyzing beneficial and harmful effects of antioxidant supplements in adults. Using random-effects meta-analyses, meta-regression analyses, and trial sequential analyses, we examined the association between beta-carotene, vitamin A, and vitamin E, and mortality according to their daily doses and doses below and above the recommended daily allowances (RDA). RESULTS: We included 53 randomized trials with low risk of bias (241,883 participants, aged 18 to 103 years, 44.6% women) assessing beta-carotene, vitamin A, and vitamin E. Meta-regression analysis showed that the dose of vitamin A was significantly positively associated with all-cause mortality. Beta-carotene in a dose above 9.6 mg significantly increased mortality (relative risk (RR) 1.06, 95% confidence interval (CI) 1.02 to 1.09, I² = 13%). Vitamin A in a dose above the RDA (> 800 µg) did not significantly influence mortality (RR 1.08, 95% CI 0.98 to 1.19, I² = 53%). Vitamin E in a dose above the RDA (> 15 mg) significantly increased mortality (RR 1.03, 95% CI 1.00 to 1.05, I² = 0%). Doses below the RDAs did not affect mortality, but data were sparse. CONCLUSIONS: Beta-carotene and vitamin E in doses higher than the RDA seem to significantly increase mortality, whereas we lack information on vitamin A. Dose of vitamin A was significantly associated with increased mortality in meta-regression. We lack information on doses below the RDA. BACKGROUND: All essential compounds to stay healthy cannot be synthesized in our body. Therefore, these compounds must be taken through our diet or obtained in other ways [1]. Oxidative stress has been
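    A dose meta-regression of the kind described can be sketched as a weighted regression of study-level log relative risks on daily dose, with inverse-variance weights. The version below is a fixed-effect simplification (the review used random-effects models), and every number in it is invented.

```python
# Study-level meta-regression: log RR regressed on dose with 1/SE^2 weights.
import numpy as np
import statsmodels.api as sm

log_rr = np.array([0.01, 0.03, 0.06, 0.02, 0.08, 0.05])   # invented study effects
se = np.array([0.02, 0.03, 0.025, 0.04, 0.03, 0.02])      # invented standard errors
dose_mg = np.array([5.0, 10.0, 20.0, 8.0, 30.0, 15.0])    # invented daily doses

fit = sm.WLS(log_rr, sm.add_constant(dose_mg), weights=1 / se**2).fit()
print("slope (change in log RR per mg/day):", round(fit.params[1], 4))
```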

  17. Calcium requirements for Chinese adults by cross-sectional statistical analyses of calcium balance studies: an individual participant data and aggregate data meta-regression

    Institute of Scientific and Technical Information of China (English)

    Fang Aiping; Li Keji; Shi Haoyu; He Jingjing; Li He

    2014-01-01

    Background Chinese dietary reference intakes for calcium are largely based on foreign studies. We undertook meta-regression to estimate calcium requirements for Chinese adults derived from calcium balance data in Chinese adults. Methods We searched PubMed, Cochrane CENTRAL, and SinoMed from inception to March 5, 2014, by using a structured search strategy. The bibliographies of any relevant papers and journals were also screened for potentially eligible studies. We extracted a standardized data set from studies in Chinese adults that reported calcium balance data. The relationship between calcium intake and output was examined by an individual participant data (IPD) and aggregate data (AD) meta-regression. Results We identified 11 metabolic studies in Chinese adults within 18-60 years of age. One hundred and forty-one IPD (n=35) expressed as mg/d, 127 IPD (n=32) expressed as mg·kg body wt⁻¹·d⁻¹, and 44 AD (n=132) expressed as mg/d were collected. The models predicted a neutral calcium balance (defined as calcium output (Y) equal to calcium intake (C)) at intakes of 460 mg/d (Y=0.60C+183.98) and 8.27 mg·kg body wt⁻¹·d⁻¹ (Y=0.60C+3.33) for IPD, or 409 mg/d (Y=0.66C+139.00) for AD. Calcium requirements at upper intakes were higher than those at lower intakes in all these models. Conclusion The calcium requirement for Chinese adults 18-60 years of age ranges between approximately 400 mg/d and 500 mg/d when consuming traditional plant-based Chinese diets.
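    As a quick check of the reported figure, setting output equal to intake in the fitted IPD line Y = 0.60C + 183.98 gives the neutral-balance intake directly:

```python
# Neutral calcium balance: solve C = 0.60*C + 183.98 for C.
intercept, slope = 183.98, 0.60
neutral_intake = intercept / (1 - slope)
print(round(neutral_intake))   # ~460 mg/d, matching the value reported above
```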

  18. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.

  19. Regression Basics

    CERN Document Server

    Kahane, Leo H

    2007-01-01

    Using a friendly, nontechnical approach, the Second Edition of Regression Basics introduces readers to the fundamentals of regression. Accessible to anyone with an introductory statistics background, this book builds from a simple two-variable model to a model of greater complexity. Author Leo H. Kahane weaves four engaging examples throughout the text to illustrate not only the techniques of regression but also how this empirical tool can be applied in creative ways to consider a broad array of topics. New to the Second Edition Offers greater coverage of simple panel-data estimation:

  20. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
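    The two verification scores named above can be computed from a 2x2 contingency table of forecasts against observations. The sketch below uses invented counts purely to show the formulas: percent correct, and the Hanssen-Kuipers discriminant as hit rate minus false alarm rate.

```python
# Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) from a 2x2 table.
hits, misses, false_alarms, correct_negs = 420, 80, 150, 4350   # invented counts
n = hits + misses + false_alarms + correct_negs
pc = (hits + correct_negs) / n
hkd = hits / (hits + misses) - false_alarms / (false_alarms + correct_negs)
print(f"PC = {pc:.3f}, HKD = {hkd:.3f}")
```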

  1. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  2. A Bayesian approach to linear regression in astronomy

    CERN Document Server

    Sereno, Mauro

    2015-01-01

    Linear regression is common in astronomical analyses. I discuss a Bayesian hierarchical modeling of data with heteroscedastic and possibly correlated measurement errors and intrinsic scatter. The method fully accounts for time evolution. The slope, the normalization, and the intrinsic scatter of the relation can evolve with the redshift. The intrinsic distribution of the independent variable is approximated using a mixture of Gaussian distributions whose means and standard deviations depend on time. The method can address scatter in the measured independent variable (a kind of Eddington bias), selection effects in the response variable (Malmquist bias), and departure from linearity in form of a knee. I tested the method with toy models and simulations and quantified the effect of biases and inefficient modeling. The R-package LIRA (LInear Regression in Astronomy) is made available to perform the regression.

  3. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  4. Logistic regression.

    Science.gov (United States)

    Nick, Todd G; Campbell, Kathleen M

    2007-01-01

    The Medical Subject Headings (MeSH) thesaurus used by the National Library of Medicine defines logistic regression models as "statistical models which describe the relationship between a qualitative dependent variable (that is, one which can take only certain discrete values, such as the presence or absence of a disease) and an independent variable." Logistic regression models are used to study effects of predictor variables on categorical outcomes and normally the outcome is binary, such as presence or absence of disease (e.g., non-Hodgkin's lymphoma), in which case the model is called a binary logistic model. When there are multiple predictors (e.g., risk factors and treatments) the model is referred to as a multiple or multivariable logistic regression model and is one of the most frequently used statistical models in medical journals. In this chapter, we examine both simple and multiple binary logistic regression models and present related issues, including interaction, categorical predictor variables, continuous predictor variables, and goodness of fit.

  5. Regression Analysis

    CERN Document Server

    Freund, Rudolf J; Sa, Ping

    2006-01-01

    The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow the student to determine, at least to some degree, the correct type of statistical analyses to be performed in a given situation, and have some appreciation of what constitutes good experimental design

  6. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    Science.gov (United States)

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2017-03-14

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with a low RMSE. 5. Based on the regression models, a single time point strategy of using Cmax (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting AUCinf of dalbavancin in patients.
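    A toy version of the single-point idea: fit linear and power relationships between Cmax and AUC on paired data, then compare predictions by fold difference and RMSE. The 21 pairs below are simulated stand-ins rather than the study's subjects, and the constants are arbitrary.

```python
# Linear and power-model fits of AUC on Cmax, compared by fold difference/RMSE.
import numpy as np

rng = np.random.default_rng(11)
cmax = rng.uniform(200.0, 350.0, 21)                        # invented Cmax values
auc = 30 * cmax**0.95 * np.exp(rng.normal(0, 0.05, 21))     # invented AUC values

lin = np.polyfit(cmax, auc, 1)                       # AUC = a*Cmax + b
pwr = np.polyfit(np.log(cmax), np.log(auc), 1)       # log AUC = a*log Cmax + b

pred_lin = np.polyval(lin, cmax)
pred_pwr = np.exp(np.polyval(pwr, np.log(cmax)))
for name, pred in [("linear", pred_lin), ("power", pred_pwr)]:
    fold = auc / pred
    rmse = float(np.sqrt(np.mean((auc - pred) ** 2)))
    print(f"{name}: fold range {fold.min():.2f}-{fold.max():.2f}, RMSE {rmse:.1f}")
```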

  7. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  8. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme of different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model allowing for modeling most hierarchical networks. We use methods based on linear programming to design the hierarchical networks; thus, a brief introduction to the various linear programming based methods is included. The thesis is thus suitable as a foundation for study of design of hierarchical networks. The major contribution of the thesis consists of seven papers which are included in the appendix. The papers address hierarchical network design and/or ring network design.

  9. Hierarchical Multiagent Reinforcement Learning

    Science.gov (United States)

    2004-01-25

    In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multiagent tasks. We...introduce a hierarchical multiagent reinforcement learning (RL) framework and propose a hierarchical multiagent RL algorithm called Cooperative HRL. In

  10. Should metacognition be measured by logistic regression?

    Science.gov (United States)

    Rausch, Manuel; Zehetleitner, Michael

    2017-03-01

    Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent from rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
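    The quantity under discussion, the slope of a logistic regression of task accuracy on confidence ratings, can be computed with a short sketch; the rating scale, effect sizes, and data below are simulated and are not tied to either model the authors compare.

```python
# Logistic regression slope of accuracy on confidence (a putative
# metacognitive sensitivity index).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 1000
confidence = rng.integers(1, 7, n).astype(float)             # 1-6 rating scale
p_correct = 1 / (1 + np.exp(-(-1.5 + 0.5 * confidence)))     # accuracy rises with confidence
correct = rng.binomial(1, p_correct)

fit = sm.Logit(correct, sm.add_constant(confidence)).fit(disp=0)
print("logistic slope:", round(fit.params[1], 3))
```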

  11. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping...... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks....... The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme

  12. Application of Meta-regression and subgroup analyses in handling heterogeneity in Meta-analysis

    Institute of Scientific and Technical Information of China (English)

    石修权; 王增珍

    2008-01-01

    To explore the role and application of Meta-regression and subgroup analyses in recognizing and handling heterogeneity in Meta-analysis, Meta-regression models were built from secondary data reported in the literature to screen for factors causing heterogeneity, and subgroup analyses based on those factors were used to compare heterogeneity before and after stratification. Heterogeneity was present in the Meta-analysis (Q=44.71, df=27, P=0.017). From candidate factors such as study period, region, sample size, and case/control ratio, Meta-regression selected sample size as a source of heterogeneity (P=0.012) and region as a possible source (P=0.091). After subgroup analyses, heterogeneity decreased markedly (the summed Q fell from 44.71 to 32.11). Meta-regression is therefore a convenient and reliable way to screen for factors influencing heterogeneity, and subgroup analyses based on the selected factors can clearly reduce within-subgroup heterogeneity. A combined use of the two is recommended when statistically significant heterogeneity exists but a pooled effect still needs to be computed, so that heterogeneity can be correctly identified and reduced, making the results of the Meta-analysis more robust and reasonable.
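    The quantities at issue, Cochran's Q and I², can be computed overall and within subgroups in a few lines. The sketch below uses invented study effects, standard errors, and a made-up subgrouping variable, only to show how within-subgroup heterogeneity is re-assessed after stratification.

```python
# Cochran's Q and I^2 overall and within two subgroups (fixed-effect pooling).
import numpy as np

def cochran_q(effects, se):
    w = 1 / se**2
    pooled = np.sum(w * effects) / np.sum(w)
    q = float(np.sum(w * (effects - pooled) ** 2))
    i2 = max(0.0, (q - (len(effects) - 1)) / q) if q > 0 else 0.0
    return round(q, 2), round(i2, 2)

effects = np.array([0.2, 0.35, 0.5, 0.8, 0.9, 1.1])     # invented study effects
se = np.array([0.10, 0.12, 0.10, 0.15, 0.12, 0.20])
subgroup = np.array([0, 0, 0, 1, 1, 1])                 # e.g. small vs. large samples

print("overall (Q, I2):", cochran_q(effects, se))
for g in (0, 1):
    mask = subgroup == g
    print(f"subgroup {g} (Q, I2):", cochran_q(effects[mask], se[mask]))
```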

  13. Use of hierarchical models to analyze European trends in congenital anomaly prevalence.

    Science.gov (United States)

    Cavadino, Alana; Prieto-Merino, David; Addor, Marie-Claude; Arriola, Larraitz; Bianchi, Fabrizio; Draper, Elizabeth; Garne, Ester; Greenlees, Ruth; Haeusler, Martin; Khoshnood, Babak; Kurinczuk, Jenny; McDonnell, Bob; Nelen, Vera; O'Mahony, Mary; Randrianaivo, Hanitra; Rankin, Judith; Rissmann, Anke; Tucker, David; Verellen-Dumoulin, Christine; de Walle, Hermien; Wellesley, Diana; Morris, Joan K

    2016-06-01

    Surveillance of congenital anomalies is important to identify potential teratogens. Despite known associations between different anomalies, current surveillance methods examine trends within each subgroup separately. We aimed to evaluate whether hierarchical statistical methods that combine information from several subgroups simultaneously would enhance current surveillance methods using data collected by EUROCAT, a European network of population-based congenital anomaly registries. Ten-year trends (2003 to 2012) in 18 EUROCAT registries over 11 countries were analyzed for the following groups of anomalies: neural tube defects, congenital heart defects, digestive system, and chromosomal anomalies. Hierarchical Poisson regression models that combined related subgroups together according to EUROCAT's hierarchy of subgroup coding were applied. Results from hierarchical models were compared with those from Poisson models that consider each congenital anomaly separately. Hierarchical models gave similar results to those obtained when considering each anomaly subgroup in a separate analysis. Hierarchical models that included only around three subgroups showed poor convergence and were generally found to be over-parameterized. Larger sets of anomaly subgroups were found to be too heterogeneous to group together in this way. There were no substantial differences between independent analyses of each subgroup and hierarchical models when using the EUROCAT anomaly subgroups. Considering each anomaly separately, therefore, remains an appropriate method for the detection of potential changes in prevalence by surveillance systems. Hierarchical models do, however, remain an interesting alternative method of analysis when considering the risks of specific exposures in relation to the prevalence of congenital anomalies, which could be investigated in other studies. Birth Defects Research (Part A) 106:480-10, 2016. © 2016 Wiley Periodicals, Inc.
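    The non-hierarchical baseline in this comparison is a separate Poisson trend model per anomaly subgroup, typically with births as an exposure offset. A simplified sketch of that baseline (one subgroup, one registry) is below; the counts and denominators are invented.

```python
# Poisson trend for one anomaly subgroup: cases per year with a births offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

years = np.arange(2003, 2013)
df = pd.DataFrame({
    "year_c": years - years.mean(),
    "cases": [52, 55, 49, 61, 58, 63, 60, 66, 70, 68],   # invented annual counts
    "births": np.full(years.size, 120_000),              # invented denominators
})
fit = smf.glm("cases ~ year_c", data=df, family=sm.families.Poisson(),
              offset=np.log(df["births"])).fit()
print("prevalence trend (rate ratio per year):",
      round(float(np.exp(fit.params["year_c"])), 3))
```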

  14. The Energy Content and Composition of Meals Consumed after an Overnight Fast and Their Effects on Diet Induced Thermogenesis: A Systematic Review, Meta-Analyses and Meta-Regressions

    Directory of Open Access Journals (Sweden)

    Angelica Quatela

    2016-10-01

    Full Text Available This systematic review investigated the effects of differing energy intakes, macronutrient compositions, and eating patterns of meals consumed after an overnight fast on Diet Induced Thermogenesis (DIT). The initial search identified 2482 records; 26 papers remained once duplicates were removed and inclusion criteria were applied. Studies (n = 27) in the analyses were randomized crossover designs comparing the effects of two or more eating events on DIT. Higher energy intake increased DIT; in a mixed model meta-regression, for every 100 kJ increase in energy intake, DIT increased by 1.1 kJ/h (p < 0.001). Meals with a high protein or carbohydrate content had a higher DIT than high fat, although this effect was not always significant. Meals with medium chain triglycerides had a significantly higher DIT than long chain triglycerides (meta-analysis, p = 0.002). Consuming the same meal as a single bolus eating event compared to multiple small meals or snacks was associated with a significantly higher DIT (meta-analysis, p = 0.02). Unclear or inconsistent findings were found by comparing the consumption of meals quickly or slowly, and palatability was not significantly associated with DIT. These findings indicate that the magnitude of the increase in DIT is influenced by the energy intake, macronutrient composition, and eating pattern of the meal.

  15. The Energy Content and Composition of Meals Consumed after an Overnight Fast and Their Effects on Diet Induced Thermogenesis: A Systematic Review, Meta-Analyses and Meta-Regressions.

    Science.gov (United States)

    Quatela, Angelica; Callister, Robin; Patterson, Amanda; MacDonald-Wicks, Lesley

    2016-10-25

    This systematic review investigated the effects of differing energy intakes, macronutrient compositions, and eating patterns of meals consumed after an overnight fast on Diet Induced Thermogenesis (DIT). The initial search identified 2482 records; 26 papers remained once duplicates were removed and inclusion criteria were applied. Studies (n = 27) in the analyses were randomized crossover designs comparing the effects of two or more eating events on DIT. Higher energy intake increased DIT; in a mixed model meta-regression, for every 100 kJ increase in energy intake, DIT increased by 1.1 kJ/h (p < 0.001). Meals with a high protein or carbohydrate content had a higher DIT than high fat, although this effect was not always significant. Meals with medium chain triglycerides had a significantly higher DIT than long chain triglycerides (meta-analysis, p = 0.002). Consuming the same meal as a single bolus eating event compared to multiple small meals or snacks was associated with a significantly higher DIT (meta-analysis, p = 0.02). Unclear or inconsistent findings were found by comparing the consumption of meals quickly or slowly, and palatability was not significantly associated with DIT. These findings indicate that the magnitude of the increase in DIT is influenced by the energy intake, macronutrient composition, and eating pattern of the meal.

  16. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of a subsystem of the global robotic system. Since Petri models can be simplified for application on general computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets. Discrete event modelling is a pragmatic tool for modelling industrial systems, and Petri nets are used here because the system under study is a discrete event system. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels and its sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to examine the timing of the robotic system; from transport and transmission times measured on the spot, graphics are obtained showing the average time for the transport activity for individual parameter sets of finished products.

  17. A general framework for the use of logistic regression models in meta-analysis.

    Science.gov (United States)

    Simmonds, Mark C; Higgins, Julian Pt

    2016-12-01

    Where individual participant data are available for every randomised trial in a meta-analysis of dichotomous event outcomes, "one-stage" random-effects logistic regression models have been proposed as a way to analyse these data. Such models can also be used even when individual participant data are not available and we have only summary contingency table data. One benefit of this one-stage regression model over conventional meta-analysis methods is that it maximises the correct binomial likelihood for the data and so does not require the common assumption that effect estimates are normally distributed. A second benefit of using this model is that it may be applied, with only minor modification, in a range of meta-analytic scenarios, including meta-regression, network meta-analyses and meta-analyses of diagnostic test accuracy. This single model can potentially replace the variety of often complex methods used in these areas. This paper considers, with a range of meta-analysis examples, how random-effects logistic regression models may be used in a number of different types of meta-analyses. This one-stage approach is compared with widely used meta-analysis methods including Bayesian network meta-analysis and the bivariate and hierarchical summary receiver operating characteristic (ROC) models for meta-analyses of diagnostic test accuracy.

  18. Classifying hospitals as mortality outliers: logistic versus hierarchical logistic models.

    Science.gov (United States)

    Alexandrescu, Roxana; Bottle, Alex; Jarman, Brian; Aylin, Paul

    2014-05-01

    The use of hierarchical logistic regression for provider profiling has been recommended due to the clustering of patients within hospitals, but has some associated difficulties. We assess changes in hospital outlier status based on standard logistic versus hierarchical logistic modelling of mortality. The study population consisted of all patients admitted to acute, non-specialist hospitals in England between 2007 and 2011 with a primary diagnosis of acute myocardial infarction, acute cerebrovascular disease or fracture of neck of femur or a primary procedure of coronary artery bypass graft or repair of abdominal aortic aneurysm. We compared standardised mortality ratios (SMRs) from non-hierarchical models with SMRs from hierarchical models, without and with shrinkage estimates of the predicted probabilities (Model 1 and Model 2). The SMRs from standard logistic and hierarchical models were highly statistically significantly correlated (r > 0.91, p = 0.01). More outliers were recorded in the standard logistic regression than hierarchical modelling only when using shrinkage estimates (Model 2): 21 hospitals (out of a cumulative number of 565 pairs of hospitals under study) changed from a low outlier and 8 hospitals changed from a high outlier based on the logistic regression to a not-an-outlier based on shrinkage estimates. Both standard logistic and hierarchical modelling have identified nearly the same hospitals as mortality outliers. The choice of methodological approach should, however, also consider whether the modelling aim is judgment or improvement, as shrinkage may be more appropriate for the former than the latter.

  19. A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.

    Directory of Open Access Journals (Sweden)

    Guillaume Bal

    Full Text Available Providing generic and cost effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding the ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts as it does not disentangle seasonality and long term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) an emotive simulated example, and (ii) application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach better fits the data and does not exhibit forecasting bias in long term trends, contrary to the linear regression. This new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around forecasting. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
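    The seasonal-signal idea can be caricatured with an ordinary harmonic (sinusoidal) regression of water temperature on day of year plus air temperature; the paper's full hierarchical Bayesian treatment, time-varying amplitudes, and uncertainty quantification are not reproduced here, and the data are simulated.

```python
# Harmonic regression: water temperature ~ sin/cos of day-of-year + air temperature.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
doy = np.arange(1, 366, dtype=float)
air = 12 + 8 * np.sin(2 * np.pi * (doy - 110) / 365) + rng.normal(0, 1.5, doy.size)
water = 10 + 6 * np.sin(2 * np.pi * (doy - 120) / 365) + 0.2 * air + rng.normal(0, 1.0, doy.size)

X = sm.add_constant(np.column_stack([np.sin(2 * np.pi * doy / 365),
                                     np.cos(2 * np.pi * doy / 365), air]))
fit = sm.OLS(water, X).fit()
print(fit.params.round(2))   # intercept, sine, cosine, and air-temperature terms
```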

  20. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  1. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  2. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them a 3D FE model of hybrid composites with nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels, and of the use of nanoreinforcement effects. The main future directions...

  3. Hierarchical auxetic mechanical metamaterials.

    Science.gov (United States)

    Gatt, Ruben; Mizzi, Luke; Azzopardi, Joseph I; Azzopardi, Keith M; Attard, Daphne; Casha, Aaron; Briffa, Joseph; Grima, Joseph N

    2015-02-11

    Auxetic mechanical metamaterials are engineered systems that exhibit the unusual macroscopic property of a negative Poisson's ratio due to sub-unit structure rather than chemical composition. Although their unique behaviour makes them superior to conventional materials in many practical applications, they are limited in availability. Here, we propose a new class of hierarchical auxetics based on the rotating rigid units mechanism. These systems retain the enhanced properties from having a negative Poisson's ratio with the added benefits of being a hierarchical system. Using simulations on typical hierarchical multi-level rotating squares, we show that, through design, one can control the extent of auxeticity, degree of aperture and size of the different pores in the system. This makes the system more versatile than similar non-hierarchical ones, making them promising candidates for industrial and biomedical applications, such as stents and skin grafts.

  4. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-12-05

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
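    The storage and cost reduction rests on approximating admissible (well-separated) off-diagonal blocks by low-rank factors. The sketch below shows only that kernel idea with a truncated SVD of one smooth block; it is not a call into an actual hierarchical-matrix library.

```python
# Low-rank approximation of a smooth off-diagonal covariance block via SVD.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 200)                        # well-separated point cluster
block = np.exp(-np.abs(x[:, None] - y[None, :]))      # smooth off-diagonal block

U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-8))                      # numerical rank at tolerance 1e-8
approx = (U[:, :k] * s[:k]) @ Vt[:k]
print("rank used:", k, "| max abs error:", float(np.abs(block - approx).max()))
print("storage: full", block.size, "vs low-rank", U[:, :k].size + k + Vt[:k].size)
```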

  5. Hierarchical Auxetic Mechanical Metamaterials

    Science.gov (United States)

    Gatt, Ruben; Mizzi, Luke; Azzopardi, Joseph I.; Azzopardi, Keith M.; Attard, Daphne; Casha, Aaron; Briffa, Joseph; Grima, Joseph N.

    2015-02-01

    Auxetic mechanical metamaterials are engineered systems that exhibit the unusual macroscopic property of a negative Poisson's ratio due to sub-unit structure rather than chemical composition. Although their unique behaviour makes them superior to conventional materials in many practical applications, they are limited in availability. Here, we propose a new class of hierarchical auxetics based on the rotating rigid units mechanism. These systems retain the enhanced properties from having a negative Poisson's ratio with the added benefits of being a hierarchical system. Using simulations on typical hierarchical multi-level rotating squares, we show that, through design, one can control the extent of auxeticity, degree of aperture and size of the different pores in the system. This makes the system more versatile than similar non-hierarchical ones, making them promising candidates for industrial and biomedical applications, such as stents and skin grafts.

  6. Applied Bayesian Hierarchical Methods

    CERN Document Server

    Congdon, Peter D

    2010-01-01

    Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.

  7. Programming with Hierarchical Maps

    DEFF Research Database (Denmark)

    Ørbæk, Peter

    This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe their most prominent features, argue for their usefulness and briefly describe some of the software prototypes implemented using the technology.

  8. Catalysis with hierarchical zeolites

    DEFF Research Database (Denmark)

    Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten

    2011-01-01

    Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments emphasizing different aspects of this research topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance, but this particular facet has not yet been reviewed in detail. Thus, the present paper summarizes and categorizes the catalytic studies utilizing hierarchical zeolites that have been reported hitherto. Prototypical examples from some of the different categories of catalytic reactions that have been studied using hierarchical zeolite catalysts are highlighted. This clearly illustrates the different ways that improved performance can be achieved with this family...

  9. Semiparametric Quantile Modelling of Hierarchical Data

    Institute of Scientific and Technical Information of China (English)

    Mao Zai TIAN; Man Lai TANG; Ping Shing CHAN

    2009-01-01

    The classic hierarchical linear model formulation provides considerable flexibility for modelling the random effects structure and a powerful tool for analyzing nested data that arise in various areas such as biology, economics and education. However, it assumes the within-group errors to be independently and identically distributed (i.i.d.) and models at all levels to be linear. Most importantly, traditional hierarchical models (just like other ordinary mean regression methods) cannot characterize the entire conditional distribution of a dependent variable given a set of covariates and fail to yield robust estimators. In this article, we relax the aforementioned i.i.d. and normality assumptions, and develop so-called Hierarchical Semiparametric Quantile Regression Models in which the within-group errors may be heteroscedastic and models at some levels are allowed to be nonparametric. We present the ideas with a 2-level model. The level-1 model is specified as a nonparametric model whereas the level-2 model is set as a parametric model. Under the proposed semiparametric setting, the vector of partial derivatives of the nonparametric function in level 1 becomes the response variable vector in level 2. The proposed method allows us to model the fixed effects in the innermost level (i.e., level 2) as a function of the covariates instead of a constant effect. We outline some mild regularity conditions required for convergence and asymptotic normality of our estimators. We illustrate our methodology with a real hierarchical data set from a laboratory study and some simulation studies.
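
    For readers unfamiliar with the quantile-regression building block used above, the following minimal Python sketch fits several conditional quantiles of a simulated heteroscedastic response with statsmodels. It illustrates only plain (non-hierarchical) quantile regression, not the Gauss-Seidel hierarchical estimator proposed in the record, and the data are synthetic.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 500
      x = rng.uniform(0, 10, n)
      y = 1.0 + 0.5 * x + rng.normal(0, 0.2 + 0.1 * x, n)   # heteroscedastic errors
      data = pd.DataFrame({"x": x, "y": y})

      for q in (0.25, 0.5, 0.75):
          fit = smf.quantreg("y ~ x", data).fit(q=q)
          print(f"tau={q}: intercept={fit.params['Intercept']:.3f}, slope={fit.params['x']:.3f}")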

  10. A general strategy to determine the congruence between a hierarchical and a non-hierarchical classification

    Directory of Open Access Journals (Sweden)

    Marín Ignacio

    2007-11-01

    Background: Classification procedures are widely used in phylogenetic inference, the analysis of expression profiles, the study of biological networks, etc. Many algorithms have been proposed to establish the similarity between two different classifications of the same elements. However, methods to determine significant coincidences between hierarchical and non-hierarchical partitions are still poorly developed, in spite of the fact that the search for such coincidences is implicit in many analyses of massive data. Results: We describe a novel strategy to compare a hierarchical and a dichotomic non-hierarchical classification of elements, in order to find clusters in a hierarchical tree in which elements of a given "flat" partition are overrepresented. The key improvement of our strategy with respect to previous methods is the use of permutation analyses of ranked clusters to determine whether regions of the dendrograms present a significant enrichment. We show that this method is more sensitive than previously developed strategies and how it can be applied to several real cases, including microarray and interactome data. In particular, we use it to compare a hierarchical representation of the yeast mitochondrial interactome and a catalogue of known mitochondrial protein complexes, demonstrating a high level of congruence between those two classifications. We also discuss extensions of this method to other cases which are conceptually related. Conclusion: Our method is highly sensitive and outperforms previously described strategies. A PERL script that implements it is available at http://www.uv.es/~genomica/treetracker.

  11. What are hierarchical models and how do we analyze them?

    Science.gov (United States)

    Royle, Andy

    2016-01-01

    In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
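
    As a concrete illustration of the site-occupancy model mentioned above, the short Python sketch below writes down the marginal likelihood (occupancy probability psi, detection probability p) and maximizes it with scipy; the detection histories, sample sizes and starting values are simulated placeholders, not data from the book.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(1)
      S, J = 200, 4                                 # sites and repeat visits (assumed)
      psi_true, p_true = 0.6, 0.4
      z = rng.binomial(1, psi_true, S)              # latent occupancy states
      y = rng.binomial(1, p_true * z[:, None], (S, J))   # detection histories

      def neg_log_lik(theta):
          psi, p = expit(theta)                     # map unconstrained values to (0, 1)
          det = y.sum(axis=1)
          # marginalize over the latent occupancy state of each site
          lik = psi * p**det * (1 - p) ** (J - det) + (1 - psi) * (det == 0)
          return -np.log(lik).sum()

      res = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
      print("psi_hat, p_hat =", expit(res.x))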

  12. Hierarchical control of electron-transfer

    DEFF Research Database (Denmark)

    Westerhoff, Hans V.; Jensen, Peter Ruhdal; Egger, Louis;

    1997-01-01

    In this chapter the role of electron transfer in determining the behaviour of the ATP synthesising enzyme in E. coli is analysed. It is concluded that the latter enzyme lacks control because of special properties of the electron transfer components. These properties range from the absence of a strong back pressure by the protonmotive force on the rate of electron transfer, to hierarchical regulation of the expression of the genes that encode the electron transfer proteins as a response to changes in the bioenergetic properties of the cell. The discussion uses Hierarchical Control Analysis...

  13. Replication and extension of a hierarchical model of social anxiety and depression: fear of positive evaluation as a key unique factor in social anxiety.

    Science.gov (United States)

    Weeks, Justin W

    2015-01-01

    Wang, Hsu, Chiu, and Liang (2012, Journal of Anxiety Disorders, 26, 215-224) recently proposed a hierarchical model of social interaction anxiety and depression to account for both the commonalities and distinctions between these conditions. In the present paper, this model was extended to more broadly encompass the symptoms of social anxiety disorder, and replicated in a large unselected, undergraduate sample (n = 585). Structural equation modeling (SEM) and hierarchical regression analyses were employed. Negative affect and positive affect were conceptualized as general factors shared by social anxiety and depression; fear of negative evaluation (FNE) and disqualification of positive social outcomes were operationalized as specific factors, and fear of positive evaluation (FPE) was operationalized as a factor unique to social anxiety. This extended hierarchical model explicates structural relationships among these factors, in which the higher-level, general factors (i.e., high negative affect and low positive affect) represent vulnerability markers of both social anxiety and depression, and the lower-level factors (i.e., FNE, disqualification of positive social outcomes, and FPE) are the dimensions of specific cognitive features. Results from SEM and hierarchical regression analyses converged in support of the extended model. FPE is further supported as a key symptom that differentiates social anxiety from depression.
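
    Hierarchical (sequential) regression of the kind reported above enters blocks of predictors step by step and inspects the increment in R-squared. The Python sketch below mimics that workflow on simulated data; the variable names (neg_affect, fpe, etc.) and effect sizes are hypothetical and do not reproduce the study's results.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 585
      df = pd.DataFrame({
          "neg_affect": rng.normal(size=n),
          "pos_affect": rng.normal(size=n),
          "fne": rng.normal(size=n),
          "fpe": rng.normal(size=n),
      })
      df["social_anxiety"] = (0.4 * df.neg_affect - 0.2 * df.pos_affect
                              + 0.3 * df.fne + 0.35 * df.fpe + rng.normal(size=n))

      steps = ["social_anxiety ~ neg_affect + pos_affect",
               "social_anxiety ~ neg_affect + pos_affect + fne",
               "social_anxiety ~ neg_affect + pos_affect + fne + fpe"]
      prev_r2 = 0.0
      for i, formula in enumerate(steps, start=1):
          r2 = smf.ols(formula, df).fit().rsquared
          print(f"step {i}: R2 = {r2:.3f}, delta R2 = {r2 - prev_r2:.3f}")
          prev_r2 = r2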

  14. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  15. Regression analysis by example

    National Research Council Canada - National Science Library

    Chatterjee, Samprit; Hadi, Ali S

    2012-01-01

    ... The emphasis continues to be on exploratory data analysis rather than statistical theory. The coverage offers in-depth treatment of regression diagnostics, transformation, multicollinearity, logistic regression, and robust regression...

  16. Neutrosophic Hierarchical Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    Rıdvan Şahin

    2014-03-01

    Interval neutrosophic set (INS) is a generalization of interval valued intuitionistic fuzzy set (IVIFS), whose membership and non-membership values of elements consist of fuzzy ranges, while single valued neutrosophic set (SVNS) is regarded as an extension of intuitionistic fuzzy set (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs, respectively. Based on the traditional hierarchical clustering procedure, the single valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. Then we extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples in order to show the effectiveness and availability of the developed clustering algorithms.

  17. Hierarchical Star Formation Across Galactic Disks

    Science.gov (United States)

    Gouliermis, Dimitrios

    2016-09-01

    Most stars form in clusters. This fact has emerged from the finding that "embedded clusters account for the 70-90% fraction of all stars formed in Giant Molecular Clouds (GMCs)." While this is the case at scales of a few tens of parsecs, typical for GMCs, a look at star-forming galaxies in the Local Group (LG) shows significant populations of enormous loose complexes of early-type stars extending over scales from a few hundred to a few thousand parsecs. The fact that these stellar complexes host extremely large numbers of loosely distributed massive blue stars implies either that stars also form in an unbound fashion, or that they are immediately dislocated from their original compact birthplaces, or both. The Legacy Extra-Galactic UV Survey (LEGUS) has produced remarkable collections of resolved early-type stars in 50 star-forming LG galaxies, suited for testing ideas about recent star formation. I will present results from our ongoing project on star formation across LEGUS disk galaxies. We characterize the global clustering behavior of the massive young stars in order to understand the morphology of star formation over galactic scales. This morphology appears to be self-similar, with fractal dimensions comparable to those of the molecular interstellar medium, apparently driven by large-scale turbulence. Our clustering analysis reveals compact stellar systems nested in larger, looser concentrations, which themselves are the dense parts of unbound complexes and super-structures, giving evidence of hierarchical star formation up to galactic scales. We investigate the demographics of the structural and star-formation parameters of the star-forming complexes revealed at various levels of compactness. I will discuss the outcome of our correlation and regression analyses on these parameters in an attempt to understand the link between galactic disk dynamics and morphological structure in spiral and ring galaxies of the local universe.

  18. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating e...
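
    A minimal numerical sketch of reduced rank regression with identity weighting is given below: the unrestricted least-squares coefficient matrix is projected onto the leading singular directions of the fitted values. Dimensions, rank and data are arbitrary, and the canonical-correlation formulation mentioned in the record is not implemented.

      import numpy as np

      rng = np.random.default_rng(3)
      n, p, q, r = 300, 6, 4, 2                      # samples, predictors, responses, rank
      B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))   # rank-r truth
      X = rng.normal(size=(n, p))
      Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

      B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]   # unrestricted estimate
      _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
      P = Vt[:r].T @ Vt[:r]                          # projector onto leading directions
      B_rrr = B_ols @ P                              # reduced rank coefficient matrix

      print("rank of the estimate:", np.linalg.matrix_rank(B_rrr))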

  19. Hierarchical Porous Structures

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-07

    Materials design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low-density materials, such as aerogels or hydrogels, more recently the idea of bicontinuous structures has come into play. This review covers some of the methods and applications for generating both porous and hierarchically porous structures.

  20. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.
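
    A standard first diagnostic for the multicollinearity discussed above is the variance inflation factor. The Python sketch below computes VIFs for a deliberately collinear set of simulated predictors with statsmodels; it is a generic single-level check, not the top-down HLM procedure recommended in the record.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.stats.outliers_influence import variance_inflation_factor

      rng = np.random.default_rng(4)
      n = 200
      x1 = rng.normal(size=n)
      x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # nearly collinear with x1
      x3 = rng.normal(size=n)
      X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

      for i, name in enumerate(X.columns):
          print(name, round(variance_inflation_factor(X.values, i), 2))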

  1. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    Science.gov (United States)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study on the efficiency of continuous wavelet transform (CWT) as a signal processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS) was conducted. These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, by contrast, failed to simultaneously determine the quaternary mixture components and was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design and their absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.
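
    The CWT-as-preprocessing idea can be sketched in a few lines of Python: each spectrum is transformed with a continuous wavelet transform (PyWavelets) and the coefficients are regressed on the concentrations with PLS (scikit-learn). The overlapped bands, wavelet, scale and number of latent variables below are invented placeholders, not the published calibration.

      import numpy as np
      import pywt
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(5)
      n_samples, n_points = 40, 200
      wavelengths = np.linspace(200, 400, n_points)
      conc = rng.uniform(1, 10, size=(n_samples, 2))             # two analytes
      band1 = np.exp(-((wavelengths - 270) / 15) ** 2)
      band2 = np.exp(-((wavelengths - 285) / 20) ** 2)           # heavily overlapped bands
      spectra = conc[:, [0]] * band1 + conc[:, [1]] * band2
      spectra += 0.01 * rng.normal(size=spectra.shape)

      scale = 16                                                 # an assumed CWT scale
      features = np.vstack([pywt.cwt(s, [scale], "mexh")[0].ravel() for s in spectra])

      pls = PLSRegression(n_components=3).fit(features, conc)
      print("calibration R^2:", round(pls.score(features, conc), 3))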

  2. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures.

    Science.gov (United States)

    Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-05

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study on the efficiency of continuous wavelet transform (CWT) as a signal processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS) was conducted. These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, by contrast, failed to simultaneously determine the quaternary mixture components and was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design and their absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.

  3. Frontoparietal Connectivity and Hierarchical Structure of the Brain’s Functional Network during Sleep

    Directory of Open Access Journals (Sweden)

    Victor I Spoormaker

    2012-05-01

    Frontal and parietal regions are associated with some of the most complex cognitive functions, and several frontoparietal resting-state networks can be observed in wakefulness. We used functional magnetic resonance imaging (fMRI) data acquired in polysomnographically validated wakefulness, light sleep and slow-wave sleep to examine the hierarchical structure of a low-frequency functional brain network, and to examine whether frontoparietal connectivity would disintegrate in sleep. Whole-brain analyses with hierarchical cluster analysis on predefined atlases were performed, as well as regression of inferior parietal lobule seeds against all voxels in the brain, and an evaluation of the integrity of voxel time-courses in subcortical regions-of-interest. We observed that frontoparietal functional connectivity disintegrated in sleep stage 1 and was absent in deeper sleep stages. Slow-wave sleep was characterized by strong hierarchical clustering of local submodules. Frontoparietal connectivity between the inferior parietal lobules and the superior medial and right frontal gyri was lower in sleep stages than in wakefulness. Moreover, thalamus voxels showed maintained integrity in sleep stage 1, making intrathalamic desynchronization an unlikely source of reduced thalamocortical connectivity in this sleep stage. Our data suggest a transition from a globally integrated functional brain network in wakefulness to a disintegrated network consisting of local submodules in slow-wave sleep, in which frontoparietal inter-modular nodes may play a crucial role, possibly in combination with the thalamus.

  4. Data with hierarchical structure: impact of intraclass correlation and sample size on type-I error.

    Science.gov (United States)

    Musca, Serban C; Kamiejski, Rodolphe; Nugier, Armelle; Méot, Alain; Er-Rafiy, Abdelatif; Brauer, Markus

    2011-01-01

    Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition and others to the control condition (i.e., the widely used "groups nested under treatment" experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations, the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure.
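
    The inflation described above is easy to reproduce by simulation. The Python sketch below generates clustered data with no true treatment effect, analyzes them with an ordinary two-sample test that ignores the grouping, and reports the empirical rejection rate; the intraclass correlation, group sizes and number of replicates are arbitrary choices.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      n_sims, groups_per_cond, n_per_group, icc = 2000, 5, 20, 0.2
      rejections = 0
      for _ in range(n_sims):
          y, cond = [], []
          for c in (0, 1):                     # control / treatment, no true effect
              group_effects = rng.normal(0, np.sqrt(icc), groups_per_cond)
              for g in group_effects:
                  y.append(g + rng.normal(0, np.sqrt(1 - icc), n_per_group))
                  cond.append(np.full(n_per_group, c))
          y, cond = np.concatenate(y), np.concatenate(cond)
          _, p = stats.ttest_ind(y[cond == 1], y[cond == 0])  # equivalent to OLS on a binary predictor
          rejections += p < 0.05

      print("empirical Type-I error rate:", rejections / n_sims)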

  5. Data with hierarchical structure: impact of intraclass correlation and sample size on Type-I error

    Directory of Open Access Journals (Sweden)

    Serban C Musca

    2011-04-01

    Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition and others to the control condition (i.e., the widely used "groups nested under treatment" experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations, the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure.

  6. Logistic regression for circular data

    Science.gov (United States)

    Al-Daffaie, Kadhem; Khan, Shahjahan

    2017-05-01

    This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
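
    The linear-circular approach can be illustrated by letting a circular predictor enter a logistic regression through its sine and cosine components, as in the Python sketch below; the simulated angles and coefficients stand in for the Toowoomba weather records, which are not used here.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 400
      theta = rng.uniform(0, 2 * np.pi, n)                   # circular predictor (radians)
      eta = -0.3 + 1.2 * np.cos(theta) + 0.8 * np.sin(theta)
      y = rng.binomial(1, 1 / (1 + np.exp(-eta)))            # binary response

      X = sm.add_constant(np.column_stack([np.cos(theta), np.sin(theta)]))
      fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
      print(fit.params)   # intercept, cosine and sine coefficients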

  7. Fast, Linear Time Hierarchical Clustering using the Baire Metric

    CERN Document Server

    Contreras, Pedro

    2011-01-01

    The Baire metric induces an ultrametric on a dataset and is of linear computational complexity, contrasted with the standard quadratic time agglomerative hierarchical clustering algorithm. In this work we evaluate empirically this new approach to hierarchical clustering. We compare hierarchical clustering based on the Baire metric with (i) agglomerative hierarchical clustering, in terms of algorithm properties; (ii) generalized ultrametrics, in terms of definition; and (iii) fast clustering through k-means partitioning, in terms of quality of results. For the latter, we carry out an in-depth astronomical study. We apply the Baire distance to spectrometric and photometric redshifts from the Sloan Digital Sky Survey using, in this work, about half a million astronomical objects. We want to know how well the (more costly to determine) spectrometric redshifts can predict the (more easily obtained) photometric redshifts, i.e. we seek to regress the spectrometric on the photometric redshifts, and we use clusterwi...
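
    The Baire distance itself is simple to state: two values written as digit strings are at distance base^k, where k is the length of their common prefix, which induces an ultrametric and allows linear-time grouping by prefix. The toy Python sketch below (arbitrary precision and data) illustrates the definition; it does not reproduce the SDSS analysis.

      def baire_distance(a: float, b: float, precision: int = 6, base: float = 0.5) -> float:
          """Return base**k, where k is the length of the shared digit prefix."""
          sa = f"{a:.{precision}f}"
          sb = f"{b:.{precision}f}"
          k = 0
          for ca, cb in zip(sa, sb):
              if ca != cb:
                  break
              k += 1
          return base ** k

      redshifts = [0.123456, 0.123499, 0.129999, 0.523456]
      for z in redshifts[1:]:
          print(redshifts[0], z, "->", baire_distance(redshifts[0], z))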

  8. Hierarchical manifold learning.

    Science.gov (United States)

    Bhatia, Kanwal K; Rao, Anil; Price, Anthony N; Wolz, Robin; Hajnal, Jo; Rueckert, Daniel

    2012-01-01

    We present a novel method of hierarchical manifold learning which aims to automatically discover regional variations within images. This involves constructing manifolds in a hierarchy of image patches of increasing granularity, while ensuring consistency between hierarchy levels. We demonstrate its utility in two very different settings: (1) to learn the regional correlations in motion within a sequence of time-resolved images of the thoracic cavity; (2) to find discriminative regions of 3D brain images in the classification of neurodegenerative disease.

  9. Hierarchically Structured Electrospun Fibers

    Directory of Open Access Journals (Sweden)

    Nicole E. Zander

    2013-01-01

    Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.

  10. HDS: Hierarchical Data System

    Science.gov (United States)

    Pearce, Dave; Walter, Anton; Lupton, W. F.; Warren-Smith, Rodney F.; Lawden, Mike; McIlwrath, Brian; Peden, J. C. M.; Jenness, Tim; Draper, Peter W.

    2015-02-01

    The Hierarchical Data System (HDS) is a file-based hierarchical data system designed for the storage of a wide variety of information. It is particularly suited to the storage of large multi-dimensional arrays (with their ancillary data) where efficient access is needed. It is a key component of the Starlink software collection (ascl:1110.012) and is used by the Starlink N-Dimensional Data Format (NDF) library (ascl:1411.023). HDS organizes data into hierarchies, broadly similar to the directory structure of a hierarchical filing system, but contained within a single HDS container file. The structures stored in these files are self-describing and flexible; HDS supports modification and extension of structures previously created, as well as functions such as deletion, copying, and renaming. All information stored in HDS files is portable between the machines on which HDS is implemented. Thus, there are no format conversion problems when moving between machines. HDS can write files in a private binary format (version 4), or be layered on top of HDF5 (version 5).

  11. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.

  12. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  13. Unitary Response Regression Models

    Science.gov (United States)

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  14. Flexible survival regression modelling

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben

    2009-01-01

    Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varyi...

  15. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by m... The treatment of the topic is based on the perspective of applied researchers using quantile regression in their empirical work.

  16. Hierarchical Scaling in Systems of Natural Cities

    CERN Document Server

    Chen, Yanguang

    2016-01-01

    Hierarchies can be modeled by a set of exponential functions, from which we can derive a set of power laws indicative of scaling. These scaling laws are followed by many natural and social phenomena such as cities, earthquakes, and rivers. This paper is devoted to revealing the scaling patterns in systems of natural cities by reconstructing the hierarchy with cascade structure. The cities of America, Britain, France, and Germany are taken as examples to make empirical analyses. The hierarchical scaling relations can be well fitted to the data points within the scaling ranges of the size and area of the natural cities. The size-number and area-number scaling exponents are close to 1, and the allometric scaling exponent is slightly less than 1. The results suggest that natural cities follow hierarchical scaling laws and hierarchical conservation law. Zipf's law proved to be one of the indications of the hierarchical scaling, and the primate law of city-size distribution represents a local pattern and can be mer...
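
    A rank-size (Zipf-type) scaling relation of the kind tested above can be checked with a log-log regression of size on rank. The Python sketch below does this for synthetic heavy-tailed "city sizes"; it is only a schematic check, not the cascade-structure reconstruction used in the record.

      import numpy as np

      rng = np.random.default_rng(12)
      sizes = np.sort(rng.pareto(a=1.0, size=500) + 1.0)[::-1]   # heavy-tailed "city sizes"
      ranks = np.arange(1, sizes.size + 1)

      slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), deg=1)
      print("estimated Zipf exponent:", round(-slope, 2))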

  17. Regression for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Regression analysis is the most commonly used statistical method in the world. Although few would characterize this technique as simple, regression is in fact both simple and elegant. The complexity that many attribute to regression analysis is often a reflection of their lack of familiarity with the language of mathematics. But regression analysis can be understood even without a mastery of sophisticated mathematical concepts. This book provides the foundation and will help demystify regression analysis using examples from economics and with real data to show the applications of the method. T

  18. A hierarchical linear model for tree height prediction.

    Science.gov (United States)

    Vicente J. Monleon

    2003-01-01

    Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
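
    A two-level model of the kind described above can be fitted with a random intercept for stand. The Python sketch below does this with statsmodels' MixedLM on simulated tree data; the variable names, variances and diameter-height relation are invented for illustration.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(8)
      stands, trees_per_stand = 30, 15
      stand_id = np.repeat(np.arange(stands), trees_per_stand)
      stand_effect = rng.normal(0, 2.0, stands)[stand_id]      # stand-level random intercepts
      diameter = rng.uniform(10, 60, stands * trees_per_stand)
      height = 5 + 0.45 * diameter + stand_effect + rng.normal(0, 1.5, diameter.size)
      df = pd.DataFrame({"height": height, "diameter": diameter, "stand": stand_id})

      fit = smf.mixedlm("height ~ diameter", df, groups=df["stand"]).fit()
      print(fit.summary())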

  19. A Hierarchical Framework for Facial Age Estimation

    Directory of Open Access Journals (Sweden)

    Yuyu Liang

    2014-01-01

    Age estimation is a complex issue of multiclassification or regression. To address the problems of the uneven distribution of age databases and the ignorance of ordinal information, this paper presents a hierarchical age estimation system comprising age group and specific age estimation. In our system, two novel classifiers, sequence k-nearest neighbor (SKNN) and ranking-KNN, are introduced to predict age group and value, respectively. Notably, ranking-KNN utilizes the ordinal information between samples in the estimation process rather than regarding samples as separate individuals. Tested on the FG-NET database, our system achieves 4.97 evaluated by MAE (mean absolute error) for age estimation.

  20. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard;

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.

  1. Context updates are hierarchical

    Directory of Open Access Journals (Sweden)

    Anton Karl Ingason

    2016-10-01

    This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.

  2. Autistic epileptiform regression.

    Science.gov (United States)

    Canitano, Roberto; Zappella, Michele

    2006-01-01

    Autistic regression is a well-known condition that occurs in one third of children with pervasive developmental disorders, who, after normal development in the first year of life, undergo a global regression during the second year that encompasses language, social skills and play. In a portion of these subjects, epileptiform abnormalities are present with or without seizures, resembling, in some respects, other epileptiform regressions of language and behaviour such as Landau-Kleffner syndrome. In these cases, for a more accurate definition of the clinical entity, the term autistic epileptiform regression has been suggested. As in other epileptic syndromes with regression, the relationships between EEG abnormalities, language and behaviour in autism are still unclear. We describe two cases of autistic epileptiform regression selected from a larger group of children with autistic spectrum disorders, with the aim of discussing the clinical features of the condition, the therapeutic approach and the outcome.

  3. Scaled Sparse Linear Regression

    CERN Document Server

    Sun, Tingni

    2011-01-01

    Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual squares and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs nearly nothing beyond the computation of a path of the sparse regression estimator for penalty levels above a threshold. For the scaled Lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the method yields simultaneously an estimator for the noise level and an estimated coefficient vector in the Lasso path satisfying certain oracle inequalities for the estimation of the noise level, prediction, and the estimation of regression coefficients. These oracle inequalities provide sufficient conditions for the consistency and asymptotic...
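
    The iterative scheme described above alternates between estimating the noise level from the residuals and refitting a lasso whose penalty is proportional to that estimate. The Python sketch below imitates this loop with scikit-learn's Lasso; the penalty constant lambda0, the convergence tolerance and the data are arbitrary, and this is not the authors' reference implementation.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(9)
      n, p, sigma_true = 200, 50, 1.0
      X = rng.normal(size=(n, p))
      beta = np.zeros(p)
      beta[:5] = 2.0                             # a sparse true coefficient vector
      y = X @ beta + sigma_true * rng.normal(size=n)

      lambda0 = np.sqrt(2 * np.log(p) / n)       # a common universal penalty level
      sigma = np.std(y)                          # crude initial noise estimate
      for _ in range(20):
          model = Lasso(alpha=lambda0 * sigma, max_iter=10000).fit(X, y)
          new_sigma = np.sqrt(np.mean((y - model.predict(X)) ** 2))
          if abs(new_sigma - sigma) < 1e-6:
              break
          sigma = new_sigma

      print("estimated noise level:", round(sigma, 3))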

  4. High-dimensional regression with unknown variance

    CERN Document Server

    Giraud, Christophe; Verzelen, Nicolas

    2011-01-01

    We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is put on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for some more general models, including multivariate regression and nonparametric regression.

  5. Rolling Regressions with Stata

    OpenAIRE

    Kit Baum

    2004-01-01

    This talk will describe some work underway to add a "rolling regression" capability to Stata's suite of time series features. Although commands such as "statsby" permit analysis of non-overlapping subsamples in the time domain, they are not suited to the analysis of overlapping (e.g. "moving window") samples. Both moving-window and widening-window techniques are often used to judge the stability of time series regression relationships. We will present an implementation of a rolling regression...
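
    As a Python analogue of the moving-window idea (not the Stata command under development), the sketch below estimates a rolling OLS with statsmodels on a simulated series whose slope shifts mid-sample; the window length of 60 observations is an arbitrary choice.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.regression.rolling import RollingOLS

      rng = np.random.default_rng(10)
      T = 300
      x = rng.normal(size=T)
      slope = np.where(np.arange(T) < 150, 1.0, 2.0)   # relationship shifts mid-sample
      y = slope * x + rng.normal(scale=0.5, size=T)

      exog = sm.add_constant(pd.Series(x, name="x"))
      res = RollingOLS(pd.Series(y, name="y"), exog, window=60).fit()
      print(res.params.dropna().iloc[[0, -1]])         # first and last window estimates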

  6. Unbiased Quasi-regression

    Institute of Scientific and Technical Information of China (English)

    Guijun YANG; Lu LIN; Runchu ZHANG

    2007-01-01

    Quasi-regression, motivated by problems arising in computer experiments, focuses mainly on speeding up evaluation. However, its theoretical properties have not been explored systematically. This paper shows that quasi-regression is unbiased, strongly convergent and asymptotically normal for parameter estimation, but is biased for the fitting of the curve. Furthermore, a new method called unbiased quasi-regression is proposed. In addition to retaining the above asymptotic behaviors for parameter estimation, unbiased quasi-regression is unbiased for the fitting of the curve.

  7. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, written in the Xlisp-Stat language and called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is available.

  8. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2005-01-01

    Master linear regression techniques with a new edition of a classic text. Reviews of the Second Edition: "I found it enjoyable reading and so full of interesting material that even the well-informed reader will probably find something new . . . a necessity for all of those who do linear regression." -Technometrics, February 1987. "Overall, I feel that the book is a valuable addition to the now considerable list of texts on applied linear regression. It should be a strong contender as the leading text for a first serious course in regression analysis." -American Scientist, May-June 1987

  9. A Simulation Investigation of Principal Component Regression.

    Science.gov (United States)

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
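
    Principal component regression addresses the multicollinearity problem above by regressing on a few leading principal components of the predictors. The Python sketch below builds such a pipeline with scikit-learn on simulated collinear data; the number of retained components is an arbitrary choice.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(11)
      n = 150
      x1 = rng.normal(size=n)
      x2 = x1 + 0.05 * rng.normal(size=n)        # nearly collinear with x1
      x3 = rng.normal(size=n)
      X = np.column_stack([x1, x2, x3])
      y = 1.0 * x1 + 0.5 * x3 + rng.normal(scale=0.3, size=n)

      pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
      pcr.fit(X, y)
      print("R^2 with 2 components:", round(pcr.score(X, y), 3))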

  10. Design of Hierarchical Structures for Synchronized Deformations

    Science.gov (United States)

    Seifi, Hamed; Javan, Anooshe Rezaee; Ghaedizadeh, Arash; Shen, Jianhu; Xu, Shanqing; Xie, Yi Min

    2017-01-01

    In this paper we propose a general method for creating a new type of hierarchical structures at any level in both 2D and 3D. A simple rule based on a rotate-and-mirror procedure is introduced to achieve multi-level hierarchies. These new hierarchical structures have remarkably few degrees of freedom compared to existing designs by other methods. More importantly, these structures exhibit synchronized motions during opening or closure, resulting in uniform and easily-controllable deformations. Furthermore, a simple analytical formula is found which can be used to avoid collision of units of the structure during the closing process. The novel design concept is verified by mathematical analyses, computational simulations and physical experiments.

  11. Morse–Smale Regression

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Samuel [Univ. of Utah, Salt Lake City, UT (United States); Rubel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bremer, Peer -Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Whitaker, Ross T. [Univ. of Utah, Salt Lake City, UT (United States)

    2012-01-19

    This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.

  12. Regression to Causality

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Humans are fundamentally primed for making causal attributions based on correlations. This implies that researchers must be careful to present their results in a manner that inhibits unwarranted causal attribution. In this paper, we present the results of an experiment that suggests regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results ... of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression...

  13. Neural Mechanisms of Hierarchical Planning in a Virtual Subway Network.

    Science.gov (United States)

    Balaguer, Jan; Spiers, Hugo; Hassabis, Demis; Summerfield, Christopher

    2016-05-18

    Planning allows actions to be structured in pursuit of a future goal. However, in natural environments, planning over multiple possible future states incurs prohibitive computational costs. To represent plans efficiently, states can be clustered hierarchically into "contexts". For example, representing a journey through a subway network as a succession of individual states (stations) is more costly than encoding a sequence of contexts (lines) and context switches (line changes). Here, using functional brain imaging, we asked humans to perform a planning task in a virtual subway network. Behavioral analyses revealed that humans executed a hierarchically organized plan. Brain activity in the dorsomedial prefrontal cortex and premotor cortex scaled with the cost of hierarchical plan representation and unique neural signals in these regions signaled contexts and context switches. These results suggest that humans represent hierarchical plans using a network of caudal prefrontal structures. VIDEO ABSTRACT.

  14. Discrepancy-Tolerant Hierarchical Poisson Event-Rate Analyses.

    Science.gov (United States)

    1985-07-01


  15. Hierarchical partial order ranking.

    Science.gov (United States)

    Carlsen, Lars

    2008-09-01

    Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters are to be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR) to remedy this problem. In HPOR, the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritization of polluted sites is given.

  16. Trees and Hierarchical Structures

    CERN Document Server

    Haeseler, Arndt

    1990-01-01

    The "raison d'etre" of hierarchical dustering theory stems from one basic phe­ nomenon: This is the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to dassify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping dusters such that any two objects in ~ne duster appear to be more related to each other than they are to objects outside this duster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable das­ sification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.

  17. Hierarchical Affinity Propagation

    CERN Document Server

    Givoni, Inmar; Frey, Brendan J

    2012-01-01

    Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...

  18. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  19. How hierarchical is language use?

    Science.gov (United States)

    Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.

    2012-01-01

    It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157

  20. How hierarchical is language use?

    Science.gov (United States)

    Frank, Stefan L; Bod, Rens; Christiansen, Morten H

    2012-11-22

    It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science.

  1. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
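    The boosting estimator itself is not reproduced here; the sketch below only illustrates the classical maximum-likelihood approach to beta regression that it is contrasted with, on simulated data, with a logit mean link and constant precision assumed for the example.

```python
# Classical beta regression fitted by maximum likelihood (logit link for the
# mean, constant precision phi); the boosted variant is not shown.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one covariate
true_coef, true_phi = np.array([0.5, -1.0]), 20.0
mu = expit(X @ true_coef)
y = rng.beta(mu * true_phi, (1 - mu) * true_phi)          # percentage-type response in (0, 1)

def neg_loglik(params):
    coef, phi = params[:-1], np.exp(params[-1])           # phi kept positive via log scale
    m = np.clip(expit(X @ coef), 1e-6, 1 - 1e-6)
    return -np.sum(beta_dist.logpdf(y, m * phi, (1 - m) * phi))

fit = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
print("estimated mean coefficients:", np.round(fit.x[:-1], 3))
print("estimated precision phi:", round(float(np.exp(fit.x[-1])), 2))
```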

  2. Associative Hierarchical Random Fields.

    Science.gov (United States)

    Ladický, L'ubor; Russell, Chris; Kohli, Pushmeet; Torr, Philip H S

    2014-06-01

    This paper makes two contributions: the first is the proposal of a new model, the associative hierarchical random field (AHRF), and a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem for variables that might correspond to either pixels or segments such as super-pixels. It is well known that the generation of super-pixel segmentations is not unique. This has motivated many researchers to use multiple super-pixel segmentations for problems such as semantic segmentation or single view reconstruction. These super-pixels have not yet been combined in a principled manner; this is a difficult problem, as they may overlap, or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed both as a detailed segmentation at the pixel level, or at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results.

  3. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  4. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  5. Equações de regressão para estimar valores energéticos do grão de trigo e seus subprodutos para frangos de corte, a partir de análises químicas Regression equations to evaluate the energy values of wheat grain and its by-products for broiler chickens from chemical analyses

    Directory of Open Access Journals (Sweden)

    F.M.O. Borges

    2003-12-01

    One experiment was run with broiler chickens to obtain prediction equations for metabolizable energy (ME) based on feedstuff chemical analyses, and determined ME of wheat grain and its by-products, using four different methodologies. Seven wheat grain by-products were used in five treatments: wheat grain, wheat germ, white wheat flour, dark wheat flour, wheat bran for human use, wheat bran for animal use and rough wheat bran. Based on chemical analyses of crude fiber (CF), ether extract (EE), crude protein (CP), ash (AS) and starch (ST) of the feeds and the determined values of apparent energy (MEA), true energy (MEV), apparent corrected energy (MEAn) and true energy corrected by nitrogen balance (MEVn) in five treatments, prediction equations were obtained using the stepwise procedure. CF showed the best relationship with metabolizable energy values; however, this variable alone was not enough for a good estimate of the energy values (R² below 0.80). When EE and CP were included in the equations, R² increased to 0.90 or higher in most estimates. When the equations were calculated with all treatments, the equation for MEA was less precise and R² decreased. When ME data of the traditional or force-feeding methods were used separately, the precision of the equations increased (R² higher than 0.85). For MEV and MEVn values, the best multiple linear equations included CF, EE and CP (R² > 0.90), independently of using all experimental data or separating by methodology. The estimates of MEVn values showed high precision and the linear coefficients (a) of the equations were similar for all treatments or methodologies, which explains the small influence of the different methodologies on this parameter. NDF was not a better predictor of ME than CF.

  6. Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus

    CERN Document Server

    Jelonek, M

    2006-01-01

    The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences it is used to analyze many problems such as educational, organizational or market dilemmas. This paper introduces the logic of modeling hierarchical linear equations and estimation based on MPlus software. I present my own model to illustrate the impact of different factors on school acceptance level.

  7. Transductive Ordinal Regression

    CERN Document Server

    Seah, Chun-Wei; Ong, Yew-Soon

    2011-01-01

    Ordinal regression is commonly formulated as a multi-class problem with ordinal constraints. The challenge of designing accurate classifiers for ordinal regression generally increases with the number of classes involved, due to the large number of labeled patterns that are needed. Ordinal class labels, however, are often costly to calibrate or difficult to obtain. Unlabeled patterns, on the other hand, often exist in much greater abundance and are freely available. To benefit from the abundance of unlabeled patterns, we present a novel transductive learning paradigm for ordinal regression in this paper, namely Transductive Ordinal Regression (TOR). The key challenge of the present study lies in the precise estimation of both the ordinal class labels of the unlabeled data and the decision functions of the ordinal classes, simultaneously. The core elements of the proposed TOR include an objective function that caters to several commonly used loss functions cast in the transductive setting...

  8. Nonparametric Predictive Regression

    OpenAIRE

    Ioannis Kasparis; Elena Andreou; Phillips, Peter C.B.

    2012-01-01

    A unifying framework for inference is developed in predictive regressions where the predictor has unknown integration properties and may be stationary or nonstationary. Two easily implemented nonparametric F-tests are proposed. The test statistics are related to those of Kasparis and Phillips (2012) and are obtained by kernel regression. The limit distribution of these predictive tests holds for a wide range of predictors including stationary as well as non-stationary fractional and near unit...
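    The proposed F-tests build on kernel regression fits; a minimal Nadaraya-Watson kernel regression estimator (Gaussian kernel, with a bandwidth chosen arbitrarily for the sketch) is shown below, without the tests themselves.

```python
# Minimal Nadaraya-Watson kernel regression (Gaussian kernel, fixed bandwidth);
# the predictive F-tests described above are not reproduced.
import numpy as np

def nw_regression(x_grid, x, y, bandwidth):
    """Estimate E[y | x] on x_grid by kernel-weighted averaging."""
    # weights[i, j] = K((x_grid[i] - x[j]) / h) with a Gaussian kernel K
    weights = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (weights @ y) / weights.sum(axis=1)

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=400)
y = np.sin(x) + 0.3 * rng.normal(size=400)      # unknown regression function + noise
grid = np.linspace(-3, 3, 7)
print(np.round(nw_regression(grid, x, y, bandwidth=0.4), 2))
```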

  9. Multilingual speaker age recognition: regression analyses on the Lwazi corpus

    CSIR Research Space (South Africa)

    Feld, M

    2009-12-01

    Full Text Available towards improved understanding of multilingual speech processing, the current contribution investigates how an important para-linguistic aspect of speech, namely speaker age, depends on the language spoken. In particular, the authors study how certain...

  10. Hierarchical and dynamic seascapes: A quantitative framework for scaling pelagic biogeochemistry and ecology

    Science.gov (United States)

    Kavanaugh, Maria T.; Hales, Burke; Saraceno, Martin; Spitz, Yvette H.; White, Angelicque E.; Letelier, Ricardo M.

    2014-01-01

    Comparative analyses of oceanic ecosystems require an objective framework to define coherent study regions and scale the patterns and processes observed within them. We applied the hierarchical patch mosaic paradigm of landscape ecology to the study of the seasonal variability of the North Pacific to facilitate comparative analysis between pelagic ecosystems and provide spatiotemporal context for Eulerian time-series studies. Using 13-year climatologies of sea surface temperature (SST), photosynthetically active radiation (PAR), and chlorophyll a (chl-a), we classified seascapes in environmental space that were monthly-resolved, dynamic and nested in space and time. To test the assumption that seascapes represent coherent regions with unique biogeochemical function and to determine the hierarchical scale that best characterized variance in biogeochemical parameters, independent data sets were analyzed across seascapes using analysis of variance (ANOVA), nested-ANOVA and multiple linear regression (MLR) analyses. We also compared the classification efficiency (as defined by the ANOVA F-statistic) of resultant dynamic seascapes to a commonly-used static classification system. Variance of nutrients and net primary productivity (NPP) were well characterized in the first two levels of hierarchy of eight seascapes nested within three superseascapes (R2 = 0.5-0.7). Dynamic boundaries at this level resulted in a nearly 2-fold increase in classification efficiency over static boundaries. MLR analyses revealed differential forcing on pCO2 across seascapes and hierarchical levels and a 33% reduction in mean model error with increased partitioning (from 18.5 μatm to 12.0 μatm pCO2). Importantly, the empirical influence of seasonality was minor across seascapes at all hierarchical levels, suggesting that seascape partitioning minimizes the effect of non-hydrographic variables. As part of the emerging field of pelagic seascape ecology, this effort provides an improved means of

  11. Mechanics of hierarchical 3-D nanofoams

    Science.gov (United States)

    Chen, Q.; Pugno, N. M.

    2012-01-01

    In this paper, we study the mechanics of new three-dimensional hierarchical open-cell foams and, in particular, their Young's modulus and plastic strength. We incorporate the effects of the surface elasticity and surface residual stress in the linear elastic and plastic analyses. The results show that, as the cross-sectional dimension decreases, the influences of the surface effect on Young's modulus and plastic strength increase, and the surface effect makes the solid stiffer and stronger; similarly, as level n increases, these quantities approach those of the classical theory as lower bounds.

  12. Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus

    OpenAIRE

    Jelonek, Magdalena

    2006-01-01

    The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences it is used to analyze many problems such as educational, organizational or market dilemmas. This paper introduces the logic of m...

  13. Hierarchical surfaces for enhanced self-cleaning applications

    Science.gov (United States)

    Fernández, Ariadna; Francone, Achille; Thamdrup, Lasse H.; Johansson, Alicia; Bilenberg, Brian; Nielsen, Theodor; Guttmann, Markus; Sotomayor Torres, Clivia M.; Kehagias, Nikolaos

    2017-04-01

    In this study we present a flexible and adaptable fabrication method to create complex hierarchical structures over inherently hydrophobic resist materials. We have tested these surfaces for their superhydrophobic behaviour and successfully verified their self-cleaning properties. The approach followed allows us to design and produce superhydrophobic surfaces in a reproducible manner. We have analysed different combinations of hierarchical micro-nanostructures for their application to self-cleaning surfaces. A static contact angle value of 170° with a hysteresis of 4° was achieved without the need of any additional chemical treatment on the fabricated hierarchical structures. Dynamic effects were analysed on these surfaces, obtaining a remarkable self-cleaning effect as well as a good robustness over impacting droplets.

  14. Hierarchical fringe tracking

    CERN Document Server

    Petrov, Romain G; Boskri, Abdelkarim; Folcher, Jean-Pierre; Lagarde, Stephane; Bresson, Yves; Benkhaldoum, Zouhair; Lazrek, Mohamed; Rakshit, Suvendu

    2014-01-01

    The limiting magnitude is a key issue for optical interferometry. Pairwise fringe trackers based on the integrated optics concepts used for example in GRAVITY seem limited to about K=10.5 with the 8m Unit Telescopes of the VLTI, and there is a general "common sense" statement that the efficiency of fringe tracking, and hence the sensitivity of optical interferometry, must decrease as the number of apertures increases, at least in the near infrared where we are still limited by detector readout noise. Here we present a Hierarchical Fringe Tracking (HFT) concept with sensitivity at least equal to that of a two-aperture fringe tracker. HFT is based on the combination of the apertures in pairs, then in pairs of pairs, then in pairs of groups. The key HFT module is a device that behaves like a spatial filter for two telescopes (2TSF) and transmits all or most of the flux of a cophased pair in a single mode beam. We give an example of such an achromatic 2TSF, based on very broadband dispersed fringes analyzed by g...

  15. Onboard hierarchical network

    Science.gov (United States)

    Tunesi, Luca; Armbruster, Philippe

    2004-02-01

    The objective of this paper is to demonstrate a suitable hierarchical networking solution to improve capabilities and performances of space systems, with significant recurrent costs saving and more efficient design & manufacturing flows. Classically, a satellite can be split in two functional sub-systems: the platform and the payload complement. The platform is in charge of providing power, attitude & orbit control and up/down-link services, whereas the payload represents the scientific and/or operational instruments/transponders and embodies the objectives of the mission. One major possibility to improve the performance of payloads, by limiting the data return to pertinent information, is to process data on board thanks to a proper implementation of the payload data system. In this way, it is possible to share non-recurring development costs by exploiting a system that can be adopted by the majority of space missions. It is believed that the Modular and Scalable Payload Data System, under development by ESA, provides a suitable solution to fulfil a large range of future mission requirements. The backbone of the system is the standardised high data rate SpaceWire network http://www.ecss.nl/. As complement, a lower speed command and control bus connecting peripherals is required. For instance, at instrument level, there is a need for a "local" low complexity bus, which gives the possibility to command and control sensors and actuators. Moreover, most of the connections at sub-system level are related to discrete signals management or simple telemetry acquisitions, which can easily and efficiently be handled by a local bus. An on-board hierarchical network can therefore be defined by interconnecting high-speed links and local buses. Additionally, it is worth stressing another important aspect of the design process: Agencies and ESA in particular are frequently confronted with a big consortium of geographically spread companies located in different countries, each one

  16. Hierarchical Reverberation Mapping

    CERN Document Server

    Brewer, Brendon J

    2013-01-01

    Reverberation mapping (RM) is an important technique in studies of active galactic nuclei (AGN). The key idea of RM is to measure the time lag $\tau$ between variations in the continuum emission from the accretion disc and subsequent response of the broad line region (BLR). The measurement of $\tau$ is typically used to estimate the physical size of the BLR and is combined with other measurements to estimate the black hole mass $M_{\rm BH}$. A major difficulty with RM campaigns is the large amount of data needed to measure $\tau$. Recently, Fine et al (2012) introduced a new approach to RM where the BLR light curve is sparsely sampled, but this is counteracted by observing a large sample of AGN, rather than a single system. The results are combined to infer properties of the sample of AGN. In this letter we implement this method using a hierarchical Bayesian model and contrast this with the results from the previous stacked cross-correlation technique. We find that our inferences are more precise and allow fo...

  17. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly ... of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed.

  18. [Understanding logistic regression].

    Science.gov (United States)

    El Sanharawi, M; Naudet, F

    2013-10-01

    Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explicative variables). The choice of explicative variables that should be included in the logistic regression model is based on prior knowledge of the disease physiopathology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
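    A minimal sketch of fitting a logistic regression and reading odds ratios from the exponentiated coefficients, using statsmodels on simulated data; the variable names and effect sizes are invented for the illustration and are not from the article.

```python
# Logistic regression on simulated data: the exponentiated coefficients are the
# odds ratios discussed above. Variable names and effects are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
age = rng.normal(50, 10, size=n)
smoker = rng.integers(0, 2, size=n)
lin_pred = -6.0 + 0.08 * age + 0.9 * smoker            # assumed "true" effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))   # dichotomous outcome

X = sm.add_constant(np.column_stack([age, smoker]))
fit = sm.Logit(y, X).fit(disp=False)
odds_ratios = np.exp(fit.params)                       # OR per year of age, OR for smoking
print("odds ratios (const, age, smoker):", np.round(odds_ratios, 3))
```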

  19. Constrained Sparse Galerkin Regression

    CERN Document Server

    Loiseau, Jean-Christophe

    2016-01-01

    In this work, we demonstrate the use of sparse regression techniques from machine learning to identify nonlinear low-order models of a fluid system purely from measurement data. In particular, we extend the sparse identification of nonlinear dynamics (SINDy) algorithm to enforce physical constraints in the regression, leading to energy conservation. The resulting models are closely related to Galerkin projection models, but the present method does not require the use of a full-order or high-fidelity Navier-Stokes solver to project onto basis modes. Instead, the most parsimonious nonlinear model is determined that is consistent with observed measurement data and satisfies necessary constraints. The constrained Galerkin regression algorithm is implemented on the fluid flow past a circular cylinder, demonstrating the ability to accurately construct models from data.

  20. Hierarchical materials: Background and perspectives

    DEFF Research Database (Denmark)

    2016-01-01

    Hierarchical design draws inspiration from analysis of biological materials and has opened new possibilities for enhancing performance and enabling new functionalities and extraordinary properties. With the development of nanotechnology, the necessary technological requirements for the manufactur...

  1. Hierarchical clustering for graph visualization

    CERN Document Server

    Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi

    2012-01-01

    This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.

  2. Direct hierarchical assembly of nanoparticles

    Science.gov (United States)

    Xu, Ting; Zhao, Yue; Thorkelsson, Kari

    2014-07-22

    The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.

  3. Practical Session: Logistic Regression

    Science.gov (United States)

    Clausel, M.; Grégoire, G.

    2014-12-01

    An exercise is proposed to illustrate the logistic regression. One investigates the different risk factors in the occurrence of coronary heart disease. It has been proposed in Chapter 5 of the book by D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science+Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.

  4. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.
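    The minimax estimator itself is not publicly packaged; the sketch below fits the usual quantile regression estimator, to which it is asymptotically equivalent, with statsmodels on simulated heavy-tailed data.

```python
# Standard quantile regression (statsmodels QuantReg) at three quantiles on
# simulated heavy-tailed data; the minimax estimator itself is not implemented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.5 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors
X = sm.add_constant(x)

for tau in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(y, X).fit(q=tau)
    print(f"tau={tau}: intercept={fit.params[0]:.3f}, slope={fit.params[1]:.3f}")
```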

  5. Functional annotation of hierarchical modularity.

    Directory of Open Access Journals (Sweden)

    Kanchana Padmanabhan

    Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our

  6. Hierarchical architecture of active knits

    Science.gov (United States)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2013-12-01

    Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions is displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm.

  7. Advanced hierarchical distance sampling

    Science.gov (United States)

    Royle, Andy

    2016-01-01

    In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.

  8. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  9. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
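    The original FORTRAN IV program is not reproduced here; a rough Python analogue of a stepwise procedure that retains only statistically significant coefficients is sketched below, with the significance level and simulated data chosen arbitrarily.

```python
# Forward stepwise selection by p-value with statsmodels OLS; only terms that
# remain statistically significant are added. A rough analogue, not the
# original FORTRAN IV program.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # only columns 0 and 3 matter

selected, remaining, alpha = [], list(range(p)), 0.05     # alpha = chosen significance level
while remaining:
    # p-value of each candidate variable when added to the current model
    pvals = {j: sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit().pvalues[-1]
             for j in remaining}
    best = min(pvals, key=pvals.get)
    if pvals[best] >= alpha:
        break
    selected.append(best)
    remaining.remove(best)

print("selected columns:", selected)
```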

  10. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  11. Software Regression Verification

    Science.gov (United States)

    2013-12-11

    ...of recursive procedures. Acta Informatica, 45(6):403–439, 2008. [GS11] Benny Godlin and Ofer Strichman. Regression verification. Technical Report... functions. Therefore, we need to redefine m-term. – Mutual termination. If either function f or function f′ (or both) is non-deterministic, then their...

  12. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested.

  13. When to Use Hierarchical Linear Modeling

    Directory of Open Access Journals (Sweden)

    Veronika Huta

    2014-04-01

    Full Text Available Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis: Does HLM apply to one’s data and research question? And if it does apply, how does one choose between HLM and other methods sometimes used in these circumstances, including multiple regression, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis? The purpose of this tutorial is to briefly introduce HLM and then to review some of the considerations that are helpful in answering these questions, including the nature of the data, the model to be tested, and the information desired on the output. Some examples of how the same analysis could be performed in HLM, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis are also provided.

  14. Low rank Multivariate regression

    CERN Document Server

    Giraud, Christophe

    2010-01-01

    We consider in this paper the multivariate regression problem, when the target regression matrix $A$ is close to a low rank matrix. Our primary interest is in the practical case where the variance of the noise is unknown. Our main contribution is to propose in this setting a criterion to select among a family of low rank estimators and prove a non-asymptotic oracle inequality for the resulting estimator. We also investigate the easier case where the variance of the noise is known and outline that the penalties appearing in our criteria are minimal (in some sense). These penalties involve the expected value of the Ky-Fan quasi-norm of some random matrices. These quantities can be evaluated easily in practice and upper-bounds can be derived from recent results in random matrix theory.

  15. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; and more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  16. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.
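    A small regression tree in the spirit of CART, fitted with scikit-learn on simulated data; this is an illustrative sketch, not the authors' original software, and the depth and leaf-size settings are arbitrary.

```python
# A small regression tree in the spirit of CART, grown with scikit-learn on
# simulated data (illustrative sketch only).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(6)
X = rng.uniform(0, 10, size=(300, 2))
y = np.where(X[:, 0] > 5, 3.0, 0.0) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, y)
print(export_text(tree, feature_names=["x0", "x1"]))   # human-readable splitting rules
```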

  17. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    There are, however, decreasing returns to aid, and the estimated effectiveness of aid is highly sensitive to the choice of estimator and the set of control variables. When investment and human capital are controlled for, no positive effect of aid is found. Yet, aid continues to impact on growth via investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regression is used for policy purposes.

  18. Robust Nonstationary Regression

    OpenAIRE

    1993-01-01

    This paper provides a robust statistical approach to nonstationary time series regression and inference. Fully modified extensions of traditional robust statistical procedures are developed which allow for endogeneities in the nonstationary regressors and serial dependence in the shocks that drive the regressors and the errors that appear in the equation being estimated. The suggested estimators involve semiparametric corrections to accommodate these possibilities and they belong to the same ...

  19. Hierarchical topic modeling with nested hierarchical Dirichlet process

    Institute of Scientific and Technical Information of China (English)

    Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN

    2009-01-01

    This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed to be fixed, while the number of topics on each level is unknown a priori and is to be inferred from data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and topic distribution of each document. Our theoretical analysis and experiment results show that this model can produce a more compact hierarchical topic structure and captures more fine-grained topic relationships compared to the hierarchical latent Dirichlet allocation model.

  20. Regression Verification Using Impact Summaries

    Science.gov (United States)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound. In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction and decomposition based approaches [10, 12], as a complementary reduction technique. An

  1. TWO REGRESSION CREDIBILITY MODELS

    Directory of Open Access Journals (Sweden)

    Constanţa-Nicoleta BODEA

    2010-03-01

    Full Text Available In this communication we will discuss two regression credibility models from Non-Life Insurance Mathematics that can be solved by means of matrix theory. In the first regression credibility model, starting from a well-known representation formula of the inverse for a special class of matrices, a risk premium will be calculated for a contract with risk parameter θ. In the next regression credibility model, we will obtain a credibility solution in the form of a linear combination of the individual estimate (based on the data of a particular state) and the collective estimate (based on aggregate USA data). To illustrate the solution with the properties mentioned above, we shall need the well-known representation theorem for a special class of matrices, the properties of the trace for a square matrix, the scalar product of two vectors, the norm with respect to a positive definite matrix given in advance and the complicated mathematical properties of conditional expectations and of conditional covariances.

  2. REGRESSION ANALYSIS OF PRODUCTIVITY USING MIXED EFFECT MODEL

    Directory of Open Access Journals (Sweden)

    Siana Halim

    2007-01-01

    Full Text Available Production plants of a company are located in several areas that spread across Middle and East Java. As the production process employs mostly manpower, we suspected that each location has different characteristics affecting the productivity. Thus, the production data may have a spatial and hierarchical structure. To fit a linear regression using ordinary techniques, we are required to make some assumptions about the nature of the residuals, i.e., that they are independent, identically and normally distributed. However, these assumptions are rarely fulfilled, especially for data that have a spatial and hierarchical structure. We worked out the problem using a mixed effect model. This paper discusses the model construction of productivity and several characteristics in the production line by taking location as a random effect. A simple model with high utility that satisfies the necessary regression assumptions was built using the free statistical software R, version 2.6.1.
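    The record's own analysis was carried out in R; the sketch below fits a comparable random-intercept model (location as a random effect) with statsmodels MixedLM on simulated productivity data, with all variable names invented for the example.

```python
# Random-intercept (mixed effect) model with location as the random effect,
# fitted with statsmodels MixedLM on simulated data; the record's own analysis
# used R, and all variable names here are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_loc, n_per = 8, 50
location = np.repeat(np.arange(n_loc), n_per)
loc_effect = rng.normal(scale=2.0, size=n_loc)[location]   # location-level random intercepts
hours = rng.uniform(4, 10, size=n_loc * n_per)
productivity = 20 + 1.5 * hours + loc_effect + rng.normal(size=n_loc * n_per)

df = pd.DataFrame({"productivity": productivity, "hours": hours, "location": location})
fit = smf.mixedlm("productivity ~ hours", data=df, groups=df["location"]).fit()
print(fit.summary())
```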

  3. Nonresident Undergraduates' Performance in English Writing Classes-Hierarchical Linear Modeling Analysis

    National Research Council Canada - National Science Library

    Allison A Vaughn; Matthew Bergman; Barry Fass-Holmes

    2015-01-01

    ...) in the fall term of the five most recent academic years. Hierarchical linear modeling analyses showed that the predictors with the largest effect sizes were English writing programs and class level...

  4. LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data

    National Research Council Canada - National Science Library

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    ...). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data...

  5. LIMO EEG: A Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data

    National Research Council Canada - National Science Library

    Pernet, Cyril R; Chauveau, Nicolas; Gaspar, Carl; Rousselet, Guillaume A

    2011-01-01

    ...). LIMO EEG is a Matlab toolbox (EEGLAB compatible) to analyse evoked responses over all space and time dimensions, while accounting for single trial variability using a simple hierarchical linear modelling of the data...

  6. Relationship between Multiple Regression and Selected Multivariable Methods.

    Science.gov (United States)

    Schumacker, Randall E.

    The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…

  7. Ways of looking ahead: hierarchical planning in language production.

    Science.gov (United States)

    Lee, Eun-Kyung; Brown-Schmidt, Sarah; Watson, Duane G

    2013-12-01

    It is generally assumed that language production proceeds incrementally, with chunks of linguistic structure planned ahead of speech. Extensive research has examined the scope of language production and suggests that the size of planned chunks varies across contexts (Ferreira & Swets, 2002; Wagner & Jescheniak, 2010). By contrast, relatively little is known about the structure of advance planning, specifically whether planning proceeds incrementally according to the surface structure of the utterance, or whether speakers plan according to the hierarchical relationships between utterance elements. In two experiments, we examine the structure and scope of lexical planning in language production using a picture description task. Analyses of speech onset times and word durations show that speakers engage in hierarchical planning such that structurally dependent lexical items are planned together and that hierarchical planning occurs for both direct and indirect dependencies. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Deliberate change without hierarchical influence?

    DEFF Research Database (Denmark)

    Nørskov, Sladjana; Kesting, Peter; Ulhøi, John Parm

    2017-01-01

    Purpose: This paper aims to present that deliberate change is strongly associated with formal structures and top-down influence. Hierarchical configurations have been used to structure processes, overcome resistance and get things done. But is deliberate change also possible without formal ... reveals that deliberate change is indeed achievable in a non-hierarchical collaborative OSS community context. However, it presupposes the presence and active involvement of informal change agents. The paper identifies and specifies four key drivers for change agents’ influence. Originality/value: The findings contribute to organisational analysis by providing a deeper understanding of the importance of leadership in making deliberate change possible in non-hierarchical settings. It points to the importance of “change-by-conviction”, essentially based on voluntary behaviour. This can open the door...

  9. Static Correctness of Hierarchical Procedures

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff

    1990-01-01

    A system of hierarchical, fully recursive types in a truly imperative language allows program fragments written for small types to be reused for all larger types. To exploit this property to enable type-safe hierarchical procedures, it is necessary to impose a static requirement on procedure calls....... We introduce an example language and prove the existence of a sound requirement which preserves static correctness while allowing hierarchical procedures. This requirement is further shown to be optimal, in the sense that it imposes as few restrictions as possible. This establishes the theoretical...... basis for a general type hierarchy with static type checking, which enables first-order polymorphism combined with multiple inheritance and specialization in a language with assignments. We extend the results to include opaque types. An opaque version of a type is different from the original but has...

  10. Structural integrity of hierarchical composites

    Directory of Open Access Journals (Sweden)

    Marco Paggi

    2012-01-01

    Full Text Available Interface mechanical problems are of paramount importance in engineering and materials science. Traditionally, due to the complexity of modelling their mechanical behaviour, interfaces are often treated as defects and their features are not explored. In this study, a different approach is illustrated, where the interfaces play an active role in the design of innovative hierarchical composites and are fundamental for their structural integrity. Numerical examples regarding cutting tools made of hierarchical cellular polycrystalline materials are proposed, showing that tailoring of interface properties at the different scales is the way to achieve superior mechanical responses that cannot be obtained using standard materials

  11. Novel algorithm for constructing support vector machine regression ensemble

    Institute of Scientific and Technical Information of China (English)

    Li Bo; Li Xinjun; Zhao Zhiyan

    2006-01-01

    A novel algorithm for constructing a support vector machine regression ensemble is proposed. For regression prediction, a support vector machine regression (SVMR) ensemble is constructed by repeatedly resampling from the given training data sets and aggregating several independent SVMRs, each of which is trained on a replicated training set. After training, the independently trained SVMRs need to be aggregated in an appropriate combination manner. The linear weighting generally used, like the expert weighting score in boosting regression, has no optimization capacity. Three combination techniques are therefore proposed: the simple arithmetic mean, linear least square error weighting, and nonlinear hierarchical combining that uses another upper-layer SVMR to combine several lower-layer SVMRs. Finally, simulation experiments demonstrate the accuracy and validity of the presented algorithm.
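    A minimal bagged SVR ensemble combined by the simple arithmetic mean, the first of the three combination schemes mentioned above; the least-squares and hierarchical SVR combiners are not shown, and the kernel settings and data are arbitrary choices for the sketch.

```python
# Bagged SVR ensemble combined by the simple arithmetic mean; the least-squares
# and hierarchical (upper-layer SVR) combiners are not shown.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(8)
n = 400
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.normal(size=n)

members = []
for _ in range(10):
    idx = rng.integers(0, n, size=n)                  # bootstrap resample of the training set
    members.append(SVR(C=10.0, gamma=1.0).fit(X[idx], y[idx]))

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
ensemble_pred = np.mean([m.predict(X_test) for m in members], axis=0)
print("ensemble predictions:", np.round(ensemble_pred, 3))
```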

  12. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  13. PERFORMANCE OF SELECTED AGGLOMERATIVE HIERARCHICAL CLUSTERING METHODS

    Directory of Open Access Journals (Sweden)

    Nusa Erman

    2015-01-01

    Full Text Available A broad variety of different methods of agglomerative hierarchical clustering brings along the problem of how to choose the most appropriate method for the given data. It is well known that some methods outperform others if the analysed data have a specific structure. In the presented study we have observed the behaviour of the centroid method, the median method (Gower's median method), and the average method (unweighted pair-group method with arithmetic mean – UPGMA; average linkage between groups). We have compared them with the most commonly used methods of hierarchical clustering: the minimum (single linkage) clustering, the maximum (complete linkage) clustering, the Ward method, and the McQuitty method (group average, weighted pair-group method using arithmetic averages – WPGMA). We have applied the comparison of these methods to spherical, ellipsoid, umbrella-like, “core-and-sphere”, ring-like and intertwined three-dimensional data structures. To generate the data and execute the analysis, we have used R statistical software. Results show that all seven methods are successful in finding compact, ball-shaped or ellipsoid structures when they are sufficiently separated. Conversely, all methods except the minimum perform poorly on non-homogeneous, irregular and elongated ones. Especially challenging is a circular double helix structure; it is correctly revealed only by the minimum method. We can also confirm formerly published results of other simulation studies, which usually favour the average method (besides the Ward method) in cases when the data are assumed to be fairly compact and well separated.
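    A small SciPy/scikit-learn sketch comparing several of the linkage methods named above on a ring-like structure (two concentric circles), where only single linkage is expected to succeed; this is not the study's R-based simulation, and the data and noise level are invented.

```python
# Compare linkage methods on two concentric rings; only single linkage is
# expected to recover the ring structure. Not the study's original simulation.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(9)
angles = rng.uniform(0, 2 * np.pi, size=400)
radius = np.where(np.arange(400) < 200, 1.0, 3.0)          # inner and outer ring
X = np.column_stack([radius * np.cos(angles), radius * np.sin(angles)])
X += 0.05 * rng.normal(size=X.shape)
truth = (radius > 2).astype(int)

for method in ("single", "complete", "average", "ward", "centroid", "median", "weighted"):
    labels = fcluster(linkage(X, method=method), t=2, criterion="maxclust")
    print(f"{method:>9}: adjusted Rand index = {adjusted_rand_score(truth, labels):.2f}")
```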

  14. Kvalitative analyser ..

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

    The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis and discourse analysis.

  15. Conceptual hierarchical modeling to describe wetland plant community organization

    Science.gov (United States)

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh / sedge meadows status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.

  16. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
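    The proposed modified coefficient is not reproduced here; the sketch below only computes the traditional regression correlation coefficient, corr(Y, E(Y|X)), from a Poisson GLM fitted with statsmodels on simulated data.

```python
# Traditional regression correlation coefficient corr(Y, E(Y|X)) for a Poisson
# regression, computed from a fitted GLM; the proposed modification is not shown.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 1000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
mu = np.exp(0.3 + 0.5 * x1 - 0.4 * x2)          # Poisson mean under a log link
y = rng.poisson(mu)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
r = np.corrcoef(y, fit.fittedvalues)[0, 1]      # correlation between Y and fitted E(Y|X)
print(f"regression correlation coefficient: {r:.3f}")
```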

  17. Sensory Hierarchical Organization and Reading.

    Science.gov (United States)

    Skapof, Jerome

    The purpose of this study was to judge the viability of an operational approach aimed at assessing response styles in reading using the hypothesis of sensory hierarchical organization. A sample of 103 middle-class children from a New York City public school, between the ages of five and seven, took part in a three phase experiment. Phase one…

  18. Memory Stacking in Hierarchical Networks.

    Science.gov (United States)

    Westö, Johan; May, Patrick J C; Tiitinen, Hannu

    2016-02-01

    Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.

  19. Caudal Regression Syndrome

    Directory of Open Access Journals (Sweden)

    Karim Hardani*

    2012-05-01

    Full Text Available A 10-month-old baby presented with developmental delay. He had flaccid paralysis on physical examination. An MRI of the spine revealed malformation of the ninth and tenth thoracic vertebral bodies, with complete agenesis of the rest of the spine below that level. The thoracic spinal cord ended at the level of the fifth thoracic vertebra, with agenesis of the posterior arches of the eighth, ninth and tenth thoracic vertebral bodies. The roots of the cauda equina appeared tightened, drawn downward and backward, and ended in subdermal fibrous fatty tissue at the level of the ninth and tenth thoracic vertebral bodies (closed meningocele). These findings are consistent with caudal regression syndrome.

  20. Hierarchical Prisoner's Dilemma in Hierarchical Public-Goods Game

    CERN Document Server

    Fujimoto, Yuma; Kaneko, Kunihiko

    2016-01-01

    The dilemma in cooperation is one of the major concerns in game theory. In a public-goods game, each individual pays a cost for cooperation, or to prevent defection, and receives a reward from the collected cost in a group. Thus, defection is beneficial for each individual, while cooperation is beneficial for the group. Now, groups (say, countries) consisting of individual players also play games. To study such a multi-level game, we introduce a hierarchical public-goods (HPG) game in which two groups compete for finite resources by utilizing costs collected from individuals in each group. Analyzing this HPG game, we found a hierarchical prisoner's dilemma, in which groups choose the defection policy (say, armaments) as a Nash strategy to optimize each group's benefit, while cooperation optimizes the total benefit. On the other hand, for each individual within a group, refusing to pay the cost (say, tax) is a Nash strategy, which turns out to be a cooperation policy for the group, thus leading to a hierarchical d...

  1. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for the minimum order of linear-regression model that fits a set of data satisfactorily.
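
    The record above is terse, so a minimal sketch of the idea may help: search upward through model orders and stop when a higher order no longer improves the fit appreciably. For brevity the sketch refits at each order with NumPy instead of using recursive coefficient updates, and the stopping tolerance is an arbitrary choice.

```python
# Order selection for a polynomial regression model: increase the order
# until the residual error stops improving appreciably.
import numpy as np

def minimum_order(x, y, max_order=10, tol=1e-3):
    prev_rss = np.inf
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, deg=order)
        rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
        if prev_rss - rss < tol * prev_rss:   # no meaningful improvement
            return order - 1, np.polyfit(x, y, deg=order - 1)
        prev_rss = rss
    return max_order, coeffs

x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + np.random.default_rng(2).normal(0, 0.05, 50)
order, coeffs = minimum_order(x, y)
print("selected order:", order)     # expected: 2 for this quadratic example
```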

  2. Hierarchical structure of biological systems

    Science.gov (United States)

    Alcocer-Cuarón, Carlos; Rivera, Ana L; Castaño, Victor M

    2014-01-01

    A general theory of biological systems, based on a few fundamental propositions, allows a generalization of both the Wiener and the Bertalanffy approaches to theoretical biology. Here, a biological system is defined as a set of self-organized, differentiated elements that interact pair-wise through various networks and media, isolated from other sets by boundaries. Their relation to other systems can be described as a closed loop in a steady state, which leads to a hierarchical structure and functioning of the biological system. Our thermodynamic approach of hierarchical character can be applied to biological systems of varying sizes through some general principles, based on the exchange of energy, information and/or mass from and within the systems. PMID:24145961

  3. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.

  4. Intuitionistic fuzzy hierarchical clustering algorithms

    Institute of Scientific and Technical Information of China (English)

    Xu Zeshui

    2009-01-01

    Intuitionistic fuzzy set (IFS) is a set of 2-tuple arguments, each of which is characterized by a membership degree and a nonmembership degree. The generalized form of IFS is the interval-valued intuitionistic fuzzy set (IVIFS), whose components are intervals rather than exact numbers. IFSs and IVIFSs have been found to be very useful to describe vagueness and uncertainty. However, it seems that little attention has been focused on the clustering analysis of IFSs and IVIFSs. An intuitionistic fuzzy hierarchical algorithm is introduced for clustering IFSs, which is based on the traditional hierarchical clustering procedure, the intuitionistic fuzzy aggregation operator, and the basic distance measures between IFSs: the Hamming distance, the normalized Hamming distance, the weighted Hamming distance, the Euclidean distance, the normalized Euclidean distance, and the weighted Euclidean distance. Subsequently, the algorithm is extended for clustering IVIFSs. Finally, the algorithm and its extended form are applied to the classifications of building materials and enterprises, respectively.
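
    A minimal sketch of the flat building block of such an algorithm, assuming the commonly used normalized Hamming distance between IFSs and hypothetical data: compute pairwise distances and feed them to SciPy's agglomerative clustering.

```python
# Hierarchical clustering of intuitionistic fuzzy sets (IFSs) using the
# normalized Hamming distance.  Each IFS is stored as an array of
# (membership, nonmembership) pairs; hesitancy is 1 - mu - nu.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def ifs_hamming(a, b):
    mu_a, nu_a = a[:, 0], a[:, 1]
    mu_b, nu_b = b[:, 0], b[:, 1]
    pi_a, pi_b = 1 - mu_a - nu_a, 1 - mu_b - nu_b
    return np.mean(np.abs(mu_a - mu_b) + np.abs(nu_a - nu_b)
                   + np.abs(pi_a - pi_b)) / 2

# Four hypothetical IFSs over a universe of three elements.
ifss = [
    np.array([[0.7, 0.2], [0.6, 0.3], [0.8, 0.1]]),
    np.array([[0.6, 0.3], [0.5, 0.4], [0.7, 0.2]]),
    np.array([[0.2, 0.7], [0.1, 0.8], [0.3, 0.6]]),
    np.array([[0.3, 0.6], [0.2, 0.7], [0.2, 0.7]]),
]

n = len(ifss)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = ifs_hamming(ifss[i], ifss[j])

tree = linkage(squareform(dist), method="average")
print(tree)   # agglomeration steps; pass to scipy's dendrogram() to plot
```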

  5. Hierarchical Formation of Galactic Clusters

    CERN Document Server

    Elmegreen, B G

    2006-01-01

    Young stellar groupings and clusters have hierarchical patterns ranging from flocculent spiral arms and star complexes on the largest scale to OB associations, OB subgroups, small loose groups, clusters and cluster subclumps on the smallest scales. There is no obvious transition in morphology at the cluster boundary, suggesting that clusters are only the inner parts of the hierarchy where stars have had enough time to mix. The power-law cluster mass function follows from this hierarchical structure: n(M_cl) ∝ M_cl^(-b) with b ~ 2. This value of b is independently required by the observation that the summed IMFs from many clusters in a galaxy approximately equal the IMF of each cluster.

  6. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  7. Hierarchical Cont-Bouchaud model

    CERN Document Server

    Paluch, Robert; Holyst, Janusz A

    2015-01-01

    We extend the well-known Cont-Bouchaud model to include a hierarchical topology of agents' interactions. The influence of hierarchy on system dynamics is investigated with two models. The first one is based on a multi-level, nested Erdos-Renyi random graph and on individual decisions by agents according to Potts dynamics. This approach does not lead to a broad return distribution outside a parameter regime close to the original Cont-Bouchaud model. In the second model we introduce a limited hierarchical Erdos-Renyi graph, where merging of clusters at a level h+1 involves only clusters that have merged at the previous level h, and we use the original Cont-Bouchaud agent dynamics on the resulting clusters. The second model leads to a heavy-tail distribution of cluster sizes and relative price changes in a wide range of connection densities, not only close to the percolation threshold.

  8. Hierarchical Clustering and Active Galaxies

    CERN Document Server

    Hatziminaoglou, E; Manrique, A

    2000-01-01

    The growth of Super Massive Black Holes and the parallel development of activity in galactic nuclei are implemented in an analytic code of hierarchical clustering. The evolution of the luminosity function of quasars and AGN will be computed with special attention paid to the connection between quasars and Seyfert galaxies. One of the major interests of the model is the parallel study of quasar formation and evolution and the History of Star Formation.

  9. Hybrid and hierarchical composite materials

    CERN Document Server

    Kim, Chang-Soo; Sano, Tomoko

    2015-01-01

    This book addresses a broad spectrum of areas in both hybrid materials and hierarchical composites, including recent developments in processing technologies, structural designs, modern computer simulation techniques, and processing-structure-property-performance relationships. Each topic is introduced at length with numerous detailed examples and over 150 illustrations. In addition, the authors present a method of categorizing these materials, so that representative examples of all material classes are discussed.

  10. Measuring efficiency of a hierarchical organization with fuzzy DEA method

    OpenAIRE

    LUBAN Florica

    2009-01-01

    The paper analyses how data envelopment analysis (DEA) and fuzzy set theory can be used to measure and evaluate the efficiency of a hierarchical system with n decision making units and a coordinating unit. A model is presented for determining the activity levels of the decision making units so as to achieve both the fuzzy objective of reaching the coordinating unit's global target levels on inputs and outputs and the individual target levels of the decision making units, and then some methods to...

  11. Treatment Protocols as Hierarchical Structures

    Science.gov (United States)

    Ben-Bassat, Moshe; Carlson, Richard W.; Puri, Vinod K.; Weil, Max Harry

    1978-01-01

    We view a treatment protocol as a hierarchical structure of therapeutic modules. The lowest level of this structure consists of individual therapeutic actions. Combinations of individual actions define higher level modules, which we call routines. Routines are designed to manage limited clinical problems, such as the routine for fluid loading to correct hypovolemia. Combinations of routines and additional actions, together with comments, questions, or precautions organized in a branching logic, in turn, define the treatment protocol for a given disorder. Adoption of this modular approach may facilitate the formulation of treatment protocols, since the physician is not required to prepare complex flowcharts. This hierarchical approach also allows protocols to be updated and modified in a flexible manner. By use of such a standard format, individual components may be fitted together to create protocols for multiple disorders. The technique is suited for computer implementation. We believe that this hierarchical approach may facilitate standardization of patient care as well as aid in clinical teaching. A protocol for acute pancreatitis is used to illustrate this technique.

  12. Robust Bayesian Regularized Estimation Based on t Regression Model

    Directory of Open Access Journals (Sweden)

    Zean Li

    2015-01-01

    Full Text Available The t distribution is a useful extension of the normal distribution, which can be used for statistical modeling of data sets with heavy tails, and provides robust estimation. In this paper, in view of the advantages of Bayesian analysis, we propose a new robust coefficient estimation and variable selection method based on Bayesian adaptive Lasso t regression. A Gibbs sampler is developed based on the Bayesian hierarchical model framework, where we treat the t distribution as a mixture of normal and gamma distributions and put different penalization parameters for different regression coefficients. We also consider the Bayesian t regression with adaptive group Lasso and obtain the Gibbs sampler from the posterior distributions. Both simulation studies and real data example show that our method performs well compared with other existing methods when the error distribution has heavy tails and/or outliers.
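
    The hierarchical representation that makes the Gibbs sampler tractable, writing the t distribution as a scale mixture of normals with gamma mixing weights, can be checked numerically. The sketch below uses an arbitrary choice of degrees of freedom and is only an illustration of that representation, not the authors' sampler.

```python
# The t distribution as a normal/gamma scale mixture -- the hierarchical
# representation behind the Gibbs sampler:
#   lambda ~ Gamma(nu/2, rate=nu/2),  y | lambda ~ Normal(0, 1/lambda)
# implies y ~ t_nu.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
nu = 4.0
n = 200_000

lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)   # rate nu/2 -> scale 2/nu
y = rng.normal(0.0, 1.0 / np.sqrt(lam))

# Compare sample quantiles with the exact t_nu quantiles.
for q in (0.9, 0.95, 0.99):
    print(f"q={q}: sample {np.quantile(y, q):.3f}  exact {stats.t.ppf(q, df=nu):.3f}")
```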

  13. Streamflow forecasting using functional regression

    Science.gov (United States)

    Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.

    2016-07-01

    Streamflow, as a natural phenomenon, is continuous in time and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They make it possible to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
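
    A minimal sketch of the scalar-on-function idea, assuming simulated precipitation curves, a small Fourier basis and ordinary least squares on the basis coefficients (not the authors' implementation):

```python
# Scalar-on-function regression sketch: represent each precipitation curve
# by a few Fourier basis coefficients and regress flow volume on them.
import numpy as np

rng = np.random.default_rng(4)
n_curves, n_times, n_basis = 60, 100, 5
t = np.linspace(0, 1, n_times)

# Fourier basis evaluated on the time grid (constant, sines, cosines).
basis = np.column_stack(
    [np.ones_like(t)]
    + [f(2 * np.pi * (k + 1) * t) for k in range(n_basis // 2) for f in (np.sin, np.cos)]
)

precip = rng.gamma(2.0, 1.0, size=(n_curves, n_times))      # predictor curves
coefs = np.linalg.lstsq(basis, precip.T, rcond=None)[0].T   # curve -> coefficients
volume = precip @ np.sin(np.pi * t) / n_times + rng.normal(0, 0.1, n_curves)

# OLS on the basis coefficients approximates the functional regression
# coefficient beta(t) expanded in the same basis.
X = np.column_stack([np.ones(n_curves), coefs])
beta = np.linalg.lstsq(X, volume, rcond=None)[0]
print(beta)
```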

  14. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsivity, but often entails both. This paper critically reviews the phenomenon of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  15. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
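
    The central step, least squares with box constraints on the combination weights, can be reproduced with SciPy's bounded linear least-squares solver; the alpha streams, returns and bounds below are hypothetical and purely illustrative.

```python
# Combine alpha streams by regressing realized returns on the alphas,
# with box constraints on the weights to avoid skewed allocations.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)
n_days, n_alphas = 250, 10
alphas = rng.normal(size=(n_days, n_alphas))          # alpha forecasts
returns = alphas @ rng.uniform(0.0, 0.2, n_alphas) + rng.normal(0, 1, n_days)

# Unconstrained regression weights can be negative or very uneven ...
w_ols = np.linalg.lstsq(alphas, returns, rcond=None)[0]

# ... bounded regression keeps every weight in [0, 0.3].
w_bounded = lsq_linear(alphas, returns, bounds=(0.0, 0.3)).x

print(np.round(w_ols, 3))
print(np.round(w_bounded, 3))
```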

  16. Synchronization patterns: from network motifs to hierarchical networks

    Science.gov (United States)

    Krishnagopal, Sanjukta; Lehnert, Judith; Poel, Winnie; Zakharova, Anna; Schöll, Eckehard

    2017-03-01

    We investigate complex synchronization patterns such as cluster synchronization and partial amplitude death in networks of coupled Stuart-Landau oscillators with fractal connectivities. The study of fractal or self-similar topology is motivated by the network of neurons in the brain. This fractal property is well represented in hierarchical networks, for which we present three different models. In addition, we introduce an analytical eigensolution method and provide a comprehensive picture of the interplay of network topology and the corresponding network dynamics, thus allowing us to predict the dynamics of arbitrarily large hierarchical networks simply by analysing small network motifs. We also show that oscillation death can be induced in these networks, even if the coupling is symmetric, contrary to previous understanding of oscillation death. Our results show that there is a direct correlation between topology and dynamics: hierarchical networks exhibit the corresponding hierarchical dynamics. This helps bridge the gap between mesoscale motifs and macroscopic networks. This article is part of the themed issue 'Horizons of cybernetical physics'.

  17. Competing Risks Quantile Regression at Work

    DEFF Research Database (Denmark)

    Dlugosz, Stephan; Lo, Simon M. S.; Wilke, Ralf

    2017-01-01

    Despite its emergence as a frequently used method for the empirical analysis of multivariate data, quantile regression is yet to become a mainstream tool for the analysis of duration data. We present a pioneering empirical study on the grounds of a competing risks quantile regression model. We use large-scale maternity duration data with multiple competing risks derived from German linked social security records to analyse how public policies are related to the length of economic inactivity of young mothers after giving birth. Our results show that the model delivers detailed insights into the distribution of transitions out of maternity leave. It is found that cumulative incidences implied by the quantile regression model differ from those implied by a proportional hazards model. To foster the use of the model, we make an R-package (cmprskQR) available.

  18. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
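
    For readers who want to try the symmetric lines on their own data, the sketch below computes the OLS(Y|X) and OLS(X|Y) slopes and the bisector slope following the Isobe et al. formulas (uncertainty estimates omitted); the simulated data are only for illustration.

```python
# OLS(Y|X), OLS(X|Y) and their bisector (slopes only; see Isobe et al. 1990
# for the corresponding variance formulas).
import numpy as np

def ols_bisector(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    b1 = sxy / sxx    # OLS regression of Y on X
    b2 = syy / sxy    # OLS regression of X on Y, as a slope in the (X, Y) plane
    b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    return b1, b2, b3, ym - b3 * xm   # slopes and bisector intercept

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 2.0, 100)
print(ols_bisector(x, y))
```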

  19. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Directory of Open Access Journals (Sweden)

    M. Guns

    2012-06-01

    Full Text Available Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows some of the limitations of previous developments to be overcome through robust variable selection. This technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.
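
    A schematic version of the replication idea (hypothetical data and thresholds, not the authors' landslide analysis): refit the logistic regression on many random subsamples of the abundant non-event class and track how stable each coefficient's sign is.

```python
# Rare event logistic regression with replications (sketch): refit the
# model on many random subsamples of the abundant "no event" class and
# keep predictors whose coefficient sign is stable across replications.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, p = 5000, 4
X = rng.normal(size=(n, p))
logit = -4.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1]    # factors 3 and 4 are noise
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

events, non_events = np.where(y == 1)[0], np.where(y == 0)[0]
signs = []
for _ in range(200):
    sub = np.concatenate(
        [events, rng.choice(non_events, size=5 * len(events), replace=False)])
    res = sm.Logit(y[sub], sm.add_constant(X[sub])).fit(disp=0)
    signs.append(np.sign(res.params[1:]))        # drop the intercept

stability = np.abs(np.mean(signs, axis=0))       # 1.0 = always the same sign
print(np.round(stability, 2))                    # stable factors stand out
```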

  20. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
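
    As a simple stand-in for the adaptive algorithm (which updates the simplex solution rather than refitting), a rolling-window quantile regression with statsmodels conveys the goal of tracking a changing relationship; the data and window length below are hypothetical.

```python
# Rolling-window quantile regression: refit the 0.9 quantile as the
# window slides forward over data whose relationship drifts in time.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(8)
n = 600
wind = rng.uniform(0, 1, n)
slope = np.linspace(1.0, 3.0, n)                 # relationship drifts over time
power = slope * wind + rng.gamma(2.0, 0.2, n)

window, q = 200, 0.9
for start in range(0, n - window + 1, 100):
    idx = slice(start, start + window)
    exog = sm.add_constant(wind[idx])
    fit = QuantReg(power[idx], exog).fit(q=q)
    print(f"window starting at {start}: 0.9-quantile slope = {fit.params[1]:.2f}")
```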

  1. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  2. Hierarchical Control for Smart Grids

    DEFF Research Database (Denmark)

    Trangbæk, K; Bendtsen, Jan Dimon; Stoustrup, Jakob

    2011-01-01

    This paper deals with hierarchical model predictive control (MPC) of smart grid systems. The design consists of a high level MPC controller, a second level of so-called aggregators, which reduces the computational and communication-related load on the high-level control, and a lower level...... of autonomous consumers. The control system is tasked with balancing electric power production and consumption within the smart grid, and makes active use of the flexibility of a large number of power producing and/or power consuming units. The objective is to accommodate the load variation on the grid, arising...

  3. Polynomial Regression on Riemannian Manifolds

    CERN Document Server

    Hinkle, Jacob; Fletcher, P Thomas; Joshi, Sarang

    2012-01-01

    In this paper we develop the theory of parametric polynomial regression in Riemannian manifolds and Lie groups. We show application of Riemannian polynomial regression to shape analysis in Kendall shape space. Results are presented, showing the power of polynomial regression on the classic rat skull growth data of Bookstein as well as the analysis of the shape changes associated with aging of the corpus callosum from the OASIS Alzheimer's study.

  4. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    Science.gov (United States)

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions which each model answers, their…

  5. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on validity of the model and diagnostic tools. Each methodological aspect is explored and

  6. Business applications of multiple regression

    CERN Document Server

    Richardson, Ronny

    2015-01-01

    This second edition of Business Applications of Multiple Regression describes the use of the statistical procedure called multiple regression in business situations, including forecasting and understanding the relationships between variables. The book assumes a basic understanding of statistics but reviews correlation analysis and simple regression to prepare the reader to understand and use multiple regression. The techniques described in the book are illustrated using both Microsoft Excel and a professional statistical program. Along the way, several real-world data sets are analyzed in deta

  7. Augmenting Data with Published Results in Bayesian Linear Regression

    Science.gov (United States)

    de Leeuw, Christiaan; Klugkist, Irene

    2012-01-01

    In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…

  9. Hierarchical Structures in Hypertext Learning Environments

    NARCIS (Netherlands)

    Bezdan, Eniko; Kester, Liesbeth; Kirschner, Paul A.

    2011-01-01

    Bezdan, E., Kester, L., & Kirschner, P. A. (2011, 9 September). Hierarchical Structures in Hypertext Learning Environments. Presentation for the visit of KU Leuven, Open University, Heerlen, The Netherlands.

  10. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy

    DEFF Research Database (Denmark)

    Merlo, Juan; Wagner, Philippe; Ghith, Nermin

    2016-01-01

    BACKGROUND AND AIM: Many multilevel logistic regression analyses of "neighbourhood and health" focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that distinguishes...

  11. Dynamic Organization of Hierarchical Memories.

    Science.gov (United States)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2016-01-01

    In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential to understand how the neural system responds to inputs. Indeed, structured spontaneous neural activity without external inputs is known to exist, and its relationship with evoked activities is discussed. Then, how categorical representation is embedded into the spontaneous and evoked activities has to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization": neural activity without input wanders globally over the state space including all memories. Then, with the increase of input strength, the diffuse representation of a higher category exhibits transitions to focused ones specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space including a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy during the transient process, as well as in the spontaneous activity.

  12. Collaborative regression-based anatomical landmark detection

    Science.gov (United States)

    Gao, Yaozong; Shen, Dinggang

    2015-12-01

    Anatomical landmark detection plays an important role in medical image analysis, e.g. for registration, segmentation and quantitative analysis. Among the various existing methods for landmark detection, regression-based methods have recently attracted much attention due to their robustness and efficiency. In these methods, landmarks are localised through voting from all image voxels, which is completely different from the classification-based methods that use voxel-wise classification to detect landmarks. Despite their robustness, the accuracy of regression-based landmark detection methods is often limited due to (1) the inclusion of uninformative image voxels in the voting procedure, and (2) the lack of effective ways to incorporate inter-landmark spatial dependency into the detection step. In this paper, we propose a collaborative landmark detection framework to address these limitations. The concept of collaboration is reflected in two aspects. (1) Multi-resolution collaboration. A multi-resolution strategy is proposed to hierarchically localise landmarks by gradually excluding uninformative votes from faraway voxels. Moreover, for informative voxels near the landmark, a spherical sampling strategy is also designed at the training stage to improve their prediction accuracy. (2) Inter-landmark collaboration. A confidence-based landmark detection strategy is proposed to improve the detection accuracy of ‘difficult-to-detect’ landmarks by using spatial guidance from ‘easy-to-detect’ landmarks. To evaluate our method, we conducted experiments extensively on three datasets for detecting prostate landmarks and head & neck landmarks in computed tomography images, and also dental landmarks in cone beam computed tomography images. The results show the effectiveness of our collaborative landmark detection framework in improving landmark detection accuracy, compared to other state-of-the-art methods.

  13. Using Bayesian hierarchical parameter estimation to assess the generalizability of cognitive models of choice.

    Science.gov (United States)

    Scheibehenne, Benjamin; Pachur, Thorsten

    2015-04-01

    To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than do nonhierarchical, independent approaches, because the formers' estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice-cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test-retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.

  14. Prediction of road accidents: A Bayesian hierarchical approach

    DEFF Research Database (Denmark)

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T.;

    2013-01-01

    In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data, e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis...

  15. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often necessary to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of H.-G. Müller and U. Stadtmüller [Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100]…
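
    The general difference-based idea, not the specific statistic studied in the paper, can be sketched as follows: compare local means on either side of each interior point and flag unusually large standardized gaps. Bandwidth, noise model and data below are hypothetical.

```python
# Difference-based check for a jump in the regression function: compare
# the mean response in a left and a right neighbourhood of every interior
# point and flag unusually large gaps.
import numpy as np

rng = np.random.default_rng(9)
n, h = 400, 15                          # sample size and one-sided bandwidth
x = np.sort(rng.uniform(0, 1, n))
m = np.where(x < 0.5, np.sin(2 * np.pi * x), np.sin(2 * np.pi * x) + 1.0)  # jump at 0.5
y = m + rng.normal(0, 0.2, n)

gaps = np.array([y[i:i + h].mean() - y[i - h:i].mean() for i in range(h, n - h)])
sigma = np.std(np.diff(y)) / np.sqrt(2)   # rough noise-level estimate
z = gaps / (sigma * np.sqrt(2.0 / h))     # standardized one-sided mean difference

i_max = np.argmax(np.abs(z))
print(f"largest standardized gap {z[i_max]:.1f} near x = {x[i_max + h]:.2f}")
```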

  16. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

  17. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights.

  18. A tutorial on Bayesian Normal linear regression

    Science.gov (United States)

    Klauenberg, Katy; Wübbeler, Gerd; Mickan, Bodo; Harris, Peter; Elster, Clemens

    2015-12-01

    Regression is a common task in metrology and often applied to calibrate instruments, evaluate inter-laboratory comparisons or determine fundamental constants, for example. Yet, a regression model cannot be uniquely formulated as a measurement function, and consequently the Guide to the Expression of Uncertainty in Measurement (GUM) and its supplements are not applicable directly. Bayesian inference, however, is well suited to regression tasks, and has the advantage of accounting for additional a priori information, which typically robustifies analyses. Furthermore, it is anticipated that future revisions of the GUM shall also embrace the Bayesian view. Guidance on Bayesian inference for regression tasks is largely lacking in metrology. For linear regression models with Gaussian measurement errors this tutorial gives explicit guidance. Divided into three steps, the tutorial first illustrates how a priori knowledge, which is available from previous experiments, can be translated into prior distributions from a specific class. These prior distributions have the advantage of yielding analytical, closed form results, thus avoiding the need to apply numerical methods such as Markov Chain Monte Carlo. Secondly, formulas for the posterior results are given, explained and illustrated, and software implementations are provided. In the third step, Bayesian tools are used to assess the assumptions behind the suggested approach. These three steps (prior elicitation, posterior calculation, and robustness to prior uncertainty and model adequacy) are critical to Bayesian inference. The general guidance given here for Normal linear regression tasks is accompanied by a simple, but real-world, metrological example. The calibration of a flow device serves as a running example and illustrates the three steps. It is shown that prior knowledge from previous calibrations of the same sonic nozzle enables robust predictions even for extrapolations.
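
    A minimal sketch of the closed-form machinery such a tutorial relies on, for the simplified case of a Gaussian prior on the coefficients and known noise variance (the prior values and data below are hypothetical, not the flow-calibration example):

```python
# Closed-form posterior for Normal linear regression with a Gaussian prior
# on the coefficients and (for simplicity) known noise variance sigma2.
import numpy as np

rng = np.random.default_rng(10)
n = 30
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])
sigma2 = 0.05**2
y = 0.2 + 1.5 * x + rng.normal(0, np.sqrt(sigma2), n)

m0 = np.array([0.0, 1.0])          # prior mean, e.g. from earlier calibrations
V0 = np.diag([0.5**2, 0.5**2])     # prior covariance

V0_inv = np.linalg.inv(V0)
Vn = np.linalg.inv(V0_inv + X.T @ X / sigma2)   # posterior covariance
mn = Vn @ (V0_inv @ m0 + X.T @ y / sigma2)      # posterior mean

print("posterior mean:", np.round(mn, 3))
print("posterior sd:  ", np.round(np.sqrt(np.diag(Vn)), 3))
```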

  19. Regression Testing Cost Reduction Suite

    Directory of Open Access Journals (Sweden)

    Mohamed Alaa El-Din

    2014-08-01

    Full Text Available The estimated cost of software maintenance exceeds 70 percent of total software costs [1], and a large portion of this maintenance expense is devoted to regression testing. Regression testing is an expensive and frequently executed maintenance activity used to revalidate modified software. Any reduction in the cost of regression testing would help to reduce the software maintenance cost. Test suites, once developed, are reused and updated frequently as the software evolves. As a result, some test cases in the test suite may become redundant when the software is modified over time, since the requirements covered by them are also covered by other test cases. Due to the resource and time constraints for re-executing large test suites, it is important to develop techniques to minimize available test suites by removing redundant test cases. In general, the test suite minimization problem is NP-complete. This paper focuses on proposing an effective approach for reducing the cost of the regression testing process. The proposed approach is applied to a real-time case study. It was found that the reduction in the cost of regression testing for each regression testing cycle is high for programs containing a large number of selected statements, which in turn maximizes the benefits of using the approach in regression testing of complex software systems. The reduction in the regression test suite size will reduce the effort and time required by the testing teams to execute the regression test suite. Since regression testing is done more frequently in the software maintenance phase, the overall software maintenance cost can be reduced considerably by applying the proposed approach.
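
    The reduction idea can be sketched with a classic greedy set-cover heuristic (not necessarily the paper's exact procedure): repeatedly keep the test case that covers the most statements not yet covered.

```python
# Greedy test-suite reduction: repeatedly pick the test case that covers
# the largest number of not-yet-covered statements (a classic heuristic
# for the NP-complete minimization problem).
def reduce_suite(coverage):
    """coverage: dict test_name -> set of statement ids it covers."""
    required = set().union(*coverage.values())
    selected, covered = [], set()
    while covered != required:
        best = max(coverage, key=lambda t: len(coverage[t] - covered))
        if not coverage[best] - covered:
            break                      # remaining statements are uncoverable
        selected.append(best)
        covered |= coverage[best]
    return selected

suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2},          # redundant: t1 already covers these statements
    "t4": {4, 5, 6},
}
print(reduce_suite(suite))   # -> ['t1', 't4'] covers every statement
```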

  20. Hierarchical flexural strength of enamel: transition from brittle to damage-tolerant behaviour.

    Science.gov (United States)

    Bechtle, Sabine; Özcoban, Hüseyin; Lilleodden, Erica T; Huber, Norbert; Schreyer, Andreas; Swain, Michael V; Schneider, Gerold A

    2012-06-07

    Hard, biological materials are generally hierarchically structured from the nano- to the macro-scale in a somewhat self-similar manner consisting of mineral units surrounded by a soft protein shell. Considerable efforts are underway to mimic such materials because of their structurally optimized mechanical functionality of being hard and stiff as well as damage-tolerant. However, it is unclear how different hierarchical levels interact to achieve this performance. In this study, we consider dental enamel as a representative, biological hierarchical structure and determine its flexural strength and elastic modulus at three levels of hierarchy using focused ion beam (FIB) prepared cantilevers of micrometre size. The results are compared and analysed using a theoretical model proposed by Jäger and Fratzl and developed by Gao and co-workers. Both properties decrease with increasing hierarchical dimension along with a switch in mechanical behaviour from linear-elastic to elastic-inelastic. We found Gao's model matched the results very well.

  1. Discovering hierarchical structure in normal relational data

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Herlau, Tue; Mørup, Morten

    2014-01-01

    Hierarchical clustering is a widely used tool for structuring and visualizing complex data using similarity. Traditionally, hierarchical clustering is based on local heuristics that do not explicitly provide assessment of the statistical saliency of the extracted hierarchy. We propose a non-param...

  2. Discursive Hierarchical Patterning in Economics Cases

    Science.gov (United States)

    Lung, Jane

    2011-01-01

    This paper attempts to apply Lung's (2008) model of the discursive hierarchical patterning of cases to a closer and more specific study of Economics cases and proposes a model of the distinct discursive hierarchical patterning of the same. It examines a corpus of 150 Economics cases with a view to uncovering the patterns of discourse construction.…

  3. A Model of Hierarchical Key Assignment Scheme

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhigang; ZHAO Jing; XU Maozhi

    2006-01-01

    A model of the hierarchical key assignment scheme is presented in this paper, which can be used with any cryptographic algorithm. In addition, the optimal dynamic control property of a hierarchical key assignment scheme is defined, and our scheme model is shown to meet this property.

  4. Rank regression: an alternative regression approach for data with outliers.

    Science.gov (United States)

    Chen, Tian; Tang, Wan; Lu, Ying; Tu, Xin

    2014-10-01

    Linear regression models are widely used in mental health and related health services research. However, the classic linear regression analysis assumes that the data are normally distributed, an assumption that is not met by the data obtained in many studies. One method of dealing with this problem is to use semi-parametric models, which do not require that the data be normally distributed. But semi-parametric models are quite sensitive to outlying observations, so the generated estimates are unreliable when study data includes outliers. In this situation, some researchers trim the extreme values prior to conducting the analysis, but the ad-hoc rules used for data trimming are based on subjective criteria so different methods of adjustment can yield different results. Rank regression provides a more objective approach to dealing with non-normal data that includes outliers. This paper uses simulated and real data to illustrate this useful regression approach for dealing with outliers and compares it to the results generated using classical regression models and semi-parametric regression models.
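
    As one concrete rank-based alternative (Theil-Sen, available in SciPy, used here purely for illustration and not necessarily the estimator advocated in the paper), the sketch below shows how a single outlier distorts OLS but barely moves the rank-based fit.

```python
# Effect of a single outlier on OLS versus a rank-based (Theil-Sen) fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = np.arange(30, dtype=float)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 30)
y[-1] += 40.0                                   # one gross outlier

ols_slope, ols_intercept = np.polyfit(x, y, deg=1)
ts_slope, ts_intercept, _, _ = stats.theilslopes(y, x)   # rank-based fit

print(f"OLS slope:       {ols_slope:.2f}")
print(f"Theil-Sen slope: {ts_slope:.2f}")       # much closer to the true 0.5
```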

  5. Galaxy formation through hierarchical clustering

    Science.gov (United States)

    White, Simon D. M.; Frenk, Carlos S.

    1991-01-01

    Analytic methods for studying the formation of galaxies by gas condensation within massive dark halos are presented. The present scheme applies to cosmogonies where structure grows through hierarchical clustering of a mixture of gas and dissipationless dark matter. The simplest models consistent with the current understanding of N-body work on dissipationless clustering, and that of numerical and analytic work on gas evolution and cooling are adopted. Standard models for the evolution of the stellar population are also employed, and new models for the way star formation heats and enriches the surrounding gas are constructed. Detailed results are presented for a cold dark matter universe with Omega = 1 and H(0) = 50 km/s/Mpc, but the present methods are applicable to other models. The present luminosity functions contain significantly more faint galaxies than are observed.

  6. Groups possessing extensive hierarchical decompositions

    CERN Document Server

    Januszkiewicz, T; Leary, I J

    2009-01-01

    Kropholler's class of groups is the smallest class of groups which contains all finite groups and is closed under the following operator: whenever G admits a finite-dimensional contractible G-CW-complex in which all stabilizer groups are in the class, then G is itself in the class. Kropholler's class admits a hierarchical structure, i.e., a natural filtration indexed by the ordinals. For example, stage 0 of the hierarchy is the class of all finite groups, and stage 1 contains all groups of finite virtual cohomological dimension. We show that for each countable ordinal α, there is a countable group that is in Kropholler's class which does not appear until the (α+1)st stage of the hierarchy. Previously this was known only for α = 0, 1 and 2. The groups that we construct contain torsion. We also review the construction of a torsion-free group that lies in the third stage of the hierarchy.

  7. Quantum transport through hierarchical structures.

    Science.gov (United States)

    Boettcher, S; Varghese, C; Novotny, M A

    2011-04-01

    The transport of quantum electrons through hierarchical lattices is of interest because such lattices have some properties of both regular lattices and random systems. We calculate the electron transmission as a function of energy in the tight-binding approximation for two related Hanoi networks. HN3 is a Hanoi network with every site having three bonds. HN5 has additional bonds added to HN3 to make the average number of bonds per site equal to five. We present a renormalization group approach to solve the matrix equation involved in this quantum transport calculation. We observe band gaps in HN3, while no such band gaps are observed in linear networks or in HN5. We provide a detailed scaling analysis near the edges of these band gaps.

  8. Hierarchical networks of scientific journals

    CERN Document Server

    Palla, Gergely; Mones, Enys; Pollner, Péter; Vicsek, Tamás

    2015-01-01

    Scientific journals are the repositories of the gradually accumulating knowledge of mankind about the world surrounding us. Just as our knowledge is organised into classes ranging from major disciplines, subjects and fields to increasingly specific topics, journals can also be categorised into groups using various metrics. In addition to the set of topics characteristic for a journal, they can also be ranked regarding their relevance from the point of overall influence. One widespread measure is impact factor, but in the present paper we intend to reconstruct a much more detailed description by studying the hierarchical relations between the journals based on citation data. We use a measure related to the notion of m-reaching centrality and find a network which shows the level of influence of a journal from the point of the direction and efficiency with which information spreads through the network. We can also obtain an alternative network using a suitably modified nested hierarchy extraction method applied ...

  9. Adaptive Sampling in Hierarchical Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

    2007-07-09

    We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.

  10. Hierarchically Nanostructured Materials for Sustainable Environmental Applications

    Science.gov (United States)

    Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian

    2013-11-01

    This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.

  11. A neural signature of hierarchical reinforcement learning.

    Science.gov (United States)

    Ribas-Fernandes, José J F; Solway, Alec; Diuk, Carlos; McGuire, Joseph T; Barto, Andrew G; Niv, Yael; Botvinick, Matthew M

    2011-07-28

    Human behavior displays hierarchical structure: simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those in hierarchical reinforcement learning (HRL), a machine-learning framework that extends reinforcement-learning mechanisms into hierarchical domains. To test this, we leveraged a distinctive prediction arising from HRL. In ordinary reinforcement learning, reward prediction errors are computed when there is an unanticipated change in the prospects for accomplishing overall task goals. HRL entails that prediction errors should also occur in relation to task subgoals. In three neuroimaging studies we observed neural responses consistent with such subgoal-related reward prediction errors, within structures previously implicated in reinforcement learning. The results reported support the relevance of HRL to the neural processes underlying hierarchical behavior.

  12. Hierarchical Identity-Based Lossy Trapdoor Functions

    CERN Document Server

    Escala, Alex; Libert, Benoit; Rafols, Carla

    2012-01-01

    Lossy trapdoor functions, introduced by Peikert and Waters (STOC'08), have received a lot of attention in the last years, because of their wide range of applications in theoretical cryptography. The notion has been recently extended to the identity-based scenario by Bellare et al. (Eurocrypt'12). We provide one more step in this direction, by considering the notion of hierarchical identity-based lossy trapdoor functions (HIB-LTDFs). Hierarchical identity-based cryptography generalizes identity-based cryptography in the sense that identities are organized in a hierarchical way; a parent identity has more power than its descendants, because it can generate valid secret keys for them. Hierarchical identity-based cryptography has been proved very useful both for practical applications and to establish theoretical relations with other cryptographic primitives. In order to realize HIB-LTDFs, we first build a weakly secure hierarchical predicate encryption scheme. This scheme, which may be of independent interest, is...

  13. Hierarchically nanostructured materials for sustainable environmental applications

    Science.gov (United States)

    Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian

    2013-01-01

    This review presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions, and multiple functionalities toward water remediation, biosensing, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing, and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology. PMID:24790946

  14. Hierarchically Nanostructured Materials for Sustainable Environmental Applications

    Directory of Open Access Journals (Sweden)

    Zheng eRen

    2013-11-01

    Full Text Available This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.

  15. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
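
    As a rough illustration of the penalized idea (a sparse regression coefficient matrix together with a sparse inverse scale matrix of the errors), the sketch below fits a simplified Gaussian-error analogue with scikit-learn; the paper's ECM algorithm for skew-t errors and its penalty tuning are not reproduced, and all data are synthetic.

      # Simplified Gaussian-error analogue of penalized multivariate regression:
      # an L1-penalized coefficient matrix, then an L1-penalized precision
      # (inverse scale) matrix of the residuals. Sketch only; the paper's ECM
      # algorithm for skew-t errors is not reproduced here.
      import numpy as np
      from sklearn.linear_model import MultiTaskLasso
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))
      B_true = np.zeros((10, 4))
      B_true[:3, :2] = 1.0
      Y = X @ B_true + rng.normal(scale=0.5, size=(200, 4))

      coef_model = MultiTaskLasso(alpha=0.1).fit(X, Y)        # sparse regression coefficients
      residuals = Y - coef_model.predict(X)
      prec_model = GraphicalLasso(alpha=0.05).fit(residuals)  # sparse inverse covariance of errors

      print(np.round(coef_model.coef_.T, 2))     # estimated coefficient matrix (10 x 4)
      print(np.round(prec_model.precision_, 2))  # estimated sparse precision matrix (4 x 4)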

  16. Hierarchically Nanoporous Bioactive Glasses for High Efficiency Immobilization of Enzymes

    DEFF Research Database (Denmark)

    He, W.; Min, D.D.; Zhang, X.D.

    2014-01-01

    Bioactive glasses with hierarchical nanoporosity and structures have been heavily involved in immobilization of enzymes. Because of meticulous design and ingenious hierarchical nanostructuration of porosities from yeast cell biotemplates, hierarchically nanostructured porous bioactive glasses can...

  17. Hierarchical mutual information for the comparison of hierarchical community structures in complex networks

    CERN Document Server

    Perotti, Juan Ignacio; Caldarelli, Guido

    2015-01-01

    The quest for a quantitative characterization of community and modular structure of complex networks produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and allows one to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here, the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies, and on the hierarchical ...
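
    The hierarchical mutual information generalizes the ordinary (normalized) mutual information between two flat partitions. A minimal sketch of that flat baseline, with made-up community labels, is given below; the hierarchical extension itself is not implemented here.

      # Flat normalized mutual information between two community partitions --
      # the baseline quantity that the hierarchical mutual information generalizes.
      # Labels below are made up for illustration.
      from sklearn.metrics import normalized_mutual_info_score

      partition_a = [0, 0, 0, 1, 1, 1, 2, 2, 2]   # communities found by method A
      partition_b = [0, 0, 1, 1, 1, 1, 2, 2, 0]   # communities found by method B

      nmi = normalized_mutual_info_score(partition_a, partition_b)
      print(f"NMI = {nmi:.3f}")   # 1.0 for identical partitions, near 0 for independent ones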

  18. Hierarchical group dynamics in pigeon flocks.

    Science.gov (United States)

    Nagy, Máté; Akos, Zsuzsa; Biro, Dora; Vicsek, Tamás

    2010-04-08

    Animals that travel together in groups display a variety of fascinating motion patterns thought to be the result of delicate local interactions among group members. Although the most informative way of investigating and interpreting collective movement phenomena would be afforded by the collection of high-resolution spatiotemporal data from moving individuals, such data are scarce and are virtually non-existent for long-distance group motion within a natural setting because of the associated technological difficulties. Here we present results of experiments in which track logs of homing pigeons flying in flocks of up to 10 individuals have been obtained by high-resolution lightweight GPS devices and analysed using a variety of correlation functions inspired by approaches common in statistical physics. We find a well-defined hierarchy among flock members from data concerning leading roles in pairwise interactions, defined on the basis of characteristic delay times between birds' directional choices. The average spatial position of a pigeon within the flock strongly correlates with its place in the hierarchy, and birds respond more quickly to conspecifics perceived primarily through the left eye-both results revealing differential roles for birds that assume different positions with respect to flock-mates. From an evolutionary perspective, our results suggest that hierarchical organization of group flight may be more efficient than an egalitarian one, at least for those flock sizes that permit regular pairwise interactions among group members, during which leader-follower relationships are consistently manifested.
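
    The leader-follower analysis rests on finding, for each pair of birds, the time delay at which their flight directions are maximally correlated. The sketch below illustrates that idea on synthetic heading data for a single pair; it is not the study's analysis code.

      # Directional-correlation delay for one pair of birds: find the time lag at
      # which bird B's heading best matches bird A's earlier heading. Synthetic data.
      import numpy as np

      rng = np.random.default_rng(1)
      T, true_delay = 2000, 5
      heading_a = np.cumsum(rng.normal(scale=0.05, size=T))   # bird A's flight direction (rad)
      # bird B copies A with a lag (np.roll wraps the first few samples; negligible here)
      heading_b = np.roll(heading_a, true_delay) + rng.normal(scale=0.02, size=T)

      def directional_correlation(a, b, tau):
          """Mean cosine of the heading difference when b is shifted back by tau."""
          if tau > 0:
              a, b = a[:-tau], b[tau:]
          elif tau < 0:
              a, b = a[-tau:], b[:tau]
          return np.mean(np.cos(a - b))

      lags = list(range(-20, 21))
      corr = [directional_correlation(heading_a, heading_b, tau) for tau in lags]
      best = lags[int(np.argmax(corr))]
      print("estimated delay (samples):", best)   # about 5: bird B follows bird A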

  19. Predicting allergic contact dermatitis: a hierarchical structure activity relationship (SAR) approach to chemical classification using topological and quantum chemical descriptors

    Science.gov (United States)

    Basak, Subhash C.; Mills, Denise; Hawkins, Douglas M.

    2008-06-01

    A hierarchical classification study was carried out based on a set of 70 chemicals—35 which produce allergic contact dermatitis (ACD) and 35 which do not. This approach was implemented using a regular ridge regression computer code, followed by conversion of regression output to binary data values. The hierarchical descriptor classes used in the modeling include topostructural (TS), topochemical (TC), and quantum chemical (QC), all of which are based solely on chemical structure. The concordance, sensitivity, and specificity are reported. The model based on the TC descriptors was found to be the best, while the TS model was extremely poor.
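
    A minimal sketch of the "ridge regression, then binarize" classification step and the reported summary statistics is given below; the descriptor values are synthetic placeholders rather than the topostructural, topochemical or quantum chemical descriptors of the study.

      # Ridge regression on structure-based descriptors, with the continuous output
      # binarized at 0.5 to classify ACD vs non-ACD. Descriptors are synthetic here.
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(2)
      X = rng.normal(size=(70, 12))                        # stand-in descriptor matrix
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=70) > 0).astype(float)

      pred = (Ridge(alpha=1.0).fit(X, y).predict(X) > 0.5).astype(int)

      tp = int(np.sum((pred == 1) & (y == 1)))
      tn = int(np.sum((pred == 0) & (y == 0)))
      fp = int(np.sum((pred == 1) & (y == 0)))
      fn = int(np.sum((pred == 0) & (y == 1)))
      print("concordance :", (tp + tn) / len(y))           # overall agreement
      print("sensitivity :", tp / (tp + fn))
      print("specificity :", tn / (tn + fp))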

  20. ORDINAL REGRESSION FOR INFORMATION RETRIEVAL

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This letter presents a new discriminative model for Information Retrieval (IR), referred to as the Ordinal Regression Model (ORM). ORM is different from most existing models in that it views IR as an ordinal regression problem (i.e. a ranking problem) instead of binary classification. Since the task of IR is to rank documents according to the user's information need, IR can be viewed as an ordinal regression problem. Two parameter learning algorithms for ORM are presented. One is a perceptron-based algorithm. The other is the ranking Support Vector Machine (SVM). The effectiveness of the proposed approach has been evaluated on the task of ad hoc retrieval using three English Text REtrieval Conference (TREC) sets and two Chinese TREC sets. Results show that ORM significantly outperforms state-of-the-art language model approaches and the OKAPI system in all test sets, and that it is more appropriate to view IR as ordinal regression rather than binary classification.
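
    The ranking SVM variant can be trained as an ordinary linear classifier on pairwise feature differences: for two documents of the same query, the difference vector is labelled by which document is more relevant. The sketch below illustrates this with made-up query-document features; it is not the letter's implementation.

      # Ranking SVM as a linear classifier on pairwise differences: for documents
      # i, j of the same query with rel_i > rel_j, the pair (x_i - x_j) gets label +1.
      # Feature vectors and relevance grades below are made up.
      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(3)
      X = rng.normal(size=(30, 5))            # query-document features for one query
      rel = rng.integers(0, 3, size=30)       # graded relevance: 0, 1, 2

      pairs, labels = [], []
      for i in range(len(X)):
          for j in range(len(X)):
              if rel[i] > rel[j]:
                  pairs.append(X[i] - X[j]); labels.append(1)
                  pairs.append(X[j] - X[i]); labels.append(-1)

      ranker = LinearSVC(C=1.0).fit(np.array(pairs), np.array(labels))
      scores = X @ ranker.coef_.ravel()       # higher score = rank higher
      print("top documents:", np.argsort(-scores)[:5])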

  1. Multiple Regression and Its Discontents

    Science.gov (United States)

    Snell, Joel C.; Marsh, Mitchell

    2012-01-01

    Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.

  3. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

    Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  4. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    One of the most widely used tools in statistical forecasting, the single-equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  5. Wrong Signs in Regression Coefficients

    Science.gov (United States)

    McGee, Holly

    1999-01-01

    When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.

  6. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.

  7. Hierarchical, model-based risk management of critical infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Baiardi, F. [Polo G.Marconi La Spezia, Universita di Pisa, Pisa (Italy); Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)], E-mail: f.baiardi@unipi.it; Telmon, C.; Sgandurra, D. [Dipartimento di Informatica, Universita di Pisa, L.go B.Pontecorvo 3 56127, Pisa (Italy)

    2009-09-15

    Risk management is a process that includes several steps, from vulnerability analysis to the formulation of a risk mitigation plan that selects countermeasures to be adopted. With reference to an information infrastructure, we present a risk management strategy that considers a sequence of hierarchical models, each describing dependencies among infrastructure components. A dependency exists anytime a security-related attribute of a component depends upon the attributes of other components. We discuss how this notion supports the formal definition of risk mitigation plan and the evaluation of the infrastructure robustness. A hierarchical relation exists among models that are analyzed because each model increases the level of details of some components in a previous one. Since components and dependencies are modeled through a hypergraph, to increase the model detail level, some hypergraph nodes are replaced by more and more detailed hypergraphs. We show how critical information for the assessment can be automatically deduced from the hypergraph and define conditions that determine cases where a hierarchical decomposition simplifies the assessment. In these cases, the assessment has to analyze the hypergraph that replaces the component rather than applying again all the analyses to a more detailed, and hence larger, hypergraph. We also show how the proposed framework supports the definition of a risk mitigation plan and discuss some indicators of the overall infrastructure robustness. Lastly, the development of tools to support the assessment is discussed.

  8. Hierarchical prediction errors in midbrain and septum during social learning

    Science.gov (United States)

    Mathys, Christoph; Weber, Lilian A. E.; Kasper, Lars; Mauer, Jan; Stephan, Klaas E.

    2017-01-01

    Abstract Social learning is fundamental to human interactions, yet its computational and physiological mechanisms are not well understood. One prominent open question concerns the role of neuromodulatory transmitters. We combined fMRI, computational modelling and genetics to address this question in two separate samples (N = 35, N = 47). Participants played a game requiring inference on an adviser’s intentions whose motivation to help or mislead changed over time. Our analyses suggest that hierarchically structured belief updates about current advice validity and the adviser’s trustworthiness, respectively, depend on different neuromodulatory systems. Low-level prediction errors (PEs) about advice accuracy not only activated regions known to support ‘theory of mind’, but also the dopaminergic midbrain. Furthermore, PE responses in ventral striatum were influenced by the Met/Val polymorphism of the Catechol-O-Methyltransferase (COMT) gene. By contrast, high-level PEs (‘expected uncertainty’) about the adviser’s fidelity activated the cholinergic septum. These findings, replicated in both samples, have important implications: They suggest that social learning rests on hierarchically related PEs encoded by midbrain and septum activity, respectively, in the same manner as other forms of learning under volatility. Furthermore, these hierarchical PEs may be broadcast by dopaminergic and cholinergic projections to induce plasticity specifically in cortical areas known to represent beliefs about others. PMID:28119508

  9. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

    Full Text Available Stepwise linear regression is a multi-variable regression method for identifying the statistically significant variables in a linear regression equation. In the present study, we present a Matlab program for stepwise regression.
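
    The program described in the paper is written in Matlab; as a language-neutral illustration of the same idea, the sketch below performs forward stepwise selection by p-value with statsmodels on simulated data (the entry threshold and the data are arbitrary).

      # Forward stepwise selection: repeatedly add the candidate variable with the
      # smallest p-value, stopping when no candidate is significant at alpha.
      # Illustration of the idea, not the paper's Matlab program.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      df = pd.DataFrame(rng.normal(size=(100, 5)), columns=list("abcde"))
      y = 2 * df["a"] - 3 * df["c"] + rng.normal(size=100)

      def forward_stepwise(X, y, alpha=0.05):
          selected, remaining = [], list(X.columns)
          while remaining:
              pvals = {}
              for var in remaining:
                  model = sm.OLS(y, sm.add_constant(X[selected + [var]])).fit()
                  pvals[var] = model.pvalues[var]
              best = min(pvals, key=pvals.get)
              if pvals[best] >= alpha:
                  break
              selected.append(best)
              remaining.remove(best)
          return selected

      print(forward_stepwise(df, y))   # typically ['a', 'c'] (order may vary)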

  10. Hierarchically structured, nitrogen-doped carbon membranes

    KAUST Repository

    Wang, Hong

    2017-08-03

    The present invention is a structure, method of making and method of use for novel macroscopic hierarchically structured, nitrogen-doped, nano-porous carbon membranes (HNDCMs) with an asymmetric and hierarchical pore architecture that can be produced by a large-scale approach. The unique HNDCMs hold great promise as components in separation and advanced carbon devices because they could offer unconventional fluidic transport phenomena on the nanoscale. Overall, the invention set forth herein covers hierarchically structured, nitrogen-doped carbon membranes and methods of making and using such membranes.

  11. A Model for Slicing JAVA Programs Hierarchically

    Institute of Scientific and Technical Information of China (English)

    Bi-Xin Li; Xiao-Cong Fan; Jun Pang; Jian-Jun Zhao

    2004-01-01

    Program slicing can be effectively used to debug, test, analyze, understand and maintain object-oriented software. In this paper, a new slicing model is proposed to slice Java programs based on their inherent hierarchical feature. The main idea of hierarchical slicing is to slice programs in a stepwise way, from package level, to class level, method level, and finally up to statement level. The stepwise slicing algorithm and the related graph reachability algorithms are presented, the architecture of the Java program Analyzing Tool (JATO) based on the hierarchical slicing model is provided, and the applications and a small case study are also discussed.

  12. Hierarchical analysis of acceptable use policies

    Directory of Open Access Journals (Sweden)

    P. A. Laughton

    2008-01-01

    Full Text Available Acceptable use policies (AUPs) are vital tools for organizations to protect themselves and their employees from misuse of the computer facilities provided. A well structured, thorough AUP is essential for any organization. It is impossible for an effective AUP to deal with every clause and remain readable. For this reason, some sections of an AUP carry more weight than others, denoting importance. The methodology used to develop the hierarchical analysis is a literature review, where various sources were consulted. This hierarchical approach to AUP analysis attempts to highlight the important sections and clauses dealt with in an AUP. The emphasis of the hierarchical analysis is to prioritize the objectives of an AUP.

  13. Hierarchical modeling and analysis for spatial data

    CERN Document Server

    Banerjee, Sudipto; Gelfand, Alan E

    2003-01-01

    Among the many uses of hierarchical modeling, its application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and data

  14. Hierarchical Visual Analysis and Steering Framework for Astrophysical Simulations

    Institute of Scientific and Technical Information of China (English)

    肖健; 张加万; 原野; 周鑫; 纪丽; 孙济洲

    2015-01-01

    A framework for accelerating modern long-running astrophysical simulations is presented, which is based on a hierarchical architecture where computational steering in the high-resolution run is performed under the guidance of knowledge obtained in the gradually refined ensemble analyses. Several visualization schemes for facilitating ensemble management, error analysis, parameter grouping and tuning are also integrated owing to the pluggable modular design. The proposed approach is prototyped based on the Flash code, and it can be extended by introducing user-defined visualization for specific requirements. Two real-world simulations, i.e., stellar wind and supernova remnant, are carried out to verify the proposed approach.

  15. XRA image segmentation using regression

    Science.gov (United States)

    Jin, Jesse S.

    1996-04-01

    Segmentation is an important step in image analysis, and thresholding is one of the most important approaches. There are several difficulties in segmentation, such as automatically selecting the threshold, dealing with intensity distortion, and noise removal. We have developed an adaptive segmentation scheme by applying the Central Limit Theorem in regression. A Gaussian regression is used to separate the distribution of background from foreground in a single-peak histogram. The separation helps to automatically determine the threshold. A small 3 by 3 window is applied and the mode of the local histogram is used to overcome noise. Thresholding is based on local weighting, where regression is used again for parameter estimation. A connectivity test is applied to the final results to remove impulse noise. We have applied the algorithm to X-ray angiogram images to extract brain arteries. The algorithm works well for single-peak distributions where there is no valley in the histogram. The regression provides a method to apply knowledge in clustering. Extending regression to multiple-level segmentation needs further investigation.
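
    A minimal sketch of the central idea (fit a Gaussian to the dominant background peak of a single-peak histogram and place the threshold a few standard deviations above its mean) is given below on synthetic intensities; the local 3 by 3 windowing, regression-based weighting and connectivity test of the paper are omitted.

      # Fit a Gaussian to the dominant background peak of a single-peak histogram and
      # set the threshold a few sigma above the fitted mean. Synthetic image values.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(5)
      background = rng.normal(90, 10, size=50000)   # dominant background intensities
      vessels = rng.normal(170, 15, size=3000)      # small bright foreground population
      image = np.concatenate([background, vessels])

      counts, edges = np.histogram(image, bins=100)
      centers = 0.5 * (edges[:-1] + edges[1:])

      def gauss(x, a, mu, sigma):
          return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      (a, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=[counts.max(), 100, 20])
      threshold = mu + 3 * abs(sigma)               # background cut-off
      print(f"threshold = {threshold:.1f}")
      foreground = image > threshold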

  16. Heterogeneity in drug abuse among juvenile offenders: is mixture regression more informative than standard regression?

    Science.gov (United States)

    Montgomery, Katherine L; Vaughn, Michael G; Thompson, Sanna J; Howard, Matthew O

    2013-11-01

    Research on juvenile offenders has largely treated this population as a homogeneous group. However, recent findings suggest that this at-risk population may be considerably more heterogeneous than previously believed. This study compared mixture regression analyses with standard regression techniques in an effort to explain how known factors such as distress, trauma, and personality are associated with drug abuse among juvenile offenders. Researchers recruited 728 juvenile offenders from Missouri juvenile correctional facilities for participation in this study. Researchers investigated past-year substance use in relation to the following variables: demographic characteristics (gender, ethnicity, age, familial use of public assistance), antisocial behavior, and mental illness symptoms (psychopathic traits, psychiatric distress, and prior trauma). Results indicated that standard and mixed regression approaches identified significant variables related to past-year substance use among this population; however, the mixture regression methods provided greater specificity in results. Mixture regression analytic methods may help policy makers and practitioners better understand and intervene with the substance-related subgroups of juvenile offenders.
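
    A small illustration of why a mixture regression can be more informative than a standard regression is sketched below: a two-component mixture of linear regressions is fitted by EM to synthetic data with two latent subgroups, and contrasted with a single pooled fit. This is not the study's model or data.

      # Two-component mixture of linear regressions fit by EM, versus a single
      # pooled regression. Synthetic data with two latent subgroups; illustration only.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 400
      x = rng.uniform(-2, 2, size=n)
      group = rng.integers(0, 2, size=n)
      y = np.where(group == 0, 1.0 + 2.0 * x, 4.0 - 1.5 * x) + rng.normal(scale=0.4, size=n)
      X = np.column_stack([np.ones(n), x])

      betas = [np.array([0.0, 1.0]), np.array([1.0, -1.0])]   # rough initial components
      sigmas, weights = [1.0, 1.0], [0.5, 0.5]

      for _ in range(100):                                    # EM iterations
          # E-step: responsibility of each component for each point
          dens = np.column_stack([
              w / s * np.exp(-0.5 * ((y - X @ b) / s) ** 2)
              for b, s, w in zip(betas, sigmas, weights)])
          resp = dens / dens.sum(axis=1, keepdims=True)
          # M-step: weighted least squares per component
          for k in range(2):
              W = resp[:, k]
              betas[k] = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
              sigmas[k] = np.sqrt(np.sum(W * (y - X @ betas[k]) ** 2) / W.sum())
              weights[k] = W.mean()

      pooled = np.linalg.lstsq(X, y, rcond=None)[0]
      print("pooled fit            :", np.round(pooled, 2))   # averages the two subgroups
      print("mixture component fits:", [np.round(b, 2) for b in betas])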

  17. Biplots in Reduced-Rank Regression

    NARCIS (Netherlands)

    Braak, ter C.J.F.; Looman, C.W.N.

    1994-01-01

    Regression problems with a number of related response variables are typically analyzed by separate multiple regressions. This paper shows how these regressions can be visualized jointly in a biplot based on reduced-rank regression. Reduced-rank regression combines multiple regression and principal c

  18. Interpretation of Standardized Regression Coefficients in Multiple Regression.

    Science.gov (United States)

    Thayer, Jerome D.

    The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for…

  19. The relation of student behavior, peer status, race, and gender to decisions about school discipline using CHAID decision trees and regression modeling.

    Science.gov (United States)

    Horner, Stacy B; Fireman, Gary D; Wang, Eugene W

    2010-04-01

    Peer nominations and demographic information were collected from a diverse sample of 1493 elementary school participants to examine behavior (overt and relational aggression, impulsivity, and prosociality), context (peer status), and demographic characteristics (race and gender) as predictors of teacher and administrator decisions about discipline. Exploratory results using classification tree analyses indicated students nominated as average or highly overtly aggressive were more likely to be disciplined than others. Among these students, race was the most significant predictor, with African American students more likely to be disciplined than Caucasians, Hispanics, or Others. Among the students nominated as low in overt aggression, a lack of prosocial behavior was the most significant predictor. Confirmatory analysis using hierarchical logistic regression supported the exploratory results. Similarities with other biased referral patterns, proactive classroom management strategies, and culturally sensitive recommendations are discussed.

  20. Image meshing via hierarchical optimization

    Institute of Scientific and Technical Information of China (English)

    Hao XIE; Ruo-feng TONG‡

    2016-01-01

    Vector graphic, as a kind of geometric representation of raster images, has many advantages, e.g., definition independence and editing facility. A popular way to convert raster images into vector graphics is image meshing, the aim of which is to find a mesh to represent an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm solving the problem from coarser levels to finer ones, providing initialization for each level with its coarser ascent. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes the problem much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly. Then an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.

  1. Image meshing via hierarchical optimization*

    Institute of Scientific and Technical Information of China (English)

    Hao XIE; Ruo-feng TONG

    2016-01-01

    Vector graphic, as a kind of geometric representation of raster images, has many advantages, e.g., definition independence and editing facility. A popular way to convert raster images into vector graphics is image meshing, the aim of which is to find a mesh to represent an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm solving the problem from coarser levels to finer ones, providing initialization for each level with its coarser ascent. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes the problem much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly. Then an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.

  2. Hierarchical Bayes Ensemble Kalman Filtering

    CERN Document Server

    Tsyrulnikov, Michael

    2015-01-01

    Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix B. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using an ad-hoc regularization, we adopt the idea by Myrseth and Omre (2010) and explicitly admit that the B matrix is unknown and random and estimate it along with the state (x) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components P and Q of the B matrix into the extended control vector (x, P, Q). Similarly, we break the traditional backgrou...

  3. Inferential Models for Linear Regression

    Directory of Open Access Journals (Sweden)

    Zuoyi Zhang

    2011-09-01

    Full Text Available Linear regression is arguably one of the most widely used statistical methods in applications.  However, important problems, especially variable selection, remain a challenge for classical modes of inference.  This paper develops a recently proposed framework of inferential models (IMs in the linear regression context.  In general, an IM is able to produce meaningful probabilistic summaries of the statistical evidence for and against assertions about the unknown parameter of interest and, moreover, these summaries are shown to be properly calibrated in a frequentist sense.  Here we demonstrate, using simple examples, that the IM framework is promising for linear regression analysis --- including model checking, variable selection, and prediction --- and for uncertain inference in general.

  4. [Is regression of atherosclerosis possible?].

    Science.gov (United States)

    Thomas, D; Richard, J L; Emmerich, J; Bruckert, E; Delahaye, F

    1992-10-01

    Experimental studies have shown the regression of atherosclerosis in animals given a cholesterol-rich diet and then given a normal diet or hypolipidemic therapy. Despite favourable results of clinical trials of primary prevention modifying the lipid profile, the concept of atherosclerosis regression in man remains very controversial. The methodological approach is difficult: this is based on angiographic data and requires strict standardisation of angiographic views and reliable quantitative techniques of analysis which are available with image processing. Several methodologically acceptable clinical coronary studies have shown not only stabilisation but also regression of atherosclerotic lesions with reductions of about 25% in total cholesterol levels and of about 40% in LDL cholesterol levels. These reductions were obtained either by drugs as in CLAS (Cholesterol Lowering Atherosclerosis Study), FATS (Familial Atherosclerosis Treatment Study) and SCOR (Specialized Center of Research Intervention Trial), by profound modifications in dietary habits as in the Lifestyle Heart Trial, or by surgery (ileo-caecal bypass) as in POSCH (Program On the Surgical Control of the Hyperlipidemias). On the other hand, trials with non-lipid lowering drugs such as the calcium antagonists (INTACT, MHIS) have not shown significant regression of existing atherosclerotic lesions but only a decrease on the number of new lesions. The clinical benefits of these regression studies are difficult to demonstrate given the limited period of observation, relatively small population numbers and the fact that in some cases the subjects were asymptomatic. The decrease in the number of cardiovascular events therefore seems relatively modest and concerns essentially subjects who were symptomatic initially. The clinical repercussion of studies of prevention involving a single lipid factor is probably partially due to the reduction in progression and anatomical regression of the atherosclerotic plaque

  5. Nonparametric regression with filtered data

    CERN Document Server

    Linton, Oliver; Nielsen, Jens Perch; Van Keilegom, Ingrid; 10.3150/10-BEJ260

    2011-01-01

    We present a general principle for estimating a regression function nonparametrically, allowing for a wide variety of data filtering, for example, repeated left truncation and right censoring. Both the mean and the median regression cases are considered. The method works by first estimating the conditional hazard function or conditional survivor function and then integrating. We also investigate improved methods that take account of model structure such as independent errors and show that such methods can improve performance when the model structure is true. We establish the pointwise asymptotic normality of our estimators.

  6. Quasi-least squares regression

    CERN Document Server

    Shults, Justine

    2014-01-01

    Drawing on the authors' substantial expertise in modeling longitudinal and clustered data, Quasi-Least Squares Regression provides a thorough treatment of quasi-least squares (QLS) regression-a computational approach for the estimation of correlation parameters within the framework of generalized estimating equations (GEEs). The authors present a detailed evaluation of QLS methodology, demonstrating the advantages of QLS in comparison with alternative methods. They describe how QLS can be used to extend the application of the traditional GEE approach to the analysis of unequally spaced longitu

  7. Multivariate differential analyses of adolescents' experiences of ...

    African Journals Online (AJOL)

    and second order factor analyses, correlations, multiple regression, MANOVA, ... This does not mean that the high levels of violence, crime and abuse that are aggravated by socio economic factors such as poverty, unemployment, corruption, ...

  8. An Automatic Hierarchical Delay Analysis Tool

    Institute of Scientific and Technical Information of China (English)

    Farid Mheir-El-Saadi; Bozena Kaminska

    1994-01-01

    The performance analysis of VLSI integrated circuits (ICs) with flat tools is slow and even sometimes impossible to complete. Some hierarchical tools have been developed to speed up the analysis of these large ICs. However, these hierarchical tools suffer from a poor interaction with the CAD database and poorly automatized operations. We introduce a general hierarchical framework for performance analysis to solve these problems. The circuit analysis is automatic under the proposed framework. Information that has been automatically abstracted in the hierarchy is kept in database properties along with the topological information. A limited software implementation of the framework, PREDICT, has also been developed to analyze the delay performance. Experimental results show that hierarchical analysis CPU time and memory requirements are low if heuristics are used during the abstraction process.

  9. Packaging glass with hierarchically nanostructured surface

    KAUST Repository

    He, Jr-Hau

    2017-08-03

    An optical device includes an active region and packaging glass located on top of the active region. A top surface of the packaging glass includes hierarchical nanostructures comprised of honeycombed nanowalls (HNWs) and nanorod (NR) structures extending from the HNWs.

  10. Generation of hierarchically correlated multivariate symbolic sequences

    CERN Document Server

    Tumminello, Mi; Mantegna, R N

    2008-01-01

    We introduce an algorithm to generate multivariate series of symbols from a finite alphabet with a given hierarchical structure of similarities. The target hierarchical structure of similarities is arbitrary, for instance the one obtained by some hierarchical clustering procedure as applied to an empirical matrix of Hamming distances. The algorithm can be interpreted as the finite alphabet equivalent of the recently introduced hierarchically nested factor model (M. Tumminello et al. EPL 78 (3) 30006 (2007)). The algorithm is based on a generating mechanism that is different from the one used in the mutation rate approach. We apply the proposed methodology for investigating the relationship between the bootstrap value associated with a node of a phylogeny and the probability of finding that node in the true phylogeny.

  11. Hierarchical modularity in human brain functional networks

    CERN Document Server

    Meunier, D; Fornito, A; Ersche, K D; Bullmore, E T; 10.3389/neuro.11.037.2009

    2010-01-01

    The idea that complex systems have a hierarchical modular organization originates in the early 1960s and has recently attracted fresh support from quantitative studies of large scale, real-life networks. Here we investigate the hierarchical modular (or "modules-within-modules") decomposition of human brain functional networks, measured using functional magnetic resonance imaging (fMRI) in 18 healthy volunteers under no-task or resting conditions. We used a customized template to extract networks with more than 1800 regional nodes, and we applied a fast algorithm to identify nested modular structure at several hierarchical levels. We used mutual information, 0 < I < 1, to estimate the similarity of community structure of networks in different subjects, and to identify the individual network that is most representative of the group. Results show that human brain functional networks have a hierarchical modular organization with a fair degree of similarity between subjects, I=0.63. The largest 5 modules at ...

  12. HIERARCHICAL ORGANIZATION OF INFORMATION, IN RELATIONAL DATABASES

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2008-05-01

    Full Text Available In this paper I will present different types of representation of hierarchical information inside a relational database. I will also compare them to find the best organization for specific scenarios.
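
    One common relational representation of a hierarchy is the adjacency list, in which each row stores a reference to its parent and the tree is walked with a recursive common table expression. The sketch below shows only this representation, on a made-up category table; the paper compares several alternatives.

      # Adjacency-list representation of a hierarchy in a relational table, walked
      # with a recursive common table expression (SQLite). Table and rows are made up.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE category (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
      con.executemany("INSERT INTO category VALUES (?, ?, ?)", [
          (1, None, "root"),
          (2, 1, "electronics"), (3, 1, "books"),
          (4, 2, "phones"), (5, 2, "laptops"), (6, 3, "fiction"),
      ])

      rows = con.execute("""
          WITH RECURSIVE subtree(id, name, depth) AS (
              SELECT id, name, 0 FROM category WHERE parent_id IS NULL
              UNION ALL
              SELECT c.id, c.name, s.depth + 1
              FROM category c JOIN subtree s ON c.parent_id = s.id
          )
          SELECT depth, name FROM subtree ORDER BY depth, name
      """).fetchall()

      for depth, name in rows:
          print("  " * depth + name)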

  13. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    The hierarchical network problem is the problem of finding the least-cost network, with nodes divided into groups, edges connecting nodes in each group, and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks where hierarchies exist. Hierarchical networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
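
    A generic simulated-annealing skeleton for a toy network-design objective (edge cost plus a penalty for disconnection) is sketched below; the paper's construction sub-algorithm, demand routing and two-level grouping are not reproduced.

      # Generic simulated-annealing skeleton for a toy network-design problem:
      # choose a set of edges minimising edge cost plus a penalty for disconnection.
      # Simplified stand-in, not the paper's construction/routing algorithm.
      import math
      import random
      import itertools

      random.seed(7)
      n = 8
      nodes = range(n)
      cost = {(i, j): random.uniform(1, 10) for i, j in itertools.combinations(nodes, 2)}

      def connected(edges):
          """Check whether the chosen edges connect all nodes (simple graph search)."""
          adj = {v: set() for v in nodes}
          for i, j in edges:
              adj[i].add(j); adj[j].add(i)
          seen, stack = {0}, [0]
          while stack:
              v = stack.pop()
              for u in adj[v] - seen:
                  seen.add(u); stack.append(u)
          return len(seen) == n

      def objective(edges):
          return sum(cost[e] for e in edges) + (0 if connected(edges) else 1000)

      current = set(cost)                      # start with the complete graph
      best, T = set(current), 10.0
      for step in range(20000):
          candidate = set(current)
          e = random.choice(list(cost))        # toggle one random edge
          candidate.symmetric_difference_update({e})
          delta = objective(candidate) - objective(current)
          if delta < 0 or random.random() < math.exp(-delta / T):
              current = candidate
              if objective(current) < objective(best):
                  best = set(current)
          T *= 0.9995                          # geometric cooling
      print("edges kept:", len(best), "cost:", round(objective(best), 1))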

  14. When to Use Hierarchical Linear Modeling

    National Research Council Canada - National Science Library

    Veronika Huta

    2014-01-01

    Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis...

  15. An introduction to hierarchical linear modeling

    National Research Council Canada - National Science Library

    Woltman, Heather; Feldstain, Andrea; MacKay, J. Christine; Rocchi, Meredith

    2012-01-01

    This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis...

  16. Conservation Laws in the Hierarchical Model

    NARCIS (Netherlands)

    Beijeren, H. van; Gallavotti, G.; Knops, H.

    1974-01-01

    An exposition of the renormalization-group equations for the hierarchical model is given. Attention is drawn to some properties of the spin distribution functions which are conserved under the action of the renormalization group.

  17. Hierarchical DSE for multi-ASIP platforms

    DEFF Research Database (Denmark)

    Micconi, Laura; Corvino, Rosilde; Gangadharan, Deepak;

    2013-01-01

    This work proposes a hierarchical Design Space Exploration (DSE) for the design of multi-processor platforms targeted to specific applications with strict timing and area constraints. In particular, it considers platforms integrating multiple Application Specific Instruction Set Processors (ASIPs...

  18. Hierarchical organization versus self-organization

    OpenAIRE

    Busseniers, Evo

    2014-01-01

    In this paper we try to define the difference between hierarchical organization and self-organization. Organization is defined as a structure with a function, so we can define the difference between hierarchical organization and self-organization both on the structure and on the function. In the next two chapters these two definitions are given. For the structure we will use some existing definitions in graph theory; for the function we will use existing theory on (self-)organization. In the t...

  19. Hierarchical decision making for flood risk reduction

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    2013-01-01

    In current practice, structures are often optimized individually without considering the benefits of having a hierarchy of protection structures. It is argued here that the joint consideration of hierarchically integrated protection structures is beneficial. A hierarchical decision model is utilized to analyze and compare the benefit of large upstream protection structures and local downstream protection structures in regard to epistemic uncertainty parameters. Results suggest that epistemic uncertainty influences the outcome of the decision model and that, depending on the magnitude of epistemic uncertainty...

  20. Hierarchical self-organization of tectonic plates

    OpenAIRE

    2010-01-01

    The Earth's surface is subdivided into eight large tectonic plates and many smaller ones. We reconstruct the plate tessellation history and demonstrate that both large and small plates display two distinct hierarchical patterns, described by different power-law size relationships. While small plates display little organisational change through time, the structure of the large plates oscillates between minimum and maximum hierarchical tessellations. The organization of large plates rapidly chan...

  1. Angelic Hierarchical Planning: Optimal and Online Algorithms

    Science.gov (United States)

    2008-12-06

    Only fragments of the record text survive: Definition 2 (following Parr and Russell, 1998), stating that a plan ah* is hierarchically optimal iff ah* = argmin over a in I*(Act, s0) ...; reference fragments ("... Murdock, Dan Wu, and Fusun Yaman. SHOP2: An HTN planning system. JAIR, 20:379–404, 2003" and "Ronald Parr and Stuart Russell. Reinforcement Learning with ..."); and the title block "Angelic Hierarchical Planning: Optimal and Online Algorithms, Bhaskara Marthi, Stuart J. Russell, Jason Wolfe, Electrical Engineering and Computer ...".

  2. Hierarchical Needs, Income Comparisons and Happiness Levels

    OpenAIRE

    Drakopoulos, Stavros

    2011-01-01

    The cornerstone of the hierarchical approach is that there are some basic human needs which must be satisfied before non-basic needs come into the picture. The hierarchical structure of needs implies that the satisfaction of primary needs provides substantial increases to individual happiness compared to the subsequent satisfaction of secondary needs. This idea can be combined with the concept of comparison income which means that individuals compare rewards with individuals with similar char...

  3. Regression of lumbar disk herniation

    Directory of Open Access Journals (Sweden)

    G. Yu Evzikov

    2015-01-01

    Full Text Available Compression of a spinal nerve root, giving rise to pain and to sensory and motor disorders in the area of its innervation, is the most vivid manifestation of a herniated intervertebral disk. Different treatment modalities for these conditions, including neurosurgery, are discussed. There has been recent evidence that herniated disks can regress spontaneously. The paper describes a female patient with a large lateralized disc extrusion that caused compression of the nerve root S1, leading to obvious myotonic and radicular syndromes. Magnetic resonance imaging showed that the clinical manifestations of discogenic radiculopathy, as well as the myotonic syndrome and the morphological changes, had completely regressed 8 months later. The likely mechanism is inflammation-induced resorption of a large herniated disk fragment, which agrees with the data available in the literature. A decision to perform neurosurgery, for which the patient had indications, was made during her first consultation. After regression of the discogenic radiculopathy, there was only moderate pain caused by musculoskeletal diseases (facet syndrome, piriformis syndrome) that were successfully eliminated by minimally invasive techniques.

  4. Heteroscedasticity checks for regression models

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    For checking on heteroscedasticity in regression models, a unified approach is proposed to constructing test statistics in parametric and nonparametric regression models. For nonparametric regression, the test is not affected sensitively by the choice of smoothing parameters which are involved in estimation of the nonparametric regression function. The limiting null distribution of the test statistic remains the same in a wide range of the smoothing parameters. When the covariate is one-dimensional, the tests are, under some conditions, asymptotically distribution-free. In the high-dimensional cases, the validity of bootstrap approximations is investigated. It is shown that a variant of the wild bootstrap is consistent while the classical bootstrap is not in the general case, but is applicable if some extra assumption on conditional variance of the squared error is imposed. A simulation study is performed to provide evidence of how the tests work and compare with tests that have appeared in the literature. The approach may readily be extended to handle partial linear, and linear autoregressive models.

  5. Cactus: An Introduction to Regression

    Science.gov (United States)

    Hyde, Hartley

    2008-01-01

    When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

  6. Growth Regression and Economic Theory

    NARCIS (Netherlands)

    Elbers, Chris; Gunning, Jan Willem

    2002-01-01

    In this note we show that the standard, loglinear growth regression specification is consistent with one and only one model in the class of stochastic Ramsey models. This model is highly restrictive: it requires a Cobb-Douglas technology and a 100% depreciation rate, and it implies that risk does not af

  7. Correlation Weights in Multiple Regression

    Science.gov (United States)

    Waller, Niels G.; Jones, Jeff A.

    2010-01-01

    A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…

  8. Ridge Regression for Interactive Models.

    Science.gov (United States)

    Tate, Richard L.

    1988-01-01

    An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…

  9. Evaluating Hierarchical Structure in Music Annotations.

    Science.gov (United States)

    McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo

    2017-01-01

    Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  10. Evaluating Hierarchical Structure in Music Annotations

    Directory of Open Access Journals (Sweden)

    Brian McFee

    2017-08-01

    Full Text Available Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  11. Hierarchical Nanoceramics for Industrial Process Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Ruud, James A.; Brosnan, Kristen H.; Striker, Todd; Ramaswamy, Vidya; Aceto, Steven C.; Gao, Yan; Willson, Patrick D.; Manoharan, Mohan; Armstrong, Eric N.; Wachsman, Eric D.; Kao, Chi-Chang

    2011-07-15

    This project developed a robust, tunable, hierarchical nanoceramics materials platform for industrial process sensors in harsh-environments. Control of material structure at multiple length scales from nano to macro increased the sensing response of the materials to combustion gases. These materials operated at relatively high temperatures, enabling detection close to the source of combustion. It is anticipated that these materials can form the basis for a new class of sensors enabling widespread use of efficient combustion processes with closed loop feedback control in the energy-intensive industries. The first phase of the project focused on materials selection and process development, leading to hierarchical nanoceramics that were evaluated for sensing performance. The second phase focused on optimizing the materials processes and microstructures, followed by validation of performance of a prototype sensor in a laboratory combustion environment. The objectives of this project were achieved by: (1) synthesizing and optimizing hierarchical nanostructures; (2) synthesizing and optimizing sensing nanomaterials; (3) integrating sensing functionality into hierarchical nanostructures; (4) demonstrating material performance in a sensing element; and (5) validating material performance in a simulated service environment. The project developed hierarchical nanoceramic electrodes for mixed potential zirconia gas sensors with increased surface area and demonstrated tailored electrocatalytic activity operable at high temperatures enabling detection of products of combustion such as NOx close to the source of combustion. Methods were developed for synthesis of hierarchical nanostructures with high, stable surface area, integrated catalytic functionality within the structures for gas sensing, and demonstrated materials performance in harsh lab and combustion gas environments.

  12. Hierarchical Bayesian modeling of the space - time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Osei, F.B.; Duker, Alfred A.; Stein, A.

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  13. Hierarchical Bayesian modeling of the space-time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Duker, Alfred A.; Stein, Alfred

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  14. HIERARCHICAL OPTIMIZATION MODEL ON GEONETWORK

    Directory of Open Access Journals (Sweden)

    Z. Zha

    2012-07-01

    Full Text Available In the existing experience of constructing Spatial Data Infrastructure (SDI), GeoNetwork, as an integrated geographical information solution, is an effective way of building SDI. While GeoNetwork serves as an internet application, several shortcomings are exposed. The first is that the time required for data loading has increased considerably with the growth of the metadata count; consequently, the efficiency of the query and search services becomes lower. Another problem is that stability and robustness are both degraded by the huge amount of metadata. The final flaw is that the requirements of multi-user concurrent access over massive data are not effectively satisfied on the internet. A novel approach, the Hierarchical Optimization Model (HOM), is presented in this paper to overcome GeoNetwork's inability to work with massive data. HOM optimizes GeoNetwork in several respects: internal procedures, external deployment strategies, etc. This model builds an efficient index for accessing huge volumes of metadata and supporting concurrent processes. In this way, the services based on GeoNetwork can remain stable while handling massive metadata. As an experiment, we deployed more than 30 GeoNetwork nodes and harvested nearly 1.1 million metadata records. A comparison between the HOM-improved software and the original shows that the model makes the indexing and retrieval processes faster and keeps their speed stable as the amount of metadata increases. It also remains stable under multi-user concurrent access to system services. The experiment achieved good results and showed that our optimization model is efficient and reliable.

  15. Uncertainty and Sensitivity Analyses Plan

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  16. C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework

    CERN Document Server

    Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina

    2010-01-01

    Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an L1-regularized linear regression problem, commonly referred to as Lasso or Basis Pursuit. In this work we combine the sparsity-inducing property of the Lasso model at the individual feature level, with the block-sparsity property of the Group Lasso model, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the Hierarchical Lasso (HiLasso), which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level, but not necessarily at the lower (inside the group) level, obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for ap...
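
    The HiLasso objective combines an elementwise L1 penalty with a group-lasso penalty over predefined groups. The sketch below runs a plain proximal-gradient (ISTA) loop for that combined penalty on synthetic data; it is only an illustration of the penalty structure, not the authors' solver, and the collaborative extension is omitted.

      # Minimal proximal-gradient (ISTA) sketch of the hierarchical (sparse-group)
      # Lasso objective  0.5*||y - X b||^2 + lam1*||b||_1 + lam2*sum_g ||b_g||_2.
      # Synthetic data; illustration only, not the C-HiLasso solver.
      import numpy as np

      rng = np.random.default_rng(8)
      n, p, group_size = 100, 20, 5
      groups = [np.arange(g, g + group_size) for g in range(0, p, group_size)]

      X = rng.normal(size=(n, p))
      b_true = np.zeros(p)
      b_true[0:3] = [2.0, -1.5, 1.0]                              # only the first group is active
      y = X @ b_true + rng.normal(scale=0.3, size=n)

      lam1, lam2 = 0.5, 1.0
      step = 1.0 / np.linalg.norm(X, 2) ** 2                      # 1 / Lipschitz constant
      b = np.zeros(p)
      for _ in range(500):
          grad = X.T @ (X @ b - y)
          z = b - step * grad
          z = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0)  # elementwise soft-threshold
          for g in groups:                                         # group soft-threshold
              norm_g = np.linalg.norm(z[g])
              z[g] = 0 if norm_g == 0 else max(0, 1 - step * lam2 / norm_g) * z[g]
          b = z

      print(np.round(b, 2))   # nonzero entries concentrate in the first group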

  17. Social Influence on Information Technology Adoption and Sustained Use in Healthcare: A Hierarchical Bayesian Learning Method Analysis

    Science.gov (United States)

    Hao, Haijing

    2013-01-01

    Information technology adoption and diffusion is currently a significant challenge in the healthcare delivery setting. This thesis includes three papers that explore social influence on information technology adoption and sustained use in the healthcare delivery environment using conventional regression models and novel hierarchical Bayesian…

  18. Multilevel modeling was a convenient alternative to common regression designs in longitudinal suicide research.

    Science.gov (United States)

    Antretter, Elfi; Dunkel, Dirk; Osvath, Peter; Voros, Viktor; Fekete, Sandor; Haring, Christian

    2006-06-01

    The prospective investigation of repetitive nonfatal suicidal behavior is associated with two methodological problems. First, due to the commonly used definitions of nonfatal suicidal behavior, clinical samples usually consist of patients with considerable between-person variability. Second, repeated nonfatal suicidal episodes of the same subjects are likely to be correlated. We examined three regression techniques to comparatively evaluate their efficiency in addressing the given methodological problems. Repeated episodes of nonfatal suicidal behavior were assessed in two independent patient samples during a 2-year follow-up period. The first regression design modeled repetitive nonfatal suicidal behavior as a summary measure. The second regression model treated repeated episodes of the same subject as independent events. The third regression model represented a hierarchical linear model. The estimated mean effects of the first model were likely to be nonrepresentative for a considerable part of the study subjects. The second regression design overemphasized the impact of the predictor variables. The hierarchical linear model most appropriately accounted for the heterogeneity of the samples and the correlated data structure. The nonhierarchical regression designs did not provide appropriate statistical models for the prospective investigation of repetitive nonfatal suicidal behavior. Multilevel modeling provides a convenient alternative.

  19. Comparing methods of analysing datasets with small clusters: case studies using four paediatric datasets.

    Science.gov (United States)

    Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil

    2009-07-01

    Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except multilevel modelling using Gauss-Hermite quadrature (ML GH 'xtlogit' in Stata), gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples), there appears to be less need to adjust for clustering.
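
    A hedged sketch of two of the modelling options compared above, written with statsmodels rather than Stata; the simulated singleton/twin/triplet data and variable names are invented, and the 'xtreg'/'xtmixed'/'xtlogit' estimators are only approximated here by a random-intercept mixed model, a cluster-robust logistic regression, and a GEE with an exchangeable working correlation.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)

        # Simulate small clusters (singletons, twins, triplets), as in multiple-birth data.
        rows = []
        for family in range(400):
            size = rng.choice([1, 2, 3], p=[0.85, 0.12, 0.03])
            u = rng.normal(0, 0.8)                         # shared family effect
            for _ in range(size):
                gest_age = rng.normal(30, 3)               # weeks, illustrative
                weight = 1.5 + 0.12 * gest_age + u + rng.normal(0, 0.5)
                low_apgar = rng.binomial(1, 1 / (1 + np.exp(0.2 * (gest_age - 30) - u)))
                rows.append((family, gest_age, weight, low_apgar))
        df = pd.DataFrame(rows, columns=["family", "gest_age", "weight", "low_apgar"])

        # Continuous outcome: random-intercept multilevel model (clusters = families).
        ml = smf.mixedlm("weight ~ gest_age", df, groups=df["family"]).fit()
        print(ml.params)

        # Dichotomous outcome, option 1: logistic regression with cluster-robust standard errors.
        clustered_logit = smf.logit("low_apgar ~ gest_age", df).fit(
            cov_type="cluster", cov_kwds={"groups": df["family"]}, disp=False)
        print(clustered_logit.bse)

        # Dichotomous outcome, option 2: GEE with an exchangeable working correlation.
        gee = smf.gee("low_apgar ~ gest_age", groups="family", data=df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable()).fit()
        print(gee.bse)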

  20. Polynomial Regressions and Nonsense Inference

    Directory of Open Access Journals (Sweden)

    Daniel Ventosa-Santaulària

    2013-11-01

    Full Text Available Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340) by proving that an inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.
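
    A small simulation in the spirit of the result described above, assuming two independent driftless random walks; the polynomial order, sample size, and nominal 5% level are arbitrary. Because the series are nonstationary and do not cointegrate, the t-statistics on the polynomial terms reject far more often than the nominal level.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        T, reps, rejections = 200, 500, 0

        for _ in range(reps):
            # Two independent random walks: stochastically nonstationary, no cointegration.
            x = np.cumsum(rng.normal(size=T))
            y = np.cumsum(rng.normal(size=T))

            # Quadratic polynomial specification: y_t = b0 + b1*x_t + b2*x_t**2 + u_t.
            X = sm.add_constant(np.column_stack([x, x ** 2]))
            res = sm.OLS(y, X).fit()

            # Count a "significant" polynomial term at the nominal 5% level.
            if (np.abs(res.tvalues[1:]) > 1.96).any():
                rejections += 1

        print(f"spurious rejection rate: {rejections / reps:.2f} (nominal level: 0.05)")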

  1. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

    to be a committed artist, and how that translates into supporting al-Assad’s rule in Syria; the Ramadan programme Harrir Aqlak’s attempt to relaunch an intellectual renaissance and to promote religious pluralism; and finally, al-Mayadeen’s cooperation with the pan-Latin American TV station TeleSur and its ambitions...... becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance...... coalition (Iran, Hizbollah, Syria), capitalises on a series of factors that bring them together in spite of their otherwise diverse worldviews and agendas. The New Regressive Left is united by resistance against the growing influence of Saudi Arabia in the religious, cultural, political, economic...

  2. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
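
    For orientation only, a minimal sketch of ordinary (uncorrected) linear quantile regression with statsmodels on simulated data; it does not implement the joint estimating equations or the EM-type algorithm proposed in the paper, and the error-prone covariate is simulated merely to show the attenuation that the proposed method is designed to correct.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 2_000
        x_true = rng.normal(0, 1, n)
        y = 1.0 + 2.0 * x_true + rng.standard_t(df=3, size=n)   # heavy-tailed noise
        x_obs = x_true + rng.normal(0, 0.7, n)                  # covariate measured with error
        data = pd.DataFrame({"y": y, "x_true": x_true, "x_obs": x_obs})

        for q in (0.25, 0.5, 0.75):
            fit_true = smf.quantreg("y ~ x_true", data).fit(q=q)
            fit_obs = smf.quantreg("y ~ x_obs", data).fit(q=q)
            # The slope estimated from the error-prone covariate is attenuated toward zero.
            print(f"q={q}: slope(true x) = {fit_true.params['x_true']:.2f}, "
                  f"slope(observed x) = {fit_obs.params['x_obs']:.2f}")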

  3. Heteroscedasticity checks for regression models

    Institute of Scientific and Technical Information of China (English)

    ZHU; Lixing

    2001-01-01

    [1]Carroll, R. J., Ruppert, D., Transformation and Weighting in Regression, New York: Chapman and Hall, 1988.[2]Cook, R. D., Weisberg, S., Diagnostics for heteroscedasticity in regression, Biometrika, 1988, 70: 1—10.[3]Davidian, M., Carroll, R. J., Variance function estimation, J. Amer. Statist. Assoc., 1987, 82: 1079—1091.[4]Bickel, P., Using residuals robustly I: Tests for heteroscedasticity, Ann. Statist., 1978, 6: 266—291.[5]Carroll, R. J., Ruppert, D., On robust tests for heteroscedasticity, Ann. Statist., 1981, 9: 205—209.[6]Eubank, R. L., Thomas, W., Detecting heteroscedasticity in nonparametric regression, J. Roy. Statist. Soc., Ser. B, 1993, 55: 145—155.[7]Diblasi, A., Bowman, A., Testing for constant variance in a linear model, Statist. and Probab. Letters, 1997, 33: 95—103.[8]Dette, H., Munk, A., Testing heteroscedasticity in nonparametric regression, J. R. Statist. Soc. B, 1998, 60: 693—708.[9]Müller, H. G., Zhao, P. L., On a semi-parametric variance function model and a test for heteroscedasticity, Ann. Statist., 1995, 23: 946—967.[10]Stute, W., Manteiga, G., Quindimil, M. P., Bootstrap approximations in model checks for regression, J. Amer. Statist. Assoc., 1998, 93: 141—149.[11]Stute, W., Thies, G., Zhu, L. X., Model checks for regression: An innovation approach, Ann. Statist., 1998, 26: 1916—1939.[12]Shorack, G. R., Wellner, J. A., Empirical Processes with Applications to Statistics, New York: Wiley, 1986.[13]Efron, B., Bootstrap methods: Another look at the jackknife, Ann. Statist., 1979, 7: 1—26.[14]Wu, C. F. J., Jackknife, bootstrap and other re-sampling methods in regression analysis, Ann. Statist., 1986, 14: 1261—1295.[15]Härdle, W., Mammen, E., Comparing non-parametric versus parametric regression fits, Ann. Statist., 1993, 21: 1926—1947.[16]Liu, R. Y., Bootstrap procedures under some non-i.i.d. models, Ann. Statist., 1988, 16: 1696—1708.[17

  4. Clustered regression with unknown clusters

    CERN Document Server

    Barman, Kishor

    2011-01-01

    We consider a collection of prediction experiments, which are clustered in the sense that groups of experiments exhibit a similar relationship between the predictor and response variables. The experiment clusters as well as the regression relationships are unknown. The regression relationships define the experiment clusters, and in general, the predictor and response variables may not exhibit any clustering. We call this prediction problem clustered regression with unknown clusters (CRUC) and in this paper we focus on linear regression. We study and compare several methods for CRUC, demonstrate their applicability to the Yahoo Learning-to-rank Challenge (YLRC) dataset, and investigate an associated mathematical model. CRUC is at the crossroads of many prior works and we study several prediction algorithms with diverse origins: an adaptation of the expectation-maximization algorithm, an approach inspired by K-means clustering, the singular value thresholding approach to matrix rank minimization u...

  5. Robust nonlinear regression in applications

    OpenAIRE

    Lim, Changwon; Sen, Pranab K.; Peddada, Shyamal D.

    2013-01-01

    Robust statistical methods, such as M-estimators, are needed for nonlinear regression models because of the presence of outliers/influential observations and heteroscedasticity. Outliers and influential observations are commonly observed in many applications, especially in toxicology and agricultural experiments. For example, dose response studies, which are routinely conducted in toxicology and agriculture, sometimes result in potential outliers, especially in the high dose gr...

  6. Astronomical Methods for Nonparametric Regression

    Science.gov (United States)

    Steinhardt, Charles L.; Jermyn, Adam

    2017-01-01

    I will discuss commonly used techniques for nonparametric regression in astronomy. We find that several of them, particularly running averages and running medians, are generically biased, asymmetric between dependent and independent variables, and perform poorly in recovering the underlying function, even when errors are present only in one variable. We then examine less-commonly used techniques such as Multivariate Adaptive Regression Splines and Boosted Trees and find them superior in bias, asymmetry, and variance both theoretically and in practice under a wide range of numerical benchmarks. In this context, the chief advantage of the common techniques is runtime, which even for large datasets is now measured in microseconds compared with milliseconds for the more statistically robust techniques. This points to a tradeoff between bias, variance, and computational resources which in recent years has shifted heavily in favor of the more advanced methods, primarily driven by Moore's Law. Along these lines, we also propose a new algorithm which has better overall statistical properties than all techniques examined thus far, at the cost of significantly worse runtime, in addition to providing guidance on choosing the nonparametric regression technique most suitable to any specific problem. We then examine the more general problem of errors in both variables and provide a new algorithm which performs well in most cases and lacks the clear asymmetry of existing non-parametric methods, which fail to account for errors in both variables.
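
    A hedged sketch contrasting a running-average smoother with boosted regression trees on simulated data; the test function, window width, and hyperparameters are arbitrary choices, and MARS (not available in scikit-learn) is omitted.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(3)
        n = 500
        x = np.sort(rng.uniform(0, 10, n))
        f_true = np.sin(x) + 0.1 * x                  # underlying function
        y = f_true + rng.normal(0, 0.3, n)            # noisy observations

        # Running average with a fixed window (common, but generically biased).
        window = 25
        running_mean = np.convolve(y, np.ones(window) / window, mode="same")

        # Boosted regression trees, one of the less common but more robust alternatives.
        gbr = GradientBoostingRegressor(n_estimators=300, max_depth=2, learning_rate=0.05)
        boosted = gbr.fit(x.reshape(-1, 1), y).predict(x.reshape(-1, 1))

        # Compare recovery of the underlying function.
        print("running-average MSE:", round(float(np.mean((running_mean - f_true) ** 2)), 4))
        print("boosted-trees MSE  :", round(float(np.mean((boosted - f_true) ** 2)), 4))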

  7. Hierarchical Linear Modeling for Analysis of Ecological Momentary Assessment Data in Physical Medicine and Rehabilitation Research.

    Science.gov (United States)

    Terhorst, Lauren; Beck, Kelly Battle; McKeon, Ashlee B; Graham, Kristin M; Ye, Feifei; Shiffman, Saul

    2017-08-01

    Ecological momentary assessment (EMA) methods collect real-time data in real-world environments, which allow physical medicine and rehabilitation researchers to examine objective outcome data and reduces bias from retrospective recall. The statistical analysis of EMA data is directly related to the research question and the temporal design of the study. Hierarchical linear modeling, which accounts for multiple observations from the same participant, is a particularly useful approach to analyzing EMA data. The objective of this paper was to introduce the process of conducting hierarchical linear modeling analyses with EMA data. This is accomplished using exemplars from recent physical medicine and rehabilitation literature.

  8. Genetics Home Reference: caudal regression syndrome

    Science.gov (United States)

    Caudal regression syndrome is a disorder that impairs the development ...

  9. Self-assembled biomimetic superhydrophobic hierarchical arrays.

    Science.gov (United States)

    Yang, Hongta; Dou, Xuan; Fang, Yin; Jiang, Peng

    2013-09-01

    Here, we report a simple and inexpensive bottom-up technology for fabricating superhydrophobic coatings with hierarchical micro-/nano-structures, which are inspired by the binary periodic structure found on the superhydrophobic compound eyes of some insects (e.g., mosquitoes and moths). Binary colloidal arrays consisting of exemplary large (4 and 30 μm) and small (300 nm) silica spheres are first assembled by a scalable Langmuir-Blodgett (LB) technology in a layer-by-layer manner. After surface modification with fluorosilanes, the self-assembled hierarchical particle arrays become superhydrophobic with an apparent water contact angle (CA) larger than 150°. The throughput of the resulting superhydrophobic coatings with hierarchical structures can be significantly improved by templating the binary periodic structures of the LB-assembled colloidal arrays into UV-curable fluoropolymers by a soft lithography approach. Superhydrophobic perfluoroether acrylate hierarchical arrays with large CAs and small CA hysteresis can be faithfully replicated onto various substrates. Both experiments and theoretical calculations based on the Cassie's dewetting model demonstrate the importance of the hierarchical structure in achieving the final superhydrophobic surface states. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Hierarchical models and chaotic spin glasses

    Science.gov (United States)

    Berker, A. Nihat; McKay, Susan R.

    1984-09-01

    Renormalization-group studies in position space have led to the discovery of hierarchical models which are exactly solvable, exhibiting nonclassical critical behavior at finite temperature. Position-space renormalization-group approximations that had been widely and successfully used are in fact alternatively applicable as exact solutions of hierarchical models, this realizability guaranteeing important physical requirements. For example, a hierarchized version of the Sierpiński gasket is presented, corresponding to a renormalization-group approximation which has quantitatively yielded the multicritical phase diagrams of submonolayers on graphite. Hierarchical models are now being studied directly as a testing ground for new concepts. For example, with the introduction of frustration, chaotic renormalization-group trajectories were obtained for the first time. Thus, strong and weak correlations are randomly intermingled at successive length scales, and a new microscopic picture and mechanism for a spin glass emerges. An upper critical dimension occurs via a boundary crisis mechanism in cluster-hierarchical variants developed to have well-behaved susceptibilities.

  11. Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis

    Science.gov (United States)

    Kim, Rae Seon

    2011-01-01

    When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…

  12. Leading virtual teams: hierarchical leadership, structural supports, and shared team leadership.

    Science.gov (United States)

    Hoch, Julia E; Kozlowski, Steve W J

    2014-05-01

    Using a field sample of 101 virtual teams, this research empirically evaluates the impact of traditional hierarchical leadership, structural supports, and shared team leadership on team performance. Building on Bell and Kozlowski's (2002) work, we expected structural supports and shared team leadership to be more, and hierarchical leadership to be less, strongly related to team performance when teams were more virtual in nature. As predicted, results from moderation analyses indicated that the extent to which teams were more virtual attenuated relations between hierarchical leadership and team performance but strengthened relations for structural supports and team performance. However, shared team leadership was significantly related to team performance regardless of the degree of virtuality. Results are discussed in terms of needed research extensions for understanding leadership processes in virtual teams and practical implications for leading virtual teams.

  13. A combined multidimensional scaling and hierarchical clustering view for the exploratory analysis of multidimensional data

    Science.gov (United States)

    Craig, Paul; Roa-Seïler, Néna

    2013-01-01

    This paper describes a novel information visualization technique that combines multidimensional scaling and hierarchical clustering to support the exploratory analysis of multidimensional data. The technique displays the results of multidimensional scaling using a scatter plot where the proximity of any two items' representations is approximate to their similarity according to a Euclidean distance metric. The results of hierarchical clustering are overlaid onto this view by drawing smoothed outlines around each nested cluster. The difference in similarity between successive cluster combinations is used to colour code clusters and make stronger natural clusters more prominent in the display. When a cluster or group of items is selected, multidimensional scaling and hierarchical clustering are re-applied to a filtered subset of the data, and animation is used to smooth the transition between successive filtered views. As a case study we demonstrate the technique being used to analyse survey data relating to the appropriateness of different phrases to different emotionally charged situations.
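
    A rough sketch of the two analysis stages the technique combines, using scikit-learn and SciPy on synthetic data; the cluster outlines, colour coding, and animated filtering described in the paper are not reproduced, only the MDS coordinates and nested cluster labels that would feed such a view.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist
        from sklearn.manifold import MDS

        rng = np.random.default_rng(5)
        # Synthetic multidimensional data: three groups of items in 10 dimensions.
        data = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 10)) for c in (0.0, 3.0, 6.0)])

        # Stage 1: multidimensional scaling to 2-D, giving the scatter-plot layout.
        coords = MDS(n_components=2, dissimilarity="euclidean", random_state=0).fit_transform(data)

        # Stage 2: agglomerative hierarchical clustering on the same Euclidean distances.
        Z = linkage(pdist(data, metric="euclidean"), method="ward")
        labels = fcluster(Z, t=3, criterion="maxclust")    # cut the dendrogram into 3 clusters

        # Each item now has a 2-D position plus a cluster membership to overlay on the plot.
        for k in np.unique(labels):
            members = labels == k
            print(f"cluster {k}: {members.sum()} items, "
                  f"centroid at {coords[members].mean(axis=0).round(2)}")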

  14. Improve Query Performance On Hierarchical Data. Adjacency List Model Vs. Nested Set Model

    Directory of Open Access Journals (Sweden)

    Cornelia Gyorödi

    2016-04-01

    Full Text Available Hierarchical data are found in a variety of database applications, including content management categories, forums, business organization charts, and product categories. In this paper, we examine two models for dealing with hierarchical data in relational databases, namely the adjacency list model and the nested set model. We analysed these models by executing various operations and queries in a web application for the management of categories, highlighting the results obtained during performance comparison tests. The purpose of this paper is to present the advantages and disadvantages of using the adjacency list model compared to the nested set model in a relational database integrated into an application for the management of categories, which needs to manipulate a large amount of hierarchical data.
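
    A self-contained sketch of the two storage models on a toy category tree, using Python's built-in sqlite3; the table layouts and the example hierarchy are invented for illustration, and the adjacency-list query requires SQLite 3.8.3+ for recursive common table expressions.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()

        # Adjacency list: each row stores only its parent id.
        cur.execute("CREATE TABLE category_adj (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)")
        cur.executemany("INSERT INTO category_adj VALUES (?, ?, ?)",
                        [(1, None, "root"), (2, 1, "books"), (3, 2, "fiction"),
                         (4, 2, "science"), (5, 1, "music")])

        # Nested set: each row stores a left/right interval enclosing all of its descendants.
        cur.execute("CREATE TABLE category_ns (id INTEGER PRIMARY KEY, lft INTEGER, rgt INTEGER, name TEXT)")
        cur.executemany("INSERT INTO category_ns VALUES (?, ?, ?, ?)",
                        [(1, 1, 10, "root"), (2, 2, 7, "books"), (3, 3, 4, "fiction"),
                         (4, 5, 6, "science"), (5, 8, 9, "music")])

        # Subtree of "books" in the adjacency list model: a recursive CTE walks the parent links.
        rows = cur.execute("""
            WITH RECURSIVE subtree(id) AS (
                SELECT id FROM category_adj WHERE name = 'books'
                UNION ALL
                SELECT c.id FROM category_adj c JOIN subtree s ON c.parent_id = s.id
            )
            SELECT a.name FROM category_adj a JOIN subtree s ON a.id = s.id
        """).fetchall()
        print([r[0] for r in rows])   # ['books', 'fiction', 'science']

        # Subtree of "books" in the nested set model: a single range predicate, no recursion.
        rows = cur.execute("""
            SELECT child.name
            FROM category_ns child, category_ns parent
            WHERE parent.name = 'books' AND child.lft BETWEEN parent.lft AND parent.rgt
        """).fetchall()
        print([r[0] for r in rows])   # ['books', 'fiction', 'science']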

  15. A Bayesian hierarchical diffusion model decomposition of performance in Approach-Avoidance Tasks.

    Science.gov (United States)

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach-Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest.

  16. Bistability of mixed states in a neural network storing hierarchical patterns

    Science.gov (United States)

    Toya, Kaname; Fukushima, Kunihiko; Kabashima, Yoshiyuki; Okada, Masato

    2000-04-01

    We discuss the properties of equilibrium states in an autoassociative memory model storing hierarchically correlated patterns (hereafter, hierarchical patterns). We will show that symmetric mixed states (hereafter, mixed states) are bistable on the associative memory model storing the hierarchical patterns in a region of the ferromagnetic phase. This means that the first-order transition occurs in this ferromagnetic phase. We treat these contents with a statistical mechanical method (SCSNA) and by computer simulation. Finally, we discuss a physiological implication of this model. Sugase et al (1999 Nature 400 869) analysed the time-course of the information carried by the firing of face-responsive neurons in the inferior temporal cortex. We also discuss the relation between the theoretical results and the physiological experiments of Sugase et al .

  17. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...
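
    A loose scikit-learn approximation of the idea: per-observation weights passed to SVR's fit method scale each sample's slack penalty, which is in the spirit of the OF-weights described above though not the paper's exact formulation; the synthetic house-price data and the distance-decay weighting are invented.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(11)
        n = 300

        # Synthetic "house price" data: price depends on size and on distance from a reference area.
        size = rng.uniform(50, 200, n)                  # square metres
        distance = rng.uniform(0, 10, n)                # km from the reference location
        price = 2.0 * size - 5.0 * distance + rng.normal(0, 15, n)
        X = np.column_stack([size, distance])

        # Local weights: observations near the reference location count more.
        weights = np.exp(-0.3 * distance)

        unweighted = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(X, price)
        weighted = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(X, price, sample_weight=weights)

        # Compare predictions for a query point close to the reference location.
        query = np.array([[120.0, 0.5]])
        print("unweighted prediction:", round(float(unweighted.predict(query)[0]), 1))
        print("weighted prediction  :", round(float(weighted.predict(query)[0]), 1))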

  18. Biased trapping issue on weighted hierarchical networks

    Indian Academy of Sciences (India)

    Meifeng Dai; Jie Liu; Feng Zhu

    2014-10-01

    In this paper, we present trapping issues of weight-dependent walks on weighted hierarchical networks which are based on the classic scale-free hierarchical networks. Assuming that an edge's weight is used as local information by a random walker, we introduce a biased walk. The biased walk is one in which a walker, at each step, chooses one of its neighbours with a probability proportional to the weight of the edge. We focus on a particular case with the immobile trap positioned at the hub node, which has the largest degree in the weighted hierarchical networks. Using a method based on generating functions, we determine explicitly the mean first-passage time (MFPT) for the trapping issue. Let parameter a (0 < a < 1) be the weight factor. We show that the efficiency of the trapping process depends on the parameter a; the smaller the value of a, the more efficient is the trapping process.

  19. Improving broadcast channel rate using hierarchical modulation

    CERN Document Server

    Meric, Hugo; Arnal, Fabrice; Lesthievent, Guy; Boucheret, Marie-Laure

    2011-01-01

    We investigate the design of a broadcast system where the aim is to maximise the throughput. This task is usually challenging due to the channel variability. Forty years ago, Cover introduced and compared two schemes: time sharing and superposition coding. The second scheme was proved to be optimal for some channels. Modern satellite communications systems such as DVB-SH and DVB-S2 mainly rely on time sharing strategy to optimize throughput. They consider hierarchical modulation, a practical implementation of superposition coding, but only for unequal error protection or backward compatibility purposes. We propose in this article to combine time sharing and hierarchical modulation together and show how this scheme can improve the performance in terms of available rate. We present the gain on a simple channel modeling the broadcasting area of a satellite. Our work is applied to the DVB-SH standard, which considers hierarchical modulation as an optional feature.

  20. Incentive Mechanisms for Hierarchical Spectrum Markets

    CERN Document Server

    Iosifidis, George; Alpcan, Tansu; Koutsopoulos, Iordanis

    2011-01-01

    We study spectrum allocation mechanisms in hierarchical multi-layer markets which are expected to proliferate in the near future based on the current spectrum policy reform proposals. We consider a setting where a state agency sells spectrum to Primary Operators (POs) and in turn these resell it to Secondary Operators (SOs) through auctions. We show that these hierarchical markets do not result in a socially efficient spectrum allocation which is aimed by the agency, due to lack of coordination among the entities in different layers and the inherently selfish revenue-maximizing strategy of POs. In order to reconcile these opposing objectives, we propose an incentive mechanism which aligns the strategy and the actions of the POs with the objective of the agency, and thus it leads to system performance improvement in terms of social welfare. This pricing based mechanism constitutes a method for hierarchical market regulation and requires the feedback provision from SOs. A basic component of the proposed incenti...

  1. Hierarchical self-organization of tectonic plates

    CERN Document Server

    Morra, Gabriele; Müller, R Dietmar

    2010-01-01

    The Earth's surface is subdivided into eight large tectonic plates and many smaller ones. We reconstruct the plate tessellation history and demonstrate that both large and small plates display two distinct hierarchical patterns, described by different power-law size-relationships. While small plates display little organisational change through time, the structure of the large plates oscillates between minimum and maximum hierarchical tessellations. The organization of large plates rapidly changes from a weak hierarchy at 120-100 million years ago (Ma) towards a strong hierarchy, which peaked at 65-50 Ma, subsequently relaxing back towards a minimum hierarchical structure. We suggest that this fluctuation reflects an alternation between top- and bottom-driven plate tectonics, revealing a previously undiscovered tectonic cyclicity at a timescale of 100 million years.

  2. Towards a sustainable manufacture of hierarchical zeolites.

    Science.gov (United States)

    Verboekend, Danny; Pérez-Ramírez, Javier

    2014-03-01

    Hierarchical zeolites have been established as a superior type of aluminosilicate catalysts compared to their conventional (purely microporous) counterparts. An impressive array of bottom-up and top-down approaches has been developed during the last decade to design and subsequently exploit these exciting materials catalytically. However, the sustainability of the developed synthetic methods has rarely been addressed. This paper highlights important criteria to ensure the ecological and economic viability of the manufacture of hierarchical zeolites. Moreover, by using base leaching as a promising case study, we verify a variety of approaches to increase reactor productivity, recycle waste streams, prevent the combustion of organic compounds, and minimize separation efforts. By reducing their synthetic footprint, hierarchical zeolites are positioned as an integral part of sustainable chemistry. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Classification using Hierarchical Naive Bayes models

    DEFF Research Database (Denmark)

    Langseth, Helge; Dyhre Nielsen, Thomas

    2006-01-01

    Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing set of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe...... an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice) it can reduce classification accuracy due to “information double-counting” and interaction omission. In this paper we focus on a relatively new set of models......, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...

  4. Hierarchical Neural Network Structures for Phoneme Recognition

    CERN Document Server

    Vasquez, Daniel; Minker, Wolfgang

    2013-01-01

    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task, where a Hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each based on a Multilayer Perceptron. Additionally, the output of the first level serves as input to the second level. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first-level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information. The computational time could be reduced by 57% whilst keeping the system accuracy comparable to the baseline hierarchical approach.

  5. Universal hierarchical behavior of citation networks

    CERN Document Server

    Mones, Enys; Vicsek, Tamás

    2014-01-01

    Many of the essential features of the evolution of scientific research are imprinted in the structure of citation networks. Connections in these networks imply information about the transfer of knowledge among papers, or in other words, edges describe the impact of papers on other publications. This inherent meaning of the edges infers that citation networks can exhibit hierarchical features, that is typical of networks based on decision-making. In this paper, we investigate the hierarchical structure of citation networks consisting of papers in the same field. We find that the majority of the networks follow a universal trend towards a highly hierarchical state, and i) the various fields display differences only concerning their phase in life (distance from the "birth" of a field) or ii) the characteristic time according to which they are approaching the stationary state. We also show by a simple argument that the alterations in the behavior are related to and can be understood by the degree of specializatio...

  6. Static and dynamic friction of hierarchical surfaces

    Science.gov (United States)

    Costagliola, Gianluca; Bosia, Federico; Pugno, Nicola M.

    2016-12-01

    Hierarchical structures are very common in nature, but only recently have they been systematically studied in materials science, in order to understand the specific effects they can have on the mechanical properties of various systems. Structural hierarchy provides a way to tune and optimize macroscopic mechanical properties starting from simple base constituents and new materials are nowadays designed exploiting this possibility. This can be true also in the field of tribology. In this paper we study the effect of hierarchical patterned surfaces on the static and dynamic friction coefficients of an elastic material. Our results are obtained by means of numerical simulations using a one-dimensional spring-block model, which has previously been used to investigate various aspects of friction. Despite the simplicity of the model, we highlight some possible mechanisms that explain how hierarchical structures can significantly modify the friction coefficients of a material, providing a means to achieve tunability.

  7. Multiatlas segmentation as nonparametric regression.

    Science.gov (United States)

    Awate, Suyash P; Whitaker, Ross T

    2014-09-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.

  8. Genetic Algorithm for Hierarchical Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sajid Hussain

    2007-09-01

    Full Text Available Large scale wireless sensor networks (WSNs can be used for various pervasive and ubiquitous applications such as security, health-care, industry automation, agriculture, environment and habitat monitoring. As hierarchical clusters can reduce the energy consumption requirements for WSNs, we investigate intelligent techniques for cluster formation and management. A genetic algorithm (GA is used to create energy efficient clusters for data dissemination in wireless sensor networks. The simulation results show that the proposed intelligent hierarchical clustering technique can extend the network lifetime for different network deployment environments.

  9. DC Hierarchical Control System for Microgrid Applications

    OpenAIRE

    Lu, Xiaonan; Sun, Kai; Guerrero, Josep M.; Huang, Lipei

    2013-01-01

    In order to enhance the DC-side performance of an AC-DC hybrid microgrid, a DC hierarchical control system is proposed in this paper. To meet the requirement of DC load sharing between the parallel power interfaces, the droop method is adopted. Meanwhile, DC voltage secondary control is employed to restore the deviation in the DC bus voltage. The hierarchical control system is composed of two levels. The DC voltage and AC current controllers are implemented in the primary control level.

  10. Hierarchical social networks and information flow

    Science.gov (United States)

    López, Luis; F. F. Mendes, Jose; Sanjuán, Miguel A. F.

    2002-12-01

    Using a simple model for the information flow on social networks, we show that the traditional hierarchical topologies frequently used by companies and organizations are poorly designed in terms of efficiency. Moreover, we prove that this type of structure is the result of the individual aim of monopolizing as much information as possible within the network. As information is an appropriate measure of centrality, we conclude that this kind of topology is attractive for leaders because the global influence each actor has within the network is completely determined by the hierarchical level occupied.

  11. Analyzing security protocols in hierarchical networks

    DEFF Research Database (Denmark)

    Zhang, Ye; Nielson, Hanne Riis

    2006-01-01

    Validating security protocols is a well-known hard problem even in a simple setting of a single global network. But a real network often consists of, besides the public-accessed part, several sub-networks and thereby forms a hierarchical structure. In this paper we first present a process calculus...... capturing the characteristics of hierarchical networks and describe the behavior of protocols on such networks. We then develop a static analysis to automate the validation. Finally we demonstrate how the technique can benefit the protocol development and the design of network systems by presenting a series...

  12. Hierarchic Models of Turbulence, Superfluidity and Superconductivity

    CERN Document Server

    Kaivarainen, A

    2000-01-01

    New models of Turbulence, Superfluidity and Superconductivity, based on new Hierarchic theory, general for liquids and solids (physics/0102086), have been proposed. CONTENTS: 1 Turbulence. General description; 2 Mesoscopic mechanism of turbulence; 3 Superfluidity. General description; 4 Mesoscopic scenario of fluidity; 5 Superfluidity as a hierarchic self-organization process; 6 Superfluidity in 3He; 7 Superconductivity: General properties of metals and semiconductors; Plasma oscillations; Cyclotron resonance; Electroconductivity; 8. Microscopic theory of superconductivity (BCS); 9. Mesoscopic scenario of superconductivity: Interpretation of experimental data in the framework of mesoscopic model of superconductivity.

  13. Hierarchical Analysis of the Omega Ontology

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, Cliff A.; Paulson, Patrick R.

    2009-12-01

    Initial delivery for mathematical analysis of the Omega Ontology. We provide an analysis of the hierarchical structure of a version of the Omega Ontology currently in use within the US Government. After providing an initial statistical analysis of the distribution of all link types in the ontology, we then provide a detailed order theoretical analysis of each of the four main hierarchical links present. This order theoretical analysis includes the distribution of components and their properties, their parent/child and multiple inheritance structure, and the distribution of their vertical ranks.

  14. Multilevel Hierarchical Modeling of Benthic Macroinvertebrate Responses to Urbanization in Nine Metropolitan Regions across the Conterminous United States

    Science.gov (United States)

    Kashuba, Roxolana; Cha, YoonKyung; Alameddine, Ibrahim; Lee, Boknam; Cuffney, Thomas F.

    2010-01-01

    Multilevel hierarchical modeling methodology has been developed for use in ecological data analysis. The effect of urbanization on stream macroinvertebrate communities was measured across a gradient of basins in each of nine metropolitan regions across the conterminous United States. The hierarchical nature of this dataset was harnessed in a multi-tiered model structure, predicting both invertebrate response at the basin scale and differences in invertebrate response at the region scale. Ordination site scores, total taxa richness, Ephemeroptera, Plecoptera, Trichoptera (EPT) taxa richness, and richness-weighted mean tolerance of organisms at a site were used to describe invertebrate responses. Percentage of urban land cover was used as a basin-level predictor variable. Regional mean precipitation, air temperature, and antecedent agriculture were used as region-level predictor variables. Multilevel hierarchical models were fit to both levels of data simultaneously, borrowing statistical strength from the complete dataset to reduce uncertainty in regional coefficient estimates. Additionally, whereas non-hierarchical regressions were only able to show differing relations between invertebrate responses and urban intensity separately for each region, the multilevel hierarchical regressions were able to explain and quantify those differences within a single model. In this way, this modeling approach directly establishes the importance of antecedent agricultural conditions in masking the response of invertebrates to urbanization in metropolitan regions such as Milwaukee-Green Bay, Wisconsin; Denver, Colorado; and Dallas-Fort Worth, Texas. Also, these models show that regions with high precipitation, such as Atlanta, Georgia; Birmingham, Alabama; and Portland, Oregon, start out with better regional background conditions of invertebrates prior to urbanization but experience faster negative rates of change with urbanization. Ultimately, this urbanization
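
    A simplified two-level sketch with statsmodels on simulated data: basin-level richness regressed on urban cover, with region-specific intercepts and urbanization slopes partially pooled through random effects and a cross-level interaction. The data, variable names, and the single region-level moderator are invented; the published models include further region-level predictors and a fully Bayesian multilevel formulation.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)
        rows = []
        for region in range(9):                       # nine metropolitan regions
            antecedent_ag = rng.uniform(0, 1)         # region-level covariate
            # Agriculture-dominated regions show a weaker (masked) urbanization response.
            slope_r = -8.0 * (1 - antecedent_ag) + rng.normal(0, 1)
            intercept_r = 25 + rng.normal(0, 3)
            for _ in range(30):                       # basins within each region
                urban = rng.uniform(0, 1)             # basin-level urban land-cover fraction
                richness = intercept_r + slope_r * urban + rng.normal(0, 2)
                rows.append((region, antecedent_ag, urban, richness))
        df = pd.DataFrame(rows, columns=["region", "antecedent_ag", "urban", "richness"])

        # Random intercept and random urbanization slope by region, plus a cross-level
        # interaction letting the region-level covariate modify the urbanization effect.
        model = smf.mixedlm("richness ~ urban * antecedent_ag", df,
                            groups=df["region"], re_formula="~urban")
        print(model.fit().summary())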

  15. Global Crop Monitoring: A Satellite-Based Hierarchical Approach

    Directory of Open Access Journals (Sweden)

    Bingfang Wu

    2015-04-01

    Full Text Available Taking advantage of multiple new remote sensing data sources, especially from Chinese satellites, the CropWatch system has expanded the scope of its international analyses through the development of new indicators and an upgraded operational methodology. The approach adopts a hierarchical system covering four spatial levels of detail: global, regional, national (thirty-one key countries including China) and “sub-countries” (for the nine largest countries). The thirty-one countries encompass more than 80% of both production and exports of maize, rice, soybean and wheat. The methodology resorts to climatic and remote sensing indicators at different scales. The global patterns of crop environmental growing conditions are first analyzed with indicators for rainfall, temperature, photosynthetically active radiation (PAR) as well as potential biomass. At the regional scale, the indicators pay more attention to crops and include the Vegetation Health Index (VHI), Vegetation Condition Index (VCI), Cropped Arable Land Fraction (CALF) as well as Cropping Intensity (CI). Together, they characterize crop situation, farming intensity and stress. CropWatch carries out detailed crop condition analyses at the national scale with a comprehensive array of variables and indicators. The Normalized Difference Vegetation Index (NDVI), cropped areas and crop conditions are integrated to derive food production estimates. For the nine largest countries, CropWatch zooms into the sub-national units to acquire detailed information on crop condition and production by including new indicators (e.g., crop type proportion). Based on trend analysis, CropWatch also issues crop production supply outlooks, covering both long-term variations and short-term dynamic changes in key food exporters and importers. The hierarchical approach adopted by CropWatch is the basis of the analyses of climatic and crop conditions assessments published in the quarterly “CropWatch bulletin” which

  16. Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

    Science.gov (United States)

    Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim

    2012-01-01

    Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…

  17. Estimating the exceedance probability of rain rate by logistic regression

    Science.gov (United States)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
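
    A bare-bones sketch of the core idea, estimating P(rain rate > threshold | covariates) with a logistic model on synthetic pixel data; the covariates, threshold, and data-generating process are placeholders, and the partial-likelihood treatment of temporal dependence used in the study is not reproduced.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(21)
        n = 5_000
        threshold = 5.0  # mm/h, purely illustrative

        # Synthetic covariates for each pixel: area-averaged rain rate and brightness temperature.
        area_avg_rain = rng.gamma(shape=2.0, scale=2.0, size=n)
        brightness_temp = 260.0 - 3.0 * area_avg_rain + rng.normal(0, 5, n)

        # Synthetic pixel-level rain rate, loosely tied to the covariates.
        pixel_rain = area_avg_rain * rng.lognormal(0, 0.6, n)
        exceeds = (pixel_rain > threshold).astype(int)

        # Logistic model for the conditional exceedance probability.
        X = np.column_stack([area_avg_rain, brightness_temp])
        clf = LogisticRegression(max_iter=1000).fit(X, exceeds)

        # Estimated exceedance probabilities for new pixels imply a fractional rainy area.
        new_pixels = np.column_stack([rng.gamma(2.0, 2.0, 1000), rng.normal(250, 10, 1000)])
        probs = clf.predict_proba(new_pixels)[:, 1]
        print("estimated fraction of pixels with rain above threshold:", round(float(probs.mean()), 3))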

  18. Does the absence of cointegration explain the typical findings in long horizon regressions?

    OpenAIRE

    Berben, R-P.; Dijk, Dick van

    1998-01-01

    textabstractOne of the stylized facts in financial and international economics is that of increasing predictability of variables such as exchange rates and stock returns at longer horizons. This fact is based upon applications of long horizon regressions, from which the typical findings are that the point estimates of the regression parameter, the associated t-statistic, and the regression R^2 all tend to increase as the horizon increases. Such long horizon regression analyses implicitly assu...

  19. Classification and regression tree analysis vs. multivariable linear and logistic regression methods as statistical tools for studying haemophilia.

    Science.gov (United States)

    Henrard, S; Speybroeck, N; Hermans, C

    2015-11-01

    Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regressions, for binary outcomes, and multiple linear regressions for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method is non-parametric and non-linear, based on the repeated partitioning of a sample into subgroups based on a certain criterion. Breiman developed this method in 1984. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few examples of studies using this methodology specifically in haemophilia have to date been published. Two examples using CART analysis and previously published in this field are didactically explained in details. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
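
    An illustrative scikit-learn version of the comparison (the cited studies used other software and real haemophilia data): a classification tree and a multiple logistic regression fitted to the same simulated binary outcome, with the tree printed as readable decision rules. All covariates and effect sizes are hypothetical.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(13)
        n = 800

        # Hypothetical patient covariates.
        age = rng.uniform(5, 70, n)
        factor_level = rng.uniform(0, 40, n)       # residual clotting factor activity, %
        prophylaxis = rng.binomial(1, 0.5, n)

        # Hypothetical binary outcome: frequent bleeding, more likely at low factor levels.
        linpred = 1.5 - 0.12 * factor_level - 0.8 * prophylaxis + 0.01 * age
        bleeds = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

        X = pd.DataFrame({"age": age, "factor_level": factor_level, "prophylaxis": prophylaxis})
        X_tr, X_te, y_tr, y_te = train_test_split(X, bleeds, test_size=0.3, random_state=0)

        # Classification tree: repeated binary splits that read like clinical rules.
        tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30).fit(X_tr, y_tr)
        print(export_text(tree, feature_names=list(X.columns)))
        print("tree accuracy    :", round(tree.score(X_te, y_te), 3))

        # Multiple logistic regression: one coefficient per covariate.
        lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("logistic accuracy:", round(lr.score(X_te, y_te), 3))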

  20. Use of probabilistic weights to enhance linear regression myoelectric control

    Science.gov (United States)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks, and similar trends were observed in the amputee subjects. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
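
    A toy sketch of one way to combine the two stages described above, assuming, purely for illustration, a single DOF, Gaussian class-conditional EMG features with a shared covariance, equal class priors, and a weight equal to the posterior probability of any intended movement; the actual study used intramuscular EMG, three DOFs, and its own weighting scheme.

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(17)

        # Simulated 4-channel EMG features for one DOF and three classes
        # (0 = no movement, 1 = flexion, 2 = extension); class means are invented.
        class_means = {0: np.zeros(4),
                       1: np.array([2.0, 1.5, 0.5, 0.2]),
                       2: np.array([0.3, 0.5, 1.8, 2.2])}
        target_velocity = {0: 0.0, 1: 1.0, 2: -1.0}
        X, y, v = [], [], []
        for cls, mu in class_means.items():
            X.append(rng.normal(mu, 0.8, size=(200, 4)))
            y.append(np.full(200, cls))
            v.append(np.full(200, target_velocity[cls]) + rng.normal(0, 0.1, 200))
        X, y, v = np.vstack(X), np.concatenate(y), np.concatenate(v)

        # Stage 1: linear regression from EMG features to intended velocity.
        A = np.column_stack([X, np.ones(len(X))])          # add an intercept column
        W, *_ = np.linalg.lstsq(A, v, rcond=None)

        # Stage 2: Gaussian class models (estimated means, pooled covariance, equal priors).
        est_means = {c: X[y == c].mean(axis=0) for c in (0, 1, 2)}
        pooled_cov = np.cov(np.vstack([X[y == c] - est_means[c] for c in (0, 1, 2)]).T)

        def movement_probability(x):
            likes = np.array([multivariate_normal.pdf(x, mean=est_means[c], cov=pooled_cov)
                              for c in (0, 1, 2)])
            posterior = likes / likes.sum()
            return 1.0 - posterior[0]        # posterior probability of any intended movement

        # Probability-weighted regression output for a new feature vector (here: flexion-like).
        x_new = rng.normal(class_means[1], 0.8)
        raw = float(np.append(x_new, 1.0) @ W)
        print("raw regression output      :", round(raw, 3))
        print("probability-weighted output:", round(raw * movement_probability(x_new), 3))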

  1. Use of probabilistic weights to enhance linear regression myoelectric control.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2015-12-01

    Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks compared to linear regression control. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.

  2. Hierarchical Compressed Sensing for Cluster Based Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Vishal Krishna Singh

    2016-02-01

    Full Text Available Data transmission consumes a significant amount of energy in large scale wireless sensor networks (WSNs). In such an environment, reducing the in-network communication and distributing the load evenly over the network can reduce the overall energy consumption and maximize the network lifetime significantly. In this work, the aforementioned problem of network lifetime and uneven energy consumption in large scale wireless sensor networks is addressed. This work proposes a hierarchical compressed sensing (HCS) scheme to reduce the in-network communication during the data gathering process. Correlated sensor readings are collected via a hierarchical clustering scheme. A compressed sensing (CS) based data processing scheme is devised to transmit the data from the source to the sink. The proposed HCS is able to identify the optimal position for the application of CS to achieve a reduced and similar number of transmissions on all the nodes in the network. An activity map is generated to validate the reduced and uniformly distributed communication load of the WSN. Based on the number of transmissions per data gathering round, the bit-hop metric model is used to analyse the overall energy consumption. Simulation results validate the efficiency of the proposed method over the existing CS based approaches.
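
    A compact sketch of the compressed-sensing step only (the clustering hierarchy and the bit-hop energy model are not reproduced): a sparse vector of sensor readings is recovered from far fewer random linear measurements via L1 minimisation, here approximated with scikit-learn's Lasso; all sizes and the regularisation strength are illustrative.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(19)
        n_nodes, n_measurements, sparsity = 200, 60, 8

        # Sparse signal: only a few sensors report a significant (non-zero) reading.
        x = np.zeros(n_nodes)
        active = rng.choice(n_nodes, size=sparsity, replace=False)
        x[active] = rng.normal(0, 5, sparsity)

        # Random measurement matrix: each measurement is a weighted sum that could be
        # computed in-network, so only n_measurements values need to reach the sink.
        Phi = rng.normal(0, 1 / np.sqrt(n_measurements), size=(n_measurements, n_nodes))
        y = Phi @ x

        # Reconstruction at the sink via L1-regularised least squares.
        lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=50_000).fit(Phi, y)
        x_hat = lasso.coef_

        rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
        print(f"relative reconstruction error: {rel_err:.3f}")
        print(f"non-zero coefficients recovered: {np.sum(np.abs(x_hat) > 0.5)} (true: {sparsity})")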

  3. Nonparametric Regression with Common Shocks

    Directory of Open Access Journals (Sweden)

    Eduardo A. Souza-Rodrigues

    2016-09-01

    Full Text Available This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small) number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.
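
    A minimal NumPy version of the Nadaraya-Watson estimator discussed above, in the plain i.i.d. setting with a Gaussian kernel and a fixed bandwidth; the common-shock structure and the disintegration-theoretic conditions of the paper are beyond this sketch.

        import numpy as np

        rng = np.random.default_rng(23)
        n = 400
        x = rng.uniform(-3, 3, n)
        y = np.sin(x) + rng.normal(0, 0.3, n)

        def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.3):
            # m(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h), with a Gaussian kernel K.
            u = (x_query[:, None] - x_train[None, :]) / bandwidth
            weights = np.exp(-0.5 * u ** 2)
            return (weights * y_train).sum(axis=1) / weights.sum(axis=1)

        grid = np.linspace(-3, 3, 7)
        for g, m in zip(grid, nadaraya_watson(grid, x, y)):
            print(f"m({g:+.1f}) = {m:+.2f}   (true sin: {np.sin(g):+.2f})")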

  4. Practical Session: Multiple Linear Regression

    Science.gov (United States)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Three exercises are proposed to illustrate simple linear regression. The first one investigates the influence of several factors on atmospheric pollution; it has been proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 have been proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).

  5. Hierarchical Linear Models for Energy Prediction using Inertial Sensors: A Comparative Study for Treadmill Walking.

    Science.gov (United States)

    Vathsangam, Harshvardhan; Emken, B Adar; Schroeder, E Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S

    2013-12-01

    Walking is a commonly available activity to maintain a healthy lifestyle. Accurately tracking and measuring calories expended during walking can improve user feedback and intervention measures. Inertial sensors are a promising measurement tool to achieve this purpose. An important aspect in mapping inertial sensor data to energy expenditure is the question of normalizing across physiological parameters. Common approaches such as weight scaling require validation for each new population. An alternative is to use a hierarchical approach to model subject-specific parameters at one level and cross-subject parameters connected by physiological variables at a higher level. In this paper, we evaluate an inertial sensor-based hierarchical model to measure energy expenditure across a target population. We first determine the optimal movement and physiological feature set to represent the data, and find that periodicity-based features are more accurate. We compare the hierarchical model with a subject-specific regression model and with weight-exponent-scaled models. Subject-specific models perform significantly better than the weight-scaled models at all exponent scales, whereas the hierarchical model performed worse than both. However, using an informed prior from the hierarchical model produces errors similar to those of a subject-specific model trained with large amounts of data. Hierarchical modeling is therefore a promising technique for generalized energy expenditure prediction across a target population in a clinical setting.

  6. Lumbar herniated disc: spontaneous regression

    Science.gov (United States)

    Yüksel, Kasım Zafer

    2017-01-01

    Background Low back pain is a frequent condition that results in substantial disability and causes admission of patients to neurosurgery clinics. The aim of this study was to evaluate and present the therapeutic outcomes in lumbar disc hernia (LDH) patients treated with a conservative approach consisting of bed rest and medical therapy. Methods This retrospective cohort study was carried out in the neurosurgery departments of hospitals in Kahramanmaraş city; 23 patients diagnosed with LDH at the levels of L3−L4, L4−L5 or L5−S1 were enrolled. Results The average age was 38.4 ± 8.0 years and the chief complaint was low back pain and sciatica radiating to one or both lower extremities. Conservative treatment was administered. Neurological examination findings, durations of treatment and intervals until symptomatic recovery were recorded. Lasègue tests and neurosensory examination revealed mild neurological deficits in 16 of our patients. Previously, 5 patients had received physiotherapy and 7 patients had been on medical treatment. The numbers of patients with LDH at the level of L3−L4, L4−L5, and L5−S1 were 1, 13, and 9, respectively. All patients reported that they had benefited from medical treatment and bed rest, and radiologic improvement was observed simultaneously on MRI scans. The average duration until symptomatic recovery and/or regression of LDH symptoms was 13.6 ± 5.4 months (range: 5−22). Conclusions It should be kept in mind that lumbar disc hernias can regress with medical treatment and rest, without surgery, and that these patients can also recover radiologically. This must be taken into account during decision making for surgical intervention in LDH patients devoid of indications for emergent surgery. PMID:28119770

  7. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. The aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models, and to analyze the fitted model functions with statistical tools. Keywords: Data mining, linear regression, logistic regression....
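
    The thesis data are not included in this record; as a hedged illustration of the logistic-regression part of the credit-scoring problem, the sketch below fits a scikit-learn classifier to simulated applicant records (all features, coefficients, and labels are invented).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Simulated applicant records: income, debt-to-income ratio, age.
rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.normal(50, 15, n),       # income (k$)
    rng.uniform(0, 1, n),        # debt-to-income ratio
    rng.uniform(18, 70, n),      # age
])
logit = -2.0 + 0.03 * X[:, 0] - 3.0 * X[:, 1] + 0.01 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = "good" credit

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("coefficients:", clf.coef_)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```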

  8. Hierarchical machining materials and their performance

    DEFF Research Database (Denmark)

    Sidorenko, Daria; Loginov, Pavel; Levashov, Evgeny

    2016-01-01

    as nanoparticles in the binder, or polycrystalline, aggregate-like reinforcements, also at several scale levels). Such materials can ensure better productivity, efficiency, and lower costs of drilling, cutting, grinding, and other technological processes. This article reviews the main groups of hierarchical...

  9. Hierarchical Optimization of Material and Structure

    DEFF Research Database (Denmark)

    Rodrigues, Helder C.; Guedes, Jose M.; Bendsøe, Martin P.

    2002-01-01

    This paper describes a hierarchical computational procedure for optimizing material distribution as well as the local material properties of mechanical elements. The local properties are designed using a topology design approach, leading to single scale microstructures, which may be restricted...... in various ways, based on design and manufacturing criteria. Implementation issues are also discussed and computational results illustrate the nature of the procedure....

  10. Hierarchical structure of nanofibers by bubbfil spinning

    Directory of Open Access Journals (Sweden)

    Liu Chang

    2015-01-01

    Full Text Available A polymer bubble is easily broken under a small external force, forming various fragments that can be processed into products of different morphologies, including nanofibers and plate-like strips. A polyvinyl alcohol/honey solution is used in the experiment to show the hierarchical structure produced by bubbfil spinning.

  11. Sharing the proceeds from a hierarchical venture

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moreno-Ternero, Juan D.; Tvede, Mich;

    2017-01-01

    We consider the problem of distributing the proceeds generated from a joint venture in which the participating agents are hierarchically organized. We introduce and characterize a family of allocation rules where revenue ‘bubbles up’ in the hierarchy. The family is flexible enough to accommodate...

  12. Metal oxide nanostructures with hierarchical morphology

    Science.gov (United States)

    Ren, Zhifeng; Lao, Jing Yu; Banerjee, Debasish

    2007-11-13

    The present invention relates generally to metal oxide materials with varied symmetrical nanostructure morphologies. In particular, the present invention provides metal oxide materials comprising one or more metallic oxides with three-dimensionally ordered nanostructural morphologies, including hierarchical morphologies. The present invention also provides methods for producing such metal oxide materials.

  13. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

    Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.

  14. Managing Clustered Data Using Hierarchical Linear Modeling

    Science.gov (United States)

    Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.

    2012-01-01

    Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…

  15. Strategic games on a hierarchical network model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Among complex network models, the hierarchical network model is the one closest to real networks such as the world trade web, metabolic networks, the WWW, and actor networks. It not only exhibits a power-law degree distribution, but is also built on growth and preferential attachment, giving it the scale-free property. In this paper, we study the evolution of cooperation on a hierarchical network model, adopting the prisoner's dilemma (PD) game and the snowdrift game (SG) as metaphors of the interplay between connected nodes. The BA model provides a unifying framework for the emergence of cooperation. Interestingly, however, we found that on the hierarchical model there is no sign of cooperation for the PD game, while the frequency of cooperation decreases as the common benefit decreases for the SG. By comparing the scaling clustering coefficient properties of the hierarchical network model with those of the BA model, we found that the former amplifies the effect of hubs. Considering the different performances of the PD game and SG on complex networks, we also found that common benefit leads to cooperation in the evolution. Thus our study may shed light on the emergence of cooperation in both natural and social environments.

  16. Endogenous Effort Norms in Hierarchical Firms

    NARCIS (Netherlands)

    J. Tichem (Jan)

    2013-01-01

    This paper studies how a three-layer hierarchical firm (principal-supervisor-agent) optimally creates effort norms for its employees. The key assumption is that effort norms are affected by the example of superiors. In equilibrium, norms are eroded as one moves down

  17. Complex Evaluation of Hierarchically-Network Systems

    CERN Document Server

    Polishchuk, Dmytro; Yadzhak, Mykhailo

    2016-01-01

    Methods of complex evaluation, based on local, forecasting, aggregated, and interactive evaluation of the state, function quality, and interaction of a complex system's objects at all hierarchical levels, are proposed. Examples of analysis of the structural elements of the railway transport system are used to illustrate the efficiency of the proposed approach.

  18. A Hierarchical Grouping of Great Educators

    Science.gov (United States)

    Barker, Donald G.

    1977-01-01

    Great educators of history were categorized on the basis of their aims of education, fundamental ideas, and educational theories. They were classed by Ward's method of hierarchical analysis into six groupings: Socrates, Ausonius, Jerome, Abelard; Quintilian, Origen, Melanchthon, Ascham, Loyola; Alcuin, Comenius; Vittorino, Basedow, Pestalozzi,…
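
    The historical ratings behind this grouping are not reproduced in the record; the sketch below only illustrates Ward's hierarchical clustering itself (SciPy, on made-up "educator profile" scores) and is not the article's analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
profiles = rng.normal(size=(20, 3))               # 20 educators, 3 rated dimensions (synthetic)

Z = linkage(profiles, method="ward")              # agglomerative clustering, Ward criterion
groups = fcluster(Z, t=6, criterion="maxclust")   # cut the dendrogram into six groupings
print(groups)
```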

  19. Ultrafast Hierarchical OTDM/WDM Network

    Directory of Open Access Journals (Sweden)

    Hideyuki Sotobayashi

    2003-12-01

    Full Text Available An ultrafast hierarchical OTDM/WDM network is proposed for the future core network. We review its enabling technologies: C- and L-wavelength-band generation, OTDM-WDM mutual multiplexing format conversions, and ultrafast OTDM wavelength-band conversions.

  20. Hierarchical fuzzy identification of MR damper

    Science.gov (United States)

    Wang, Hao; Hu, Haiyan

    2009-07-01

    Magneto-rheological (MR) dampers have recently found many successful applications in civil engineering and in numerous areas of mechanical engineering. When an MR damper is to be used for vibration suppression, an inevitable problem is to determine the input voltage that produces the desired restoring force determined from the control law. This is the so-called inverse problem of MR dampers and is always an obstacle in the application of MR dampers to vibration control. It is extremely difficult to obtain the inverse model of an MR damper because MR dampers are highly nonlinear and hysteretic. When identifying the inverse model of an MR damper with a simple fuzzy system, the curse of dimensionality of the fuzzy system may arise; identification then takes much more time, and the inverse model may not even be identifiable. The paper presents a two-layer hierarchical fuzzy system, that is, a two-layer hierarchical ANFIS, to deal with the curse of dimensionality in the fuzzy identification of MR dampers and to identify the inverse model of the MR damper. Data used for training the model are generated from numerical simulation of nonlinear differential equations. The numerical simulation shows that the proposed hierarchical fuzzy system can model the inverse of the MR damper much more quickly than a simple fuzzy system, without any reduction of identification precision. Such a hierarchical ANFIS is well suited to complicated systems and can also be used in system identification and control of complicated systems.

  1. Statistical theory of hierarchical avalanche ensemble

    OpenAIRE

    Olemskoi, Alexander I.

    1999-01-01

    The statistical ensemble of avalanche intensities is considered to investigate diffusion in the ultrametric space of hierarchically subordinated avalanches. The stationary intensity distribution and the steady-state current are obtained. The critical avalanche intensity needed to initiate global avalanche formation is calculated as a function of noise intensity. The large-time asymptotics for the probability of global avalanche appearance is derived.

  3. Equivalence Checking of Hierarchical Combinational Circuits

    DEFF Research Database (Denmark)

    Williams, Poul Frederick; Hulgaard, Henrik; Andersen, Henrik Reif

    1999-01-01

    This paper presents a method for verifying that two hierarchical combinational circuits implement the same Boolean functions. The key new feature of the method is its ability to exploit the modularity of circuits to reuse results obtained from one part of the circuits in other parts. We demonstrate...... our method on large adder and multiplier circuits....

  4. Application of growing hierarchical SOM for visualisation of network forensics traffic data.

    Science.gov (United States)

    Palomo, E J; North, J; Elizondo, D; Luque, R M; Watson, T

    2012-08-01

    Digital investigation methods are becoming more and more important due to the proliferation of digital crimes and crimes involving digital evidence. Network forensics is a research area that gathers evidence by collecting and analysing network traffic data logs. This analysis can be a difficult process, especially because of the high variability of these attacks and the large amount of data. Therefore, software tools that can help with these digital investigations are in great demand. In this paper, a novel approach to analysing and visualising network traffic data based on growing hierarchical self-organising maps (GHSOM) is presented. The self-organising map (SOM) has been shown to be successful for the analysis of high-dimensional input data in data mining applications as well as for data visualisation in a more intuitive and understandable manner. However, the SOM has some problems related to its static topology and its inability to represent hierarchical relationships in the input data. The GHSOM tries to overcome these limitations by generating a hierarchical architecture that is automatically determined according to the input data and reflects the inherent hierarchical relationships among them. Moreover, the proposed GHSOM has been modified to correctly treat the qualitative features that are present in the traffic data in addition to the quantitative features. Experimental results show that this approach can be very useful for a better understanding of network traffic data, making it easier to search for evidence of attacks or anomalous behaviour in a network environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
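
    A GHSOM grows and nests maps automatically, which is beyond a short sketch; the fragment below only shows the flat self-organising map update that such a hierarchy is built from, written from scratch in NumPy on random stand-in "traffic feature" vectors (grid size, decay schedule, and data are all illustrative assumptions).

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal flat SOM: the building block a GHSOM grows hierarchically."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.uniform(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=-1)            # distance to every node
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
        frac = step / n_steps
        lr = lr0 * (1 - frac)                                   # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3                      # decaying neighbourhood radius
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        influence = np.exp(-grid_dist**2 / (2 * sigma**2))[..., None]
        weights += lr * influence * (x - weights)               # pull neighbourhood towards x
    return weights

features = np.random.default_rng(1).normal(size=(500, 6))       # stand-in for traffic features
print(train_som(features).shape)
```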

  5. Generic hierarchical engine for mask data preparation

    Science.gov (United States)

    Kalus, Christian K.; Roessl, Wolfgang; Schnitker, Uwe; Simecek, Michal

    2002-07-01

    Electronic layouts are usually flattened on their path from the hierarchical source downstream to the wafer. Mask data preparation has long been identified as a severe bottleneck. Data volumes are not only doubling every year along the ITRS roadmap; with the advent of optical proximity correction and phase-shifting masks, they are escalating to unmanageable heights. Hierarchical treatment is one of the most powerful means to keep memory and CPU consumption in reasonable ranges; only recently, however, has this technique acquired more public attention. Mask data preparation is the most critical area calling for a sound infrastructure to reduce the handling problem. Other applications, such as large-area simulation and manufacturing rule checking (MRC), are gaining more and more attention and would all profit from a generic engine capable of efficiently treating hierarchical data. In this paper we present a generic engine for hierarchical treatment which solves the major problem: steady transitions along cell borders. Several alternatives exist for walking through the hierarchy tree; to date, they have not been thoroughly investigated. One is a bottom-up approach that treats cells starting with the most elementary ones. The other is a top-down approach which lends itself to creating a new hierarchy tree. In addition, since the variety, degree of hierarchy and quality of layouts extend over a wide range, a generic engine has to make intelligent decisions when exploding the hierarchy tree. Several applications will be shown, in particular how far the limits can be pushed with the current hierarchical engine.

  6. Hierarchical organisation in perception of orientation.

    Science.gov (United States)

    Spinelli, D; Antonucci, G; Daini, R; Martelli, M L; Zoccolotti, P

    1999-01-01

    According to Rock [1990, in The Legacy of Solomon Asch (Hillsdale, NJ: Lawrence Erlbaum Associates)], hierarchical organisation of perception describes cases in which the orientation of an object is affected by the immediately surrounding elements in the visual field. Various experiments were performed to study the hierarchical organisation of orientation perception. In most of them the rod-and-frame-illusion (RFI: change of the apparent vertical measured on a central rod surrounded by a tilted frame) was measured in the presence/absence of a second inner frame. The first three experiments showed that, when the inner frame is vertical, the direction and size of the illusion are consistent with expectancies based on the hierarchical organisation hypothesis. An analysis of published and unpublished data collected on a large number of subjects showed that orientational hierarchical effects are independent from the absolute size of the RFI. In experiments 4 to 7 we examined the perceptual conditions of the inner stimulus (enclosure, orientation, and presence of luminance borders) critical for obtaining a hierarchical organisation effect. Although an inner vertical square was effective in reducing the illusion (experiment 3), an inner circle enclosing the rod was ineffective (experiment 4). This indicates that definite orientation is necessary to modulate the illusion. However, orientational information provided by a vertical or horizontal rectangle presented near the rod, but not enclosing it, did not modulate the RFI (experiment 5). This suggests that the presence of a figure with oriented contours enclosing the rod is critical. In experiments 6 and 7 we studied whether the presence of luminance borders is important or whether the inner upright square might be effective also if made of subjective contours. When the subjective contour figure was salient and the observers perceived it clearly, its effectiveness in modulating the RFI was comparable to that observed with

  7. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs....... The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test

  8. Varying-coefficient functional linear regression

    CERN Document Server

    Wu, Yichao; Müller, Hans-Georg; 10.3150/09-BEJ231

    2011-01-01

    Functional linear regression analysis aims to model regression relations which include a functional predictor. The analog of the regression parameter vector or matrix in conventional multivariate or multiple-response linear regression models is a regression parameter function in one or two arguments. If, in addition, one has scalar predictors, as is often the case in applications to longitudinal studies, the question arises how to incorporate these into a functional regression model. We study a varying-coefficient approach where the scalar covariates are modeled as additional arguments of the regression parameter function. This extension of the functional linear regression model is analogous to the extension of conventional linear regression models to varying-coefficient models and shares its advantages, such as increased flexibility; however, the details of this extension are more challenging in the functional case. Our methodology combines smoothing methods with regularization by truncation at a finite numb...

  9. Energy Constrained Hierarchical Task Scheduling Algorithm for Mobile Grids

    Directory of Open Access Journals (Sweden)

    Arjun Singh

    2014-05-01

    Full Text Available In mobile grids, scheduling computation tasks and communication transactions onto the target architecture is an important problem once a mobile grid environment and a pre-selected architecture are given. Although scheduling is a traditional topic, almost all previous work focuses on maximizing performance through the scheduling process. Algorithms developed this way are not suitable for real-time embedded applications, in which the main objective is to minimize the energy consumption of the system under tight performance constraints. This paper presents an energy-constrained hierarchical task scheduling algorithm for mobile grids that minimizes the power consumption of the mobile nodes. A task is rescheduled when the mobile node moves beyond the transmission range. Performance is evaluated in terms of average delay and packet delivery ratio across nodes and flows. The performance metrics are analysed using the NS-2 simulator.

  10. Hierarchical planning for a surface mounting machine placement

    Institute of Scientific and Technical Information of China (English)

    曾又姣; 马登哲; 金烨; 严隽琪

    2004-01-01

    For a surface mounting machine (SMM) in a printed circuit board (PCB) assembly line, there are four problems: CAD data conversion, nozzle selection, feeder assignment, and placement sequence determination. A hierarchical planning approach to these problems, aimed at maximizing the throughput rate of an SMM, is presented here. To minimize set-up time, a CAD data conversion system was first applied that automatically generates the data for machine placement from CAD design data files. Then an effective nozzle selection approach was implemented to minimize the time spent changing nozzles. Next, to minimize picking time, a feeder assignment algorithm was used so that multiple components can be picked simultaneously as often as possible. Finally, in order to shorten pick-and-place time, a heuristic algorithm was used to determine the optimal component placement sequence according to the assigned feeder positions. Experiments were conducted on a four-head SMM, and the experimental results were used to analyse the assembly line performance.

  11. Extending stability through hierarchical clusters in Echo State Networks

    Directory of Open Access Journals (Sweden)

    Sarah Jarvis

    2010-07-01

    Full Text Available Echo State Networks (ESNs) are reservoir networks that satisfy well-established criteria for stability when constructed as feedforward networks. Recent evidence suggests that stability criteria are altered in the presence of reservoir substructures, such as clusters. Understanding how the reservoir architecture affects stability is thus important for the appropriate design of any ESN. To quantitatively determine the influence of the most relevant network parameters, we analysed the impact of reservoir substructures on stability in hierarchically clustered ESNs (HESNs), as they allow a smooth transition from highly structured to increasingly homogeneous reservoirs. Previous studies used the largest eigenvalue of the reservoir connectivity matrix (spectral radius) as a predictor for stable network dynamics. Here, we evaluate the impact of clusters, hierarchy and intercluster connectivity on the predictive power of the spectral radius for stability. Both hierarchy and low relative cluster sizes extend the range of spectral radius values leading to stable networks, while increasing intercluster connectivity decreases the maximal spectral radius.
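
    A small NumPy sketch of the stability knob discussed above: build a random reservoir and rescale its recurrent weights to a target spectral radius (reservoir size, density, and target value are arbitrary choices; the clustered/hierarchical variants studied in the paper would additionally impose block structure on W, which is not shown).

```python
import numpy as np

def make_reservoir(n=200, density=0.1, spectral_radius=0.9, seed=0):
    """Random ESN reservoir rescaled to a target spectral radius."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n)) * (rng.uniform(size=(n, n)) < density)  # sparse random weights
    rho = np.max(np.abs(np.linalg.eigvals(W)))                          # current spectral radius
    return W * (spectral_radius / rho)

W = make_reservoir()
print(np.max(np.abs(np.linalg.eigvals(W))))    # ~0.9 after rescaling
```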

  12. An Improved 6LoWPAN Hierarchical Routing Protocol

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-10-01

    Full Text Available The IETF 6LoWPAN working group is engaged in research on the IPv6 protocol stack based on the IEEE 802.15.4 standard. Within this working group, the routing protocol is one of the important research topics. In 6LoWPAN, HiLow is a well-known layered routing protocol. This paper puts forward an improved hierarchical routing protocol, GHiLow, which improves HiLow's parent node selection and path restoration strategy. GHiLow improves parent node selection by increasing the choice of parameters. Simultaneously, it also improves path recovery by analysing different path-recovery situations. Therefore, GHiLow contributes to the enhancement of network performance and the decrease of network energy consumption.

  13. Functional Regression for Quasar Spectra

    CERN Document Server

    Ciollaro, Mattia; Freeman, Peter; Genovese, Christopher; Lei, Jing; O'Connell, Ross; Wasserman, Larry

    2014-01-01

    The Lyman-alpha forest is a portion of the observed light spectrum of distant galactic nuclei which allows us to probe remote regions of the Universe that are otherwise inaccessible. The observed Lyman-alpha forest of a quasar light spectrum can be modeled as a noisy realization of a smooth curve that is affected by a `damping effect' which occurs whenever the light emitted by the quasar travels through regions of the Universe with higher matter concentration. To decode the information conveyed by the Lyman-alpha forest about the matter distribution, we must be able to separate the smooth `continuum' from the noise and the contribution of the damping effect in the quasar light spectra. To predict the continuum in the Lyman-alpha forest, we use a nonparametric functional regression model in which both the response and the predictor variable (the smooth part of the damping-free portion of the spectrum) are function-valued random variables. We demonstrate that the proposed method accurately predicts the unobserv...

  14. Knowledge and Awareness: Linear Regression

    Directory of Open Access Journals (Sweden)

    Monika Raghuvanshi

    2016-12-01

    Full Text Available Knowledge and awareness are factors guiding the development of an individual. These may seem simple and practicable, but in reality a proper combination of the two is a complex task. The economically driven state of development in younger generations is an impediment to the correct manner of development. As youths are at the learning phase, they can be molded to follow a correct lifestyle. Awareness and knowledge are important components of any formal or informal environmental education. The purpose of this study is to evaluate the relationship between these components among students of secondary and senior secondary schools who have undergone a formal study of the environment in their curricula. A suitable instrument was developed to measure the elements of awareness and knowledge among the participants of the study. Data were collected from secondary and senior secondary school students aged 14 to 20 years using a cluster sampling technique in the city of Bikaner, India. Linear regression analysis was performed using the IBM SPSS 23 statistical tool. There exists a weak relation between knowledge and awareness about environmental issues, caused by the mishandling of routine practices; hence one component can be complemented by the other to improve both. Knowledge and awareness are crucial factors and can provide huge opportunities in any field. Resource utilization for economic solutions may pave the way for eco-friendly products and practices. If green practices are inculcated at the learning phase, they may become normal routine. This will also help in the replenishment of the environment.

  15. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used in multicollinearity diagnostics, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to carry out principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance caused by multicollinearity, and with SPSS it yields a simplified, faster and accurate statistical analysis.
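
    The SPSS walk-through itself is not reproduced here; as a hedged illustration of the same idea (regress on leading principal components instead of collinear raw predictors), the sketch below uses scikit-learn on simulated collinear data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Collinear predictors (x2 is nearly a copy of x1): the situation PCR is meant to handle.
rng = np.random.default_rng(5)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)
x3 = rng.normal(size=100)
X = np.column_stack([x1, x2, x3])
y = 1.0 + 2.0 * x1 + 0.5 * x3 + rng.normal(scale=0.2, size=100)

# Standardize, keep two principal components, then run ordinary linear regression on them.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))      # R^2 of the principal component regression fit
```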

  16. Appraisal, coping, emotion, and performance during elite fencing matches: a random coefficient regression model approach.

    Science.gov (United States)

    Doron, J; Martinent, G

    2016-06-23

    Understanding more about the stress process is important for the performance of athletes during stressful situations. Grounded in Lazarus's (1991, 1999, 2000) CMRT of emotion, this study tracked longitudinally the relationships between cognitive appraisal, coping, emotions, and performance in nine elite fencers across 14 international matches (representing 619 momentary assessments) using a naturalistic, video-assisted methodology. A series of hierarchical linear modeling analyses were conducted to: (a) explore the relationships between cognitive appraisals (challenge and threat), coping strategies (task- and disengagement oriented coping), emotions (positive and negative) and objective performance; (b) ascertain whether the relationship between appraisal and emotion was mediated by coping; and (c) examine whether the relationship between appraisal and objective performance was mediated by emotion and coping. The results of the random coefficient regression models showed: (a) positive relationships between challenge appraisal, task-oriented coping, positive emotions, and performance, as well as between threat appraisal, disengagement-oriented coping and negative emotions; (b) that disengagement-oriented coping partially mediated the relationship between threat and negative emotions, whereas task-oriented coping partially mediated the relationship between challenge and positive emotions; and (c) that disengagement-oriented coping mediated the relationship between threat and performance, whereas task-oriented coping and positive emotions partially mediated the relationship between challenge and performance. As a whole, this study furthered knowledge during sport performance situations of Lazarus's (1999) claim that these psychological constructs exist within a conceptual unit. Specifically, our findings indicated that the ways these constructs are inter-related influence objective performance within competitive settings.

  17. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs...... rejects both the Cobb-Douglas and the Translog functional form, while a recently developed nonparametric kernel regression method with a fully nonparametric panel data specification delivers plausible results. On average, the nonparametric regression results are similar to results that are obtained from

  18. Do drug treatment variables predict cognitive performance in multidrug-treated opioid-dependent patients? A regression analysis study

    Directory of Open Access Journals (Sweden)

    Rapeli Pekka

    2012-11-01

    Full Text Available Abstract Background Cognitive deficits and multiple psychoactive drug regimens are both common in patients treated for opioid dependence. Therefore, we examined whether the cognitive performance of patients in opioid-substitution treatment (OST) is associated with their drug treatment variables. Methods Opioid-dependent patients (N = 104) who were treated either with buprenorphine or methadone (n = 52 in both groups) were given attention, working memory, verbal, and visual memory tests after they had been in treatment for a minimum of six months. Group-wise results were analysed by analysis of variance. Predictors of cognitive performance were examined by hierarchical regression analysis. Results Buprenorphine-treated patients performed statistically significantly better in a simple reaction time test than methadone-treated ones. No other significant group differences in cognitive performance were found. In each OST drug group, approximately 10% of the attention performance could be predicted by drug treatment variables. Use of benzodiazepine medication predicted about 10% of the performance variance in working memory. Treatment with more than one other psychoactive drug (other than opioid or BZD) and frequent substance abuse during the past month predicted about 20% of verbal memory performance. Conclusions Although this study does not prove a causal relationship between multiple prescription drug use and poor cognitive functioning, the results are relevant for psychosocial recovery, vocational rehabilitation, and psychological treatment of OST patients. Especially for patients with BZD treatment, other treatment options should be actively sought.
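
    "Hierarchical regression" here means blockwise entry of predictors rather than a multilevel model; the sketch below (statsmodels, simulated data with invented variable names) shows the usual pattern of entering a background block first, adding the treatment block, and inspecting the change in R^2.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in data; none of these values come from the study.
rng = np.random.default_rng(6)
n = 104
age = rng.normal(40, 10, n)
education = rng.normal(12, 2, n)                     # block 1: background variables
bzd_dose = rng.uniform(0, 30, n)
n_other_drugs = rng.integers(0, 4, n)                # block 2: drug treatment variables
memory = 50 - 0.1 * age + 1.0 * education - 0.3 * bzd_dose + rng.normal(scale=5, size=n)

block1 = sm.add_constant(np.column_stack([age, education]))
block2 = np.column_stack([block1, bzd_dose, n_other_drugs])

fit1 = sm.OLS(memory, block1).fit()                  # step 1: background only
fit2 = sm.OLS(memory, block2).fit()                  # step 2: add treatment block
print("R^2 change from adding the treatment block:", fit2.rsquared - fit1.rsquared)
```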

  19. Hospital- and patient-related characteristics determining maternity length of stay: a hierarchical linear model approach.

    Science.gov (United States)

    Leung, K M; Elashoff, R M; Rees, K S; Hasan, M M; Legorreta, A P

    1998-03-01

    The purpose of this study was to identify factors related to pregnancy and childbirth that might be predictive of a patient's length of stay after delivery and to model variations in length of stay. California hospital discharge data on maternity patients (n = 499,912) were analyzed. Hierarchical linear modeling was used to adjust for patient case mix and hospital characteristics and to account for the dependence of outcome variables within hospitals. Substantial variation in length of stay among patients was observed. The variation was mainly attributed to delivery type (vaginal or cesarean section), the patient's clinical risk factors, and severity of complications (if any). Furthermore, hospitals differed significantly in maternity lengths of stay even after adjustment for patient case mix. Developing risk-adjusted models for length of stay is a complex process but is essential for understanding variation. The hierarchical linear model approach described here represents a more efficient and appropriate way of studying interhospital variations than the traditional regression approach.

  20. On the geostatistical characterization of hierarchical media

    Science.gov (United States)

    Neuman, Shlomo P.; Riva, Monica; Guadagnini, Alberto

    2008-02-01

    The subsurface consists of porous and fractured materials exhibiting a hierarchical geologic structure, which gives rise to systematic and random spatial and directional variations in hydraulic and transport properties on a multiplicity of scales. Traditional geostatistical moment analysis allows one to infer the spatial covariance structure of such hierarchical, multiscale geologic materials on the basis of numerous measurements on a given support scale across a domain or "window" of a given length scale. The resultant sample variogram often appears to fit a stationary variogram model with constant variance (sill) and integral (spatial correlation) scale. In fact, some authors, who recognize that hierarchical sedimentary architecture and associated log hydraulic conductivity fields tend to be nonstationary, nevertheless associate them with stationary "exponential-like" transition probabilities and variograms, respectively, the latter being a consequence of the former. We propose that (1) the apparent ability of stationary spatial statistics to characterize the covariance structure of nonstationary hierarchical media is an artifact stemming from the finite size of the windows within which geologic and hydrologic variables are ubiquitously sampled, and (2) the artifact is eliminated upon characterizing the covariance structure of such media with the aid of truncated power variograms, which represent stationary random fields obtained upon sampling a nonstationary fractal over finite windows. To support our opinion, we note that truncated power variograms arise formally when a hierarchical medium is sampled jointly across all geologic categories and scales within a window; cite direct evidence that geostatistical parameters (variance and integral scale) inferred on the basis of traditional variograms vary systematically with support and window scales; demonstrate the ability of truncated power models to capture these variations in terms of a few scaling parameters

  1. Spontaneous Regression of an Incidental Spinal Meningioma

    National Research Council Canada - National Science Library

    Yilmaz, Ali; Kizilay, Zahir; Sair, Ahmet; Avcil, Mucahit; Ozkul, Ayca

    2015-01-01

    AIM: The regression of meningioma has been reported in the literature before. Although regression may be induced by hemorrhage, calcification, or the withdrawal of some drugs, it is rarely observed spontaneously. CASE REPORT...

  2. Common pitfalls in statistical analysis: Logistic regression.

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C S; Aggarwal, Rakesh

    2017-01-01

    Logistic regression analysis is a statistical technique to evaluate the relationship between various predictor variables (either categorical or continuous) and an outcome which is binary (dichotomous). In this article, we discuss logistic regression analysis and the limitations of this technique.

  3. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social...... seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...... of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed....

  4. Quantifying and reducing uncertainties in estimated soil CO2 fluxes with hierarchical data-model integration

    Science.gov (United States)

    Ogle, Kiona; Ryan, Edmund; Dijkstra, Feike A.; Pendall, Elise

    2016-12-01

    Nonsteady state chambers are often employed to measure soil CO2 fluxes. CO2 concentrations (C) in the headspace are sampled at different times (t), and fluxes (f) are calculated from regressions of C versus t based on a limited number of observations. Variability in the data can lead to poor fits and unreliable f estimates; groups with too few observations or poor fits are often discarded, resulting in "missing" f values. We solve these problems by fitting linear (steady state) and nonlinear (nonsteady state, diffusion based) models of C versus t, within a hierarchical Bayesian framework. Data are from the Prairie Heating and CO2 Enrichment study that manipulated atmospheric CO2, temperature, soil moisture, and vegetation. CO2 was collected from static chambers biweekly during five growing seasons, resulting in >12,000 samples and >3100 groups and associated fluxes. We compare f estimates based on nonhierarchical and hierarchical Bayesian (B versus HB) versions of the linear and diffusion-based (L versus D) models, resulting in four different models (BL, BD, HBL, and HBD). Three models fit the data exceptionally well (R2 ≥ 0.98), but the BD model was inferior (R2 = 0.87). The nonhierarchical models (BL and BD) produced highly uncertain f estimates (wide 95% credible intervals), whereas the hierarchical models (HBL and HBD) produced very precise estimates. Of the hierarchical versions, the linear model (HBL) underestimated f by 33% relative to the nonsteady state model (HBD). The hierarchical models offer improvements upon traditional nonhierarchical approaches to estimating f, and we provide example code for the models.
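
    The non-hierarchical starting point described above (a per-chamber flux from the slope of C versus t) is easy to sketch; the example below uses NumPy with invented sampling times and noise levels, and deliberately omits the Bayesian pooling and diffusion-based model that the paper adds on top.

```python
import numpy as np

# Per-chamber flux estimate: slope of headspace CO2 concentration C against time t.
rng = np.random.default_rng(7)
t = np.array([0.0, 15.0, 30.0, 45.0])          # sampling times (minutes), illustrative
true_slope = 0.8                               # ppm per minute, made up
chambers = [420 + true_slope * t + rng.normal(scale=3.0, size=t.size) for _ in range(5)]

slopes = [np.polyfit(t, C, deg=1)[0] for C in chambers]   # linear fit of C ~ t per chamber
print(slopes)      # with so few points per group, noisy data give noisy flux estimates
```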

  5. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoreti......

  6. Application of hierarchical matrices for partial inverse

    KAUST Repository

    Litvinenko, Alexander

    2013-11-26

    In this work we combine hierarchical matrix techniques (Hackbusch, 1999) and domain decomposition methods to obtain fast and efficient algorithms for the solution of multiscale problems. This combination results in the hierarchical domain decomposition (HDD) method, which can be applied to the solution of multiscale problems. Multiscale problems are problems that require the use of different length scales; using only the finest scale is very expensive, if not impossible, in computational time and memory. Domain decomposition methods decompose the complete problem into smaller systems of equations corresponding to boundary value problems in subdomains, so that fast solvers can be applied to each subdomain. Subproblems in subdomains are independent, much smaller, and require less computational resources than the initial problem.

  7. First-passage phenomena in hierarchical networks

    CERN Document Server

    Tavani, Flavia

    2016-01-01

    In this paper we study Markov processes and related first-passage problems on a class of weighted, modular graphs which generalize the Dyson hierarchical model. In these networks, the coupling strength between two nodes depends on their distance and is modulated by a parameter $\sigma$. We find that, in the thermodynamic limit, ergodicity is lost and the "distant" nodes cannot be reached. Moreover, for finite-sized systems, there exists a threshold value for $\sigma$ such that, when $\sigma$ is relatively large, the inhomogeneity of the coupling pattern prevails and "distant" nodes are hardly reached. The same analysis is carried out also for generic hierarchical graphs, where interactions are meant to involve $p$-plets ($p>2$) of nodes, finding that ergodicity is still broken in the thermodynamic limit, but no threshold value for $\sigma$ is evidenced, ultimately due to the slow growth of the network diameter with the size.

  8. An Hierarchical Approach to Big Data

    CERN Document Server

    Allen, M G; Boch, T; Durand, D; Oberto, A; Merin, B; Stoehr, F; Genova, F; Pineau, F-X; Salgado, J

    2016-01-01

    The increasing volumes of astronomical data require practical methods for data exploration, access and visualisation. The Hierarchical Progressive Survey (HiPS) is a HEALPix based scheme that enables a multi-resolution approach to astronomy data from the individual pixels up to the whole sky. We highlight the decisions and approaches that have been taken to make this scheme a practical solution for managing large volumes of heterogeneous data. Early implementors of this system have formed a network of HiPS nodes, with some 250 diverse data sets currently available, with multiple mirror implementations for important data sets. This hierarchical approach can be adapted to expose Big Data in different ways. We describe how the ease of implementation, and local customisation of the Aladin Lite embeddable HiPS visualiser have been keys for promoting collaboration on HiPS.

  9. Non-homogeneous fractal hierarchical weighted networks.

    Science.gov (United States)

    Dong, Yujuan; Dai, Meifeng; Ye, Dandan

    2015-01-01

    A model of fractal hierarchical structures that share the property of non-homogeneous weighted networks is introduced. These networks can be completely and analytically characterized in terms of the involved parameters, i.e., the size of the original graph Nk and the non-homogeneous weight scaling factors r1, r2, · · · rM. We also study the average weighted shortest path (AWSP), the average degree and the average node strength, taking place on the non-homogeneous hierarchical weighted networks. Moreover the AWSP is scrupulously calculated. We show that the AWSP depends on the number of copies and the sum of all non-homogeneous weight scaling factors in the infinite network order limit.

  10. Noise enhances information transfer in hierarchical networks.

    Science.gov (United States)

    Czaplicka, Agnieszka; Holyst, Janusz A; Sloot, Peter M A

    2013-01-01

    We study the influence of noise on information transmission in the form of packages shipped between nodes of hierarchical networks. Numerical simulations are performed for artificial tree networks, scale-free Ravasz-Barabási networks, as well as for a real network formed by email addresses of former Enron employees. Two types of noise are considered. One is related to packet dynamics and is responsible for a random part of the packets' paths. The second one originates from random changes in the initial network topology. We find that information transfer can be enhanced by the noise. The system possesses optimal performance when both kinds of noise are tuned to specific values; this corresponds to the stochastic resonance phenomenon. There is a non-trivial synergy present for both noisy components. We also found that hierarchical networks built of nodes of various degrees are more efficient in information transfer than trees with a fixed branching factor.

  11. Hierarchical model of vulnerabilities for emotional disorders.

    Science.gov (United States)

    Norton, Peter J; Mehta, Paras D

    2007-01-01

    Clark and Watson's (1991) tripartite model of anxiety and depression has had a dramatic impact on our understanding of the dispositional variables underlying emotional disorders. More recently, calls have been made to examine not simply the influence of negative affectivity (NA) but also mediating factors that might better explain how NA influences anxious and depressive syndromes (e.g. Taylor, 1998; Watson, 2005). Extending preliminary projects, this study evaluated two hierarchical models of NA, mediating factors of anxiety sensitivity and intolerance of uncertainty, and specific emotional manifestations. Data provided a very good fit to a model elaborated from preliminary studies, lending further support to hierarchical models of emotional vulnerabilities. Implications for classification and diagnosis are discussed.

  12. Hierarchical Self-organization of Complex Systems

    Institute of Scientific and Technical Information of China (English)

    CHAI Li-he; WEN Dong-sheng

    2004-01-01

    Research on organization and structure in complex systems is at the academic and industrial frontier of modern science. Though many theories have been tentatively proposed to analyze complex systems, we still lack a rigorous theory for them. Complex systems possess various degrees of freedom, which means that they should exhibit all kinds of structures. However, complex systems often show similar patterns and structures. The question then arises why such similar structures appear in all kinds of complex systems. The paper outlines a theory of freedom-degree compression, and the existence of hierarchical self-organization in all complex systems is established. It is freedom-degree compression and hierarchical self-organization that are responsible for the similar patterns and structures observed in complex systems.

  13. Bayesian hierarchical modeling of drug stability data.

    Science.gov (United States)

    Chen, Jie; Zhong, Jinglin; Nie, Lei

    2008-06-15

    Stability data are commonly analyzed using linear fixed or random effect model. The linear fixed effect model does not take into account the batch-to-batch variation, whereas the random effect model may suffer from the unreliable shelf-life estimates due to small sample size. Moreover, both methods do not utilize any prior information that might have been available. In this article, we propose a Bayesian hierarchical approach to modeling drug stability data. Under this hierarchical structure, we first use Bayes factor to test the poolability of batches. Given the decision on poolability of batches, we then estimate the shelf-life that applies to all batches. The approach is illustrated with two example data sets and its performance is compared in simulation studies with that of the commonly used frequentist methods. (c) 2008 John Wiley & Sons, Ltd.

  14. Hierarchical Boltzmann simulations and model error estimation

    Science.gov (United States)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while a subsequent refinement allows one to successively improve the result towards the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  15. Hierarchical State Machines as Modular Horn Clauses

    Directory of Open Access Journals (Sweden)

    Pierre-Loïc Garoche

    2016-07-01

    Full Text Available In model-based development, embedded systems are modeled using a mix of dataflow formalisms, which capture the flow of computation, and hierarchical state machines, which capture the modal behavior of the system. For safety analysis, existing approaches rely on a compilation scheme that transforms the original model (dataflow and state machines) into a pure dataflow formalism. Such compilation often results in the loss of important structural information that captures the modal behaviour of the system. In previous work we developed a compilation technique from a dataflow formalism into modular Horn clauses. In this paper, we present a novel technique that faithfully compiles hierarchical state machines into modular Horn clauses. Our compilation technique preserves the structural and modal behavior of the system, making the safety analysis of such models more tractable.

  16. Hierarchical community structure in complex (social) networks

    CERN Document Server

    Massaro, Emanuele

    2014-01-01

    The investigation of community structure in networks is a task of great importance in many disciplines, namely physics, sociology, biology and computer science, where systems are often represented as graphs. One of the challenges is to find local communities in a graph from a local viewpoint, in the absence of global information, in order to reproduce the subjective hierarchical vision for each vertex. In this paper we present an improvement of an information dynamics algorithm in which the label propagation of nodes is based on the Markovian flow of information in the network under cognitive-inspired constraints \cite{Massaro2012}. In this framework we introduce two more complex heuristics that allow the algorithm to detect the multi-resolution hierarchical community structure of networks from a source vertex or community, adopting fixed values of the model's parameters. Experimental results show that the proposed methods are efficient and well-behaved in both real-world and synthetic networks.

  17. Object tracking with hierarchical multiview learning

    Science.gov (United States)

    Yang, Jun; Zhang, Shunli; Zhang, Li

    2016-09-01

    Building a robust appearance model is useful to improve tracking performance. We propose a hierarchical multiview learning framework to construct the appearance model, which has two layers for tracking. On the top layer, two different views of features, grayscale value and histogram of oriented gradients, are adopted for representation under the cotraining framework. On the bottom layer, for each view of each feature, three different random subspaces are generated to represent the appearance from multiple views. For each random view submodel, the least squares support vector machine is employed to improve the discriminability for concrete and efficient realization. These two layers are combined to construct the final appearance model for tracking. The proposed hierarchical model assembles two types of multiview learning strategies, in which the appearance can be described more accurately and robustly. Experimental results in the benchmark dataset demonstrate that the proposed method can achieve better performance than many existing state-of-the-art algorithms.

  18. Assembling hierarchical cluster solids with atomic precision.

    Science.gov (United States)

    Turkiewicz, Ari; Paley, Daniel W; Besara, Tiglet; Elbaz, Giselle; Pinkard, Andrew; Siegrist, Theo; Roy, Xavier

    2014-11-12

    Hierarchical solids created from the binary assembly of cobalt chalcogenide and iron oxide molecular clusters are reported. Six different molecular clusters based on the octahedral Co6E8 (E = Se or Te) and the expanded cubane Fe8O4 units are used as superatomic building blocks to construct these crystals. The formation of the solid is driven by the transfer of charge between complementary electron-donating and electron-accepting clusters in solution that crystallize as binary ionic compounds. The hierarchical structures are investigated by single-crystal X-ray diffraction, providing atomic and superatomic resolution. We report two different superstructures: a superatomic relative of the CsCl lattice type and an unusual packing arrangement based on the double-hexagonal close-packed lattice. Within these superstructures, we demonstrate various compositions and orientations of the clusters.

  19. Hierarchical Robot Control In A Multisensor Environment

    Science.gov (United States)

    Bhanu, Bir; Thune, Nils; Lee, Jih Kun; Thune, Mari

    1987-03-01

    Automatic recognition, inspection, manipulation and assembly of objects will be a common denominator in most of tomorrow's highly automated factories. These tasks will be handled by intelligent, computer-controlled robots with multisensor capabilities, which contribute to the desired flexibility and adaptability. The control of a robot in such a multisensor environment becomes of crucial importance as the complexity of the problem grows exponentially with the number of sensors, tasks, commands and objects. In this paper we present an approach which uses CAD (Computer-Aided Design)-based geometric and functional models of objects together with action-oriented neuroschemas to recognize and manipulate objects by a robot in a multisensor environment. The hierarchical robot control system is being implemented on a BBN Butterfly multiprocessor. Index terms: CAD, Hierarchical Control, Hypothesis Generation and Verification, Parallel Processing, Schemas

  20. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  1. Synthesizing Regression Results: A Factored Likelihood Method

    Science.gov (United States)

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  2. Regression Analysis by Example. 5th Edition

    Science.gov (United States)

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  3. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by...

  4. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  5. TRANSIMS and the hierarchical data format

    Energy Technology Data Exchange (ETDEWEB)

    Bush, B.W.

    1997-06-12

    The Hierarchical Data Format (HDF) is a general-purpose scientific data format developed at the National Center for Supercomputing Applications. It supports metadata, compression, and a variety of data structures (multidimensional arrays, raster images, tables). FORTRAN 77 and ANSI C programming interfaces are available for it, and a wide variety of visualization tools read HDF files. The author discusses the features of this file format and its possible uses in TRANSIMS.

  6. Modular, Hierarchical Learning By Artificial Neural Networks

    Science.gov (United States)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks that are more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  7. Superhydrophobicity of Hierarchical and ZNO Nanowire Coatings

    Science.gov (United States)

    2014-01-01

    ZnO has been widely used in sensors, piezo-nanogenerators, and solar cells. The hierarchical structures of ZnO nanowires grown on Si pyramid surfaces exhibit superhydrophobicity and are expected to have promising applications in next-generation photovoltaic devices and solar cells. The reported process uses a solution of KOH (3 wt%), distilled water and isopropyl alcohol (10 vol%) at 95 °C for 50 min, followed by a 10 nm ZnO seed layer.

  8. Hierarchical Parallel Evaluation of a Hamming Code

    Directory of Open Access Journals (Sweden)

    Shmuel T. Klein

    2017-04-01

    Full Text Available The Hamming code is a well-known error correction code and can correct a single error in an input vector of size n bits by adding log n parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in log n layers. All the processors perform similar simple tasks, and need only a few bytes of internal memory.
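
    To make the "log n parity checks" concrete, here is a minimal sequential Hamming(7,4) encoder and single-error corrector; the record's contribution is the hierarchical parallel evaluation on n processors, which this sketch does not attempt to reproduce.

        def hamming74_encode(d1, d2, d3, d4):
            """Encode 4 data bits into a 7-bit Hamming codeword (parity at positions 1, 2, 4)."""
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p3 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_correct(code):
            """Correct at most one flipped bit and return the 4 data bits."""
            c = list(code)
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
            if syndrome:
                c[syndrome - 1] ^= 1
            return [c[2], c[4], c[5], c[6]]

        word = hamming74_encode(1, 0, 1, 1)
        word[5] ^= 1                           # inject a single-bit error
        assert hamming74_correct(word) == [1, 0, 1, 1]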

  9. Hierarchical mixture models for assessing fingerprint individuality

    OpenAIRE

    Dass, Sarat C.; Li, Mingfei

    2009-01-01

    The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individua...

  10. Active commuting and habit strength: an interactive and discriminant analyses approach.

    Science.gov (United States)

    de Bruijn, Gert-Jan; Gardner, Benjamin

    2011-01-01

    Habits may be a mechanism linking environmental variables with active commuting. This study investigated the role of habit strength in the explanation of active commuting across profiles based on current active commuting, motivation, and habit strength within the framework of the theory of planned behavior (TPB). Cross-sectional survey using validated questionnaires. Undergraduate students who participated for course credits. Five hundred and thirty-eight students (mean age = 21.19 [SD = 2.57]; 28.45% males; response rate = 86.36%). The questionnaire included TPB items, underlying beliefs, and a validated measure of habit strength. Active commuting was assessed with relevant items from the International Physical Activity Questionnaire. Hierarchical regression and interaction analyses, discriminant function analysis, and analyses of variance were used. Habit strength was the strongest correlate of active commuting and interacted with intention: at low and medium levels of habit strength, the intention-bicycle use relationship was more than twice as strong as at high levels. Beliefs regarding situational barriers were amongst the most discriminating beliefs, whereas beliefs regarding health benefits did not distinguish profiles. Stronger active commuting habits are associated with a lower association between intention and bicycle use. Persuasive health campaigns might more usefully instill a sense of confidence in various commuting situations rather than merely emphasizing health benefits of active commuting.
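
    The habit-by-intention interaction reported here is the kind of moderation term that is straightforward to probe in a blockwise (hierarchical) regression. The sketch below uses statsmodels with hypothetical file and column names (commuting_survey.csv; bicycle_use, intention, habit, attitude, pbc) rather than the authors' data.

        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per respondent; predictors are mean-centred before the interaction.
        df = pd.read_csv("commuting_survey.csv")   # hypothetical file
        for col in ["intention", "habit"]:
            df[col] = df[col] - df[col].mean()

        # Block 1: TPB variables only; Block 2: add habit strength and the interaction.
        block1 = smf.ols("bicycle_use ~ intention + attitude + pbc", data=df).fit()
        block2 = smf.ols("bicycle_use ~ intention * habit + attitude + pbc", data=df).fit()
        print(block1.rsquared, block2.rsquared)    # R^2 change across blocks
        print(block2.params["intention:habit"])    # sign and size of the moderation effect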

  11. Relations of alpine plant communities across environmental gradients: Multilevel versus multiscale analyses

    Science.gov (United States)

    Malanson, George P.; Zimmerman, Dale L.; Kinney, Mitch; Fagre, Daniel B.

    2017-01-01

    Alpine plant communities vary, and their environmental covariates could influence their response to climate change. A single multilevel model of how alpine plant community composition is determined by hierarchical relations is compared to a separate examination of those relations at different scales. Nonmetric multidimensional scaling of species cover for plots in four regions across the Rocky Mountains created dependent variables. Climate variables are derived for the four regions from interpolated data. Plot environmental variables are measured directly and the presence of thirty-seven site characteristics is recorded and used to create additional independent variables. Multilevel and best subsets regressions are used to determine the strength of the hypothesized relations. The ordinations indicate structure in the assembly of plant communities. The multilevel analyses, although revealing significant relations, provide little explanation; of the site variables, those related to site microclimate are most important. In multiscale analyses (whole and separate regions), different variables are better explanations within the different regions. This result indicates weak environmental niche control of community composition. The weak relations of the structure in the patterns of species association to the environment indicates that either alpine vegetation represents a case of the neutral theory of biogeography being a valid explanation or that it represents disequilibrium conditions. The implications of neutral theory and disequilibrium explanations are similar: Response to climate change will be difficult to quantify above equilibrium background turnover.

  12. Identifying key processes in the hydrochemistry of a basin through the combined use of factor and regression models

    Indian Academy of Sciences (India)

    Sandow Mark Yidana; Bruce Banoeng-Yakubo; Patrick Asamoah Sakyi

    2012-04-01

    An innovative technique of measuring the intensities of major sources of variation in the hydrochemistry of (ground) water in a basin has been developed. This technique, which is based on the combination of R-mode factor and multiple regression analyses, can be used to measure the degrees of influence of the major sources of variation in the hydrochemistry without measuring the concentrations of the entire set of physico-chemical parameters which are often used to characterize water systems. R-mode factor analysis was applied to the data of 13 physico-chemical parameters and 50 samples in order to determine the major sources of variation in the hydrochemistry of some aquifers in the western region of Ghana. In this study, three sources of variation in the hydrochemistry were distinguished: the dissolution of chlorides and sulfates of the major cations, carbonate mineral dissolution, and silicate mineral weathering. Two key parameters were identified with each of the processes and multiple regression models were developed for each process. These models were tested and found to predict these processes quite accurately, and can be applied anywhere within the terrain. This technique can be reliably applied in areas where logistical constraints limit water sampling for whole basin hydrochemical characterization. Q-mode hierarchical cluster analysis (HCA) applied to the data revealed three major groundwater associations distinguished on the basis of the major causes of variation in the hydrochemistry. The three groundwater types represent Na–HCO3, Ca–HCO3, and Na–Cl groundwater types. Silicate stability diagrams suggest that all these groundwater types are mainly stable in the kaolinite and montmorillonite fields suggesting moderately restricted flow conditions.
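
    A rough computational analogue of the combined factor/regression strategy can be assembled from standard tools. The sketch below is generic, with a placeholder measurement matrix X and hypothetical column indices for a factor's two "key parameters"; it does not reproduce the authors' R-mode loadings or their calibrated models.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import FactorAnalysis
        from sklearn.linear_model import LinearRegression
        from scipy.cluster.hierarchy import linkage, fcluster

        # X: (n_samples, n_parameters) matrix of physico-chemical measurements (placeholder).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 13))
        Z = StandardScaler().fit_transform(X)

        # R-mode style factor extraction: three factors for the three hydrochemical processes.
        fa = FactorAnalysis(n_components=3)
        scores = fa.fit_transform(Z)

        # Regress one factor's scores on its two key parameters (hypothetical indices 0 and 4).
        key = [0, 4]
        proxy = LinearRegression().fit(Z[:, key], scores[:, 0])
        print(proxy.coef_, proxy.score(Z[:, key], scores[:, 0]))

        # Q-mode style hierarchical clustering of samples into groundwater groups.
        groups = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")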

  13. Metal hierarchical patterning by direct nanoimprint lithography.

    Science.gov (United States)

    Radha, Boya; Lim, Su Hui; Saifullah, Mohammad S M; Kulkarni, Giridhar U

    2013-01-01

    Three-dimensional hierarchical patterning of metals is of paramount importance in diverse fields involving photonics, controlling surface wettability and wearable electronics. Conventionally, this type of structuring is tedious and usually involves layer-by-layer lithographic patterning. Here, we describe a simple process of direct nanoimprint lithography using palladium benzylthiolate, a versatile metal-organic ink, which not only leads to the formation of hierarchical patterns but also is amenable to layer-by-layer stacking of the metal over large areas. The key to achieving such multi-faceted patterning is hysteretic melting of ink, enabling its shaping. It undergoes transformation to metallic palladium under gentle thermal conditions without affecting the integrity of the hierarchical patterns on micro- as well as nanoscale. A metallic rice leaf structure showing anisotropic wetting behavior and woodpile-like structures were thus fabricated. Furthermore, this method is extendable for transferring imprinted structures to a flexible substrate to make them robust enough to sustain numerous bending cycles.

  14. Hierarchical unilamellar vesicles of controlled compositional heterogeneity.

    Directory of Open Access Journals (Sweden)

    Maik Hadorn

    Full Text Available Eukaryotic life contains hierarchical vesicular architectures (i.e. organelles) that are crucial for material production and trafficking, information storage and access, as well as energy production. In order to perform specific tasks, these compartments differ from each other in their membrane composition and their internal cargo, and also differ from the cell membrane and the cytosol. Man-made structures that reproduce this nested architecture not only offer a deeper understanding of the functionalities and evolution of organelle-bearing eukaryotic life but also allow the engineering of novel biomimetic technologies. Here, we show that the newly developed vesicle-in-water-in-oil emulsion transfer preparation technique results in giant unilamellar vesicles internally compartmentalized by unilamellar vesicles of different membrane composition and internal cargo, i.e. hierarchical unilamellar vesicles of controlled compositional heterogeneity. The compartmentalized giant unilamellar vesicles were subsequently isolated by a separation step exploiting the heterogeneity of the membrane composition and the encapsulated cargo. Due to the controlled, efficient, and technically straightforward character of the new preparation technique, this study allows the hierarchical fabrication of compartmentalized giant unilamellar vesicles of controlled compositional heterogeneity and will ease the development of eukaryotic cell mimics that resemble their natural templates, as well as the fabrication of novel multi-agent drug delivery systems for combination therapies and complex artificial microreactors.

  15. A New Metrics for Hierarchical Clustering

    Institute of Scientific and Technical Information of China (English)

    YANG Guangwen; SHI Shuming; WANG Dingxing

    2003-01-01

    Hierarchical clustering is a popular method for performing unsupervised learning. Some metric must be used to determine the similarity between pairs of clusters in hierarchical clustering. Traditional similarity metrics either can deal only with simple shapes (i.e. spherical shapes) or are very sensitive to outliers (the chaining effect). The main contribution of this paper is to propose potential-based similarity metrics (APES and AMAPES) between clusters in hierarchical clustering, inspired by the concepts of the electric potential and the gravitational potential in electromagnetics and astronomy. The main features of these metrics are: first, they have strong anti-jamming capability; second, they are capable of finding clusters of different shapes such as spherical, spiral, chain, circle, sigmoid, U-shaped or other complex irregular shapes; third, existing algorithms and research results for classical metrics can be adopted to handle these new potential-based metrics with little or no modification. Experiments show that the new metrics are superior to traditional ones. Different potential functions are compared, and the sensitivity to parameters is also analyzed.

  16. A secure solution on hierarchical access control

    CERN Document Server

    Wei, Chuan-Sheng; Huang, Tone-Yau; Ong, Yao Lin

    2011-01-01

    Hierarchical access control is an important and traditional problem in information security. In 2001, Wu et al. proposed an elegant solution for hierarchical access control based on the secure-filter. Jeng and Wang presented an improvement of Wu et al.'s method based on the ECC cryptosystem. However, the secure-filter method is insecure under dynamic access control. Lie, Hsu and Tripathy, Paul pointed out security leaks in the secure-filter and presented improvements to eliminate these flaws. In this paper, we revise the secure-filter in the Jeng-Wang method and propose another secure solution to the hierarchical access control problem. CA is a super security class (user) in our proposed method, and the secure-filter of $u_i$ in our solution is a polynomial of degree $n_i+1$ in $\\mathbb{Z}_p^*$, $f_i(x)=(x-h_i)(x-a_1)\\cdots(x-a_{n_i})+L_{l_i}(K_i)$. Although the degree of our secure-filter is larger than in other solutions, our solution is secure and efficient under dynamic access control.
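
    The secure-filter construction lends itself to a small worked example: build f_i(x) over Z_p from the class's hash value and the authorized descendants' values, then recover the blinded key term by evaluating at any authorized value. The numbers below are illustrative only (a toy prime and made-up values), not parameters from the paper.

        def polymul_mod(a, b, p):
            """Multiply two polynomials (coefficient lists, lowest degree first) modulo p."""
            out = [0] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    out[i + j] = (out[i + j] + ai * bj) % p
            return out

        def secure_filter(h_i, authorized, L_K, p):
            """f_i(x) = (x - h_i) * prod_j (x - a_j) + L_K over Z_p."""
            f = [(-h_i) % p, 1]
            for a in authorized:
                f = polymul_mod(f, [(-a) % p, 1], p)
            f[0] = (f[0] + L_K) % p
            return f

        def poly_eval(f, x, p):
            return sum(c * pow(x, k, p) for k, c in enumerate(f)) % p

        p, h_i, L_K = 7919, 1234, 42          # toy prime and made-up values
        authorized = [111, 222, 333]          # values a_1..a_{n_i} of authorized lower classes
        f = secure_filter(h_i, authorized, L_K, p)
        assert poly_eval(f, 222, p) == L_K    # any authorized value recovers the key term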

  17. SORM applied to hierarchical parallel system

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2006-01-01

    The old hierarchical stochastic load combination model of Ferry Borges and Castanheta, and the corresponding problem of determining the distribution of the extreme random load effect, is the inspiration for this paper. The evaluation of the distribution function of the extreme value by use of a particular first order reliability method (FORM) was first described in a celebrated paper by Rackwitz and Fiessler more than a quarter of a century ago. The method has become known as the Rackwitz-Fiessler algorithm. The original RF-algorithm as applied to a hierarchical random variable model is recapitulated so that a simple but quite effective accuracy-improving calculation can be explained. A limit state curvature correction factor on the probability approximation is obtained from the results at the final step of the RF-algorithm. This correction factor is based on Breitung's asymptotic formula for second order...

  18. Anisotropic and Hierarchical Porosity in Multifunctional Ceramics

    Science.gov (United States)

    Lichtner, Aaron Zev

    The performance of multifunctional porous ceramics is often hindered by the seemingly contradictory effects of porosity on both mechanical and non-structural properties and yet a sufficient body of knowledge linking microstructure to these properties does not exist. Using a combination of tailored anisotropic and hierarchical materials, these disparate effects may be reconciled. In this project, a systematic investigation of the processing, characterization and properties of anisotropic and isotropic hierarchically porous ceramics was conducted. The system chosen was a composite ceramic intended as the cathode for a solid oxide fuel cell (SOFC). Comprehensive processing investigations led to the development of approaches to make hierarchical, anisotropic porous microstructures using directional freeze-casting of well dispersed slurries. The effect of all the important processing parameters was investigated. This resulted in an ability to tailor and control the important microstructural features including the scale of the microstructure, the macropore size and total porosity. Comparable isotropic porous ceramics were also processed using fugitive pore formers. A suite of characterization techniques including x-ray tomography and 3-D sectional scanning electron micrographs (FIB-SEM) was used to characterize and quantify the green and partially sintered microstructures. The effect of sintering temperature on the microstructure was quantified and discrete element simulations (DEM) were used to explain the experimental observations. Finally, the comprehensive mechanical properties, at room temperature, were investigated, experimentally and using DEM, for the different microstructures.

  19. Resilient 3D hierarchical architected metamaterials.

    Science.gov (United States)

    Meza, Lucas R; Zelhofer, Alex J; Clarke, Nigel; Mateos, Arturo J; Kochmann, Dennis M; Greer, Julia R

    2015-09-15

    Hierarchically designed structures with architectural features that span across multiple length scales are found in numerous hard biomaterials, like bone, wood, and glass sponge skeletons, as well as manmade structures, like the Eiffel Tower. It has been hypothesized that their mechanical robustness and damage tolerance stem from sophisticated ordering within the constituents, but the specific role of hierarchy remains to be fully described and understood. We apply the principles of hierarchical design to create structural metamaterials from three material systems: (i) polymer, (ii) hollow ceramic, and (iii) ceramic-polymer composites that are patterned into self-similar unit cells in a fractal-like geometry. In situ nanomechanical experiments revealed (i) a nearly theoretical scaling of structural strength and stiffness with relative density, which outperforms existing nonhierarchical nanolattices; (ii) recoverability, with hollow alumina samples recovering up to 98% of their original height after compression to ≥ 50% strain; (iii) suppression of brittle failure and structural instabilities in hollow ceramic hierarchical nanolattices; and (iv) a range of deformation mechanisms that can be tuned by changing the slenderness ratios of the beams. Additional levels of hierarchy beyond a second order did not increase the strength or stiffness, which suggests the existence of an optimal degree of hierarchy to amplify resilience. We developed a computational model that captures local stress distributions within the nanolattices under compression and explains some of the underlying deformation mechanisms as well as validates the measured effective stiffness to be interpreted as a metamaterial property.

  20. The Hourglass Effect in Hierarchical Dependency Networks

    CERN Document Server

    Sabrin, Kaeser M

    2016-01-01

    Many hierarchically modular systems are structured in a way that resembles a bow-tie or hourglass. This "hourglass effect" means that the system generates many outputs from many inputs through a relatively small number of intermediate modules that are critical for the operation of the entire system (the waist of the hourglass). We investigate the hourglass effect in general (not necessarily layered) hierarchical dependency networks. Our analysis focuses on the number of source-to-target dependency paths that traverse each vertex, and it identifies the core of a dependency network as the smallest set of vertices that collectively cover almost all dependency paths. We then examine if a given network exhibits the hourglass property or not, comparing its core size with a "flat" (i.e., non-hierarchical) network that preserves the source dependencies of each target in the original network. As a possible explanation for the hourglass effect, we propose the Reuse Preference (RP) model that captures the bias of new mo...
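
    The quantity at the heart of this analysis, the number of source-to-target dependency paths traversing each vertex, has a simple dynamic-programming form on a DAG. The sketch below (networkx, toy graph) computes it; the greedy core extraction and the flat-network comparison described in the record are left out.

        import networkx as nx

        def path_centrality(G):
            """Source-to-target path counts through each vertex of a DAG."""
            order = list(nx.topological_sort(G))
            up = {v: (1 if G.in_degree(v) == 0 else 0) for v in G}     # paths source -> v
            for v in order:
                for u in G.predecessors(v):
                    up[v] += up[u]
            down = {v: (1 if G.out_degree(v) == 0 else 0) for v in G}  # paths v -> target
            for v in reversed(order):
                for w in G.successors(v):
                    down[v] += down[w]
            return {v: up[v] * down[v] for v in G}

        G = nx.DiGraph([("s1", "m"), ("s2", "m"), ("m", "t1"), ("m", "t2")])
        print(path_centrality(G))   # the "waist" vertex m carries all four paths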

  1. Semantic Image Segmentation with Contextual Hierarchical Models.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

    Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is purely based on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).

  2. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \\(k\\)-nearest neighbors regression (\\(k\\)-NNR), and more generally, local polynomial kernel regression. Unlike \\(k\\)-NNR, however, SPARROW can adapt the number of regressors to use based...
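
    One plausible reading of the procedure described here is easy to prototype: sparsely code the query point against the training inputs (here with orthogonal matching pursuit) and reuse the resulting weights, normalized, on the corresponding regressands. This is a rough sketch of that idea, not the authors' implementation, and the weighting scheme in particular is a guess.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def sparrow_like_predict(X_train, y_train, x_query, n_atoms=5):
            """Local estimate built from a sparse approximation of the query point."""
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms)
            omp.fit(X_train.T, x_query)        # atoms = training points (columns of X_train.T)
            w = omp.coef_
            if not np.any(w):
                return float(np.mean(y_train)) # fallback when no atom is selected
            return float(w @ y_train / w.sum())

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
        print(sparrow_like_predict(X, y, X[0]))   # should land near y[0]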

  3. A Regression Approach for Forecasting Vendor Revenue in Telecommunication Industries

    Directory of Open Access Journals (Sweden)

    Aida Mustapha

    2014-12-01

    Full Text Available In many telecommunication companies, the Entrepreneur Development Unit (EDU) is responsible for managing a large group of vendors that hold contracts with the company. This unit assesses the vendors’ performance in terms of revenue and profitability on a yearly basis and uses the information to arrange suitable development training. The main challenge faced by this unit, however, is obtaining the annual revenue data from the vendors due to time constraints. This paper presents a regression approach to predict the vendors’ annual revenues from their previous records so that the assessment exercise can be expedited. Three regression methods were investigated: linear regression, the sequential minimal optimization algorithm, and M5Rules. The results are analysed and discussed.

  4. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Directory of Open Access Journals (Sweden)

    Matt N. Williams

    2013-09-01

    Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.
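
    The first clarification (normality is assumed of the errors, not of the variables) is easy to demonstrate numerically. The simulation below, an illustrative sketch rather than material from the article, uses a strongly skewed predictor yet yields well-behaved OLS estimates because the error term is normal.

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(42)
        x = rng.exponential(size=500)                        # heavily skewed predictor: not a violation
        y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=500)  # the errors, not the variables, are normal

        fit = sm.OLS(y, sm.add_constant(x)).fit()
        print(fit.params)                                    # close to (2.0, 1.5)
        print(stats.shapiro(fit.resid))                      # the assumption concerns these residuals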

  5. Functional linear regression via canonical analysis

    CERN Document Server

    He, Guozhong; Wang, Jane-Ling; Yang, Wenjing; 10.3150/09-BEJ228

    2011-01-01

    We study regression models for the situation where both dependent and independent variables are square-integrable stochastic processes. Questions concerning the definition and existence of the corresponding functional linear regression models and some basic properties are explored for this situation. We derive a representation of the regression parameter function in terms of the canonical components of the processes involved. This representation establishes a connection between functional regression and functional canonical analysis and suggests alternative approaches for the implementation of functional linear regression analysis. A specific procedure for the estimation of the regression parameter function using canonical expansions is proposed and compared with an established functional principal component regression approach. As an example of an application, we present an analysis of mortality data for cohorts of medflies, obtained in experimental studies of aging and longevity.
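
    For orientation, the functional linear model with functional response that this record builds on is usually written as below (standard notation, not necessarily the paper's); the contribution of the paper is to express the coefficient surface β through the canonical components of the two processes.

        % Functional linear regression with square-integrable predictor X and response Y.
        \begin{equation}
          Y(t) \;=\; \alpha(t) + \int_{\mathcal{S}} \beta(s,t)\, X(s)\, \mathrm{d}s + \varepsilon(t)
        \end{equation}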

  6. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    Science.gov (United States)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing" (ELS) hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires, whose fate is hence governed by a "Hierarchical Load Sharing" criterion.

  7. Facile fabrication of hierarchical ZnO microstructures assisted with PAMPSA and enhancement of green emission

    Science.gov (United States)

    Huang, Qiang; Cun, Tangxiang; Zuo, Wenbin; Liu, Jianping

    2015-03-01

    We report the fabrication of hierarchically microstructured flower-like ZnO by a facile, single-step aqueous chemical method assisted by poly(2-acrylamido-2-methyl-1-propanesulfonic acid) (PAMPSA). The shapes and sizes can be controlled simply by varying the concentration of the water-soluble polymer. When a suitable PAMPSA concentration was used, uniform, well-defined and mono-dispersed chrysanthemum-like ZnO microstructures built from nanorod building blocks were obtained. The formation mechanism of the hierarchical structure is presented. Structural studies using XRD, HRTEM and SAED reveal that these ZnO nanorods are single-phase with a wurtzite structure and grow along the c-axis. The FTIR spectrum indicates the incorporation of a trace of PAMPSA into the ZnO crystals. HRTEM, Raman and XPS analyses show that the hierarchical ZnO microstructures contain a high concentration of oxygen vacancies, which enables them to exhibit an intense deep-level (green) emission in their photoluminescence spectra. They also show enhanced photocatalytic efficiency in the degradation of methylene blue. It is hoped that the present work provides a simple method to fabricate hierarchical ZnO microstructures and establishes a positive relationship among polar planes, oxygen vacancies and green emission.

  8. Facile synthesis of Zn doped CuO hierarchical nanostructures: Structural, optical and antibacterial properties

    Energy Technology Data Exchange (ETDEWEB)

    Iqbal, Javed, E-mail: tariqjan84@gmail.com, E-mail: javed.suggau@iiu.edu.pk; Jan, Tariq, E-mail: tariqjan84@gmail.com, E-mail: javed.suggau@iiu.edu.pk; Ul-Hassan, Sibt; Umair Ali, M.; Abbas, Fazal [Laboratory of Nanoscience and Technology, Department of Physics, International Islamic University, H-10, Islamabad (Pakistan); Ahmed, Ishaq [Experimental Physics Labs, National Center for Physics, Islamabad (Pakistan); Mansoor, Qaisar; Ismail, Muhammad [Institute of Biomedical and Genetic Engineering (IBGE), Islamabad (Pakistan)

    2015-12-15

    Zn{sub x}Cu{sub 1−x}O (where x = 0, 0.01, 0.03, 0.05, 0.07 and 0.1 mol%) hierarchical nanostructures have been prepared via a soft chemical route. X-ray diffraction (XRD) results of the synthesized samples reveal the monoclinic structure of CuO without any impurity-related phases. Micro-structural parameters such as crystallite size and microstrain are strongly influenced by Zn doping. Scanning electron microscope (SEM) analyses depict the formation of hierarchical nanostructures with average particle sizes in the range of 26-43 nm. The surface area of the CuO nanostructures is reduced systematically with increasing Zn content, which is linked to the variation in particle size. An obvious decrease in the optical band gap energy of the synthesized CuO hierarchical nanostructures is observed with Zn doping, which is assigned to the formation of shallow levels in the band gap of CuO and to combined transitions from oxygen 2p states to the d states of Cu and Zn ions. The bactericidal potency of the CuO hierarchical nanostructures is found to be enhanced remarkably by Zn doping.

  9. Complexity of major UK companies between 2006 and 2010: Hierarchical structure method approach

    Science.gov (United States)

    Ulusoy, Tolga; Keskin, Mustafa; Shirvani, Ayoub; Deviren, Bayram; Kantar, Ersin; Çaǧrı Dönmez, Cem

    2012-11-01

    This study reports on the topology of the top 40 UK companies, analysed for predictive verification of markets over the period 2006-2010 by applying the concepts of the minimal spanning tree (MST) and the hierarchical tree (HT). The construction of the MST and the HT is confined to a brief description of the methodology and a definition of the correlation function between a pair of companies, based on the London Stock Exchange (LSE) index, in order to quantify synchronization between the companies. A derivation of the hierarchical organization and the construction of minimal spanning and hierarchical trees for the 2006-2008 and 2008-2010 periods are presented, and the results validate the predictive verification of the applied semantics. The trees are known to be useful tools for perceiving and detecting the global structure, taxonomy and hierarchy in financial data. From these trees, two different clusters of companies were detected in 2006. The trees also show three clusters in 2008 and two between 2008 and 2010, according to their proximity. The clusters match each other as regards their common production activities or their strong interrelationships. The key companies are, as expected, generally associated with major economic activities. This work provides a comparative approach between the MST and HT methods from statistical physics and information theory applied to the analysis of financial markets, which may yield new and valuable information about financial market dynamics.
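
    The MST step described here follows a standard recipe: map the correlation rho_ij between two companies to the distance d_ij = sqrt(2(1 - rho_ij)) and take the minimum spanning tree of the resulting weighted graph. The sketch below is generic and uses random placeholder data, not LSE prices.

        import numpy as np
        import pandas as pd
        import networkx as nx

        # returns: DataFrame of daily log-returns, one column per company (placeholder data).
        rng = np.random.default_rng(1)
        returns = pd.DataFrame(rng.normal(size=(250, 6)), columns=list("ABCDEF"))

        corr = returns.corr()
        dist = np.sqrt(2.0 * (1.0 - corr))          # standard correlation-to-distance map

        G = nx.Graph()
        for i in corr.columns:
            for j in corr.columns:
                if i < j:
                    G.add_edge(i, j, weight=float(dist.loc[i, j]))

        mst = nx.minimum_spanning_tree(G)
        print(sorted(mst.edges(data="weight")))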

  10. Multi-Organ Contribution to the Metabolic Plasma Profile Using Hierarchical Modelling.

    Directory of Open Access Journals (Sweden)

    Frida Torell

    Full Text Available Hierarchical modelling was applied in order to identify the organs that contribute to the levels of metabolites in plasma. Plasma and organ samples from gut, kidney, liver, muscle and pancreas were obtained from mice. The samples were analysed using gas chromatography time-of-flight mass spectrometry (GC TOF-MS) at the Swedish Metabolomics Centre, Umeå University, Sweden. The multivariate analysis was performed by means of principal component analysis (PCA) and orthogonal projections to latent structures (OPLS). The main goal of this study was to investigate how each organ contributes to the metabolic plasma profile. This was performed using hierarchical modelling. Each organ was found to have a unique metabolic profile. The hierarchical modelling showed that the gut, kidney and liver demonstrated the greatest contribution to the metabolic pattern of plasma. For example, we found that metabolites were absorbed in the gut and transported to the plasma. The kidneys excrete branched chain amino acids (BCAAs), and fatty acids are transported in the plasma to the muscles and liver. Lactic acid was also found to be transported from the pancreas to plasma. The results indicated that hierarchical modelling can be utilized to identify the organ contribution of unknown metabolites to the metabolic profile of plasma.

  11. Regression in children with autism spectrum disorders.

    Science.gov (United States)

    Malhi, Prahbhjot; Singhi, Pratibha

    2012-10-01

    To understand the characteristics of autistic regression and to compare the clinical and developmental profile of children with autism spectrum disorders (ASD) in whom parents report developmental regression with those of age-matched ASD children in whom no regression is reported. Participants were 35 (mean age = 3.57 y, SD = 1.09) children with ASD in whom parents reported developmental regression before age 3 y and a group of 35 age- and IQ-matched ASD children in whom parents did not report regression. All children were recruited from the outpatient Child Psychology Clinic of the Department of Pediatrics of a tertiary care teaching hospital in North India. Multi-disciplinary evaluations including neurological, diagnostic, cognitive, and behavioral assessments were done. Parents were asked in detail about the age at onset of regression, type of regression, milestones lost, and event, if any, related to the regression. In addition, the Childhood Autism Rating Scale (CARS) was administered to assess symptom severity. The mean age at regression was 22.43 mo (SD = 6.57), and a large majority (66.7%) of the parents reported regression between 12 and 24 mo. Most (75%) of the parents of the regression-autistic group reported regression in the language domain, particularly in the expressive language sector, usually between 18 and 24 mo of age. Regression of language was not an isolated phenomenon, and regression in other domains was also reported, including social skills (75%) and cognition (31.25%). In the majority of cases (75%) the regression reported was slow and subtle. There were no significant differences in the motor, social, self-help, and communication functioning between the two groups as measured by the DP II. There were also no significant differences between the two groups on the total CARS score and total number of DSM IV symptoms endorsed. However, the regressed children had significantly (t = 2.36, P = .021) more social deficits as per the DSM IV as

  12. Prediction of road accidents: A Bayesian hierarchical approach.

    Science.gov (United States)

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H

    2013-03-01

    In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any
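
    A stripped-down cousin of the hierarchical Poisson-lognormal component can be written in a few lines of PyMC. The sketch below regresses segment-level accident counts on a single risk indicator (curvature) with a lognormal random effect per segment; it is a generic illustration on synthetic data, not the multivariate, gamma-updated model of the paper, and assumes a recent PyMC release.

        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(7)
        n_seg = 100
        curvature = rng.normal(size=n_seg)                       # standardized risk indicator
        true_rate = np.exp(0.2 + 0.5 * curvature + rng.normal(scale=0.3, size=n_seg))
        counts = rng.poisson(true_rate)                          # synthetic injury-accident counts

        with pm.Model() as model:
            intercept = pm.Normal("intercept", 0.0, 2.0)
            beta = pm.Normal("beta", 0.0, 1.0)
            sigma = pm.HalfNormal("sigma", 1.0)
            eps = pm.Normal("eps", 0.0, sigma, shape=n_seg)      # lognormal segment heterogeneity
            rate = pm.math.exp(intercept + beta * curvature + eps)
            pm.Poisson("obs", mu=rate, observed=counts)
            idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)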

  13. Using Regression Mixture Analysis in Educational Research

    Directory of Open Access Journals (Sweden)

    Cody S. Ding

    2006-11-01

    Full Text Available Conventional regression analysis is typically used in educational research. Usually such an analysis implicitly assumes that a common set of regression parameter estimates captures the population characteristics represented in the sample. In some situations, however, this implicit assumption may not be realistic, and the sample may contain several subpopulations such as high math achievers and low math achievers. In these cases, conventional regression models may provide biased estimates since the parameter estimates are constrained to be the same across subpopulations. This paper advocates the applications of regression mixture models, also known as latent class regression analysis, in educational research. Regression mixture analysis is more flexible than conventional regression analysis in that latent classes in the data can be identified and regression parameter estimates can vary within each latent class. An illustration of regression mixture analysis is provided based on a dataset of authentic data. The strengths and limitations of the regression mixture models are discussed in the context of educational research.
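
    The idea behind latent class (mixture) regression can be conveyed with a compact EM loop: soft-assign each observation to a subpopulation from the Gaussian likelihood of its residual, then refit each component by weighted least squares. The sketch below, for X an (n,) or (n, d) array and y an (n,) array, is a bare-bones illustration, not a substitute for dedicated software.

        import numpy as np

        def mixture_of_regressions(X, y, k=2, n_iter=200, seed=0):
            """Bare-bones EM for a k-component mixture of linear regressions."""
            rng = np.random.default_rng(seed)
            n = len(y)
            Xc = np.column_stack([np.ones(n), X])               # add an intercept column
            beta = rng.normal(size=(k, Xc.shape[1]))
            sigma = np.ones(k)
            pi = np.full(k, 1.0 / k)
            for _ in range(n_iter):
                # E-step: responsibilities from the Gaussian likelihood of the residuals.
                resid = y[:, None] - Xc @ beta.T
                dens = pi / (np.sqrt(2 * np.pi) * sigma) * np.exp(-0.5 * (resid / sigma) ** 2)
                r = dens / dens.sum(axis=1, keepdims=True)
                # M-step: weighted least squares and variance update per component.
                for j in range(k):
                    w = r[:, j]
                    Xw = Xc * w[:, None]
                    beta[j] = np.linalg.solve(Xc.T @ Xw, Xw.T @ y)
                    sigma[j] = np.sqrt((w * (y - Xc @ beta[j]) ** 2).sum() / w.sum())
                    pi[j] = w.mean()
            return beta, sigma, pi, r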

  14. Association between regression and self injury among children with autism.

    Science.gov (United States)

    Lance, Eboni I; York, Janet M; Lee, Li-Ching; Zimmerman, Andrew W

    2014-02-01

    Self injurious behaviors (SIBs) are challenging clinical problems in individuals with autism spectrum disorders (ASDs). This study is one of the first and largest to utilize inpatient data to examine the associations between autism, developmental regression, and SIBs. Medical records of 125 neurobehavioral hospitalized patients with diagnoses of ASDs and SIBs between 4 and 17 years of age were reviewed. Data were collected from medical records on the type and frequency of SIBs and a history of language, social, or behavioral regression during development. The children with a history of any type of developmental regression (social, behavioral, or language) were more likely to have a diagnosis of autistic disorder than other ASD diagnoses. There were no significant differences in the occurrence of self injurious or other problem behaviors (such as aggression or disruption) between children with and without regression. Regression may influence the diagnostic considerations in ASDs but does not seem to influence the clinical phenotype with regard to behavioral issues. Additional data analyses explored the frequencies and subtypes of SIBs and other medical diagnoses in ASDs, with intellectual disability and disruptive behavior disorder found most commonly. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Quantile regression provides a fuller analysis of speed data.

    Science.gov (United States)

    Hewson, Paul

    2008-03-01

    Considerable interest already exists in terms of assessing percentiles of speed distributions, for example monitoring the 85th percentile speed is a common feature of the investigation of many road safety interventions. However, unlike the mean, where t-tests and ANOVA can be used to provide evidence of a statistically significant change, inference on these percentiles is much less common. This paper examines the potential role of quantile regression for modelling the 85th percentile, or any other quantile. Given that crash risk may increase disproportionately with increasing relative speed, it may be argued these quantiles are of more interest than the conditional mean. In common with the more usual linear regression, quantile regression admits a simple test as to whether the 85th percentile speed has changed following an intervention in an analogous way to using the t-test to determine if the mean speed has changed by considering the significance of parameters fitted to a design matrix. Having briefly outlined the technique and briefly examined an application with a widely published dataset concerning speed measurements taken around the introduction of signs in Cambridgeshire, this paper will demonstrate the potential for quantile regression modelling by examining recent data from Northamptonshire collected in conjunction with a "community speed watch" programme. Freely available software is used to fit these models and it is hoped that the potential benefits of using quantile regression methods when examining and analysing speed data are demonstrated.
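
    The test described, whether the 85th percentile shifted after an intervention, maps onto a single coefficient in a quantile regression. The sketch below uses statsmodels with a hypothetical data frame of speeds and a before/after indicator, not the Cambridgeshire or Northamptonshire data.

        import pandas as pd
        import statsmodels.formula.api as smf

        # speeds.csv is hypothetical: one row per vehicle, columns 'speed' and 'after' (0/1).
        speeds = pd.read_csv("speeds.csv")

        fit85 = smf.quantreg("speed ~ after", data=speeds).fit(q=0.85)
        print(fit85.params["after"])        # estimated change in the 85th percentile speed
        print(fit85.pvalues["after"])       # significance of that change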

  16. Hierarchical spatial structure of stream fish colonization and extinction

    Science.gov (United States)

    Hitt, N.P.; Roberts, J.H.

    2012-01-01

    Spatial variation in extinction and colonization is expected to influence community composition over time. In stream fish communities, local species richness (alpha diversity) and species turnover (beta diversity) are thought to be regulated by high extinction rates in headwater streams and high colonization rates in downstream areas. We evaluated the spatiotemporal structure of fish communities in streams originally surveyed by Burton and Odum 1945 (Ecology 26: 182-194) in Virginia, USA and explored the effects of species traits on extinction and colonization dynamics. We documented dramatic changes in fish community structure at both the site and stream scales. Of the 34 fish species observed, 20 (59%) were present in both time periods, but 11 (32%) colonized the study area and three (9%) were extirpated over time. Within streams, alpha diversity increased in two of three streams but beta diversity decreased dramatically in all streams due to fish community homogenization caused by colonization of common species and extirpation of rare species. Among streams, however, fish communities differentiated over time. Regression trees indicated that reproductive life-history traits such as spawning mound construction, associations with mound-building species, and high fecundity were important predictors of species persistence or colonization. Conversely, native fishes not associated with mound-building exhibited the highest rates of extirpation from streams. Our results demonstrate that stream fish colonization and extinction dynamics exhibit hierarchical spatial structure and suggest that mound-building fishes serve as keystone species for colonization of headwater streams.

  17. On the unnecessary ubiquity of hierarchical linear modeling.

    Science.gov (United States)

    McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D

    2017-03-01

    In psychology and the behavioral sciences generally, the use of the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make a smaller number of assumptions and are interpreted identically to single-level methods with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
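
    The three modelling routes the article compares line up neatly in statsmodels. The sketch below fits a random-intercept model, a GEE with exchangeable working correlation, and an OLS with cluster-robust standard errors to the same clustered data; the file and column names (clustered_data.csv; y, x, school) are hypothetical.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("clustered_data.csv")   # outcome 'y', predictor 'x', cluster id 'school'

        hlm = smf.mixedlm("y ~ x", data=df, groups=df["school"]).fit()          # random intercept
        gee = smf.gee("y ~ x", groups="school", data=df,
                      cov_struct=sm.cov_struct.Exchangeable()).fit()            # marginal model
        ols = smf.ols("y ~ x", data=df).fit(cov_type="cluster",
                                            cov_kwds={"groups": df["school"]})  # cluster-robust SEs

        for name, fit in [("HLM", hlm), ("GEE", gee), ("OLS-CR", ols)]:
            print(name, float(fit.params["x"]), float(fit.bse["x"]))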

  18. Efficient Sum-Based Hierarchical Smoothing Under \\ell_1-Norm

    CERN Document Server

    Benabbas, Siavosh; Oren, Joel; Ye, Yuli

    2011-01-01

    We introduce a new regression problem which we call the Sum-Based Hierarchical Smoothing problem. Given a directed acyclic graph and a non-negative value, called the target value, for each vertex in the graph, we wish to find non-negative values for the vertices satisfying a certain constraint while minimizing the distance between the assigned values and the target values in the \\ell_p-norm. The constraint is that the value assigned to each vertex should be no less than the sum of the values assigned to its children. We motivate this problem with applications in information retrieval and web mining. While our problem can be solved in polynomial time using linear programming, given the input size in these applications such a solution might be too slow. We mainly study the \\ell_1-norm case, restricting the underlying graphs to rooted trees. For this case we provide an efficient algorithm, running in O(n^2) time. While the algorithm is purely combinatorial, its proof of correctness is an elegant use of linear programming du...
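
    In symbols, and restricted to the \\ell_1 case studied in the paper, the problem reads as follows (standard notation, with t_v the target value at vertex v and C(v) the children of v):

        % Sum-based hierarchical smoothing under the \ell_1-norm.
        \begin{align}
          \min_{x \ge 0} \quad & \sum_{v \in V} \lvert x_v - t_v \rvert \\
          \text{s.t.} \quad    & x_v \;\ge\; \sum_{c \in C(v)} x_c \qquad \forall\, v \in V
        \end{align}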

  19. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2016-04-01

    The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control), an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to the performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected the prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.

  20. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,